Sample records for variable LHV model

  1. Comment on 'All quantum observables in a hidden-variable model must commute simultaneously'

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagata, Koji

    Malley argued [Phys. Rev. A 69, 022118 (2004)] that all quantum observables in a hidden-variable model for quantum events must commute simultaneously. In this comment, we argue that Malley's theorem is indeed valid under the hidden-variable theoretical assumptions introduced by Kochen and Specker. However, we give an example in which a local hidden-variable (LHV) model for quantum events preserves the noncommutativity of quantum observables. It turns out that Malley's theorem does not apply to LHV models for quantum events in general.
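
    For context, the LHV form at issue can be written out explicitly; this is the standard factorization (stated here for reference, not quoted from either paper). A hidden-variable model assigns each observable a value map $A(\lambda)$, and joint expectations take the form

    $$ \langle A B \rangle \;=\; \int_{\Lambda} \rho(\lambda)\, A(\lambda)\, B(\lambda)\, \mathrm{d}\lambda . $$

    The value maps are real-valued and commute pointwise; Malley's theorem concerns when this forces the corresponding operators themselves to commute, which this comment argues does not carry over to LHV models for quantum events in general.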

  2. Diversity in tooth eruption and life history in humans: illustration from a Pygmy population

    PubMed Central

    Ramirez Rozzi, Fernando

    2016-01-01

    Life history variables (LHV) in primates are closely correlated with the ages of tooth eruption, which are a useful proxy for predicting growth and development in extant and extinct species. However, it is not known how tooth eruption ages interact with LHV in polymorphic species such as modern humans. African pygmies lie at one extreme of the range of human size variation. LHV in the Baka pygmies are similar to those in standard populations; we would therefore expect tooth eruption ages to be similar as well. This mixed (longitudinal and cross-sectional) study of tooth eruption in Baka individuals of known age reveals that eruption in all tooth classes occurs earlier than in any other human population. Earlier tooth eruption can be related to the particular somatic growth of the Baka but cannot be correlated with LHV. The link between LHV and tooth eruption seems disrupted in H. sapiens, allowing adaptive variations in tooth eruption in response to different environmental constraints while maintaining the unique human life cycle. PMID:27305976

  3. Einstein-Podolsky-Rosen correlations and Bell correlations in the simplest scenario

    NASA Astrophysics Data System (ADS)

    Quan, Quan; Zhu, Huangjun; Fan, Heng; Yang, Wen-Li

    2017-06-01

    Einstein-Podolsky-Rosen (EPR) steering is an intermediate type of quantum nonlocality which sits between entanglement and Bell nonlocality. A set of correlations is Bell nonlocal if it does not admit a local hidden variable (LHV) model, while it is EPR nonlocal if it does not admit a local hidden variable-local hidden state (LHV-LHS) model. It is interesting to know what states can generate EPR-nonlocal correlations in the simplest nontrivial scenario, that is, two projective measurements for each party sharing a two-qubit state. Here we show that a two-qubit state can generate EPR-nonlocal full correlations (excluding marginal statistics) in this scenario if and only if it can generate Bell-nonlocal correlations. If full statistics (including marginal statistics) is taken into account, surprisingly, the same scenario can manifest the simplest one-way steering and the strongest hierarchy between steering and Bell nonlocality. To illustrate these intriguing phenomena in simple setups, several concrete examples are discussed in detail, which facilitates experimental demonstration. In the course of study, we introduce the concept of restricted LHS models and thereby derive a necessary and sufficient semidefinite-programming criterion to determine the steerability of any bipartite state under given measurements. Analytical criteria are further derived in several scenarios of strong theoretical and experimental interest.
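
    For orientation, the two model classes contrasted above have standard definitions (stated for reference, not quoted from the paper): a correlation is Bell-local if it admits an LHV decomposition, and unsteerable from Alice to Bob if it admits an LHV-LHS decomposition in which Bob's statistics arise from quantum states $\rho_\lambda$:

    $$ P(a,b|x,y) \;=\; \sum_{\lambda} p(\lambda)\, P_A(a|x,\lambda)\, P_B(b|y,\lambda) \qquad \text{(LHV)} $$

    $$ P(a,b|x,y) \;=\; \sum_{\lambda} p(\lambda)\, P_A(a|x,\lambda)\, \operatorname{tr}\!\big(\rho_\lambda\, \Pi_{b|y}\big) \qquad \text{(LHV-LHS)} $$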

  4. Study on the combined sewage sludge pyrolysis and gasification process: mass and energy balance.

    PubMed

    Wang, Zhonghui; Chen, Dezhen; Song, Xueding; Zhao, Lei

    2012-12-01

    A combined pyrolysis and gasification process for sewage sludge was studied in this paper for the purpose of its safe disposal with energy self-balance. Three sewage sludge samples with different dry-basis lower heating values (LHV(db)) were used to evaluate the constraints on this combined process. The samples were pre-dried and then pyrolysed within the temperature range of 400-550°C. Afterwards, the char obtained from pyrolysis was gasified to produce fuel gas. The experimental results showed that the char yield ranged between 37.28 and 53.75 wt% of the dry sludge and changed with the ash content, pyrolysis temperature and LHV(db) of the sewage sludge. The gas from char gasification had an LHV of around 5.31-5.65 MJ/Nm3, suggesting it can be used to supply energy for the sewage sludge drying and pyrolysis process. It was also found that the energy balance of the combined process was affected by the LHV(db) of the sewage sludge, its moisture content and the pyrolysis temperature. A higher LHV(db), lower moisture content and higher pyrolysis temperature benefit energy self-balance. For sewage sludge with a moisture content of 80 wt%, the LHV(db) should be higher than 18 MJ/kg and the pyrolysis temperature higher than 450°C to maintain energy self-sufficiency when volatiles from the pyrolysis process are the only energy supplier; when the LHV(db) was in the range of 14.65-18 MJ/kg, energy self-balance could be maintained with fuel gas from char gasification as a supplementary fuel; auxiliary fuel was always needed if the LHV(db) was lower than 14.65 MJ/kg.
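
    A minimal sketch of the kind of energy self-sufficiency check described above; the latent-heat constant, the recovered-heat fraction, and the threshold arithmetic are illustrative assumptions, not values taken from the paper:

    ```python
    # Hypothetical energy self-balance check for combined sludge pyrolysis/gasification.
    # All coefficients are illustrative assumptions, not values from the paper.

    H_EVAP = 2.45  # MJ per kg of water evaporated (approximate latent heat)

    def energy_self_sufficient(lhv_db, moisture_frac, recovered_frac=0.5):
        """True if the heat recoverable from 1 kg of dry sludge can dry the
        associated moisture.

        lhv_db         : dry-basis lower heating value of the sludge, MJ/kg
        moisture_frac  : wet-basis moisture fraction, e.g. 0.8 for 80 wt%
        recovered_frac : assumed fraction of LHV(db) recoverable as usable heat
        """
        water_per_kg_dry = moisture_frac / (1.0 - moisture_frac)  # kg water per kg dry
        drying_demand = water_per_kg_dry * H_EVAP                 # MJ per kg dry
        usable_heat = recovered_frac * lhv_db                     # MJ per kg dry
        return usable_heat >= drying_demand

    # At 80 wt% moisture this toy balance needs LHV(db) >= ~19.6 MJ/kg,
    # broadly consistent with the ~18 MJ/kg threshold reported in the abstract.
    print(energy_self_sufficient(18.0, 0.80))                      # False
    print(energy_self_sufficient(18.0, 0.80, recovered_frac=0.6))  # True
    ```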

  5. Comparison of biological and genomic characteristics between a newly isolated mink enteritis parvovirus MEV-LHV and an attenuated strain MEV-L.

    PubMed

    Mao, Yaping; Wang, Jigui; Hou, Qiang; Xi, Ji; Zhang, Xiaomei; Bian, Dawei; Yu, Yongle; Wang, Xi; Liu, Weiquan

    2016-06-01

    A virus isolated from mink showing clinical signs of enteritis was identified as a highly virulent mink enteritis parvovirus (MEV) based on its biological characteristics in vivo and in vitro. Mink challenged with this strain, named MEV-LHV, exhibited severe pathological lesions compared to those challenged with the attenuated strain MEV-L. MEV-LHV also showed higher infection and replication efficiencies in vitro than MEV-L. The complete genome sequence of MEV-LHV was determined and analyzed in comparison with those in GenBank, revealing that MEV-LHV shared high homology with the virulent strain MEV SD12/01, whereas MEV-L was closely related to Abashiri and the vaccine strain MEVB and belonged to a different branch of the phylogenetic tree. The genomes of the two strains differed by insertions and deletions in their palindromic termini and by specific unique mutations (especially VP2 300) in coding sequences, which may be involved in viral replication and pathogenicity. The results of this study provide a better understanding of the biological and genomic characteristics of MEV and identify certain regions and sites that may be involved in viral replication and pathogenicity.

  6. Microplastics co-gasification with biomass: Modelling syngas characteristics at low temperatures

    NASA Astrophysics Data System (ADS)

    Ramos, Ana; Tavares, Raquel; Rouboa, Abel

    2018-05-01

    To assess the syngas produced through the gasification of microplastics at low temperatures, distinct blends of polyethylene terephthalate (PET) with biomass (vine pruning) were modelled using Aspen Plus. Critical gasification parameters such as co-fuel mixture, temperature and hydrogen production were evaluated under two different gasifying agents (air and O2). Results showed that higher PET ratios and higher temperatures (below 1200 °C) lead to enhanced hydrogen yields for both atmospheres. The calorific content was also seen to increase with rising temperature, with the highest LHV (9.2 MJ/Nm3) achieved for the mixture with the smaller microplastics fraction in both air and O2 environments. A high-quality syngas was achieved in all cases, with the end-use requirement determining which parameter to optimize: higher H2 content was obtained for the blend with the larger microplastic fraction, whereas higher LHV was achieved for the equimolar mixture.

  7. Modeling the energy content of combustible ship-scrapping waste at Alang-Sosiya, India, using multiple regression analysis.

    PubMed

    Reddy, M Srinivasa; Basha, Shaik; Joshi, H V; Sravan Kumar, V G; Jha, B; Ghosh, P K

    2005-01-01

    Alang-Sosiya is the largest ship-scrapping yard in the world, established in 1982. Every year an average of 171 ships, with a mean weight of 2.10 x 10(6) (+/-7.82 x 10(5)) light dead weight tonnage (LDT), are scrapped. Apart from scrapped metals, this yard generates a massive amount of combustible solid waste in the form of waste wood, plastic, insulation material, paper, glass wool, thermocol pieces (polyurethane foam material), sponge, oiled rope, cotton waste, rubber, etc. In this study, multiple regression analysis was used to develop predictive models for the energy content of combustible ship-scrapping solid wastes. The scope of work comprised qualitative and quantitative estimation of solid waste samples and a sequential selection procedure for isolating variables. Three regression models were developed to correlate the energy content (net calorific value, LHV) with variables derived from material composition, proximate and ultimate analyses. The performance of these models for this particular waste complies well with the equations developed by other researchers (Dulong, Steuer, Scheurer-Kestner and Bento) for estimating the energy content of municipal solid waste.
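
    For orientation, the Dulong-type correlations cited above estimate heating value from ultimate analysis. Below is a sketch using a commonly quoted form of Dulong's formula; the coefficients vary slightly between references, and the paper's own fitted regression coefficients are not reproduced here:

    ```python
    def hhv_dulong(c, h, o, s):
        """Dulong estimate of higher heating value (MJ/kg).
        c, h, o, s are mass percentages of C, H, O and S from ultimate analysis.
        Coefficients as commonly quoted; exact values differ between sources."""
        return 0.3383 * c + 1.443 * (h - o / 8.0) + 0.0942 * s

    def lhv_from_hhv(hhv, h, moisture=0.0):
        """Net calorific value (MJ/kg): subtract the latent heat of the water
        formed from hydrogen (~8.94 kg water per kg H) and of free moisture."""
        return hhv - 2.442 * (8.94 * h / 100.0 + moisture / 100.0)

    # Illustrative composition only (roughly wood-like waste):
    hhv = hhv_dulong(c=48.0, h=6.0, o=42.0, s=0.1)
    print(round(hhv, 2), round(lhv_from_hhv(hhv, h=6.0, moisture=10.0), 2))
    ```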

  8. Investigations in quantum games using EPR-type set-ups

    NASA Astrophysics Data System (ADS)

    Iqbal, Azhar

    2006-04-01

    Research in quantum games has flourished during recent years. However, it seems that opinion remains divided about their true quantum character and content. For example, one argument says that quantum games are nothing but 'disguised' classical games and that to quantize a game is equivalent to replacing the original game by a different classical game. The present thesis contributes towards the ongoing debate about the quantum nature of quantum games by developing two approaches addressing the related issues. Both approaches take Einstein-Podolsky-Rosen (EPR)-type experiments as the underlying physical set-ups to play two-player quantum games. In the first approach, the players' strategies are unit vectors in their respective planes, with the knowledge of coordinate axes being shared between them. Players perform measurements in an EPR-type setting and their payoffs are defined as functions of the correlations, i.e. without reference to classical or quantum mechanics. Classical bimatrix games are reproduced if the input states are classical and perfectly anti-correlated, as for a classical correlation game. However, for a quantum correlation game, with an entangled singlet state as input, qualitatively different solutions are obtained. The second approach uses the result that forcing the predictions of a local hidden variable (LHV) model to violate the Bell inequalities requires some probability measures to assume negative values. With the requirement that classical games result when the predictions of an LHV model do not violate the Bell inequalities, our analysis examines the impact that the emergence of negative probabilities has on the solutions of two-player games physically implemented using EPR-type experiments.
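
    The Bell inequalities referred to take, in the simplest two-player setting, the CHSH form; any LHV model must satisfy (standard result, stated for reference)

    $$ \left| \langle A_1 B_1 \rangle + \langle A_1 B_2 \rangle + \langle A_2 B_1 \rangle - \langle A_2 B_2 \rangle \right| \;\le\; 2 , $$

    while quantum mechanics allows values up to $2\sqrt{2}$; forcing an LHV model to reproduce such a violation is what pushes some of its probability measures negative.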

  9. Deriving Einstein-Podolsky-Rosen steering inequalities from the few-body Abner Shimony inequalities

    NASA Astrophysics Data System (ADS)

    Zhou, Jie; Meng, Hui-Xian; Jiang, Shu-Han; Xu, Zhen-Peng; Ren, Changliang; Su, Hong-Yi; Chen, Jing-Ling

    2018-04-01

    For the Abner Shimony (AS) inequalities, the simplest unified forms of the measurement directions attaining the maximum quantum violation are investigated. Based on these directions, a family of Einstein-Podolsky-Rosen (EPR) steering inequalities is derived from the AS inequalities in a systematic manner. For these inequalities, the local hidden state (LHS) bounds are strictly smaller than the local hidden variable (LHV) bounds, which shows that EPR steering is a form of quantum nonlocality strictly weaker than Bell nonlocality.
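
    A simple illustration of this gap from the steering literature (the two-setting linear steering inequality of Cavalcanti, Jones, Wiseman and Reid, quoted for context; it is not one of the AS-derived inequalities of this paper): for Bob measuring the qubit observables $\sigma_x$ and $\sigma_z$,

    $$ \langle A_1 \sigma_x \rangle + \langle A_2 \sigma_z \rangle \;\le\; \sqrt{2} \;\;\text{(LHS bound)} \qquad \text{vs.} \qquad \langle A_1 \sigma_x \rangle + \langle A_2 \sigma_z \rangle \;\le\; 2 \;\;\text{(LHV bound)} , $$

    since an LHS model constrains Bob's expectation values by $\langle\sigma_x\rangle^2 + \langle\sigma_z\rangle^2 \le 1$, whereas an LHV model only requires each term to lie in $[-1, 1]$.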

  10. Laboratory Evaluation of Novel Particulate Control Concepts for Jet Engine Test Cells.

    DTIC Science & Technology

    1983-12-01

    HHV = fuel higher heating value, Btu/lb. ΔH = heat of reaction, Btu/lb. KE = kinetic energy, Btu/hr. LHV = lower heating value, Btu/lb. M = mass flow ... the fuel bond energy must be the lower heating value (LHV = ΔH of combustion with water as a vapor product). Therefore, the HHV must be corrected by ... This component is negligible for jet engines operated on uncontaminated turbine fuels. C. ALTERNATIVES AVAILABLE: Several alternatives have ...

  11. Feasibility Study and Development of Modular Appliance Technologies, Centralized Heating (MATCH) Field Kitchen

    DTIC Science & Technology

    1994-07-01

    including standby losses. The required input fuel rate is 261,000 Btu/hr (LHV) or 277,700 Btu/hr (HHV). The Becker burner used in the system is rated at ... cost of ~$6/gallon. Burning diesel fuel, with 20-percent excess air and a final exhaust temperature of 932°F, requires a fuel LHV input of 261,000 Btu ... GPH diesel fuel burning rate, corresponding to 280,000 Btu/hr (HHV) input. The flue gases leave the fluid heater at a nominal temperature of 932°F

  12. Artificial neural network based modelling approach for municipal solid waste gasification in a fluidized bed reactor.

    PubMed

    Pandey, Daya Shankar; Das, Saptarshi; Pan, Indranil; Leahy, James J; Kwapinski, Witold

    2016-12-01

    In this paper, multi-layer feed-forward neural networks are used to predict the lower heating value of gas (LHV), the lower heating value of gasification products including tars and entrained char (LHVp), and the syngas yield during gasification of municipal solid waste (MSW) in a fluidized bed reactor. These artificial neural networks (ANNs) with different architectures are trained using the Levenberg-Marquardt (LM) back-propagation algorithm, and cross-validation is performed to ensure that the results generalise to unseen datasets. A rigorous study is carried out on optimally choosing the number of hidden layers, the number of neurons in the hidden layer and the activation function using multiple Monte Carlo runs. Nine input and three output parameters are used to train and test various neural network architectures in both multiple-output and single-output prediction paradigms using the available experimental datasets. A model selection procedure is carried out to ascertain the best network architecture in terms of predictive accuracy. The simulation results show that the ANN-based methodology is a viable alternative which can be used to predict the performance of a fluidized bed gasifier. Copyright © 2016 Elsevier Ltd. All rights reserved.
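
    A minimal sketch of the multi-output regression setup described above. This is illustrative only: scikit-learn's MLPRegressor with the "lbfgs" solver stands in for the Levenberg-Marquardt training used in the paper (LM is not available in scikit-learn), and the data are random placeholders:

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.random((60, 9))  # 9 gasifier input parameters (placeholder data)
    y = rng.random((60, 3))  # 3 outputs: LHV, LHVp, syngas yield (placeholders)

    # One hidden layer; its size and the activation would be chosen by the
    # Monte Carlo model-selection procedure the paper describes.
    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                     solver="lbfgs", max_iter=5000, random_state=0),
    )
    print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())
    ```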

  13. Development of a Hydrogen-Fueled Diver Heater.

    DTIC Science & Technology

    1982-05-01

    ... a higher heating value (HHV) of 319 B/lb, and a lower heating value (LHV) of 270 B/lb. The difference between HHV and LHV is the energy of water condensation. For an ... [OCR-garbled report header; recoverable: Development of a Hydrogen-Fueled Diver Heater, for the Naval Coastal Systems Center, May 1982, by P. S. Riegel, Battelle Columbus Laboratories, 505 King Avenue, Contract No. N61331-81-C-0075]

  14. Solid State Energy Conversion Energy Alliance (SECA)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hennessy, Daniel; Sibisan, Rodica; Rasmussen, Mike

    2011-09-12

    The overall objective is to develop a Solid Oxide Fuel Cell (SOFC) stack that can be economically produced in high volumes and mass customized for different applications in the transportation, stationary power generation, and military market sectors. In Phase I, work will be conducted on system design and integration, stack development, and development of reformers for natural gas and gasoline. Specifically, Delphi-Battelle will fabricate and test a 5 kW stationary power generation system consisting of a SOFC stack, a steam reformer for natural gas, and balance-of-plant (BOP) components, having an expected efficiency of ≥ 35 percent (AC/LHV). In Phase II and Phase III, the emphasis will be to improve the SOFC stack, reduce start-up time, improve thermal cyclability, demonstrate operation on diesel fuel, and substantially reduce materials and manufacturing cost by integrating several functions into one component and thus reducing the number of components in the system. In Phase II, Delphi-Battelle will fabricate and demonstrate two SOFC systems: an improved stationary power generation system consisting of an improved SOFC stack with integrated reformation of natural gas and the BOP components, with an expected efficiency of ≥ 40 percent (AC/LHV), and a mobile 5 kW system for heavy-duty trucks and military power applications consisting of an SOFC stack, a reformer utilizing anode tail gas recycle for diesel fuel, and BOP components, with an expected efficiency of ≥ 30 percent (DC/LHV). Finally, in Phase III, Delphi-Battelle will fabricate and test a 5 kW Auxiliary Power Unit (APU) for mass-market automotive application, consisting of an optimized SOFC stack, an optimized catalytic partial oxidation (CPO) reformer for gasoline, and BOP components, having an expected efficiency of ≥ 30 percent (DC/LHV) and a factory cost of ≤ $400/kW.

  16. Roles of three amino acids of capsid proteins in mink enteritis parvovirus replication.

    PubMed

    Mao, Yaping; Su, Jun; Wang, Jigui; Zhang, Xiaomei; Hou, Qiang; Bian, Dawei; Liu, Weiquan

    2016-08-15

    Virulent mink enteritis parvovirus (MEV) strain MEV-LHV replicated to higher titers in feline F81 cells than attenuated strain MEV-L. Phylogenetic and sequence analyses of the VP2 gene of MEV-LHV, MEV-L and other strains in GenBank revealed two evolutionary branches separating virulent and attenuated strains. Three residues, 101, 232 and 411, differed between virulent and attenuated strains but were conserved within the two branches. Site-directed mutagenesis of the VP2 gene in infectious plasmids of attenuated strain MEV-L, replacing residues 101 Ile and 411 Ala with the Thr and Glu of virulent strains (MEV-L I101T and MEV-L A411E), increased replication efficiency, though still to lower levels than MEV-LHV. However, viruses with a mutation of residue 232 (MEV-L I232V and MEV-L I101T/I232V/A411E) showed decreased viral transcription and replication levels. The three VP2 residues 101, 232 and 411, located on or near the capsid surface, played different roles in the infection processes of MEV. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Refuse Derived Fuel (RDF) production and gasification in a pilot plant integrated with an Otto cycle ICE through Aspen plus™ modelling: Thermodynamic and economic viability.

    PubMed

    Násner, Albany Milena Lozano; Lora, Electo Eduardo Silva; Palacio, José Carlos Escobar; Rocha, Mateus Henrique; Restrepo, Julian Camilo; Venturini, Osvaldo José; Ratner, Albert

    2017-11-01

    This work deals with the development of a Refuse Derived Fuel (RDF) gasification pilot plant using air as the gasification agent. A downdraft fixed bed reactor is integrated with an Otto cycle Internal Combustion Engine (ICE). Modelling was carried out using the Aspen Plus™ software to predict the ideal operational conditions for maximum efficiency. The thermodynamics package used in the simulation comprised the Non-Random Two-Liquid (NRTL) model and the Hayden-O'Connell (HOC) equation of state. As expected, the results indicated that the Equivalence Ratio (ER) has a direct influence on the gasification temperature and the composition of the Raw Produced Gas (RPG); the effects of ER on the Lower Heating Value (LHV) and Cold Gasification Efficiency (CGE) of the RPG are also discussed. A maximum CGE of 57-60% was reached for ER values between 0.25 and 0.3, with average reactor temperatures in the range of 680-700°C and a peak LHV of 5.8 MJ/Nm3. The RPG was burned in an ICE, reaching an electrical power of 50 kWel. An economic assessment of the pilot plant implementation was also performed, showing that the project is feasible for power outputs above 120 kWel with an initial investment of approximately US$ 300,000. Copyright © 2017 Elsevier Ltd. All rights reserved.
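
    For reference, the cold gasification efficiency reported above is conventionally defined as the chemical energy flow of the produced gas over that of the feed (standard definition, not quoted from the paper):

    $$ \mathrm{CGE} \;=\; \frac{\dot{V}_{\mathrm{gas}}\; \mathrm{LHV}_{\mathrm{gas}}}{\dot{m}_{\mathrm{RDF}}\; \mathrm{LHV}_{\mathrm{RDF}}} \times 100\% . $$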

  18. Effect of proton-conduction in electrolyte on electric efficiency of multi-stage solid oxide fuel cells

    PubMed Central

    Matsuzaki, Yoshio; Tachikawa, Yuya; Somekawa, Takaaki; Hatae, Toru; Matsumoto, Hiroshige; Taniguchi, Shunsuke; Sasaki, Kazunari

    2015-01-01

    Solid oxide fuel cells (SOFCs) are promising electrochemical devices that enable the highest fuel-to-electricity conversion efficiencies under high operating temperatures. The concept of multi-stage electrochemical oxidation using SOFCs has been proposed and studied over the past several decades for further improving the electrical efficiency. However, the improvement is limited by fuel dilution downstream of the fuel flow. Therefore, evolved technologies are required to achieve considerably higher electrical efficiencies. Here we present an innovative concept for a critically-high fuel-to-electricity conversion efficiency of up to 85% based on the lower heating value (LHV), in which a high-temperature multi-stage electrochemical oxidation is combined with a proton-conducting solid electrolyte. Switching a solid electrolyte material from a conventional oxide-ion conducting material to a proton-conducting material under the high-temperature multi-stage electrochemical oxidation mechanism has proven to be highly advantageous for the electrical efficiency. The DC efficiency of 85% (LHV) corresponds to a net AC efficiency of approximately 76% (LHV), where the net AC efficiency refers to the transmission-end AC efficiency. This evolved concept will yield a considerably higher efficiency with a much smaller generation capacity than the state-of-the-art several tens-of-MW-class most advanced combined cycle (MACC). PMID:26218470
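
    As a consistency check on the quoted figures (a back-of-envelope reading; the ~0.89 conversion factor is inferred from the two numbers, not stated by the authors):

    $$ \eta_{\mathrm{AC,net}} \;\approx\; \eta_{\mathrm{DC}} \times \eta_{\mathrm{conv}} \;\approx\; 0.85 \times 0.89 \;\approx\; 0.76 \ \text{(LHV)} . $$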

  20. Optimal design of solid oxide fuel cell, ammonia-water single effect absorption cycle and Rankine steam cycle hybrid system

    NASA Astrophysics Data System (ADS)

    Mehrpooya, Mehdi; Dehghani, Hossein; Ali Moosavian, S. M.

    2016-02-01

    A combined system containing a solid oxide fuel cell-gas turbine power plant, a Rankine steam cycle and an ammonia-water absorption refrigeration system is introduced and analyzed. In this process, power, heat and cooling are produced. Energy and exergy analyses, along with economic factors, are used to distinguish the optimum operating point of the system. The developed electrochemical model of the fuel cell is validated with experimental results, as are the thermodynamic package and main parameters of the absorption refrigeration system. The power output of the system is 500 kW. An optimization problem is defined in order to find the optimal operating point. Decision variables are current density, temperature of the exhaust gases from the boiler, steam turbine pressure (high and medium), generator temperature and consumed cooling water. Results indicate that the electrical efficiency of the combined system is 62.4% (LHV). Produced refrigeration (at -10 °C) and heat recovery are 101 kW and 22.1 kW, respectively. The investment cost for the combined system (without the absorption cycle) is about $2917 kW-1.

  1. Improvement of Early Antenatal Care Initiation: The Effects of Training Local Health Volunteers in the Community.

    PubMed

    Liabsuetrakul, Tippawan; Oumudee, Nurlisa; Armeeroh, Masuenah; Nima, Niamina; Duerahing, Nurosanah

    2018-01-01

    Although antenatal care (ANC) coverage has been increasing in low- and middle-income countries, adherence to the standard of ANC initiation at gestational age <12 weeks has been inadequate, including in Thailand. The study aimed to improve the rate of early ANC initiation by training the existing local health volunteers (LHVs) in the 3 southernmost provinces of Thailand. A clustered nonrandomized intervention study was conducted from November 2012 to February 2014. One district in each province was selected as the intervention district for that province. A total of 124 LHVs in the intervention districts participated in the knowledge-counseling intervention. It was organized as a half-day workshop using 2 training modules, each comprising a 30-minute lecture followed by counseling practice in pairs for 1 hour. The outcome was the rate of early ANC initiation among women giving birth, and its association with the intervention, meeting an LHV, and months after training was analyzed. Of 6677 women, 3178 and 3499 were in the control and intervention groups, respectively. Rates of early ANC were significantly improved after the intervention (adjusted odds ratio [OR]: 1.29, 95% confidence interval [CI]: 1.17-1.43, P < .001) and after meeting an LHV (adjusted OR: 2.06, 95% CI: 1.86-2.29, P < .001), but were lower at 6 months after training (adjusted OR: 0.76, 95% CI: 0.60-0.96, P = .002). Almost all women (99.7%) in the intervention group who met an LHV reported that they were encouraged to attend early ANC. Training LHVs in communities through a knowledge-counseling intervention significantly improved early ANC initiation, but the magnitude of change was still limited.

  3. Hydro, wind and solar power as a base for a 100% renewable energy supply for South and Central America.

    PubMed

    Barbosa, Larissa de Souza Noel Simas; Bogdanov, Dmitrii; Vainikka, Pasi; Breyer, Christian

    2017-01-01

    Power systems for South and Central America based on 100% renewable energy (RE) in the year 2030 were calculated for the first time using an hourly resolved energy model. The region was subdivided into 15 sub-regions. Four different scenarios were considered: three according to different high voltage direct current (HVDC) transmission grid development levels (region, country, area-wide) and one integrated scenario that considers water desalination and industrial gas demand supplied by synthetic natural gas via power-to-gas (PtG). RE is not only able to cover 1813 TWh of estimated electricity demand of the area in 2030 but also able to generate the electricity needed to fulfil 3.9 billion m3 of water desalination and 640 TWhLHV of synthetic natural gas demand. Existing hydro dams can be used as virtual batteries for solar and wind electricity storage, diminishing the role of storage technologies. The results for total levelized cost of electricity (LCOE) are decreased from 62 €/MWh for a highly decentralized to 56 €/MWh for a highly centralized grid scenario (currency value of the year 2015). For the integrated scenario, the levelized cost of gas (LCOG) and the levelized cost of water (LCOW) are 95 €/MWhLHV and 0.91 €/m3, respectively. A reduction of 8% in total cost and 5% in electricity generation was achieved when integrating desalination and power-to-gas into the system.
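
    A minimal sketch of the levelized-cost metric used above, in its standard discounted form (the model's actual cost inputs are not given in the abstract, so the numbers below are placeholders):

    ```python
    def lcoe(capex, opex_per_year, energy_per_year_mwh, rate, years):
        """Levelized cost of electricity (currency/MWh): discounted lifetime
        costs divided by discounted lifetime generation."""
        disc = [(1 + rate) ** -t for t in range(1, years + 1)]
        costs = capex + opex_per_year * sum(disc)
        energy = energy_per_year_mwh * sum(disc)
        return costs / energy

    # Illustrative inputs only (roughly wind-farm-like):
    print(round(lcoe(capex=1.2e6, opex_per_year=3.0e4,
                     energy_per_year_mwh=2.5e3, rate=0.07, years=25), 1))
    ```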

  5. Pyrolysis of automotive shredder residue in a bench scale rotary kiln.

    PubMed

    Notarnicola, Michele; Cornacchia, Giacinto; De Gisi, Sabino; Di Canio, Francesco; Freda, Cesare; Garzone, Pietro; Martino, Maria; Valerio, Vito; Villone, Antonio

    2017-07-01

    Automotive shredder residue (ASR) is difficult to manage, and its production is increasing. It is made of different types of plastics, foams, elastomers, wood, glass and textiles. For this reason, it is complicated to dispose of in a cost-effective way while also respecting stringent environmental restrictions. Among thermal treatments, pyrolysis seems to offer an environmentally attractive method for the treatment of ASR; it also allows for the recovery of valuable secondary materials/fuels such as pyrolysis oils, chars, and gas. While there is a great deal of research on ASR pyrolysis, the literature on larger-scale pyrolysis experiments is limited. To add to the current literature, the aim of this study was to investigate the pyrolysis of ASR in a bench-scale rotary kiln. The Italian ASR was separated by dry-sieving into two particle size fractions: d<30 mm and d>30 mm. Both streams were ground, pelletized and then pyrolyzed in a continuous bench-scale rotary kiln at 450, 550 and 650°C. The mass flow rate of the ASR pellets was 200-350 g/h and each test ran for about 4-5 h. The produced char, pyrolysis oil and syngas were quantified to determine product distribution and thoroughly analyzed with regard to their chemical and physical properties. The results show how higher temperatures increase the pyrolysis gas yield (44 wt% at 650°C) as well as its heating value. The lower heating value (LHV) of the syngas ranges between 18 and 26 MJ/Nm3 (dry). The highest pyrolysis oil yield (33 wt%) was observed at 550°C, and its LHV ranges between 12.5 and 14.5 MJ/kg. Furthermore, only two of the six produced chars respect the LHV limit set by the Italian environmental regulations for landfilling. The obtained results, in terms of product distribution and chemical-physical analyses, provide useful information for plant scale-up. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Characterization of Korean solid recovered fuels (SRFs): an analysis and comparison of SRFs.

    PubMed

    Choi, Yeon-Seok; Han, Soyoung; Choi, Hang-Seok; Kim, Seock-Joon

    2012-04-01

    To date, Korea has used four types of solid recovered fuels (SRFs), certified by the Environmental Ministry of Korea: refuse-derived fuel (RDF), refuse plastic fuel (RPF), tyre-derived fuel (TDF), and wood chip fuel (WCF). These fuels have been used in many industrial boilers. In this study, seven regulatory properties of each of the four types were analysed: particle size, moisture and ash content, lower heating value (LHV), total chlorine, sulfur, and heavy metals content (Pb, As, Cd, Hg, Cr). These properties are the main regulatory criteria for the use and transfer of SRFs in Korea. Distinct properties of each SRF were identified on the basis of data collected over the last 3 years in Korea, and the manufacturing problems associated with the production of SRFs were considered. It was found that the high moisture content of SRFs (especially WCF) could directly lead to a low LHV and that poor screening and sorting of raw materials could cause defective SRF products with high ash or chlorine contents. The information obtained from this study could contribute to the manufacturing of good-quality SRF.

  7. Predicting the ultimate potential of natural gas SOFC power cycles with CO2 capture - Part A: Methodology and reference cases

    NASA Astrophysics Data System (ADS)

    Campanari, Stefano; Mastropasqua, Luca; Gazzani, Matteo; Chiesa, Paolo; Romano, Matteo C.

    2016-08-01

    Driven by the search for the highest theoretical efficiency, several studies in recent years have investigated the integration of high temperature fuel cells in natural gas fired power plants, where fuel cells are integrated with simple or modified Brayton cycles and/or with additional bottoming cycles, and CO2 can be separated via chemical or physical separation, oxy-combustion or cryogenic methods. Focusing on Solid Oxide Fuel Cells (SOFC) and following a comprehensive review and analysis of possible plant configurations, this work investigates their theoretical potential efficiency and proposes two ultra-high efficiency plant configurations based on advanced intermediate-temperature SOFCs integrated with a steam turbine or gas turbine cycle. The SOFC works at atmospheric or pressurized conditions, and the resulting power plant exceeds 78% LHV efficiency without CO2 capture (as discussed in part A of the work) and 70% LHV efficiency with substantial CO2 capture (part B). The power plants are simulated at the 100 MW scale with a complete set of realistic assumptions about fuel cell (FC) performance, plant components and auxiliaries, presenting detailed energy and material balances together with a second law analysis.

  8. Thermodynamic Modeling and Dispatch of Distributed Energy Technologies including Fuel Cell -- Gas Turbine Hybrids

    NASA Astrophysics Data System (ADS)

    McLarty, Dustin Fogle

    Distributed energy systems are a promising means by which to reduce both emissions and costs. Continuous generators must be responsive and highly efficient to support building dynamics and intermittent on-site renewable power. Fuel cell -- gas turbine hybrids (FC/GT) are fuel-flexible generators capable of ultra-high efficiency, ultra-low emissions, and rapid power response. This work undertakes a detailed study of the electrochemistry, chemistry and mechanical dynamics governing the complex interaction between the individual systems in such a highly coupled hybrid arrangement. The mechanisms leading to the compressor stall/surge phenomenon are studied for the increased risk posed to particular hybrid configurations. A novel fuel cell modeling method is introduced that captures various spatial resolutions, flow geometries, stack configurations and novel heat transfer pathways. Several promising hybrid configurations are analyzed throughout the work, and a sensitivity analysis of seven design parameters is conducted. A simple estimating method is introduced for the combined system efficiency of a fuel cell and a turbine using component performance specifications. Existing solid oxide fuel cell technology is capable of hybrid efficiencies greater than 75% (LHV) operating on natural gas, and existing molten carbonate systems greater than 70% (LHV). A dynamic model is calibrated to accurately capture the physical coupling of a FC/GT demonstrator tested at UC Irvine. The 2900 hour experiment highlighted the sensitivity to small perturbations and a need for additional control development. Further sensitivity studies outlined the responsiveness and limits of different control approaches. The capability for substantial turn-down and load following through speed control and flow bypass, with minimal impact on internal fuel cell thermal distribution, is particularly promising for meeting local demands or providing dispatchable support for renewable power. Advanced control and dispatch heuristics are discussed using a case study of the UCI central plant. Thermal energy storage introduces a time horizon into the dispatch optimization which requires novel solution strategies. Highly efficient and responsive generators are required to meet the increasingly dynamic loads of today's efficient buildings and intermittent local renewable wind and solar power. Fuel cell gas turbine hybrids will play an integral role in the complex and ever-changing solution to local electricity production.
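
    The "simple estimating method" itself is not reproduced in this abstract; the textbook bottoming-cycle combination below conveys the idea (a sketch, where $f$ is the assumed fraction of the fuel cell's unconverted fuel energy recovered by the turbine):

    $$ \eta_{\mathrm{hyb}} \;\approx\; \eta_{\mathrm{FC}} + f\,(1 - \eta_{\mathrm{FC}})\,\eta_{\mathrm{GT}} . $$

    For instance, $\eta_{\mathrm{FC}} = 0.60$, $\eta_{\mathrm{GT}} = 0.35$ and $f = 0.8$ give $\eta_{\mathrm{hyb}} \approx 0.71$, in the neighborhood of the 70-75% (LHV) figures quoted above.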

  9. Heat Exchanger Design and Testing for a 6-Inch Rotating Detonation Engine

    DTIC Science & Technology

    2013-03-01

    Engine Research Facility; HHV = higher heating value; LHV = lower heating value; PDE = pulsed detonation engine; RDE = rotating detonation engine; RTD ... the combustion community are pulsed detonation engines (PDEs) and rotating detonation engines (RDEs). 1.1 Differences between Pulsed and Rotating ... steadier than that of a PDE. [Figure 1: unrolled rotating detonation wave from high-speed video] Another difference that ...

  10. Impact Testing of the H1224A Shipping/Storage Container

    DTIC Science & Technology

    1994-05-01

    ... may not provide significant energy absorption for the re-entry vehicle midsection but can provide some confinement of potentially damaged ... Horizontal Low-Velocity impact test; LHV = Longitudinal High-Velocity impact test; HHV = Horizontal High-Velocity impact test; RV = Re-entry Vehicle midsection ... Also, integration of these pulses showed that only a much shorter duration pulse was necessary to slow the re-entry vehicle midsection velocity

  11. Soil Moisture Limitations on Monitoring Boreal Forest Regrowth Using Spaceborne L-Band SAR Data

    NASA Technical Reports Server (NTRS)

    Kasischke, Eric S.; Tanase, Mihai A.; Bourgeau-Chavez, Laura L.; Borr, Matthew

    2011-01-01

    A study was carried out to investigate the utility of L-band SAR data for estimating aboveground biomass in sites with low levels of vegetation regrowth. Data to estimate biomass were collected from 59 sites located in fire-disturbed black spruce forests in interior Alaska. PALSAR L-band data (HH and HV polarizations) collected on two dates in the summer/fall of 2007 and one date in the summer of 2009 were used. Significant linear correlations were found between the log of aboveground biomass (range of 0.02 to 22.2 t ha-1) and L-HH and L-HV backscatter for the data collected on each of the three dates, with the highest correlation found using the L-HV data collected when soil moisture was highest. Soil moisture, however, did change the correlations between L-band backscatter and aboveground biomass, and the analyses suggest that the influence of soil moisture is biomass dependent. The results indicate that using L-band SAR data for mapping aboveground biomass and monitoring forest regrowth will require approaches that account for the influence of variations in soil moisture on L-band microwave backscatter, which can be particularly strong at low levels of aboveground biomass.
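
    A sketch of the regression form described above: a linear fit of log biomass against backscatter in dB. The field data here are placeholders, and the study's fitted coefficients are not reproduced:

    ```python
    import numpy as np

    # Placeholder data: aboveground biomass (t/ha) and L-HV backscatter (dB)
    biomass = np.array([0.05, 0.4, 1.2, 3.5, 8.0, 15.0, 22.0])
    sigma0_hv_db = np.array([-26.0, -23.5, -21.0, -18.5, -16.5, -15.0, -14.2])

    # Linear fit of log10(biomass) on backscatter, as in the study design
    slope, intercept = np.polyfit(sigma0_hv_db, np.log10(biomass), 1)

    def predict_biomass(db):
        """Invert the fit to map a backscatter value (dB) to biomass (t/ha)."""
        return 10 ** (slope * db + intercept)

    print(round(predict_biomass(-17.0), 1))  # illustrative only
    ```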

  12. Life cycle carbon footprint of shale gas: review of evidence and implications.

    PubMed

    Weber, Christopher L; Clavin, Christopher

    2012-06-05

    The recent increase in the production of natural gas from shale deposits has significantly changed energy outlooks in both the US and the world. Shale gas may have important climate benefits if it displaces more carbon-intensive oil or coal, but recent attention has focused on the potential for upstream methane emissions to counteract these reduced combustion greenhouse gas emissions. We examine six recent studies to produce a Monte Carlo uncertainty analysis of the carbon footprint of both shale and conventional natural gas production. The results show that the most likely upstream carbon footprints of these types of natural gas production are largely similar, with overlapping 95% uncertainty ranges of 11.0-21.0 g CO2e/MJ(LHV) for shale gas and 12.4-19.5 g CO2e/MJ(LHV) for conventional gas. However, because this upstream footprint represents less than 25% of the total carbon footprint of gas, the efficiency of producing heat, electricity, transportation services, or other functions is of equal or greater importance when identifying emission reduction opportunities. Better data are needed to reduce the uncertainty in natural gas's carbon footprint, but understanding the system-level climate impacts of shale gas, through shifts in national and global energy markets, may be more important and requires more detailed energy and economic systems assessments.
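
    A minimal sketch of the Monte Carlo uncertainty propagation described above; the component breakdown, distribution shapes and parameter ranges are placeholders, not the study's inputs:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    N = 100_000

    # Hypothetical upstream emission components, g CO2e/MJ(LHV) of shale gas
    completion = rng.triangular(0.5, 2.0, 6.0, N)    # well completion / workovers
    production = rng.triangular(3.0, 6.0, 9.0, N)    # production and processing
    transmission = rng.triangular(2.0, 4.0, 7.0, N)  # transmission and storage

    total = completion + production + transmission
    lo, hi = np.percentile(total, [2.5, 97.5])
    print(f"95% interval: {lo:.1f}-{hi:.1f} g CO2e/MJ(LHV)")
    ```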

  13. Coal-to-methanol: an engineering evaluation of Texaco gasification and ICI methanol-synthesis route. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buckingham, P.A.; Cobb, D.D.; Leavitt, A.A.

    1981-08-01

    This report presents the results of a technical and economic evaluation of producing methanol from bituminous coal using Texaco coal gasification and ICI methanol synthesis. The scope of work included the development of an overall configuration for a large plant comprising coal preparation, air separation, coal gasification, shift conversion, COS hydrolysis, acid gas removal, methanol synthesis, methanol refining, and all required utility systems and off-site facilities. Design data were received from both Texaco and ICI, while a design and cost estimate were received from Lotepro covering the Rectisol acid gas removal unit. The plant processes 14,448 tons per day (dry basis) of Illinois No. 6 bituminous coal and produces 10,927 tons per day of fuel-grade methanol. An overall thermal efficiency of 57.86 percent was calculated on an HHV basis and 52.64 percent on an LHV basis. Total plant investment at an Illinois plant site was estimated to be $1159 million in 1979 dollars. Using EPRI's economic premises, the first-year product cost was calculated to be $4.74 per million Btu (HHV), equivalent to 30.3 cents per gallon, and $5.37 per million Btu (LHV).
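
    The quoted thermal efficiencies follow the usual plant definition (product energy out over coal energy in, on a matching heating-value basis), stated generically here since the heating values used in the report are not given in this record:

    $$ \eta_{\mathrm{HHV}} \;=\; \frac{\dot{m}_{\mathrm{MeOH}}\, \mathrm{HHV}_{\mathrm{MeOH}}}{\dot{m}_{\mathrm{coal}}\, \mathrm{HHV}_{\mathrm{coal}}} \;=\; 57.86\% , \qquad \eta_{\mathrm{LHV}} \;=\; 52.64\% . $$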

  14. Analysing biomass torrefaction supply chain costs.

    PubMed

    Svanberg, Martin; Olofsson, Ingemar; Flodén, Jonas; Nordin, Anders

    2013-08-01

    The objective of the present work was to develop a techno-economic system model to evaluate how logistics and production parameters affect torrefaction supply chain costs under Swedish conditions. The model consists of four sub-models: (1) the supply system, (2) a complete energy and mass balance of drying, torrefaction and densification, (3) investment and operating costs of a green-field, stand-alone torrefaction pellet plant, and (4) the distribution system to the gate of an end user. The results show that the torrefaction supply chain reaps significant economies of scale up to a plant size of about 150-200 kilotons of dry substance per year (ktonDS/year), for which the total supply chain cost amounts to 31.8 euros per megawatt-hour based on lower heating value (€/MWhLHV). Important parameters affecting total cost are the amount of available biomass, the biomass premium, logistics equipment, biomass moisture content, drying technology, torrefaction mass yield and torrefaction plant capital expenditures (CAPEX). Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Test and Evaluation of the Heat Recovery Incinerator System at Naval Station, Mayport, Florida.

    DTIC Science & Technology

    1981-05-01

    [OCR residue from a table and contents listing; recoverable fragments:] Table 4-4. Heating values and moisture content of December refuse: HHV (Btu/lb), LHV (Btu/lb), moisture, basis ... Solid waste characteristics ... Auxiliary fuel characteristics ... Ash characteristics ... 4-15 Average Fuel and Flue Gas Analysis ... 4-16 Air and Fuel Inputs

  16. Ramgen Power Systems-Supersonic Component Technology for Military Engine Applications

    DTIC Science & Technology

    2006-11-01

    [Table fragment, current design point: 0.45, 1700, 1013, turbine efficiency 84.4%, power 220.1 kW, LHV efficiency 35.4%, HHV efficiency 31.8%] ... Rampressor ... tor (such as a standalone power-only mode device), or to a fuel cell in a hybrid configuration. This paper presents the development of the RPS gas turbine technology and potential applications to the two specific engine cycle configurations, i.e., an indirect fuel cell / RPS turbine hybrid-cycle

  17. Non-contrast-enhanced MR portography and hepatic venography with time-spatial labeling inversion pulses: comparison of imaging with the short tau inversion recovery method and the chemical shift selective method.

    PubMed

    Shimizu, Hironori; Isoda, Hiroyoshi; Ohno, Tsuyoshi; Yamashita, Rikiya; Kawahara, Seiya; Furuta, Akihiro; Fujimoto, Koji; Kido, Aki; Kusahara, Hiroshi; Togashi, Kaori

    2015-01-01

    To compare and evaluate images of non-contrast-enhanced magnetic resonance (MR) portography and hepatic venography acquired with two different fat suppression methods: the chemical shift selective (CHESS) method and the short tau inversion recovery (STIR) method. Twenty-two healthy volunteers were examined using respiratory-triggered three-dimensional true steady-state free-precession with two time-spatial labeling inversion pulses. The CHESS or STIR method was used for fat suppression. The relative signal-to-noise ratio and contrast-to-noise ratio (CNR) were quantified, and the quality of visualization was scored. Image acquisition was successful in all volunteers. The STIR method significantly improved the CNRs of MR portography and hepatic venography. The image quality scores of the main portal vein and right portal vein were higher with the STIR method, but the differences were not significant. The image quality scores of the right hepatic vein, middle hepatic vein, and left hepatic vein (LHV) were all higher, and the visualization of the LHV was significantly better (p<0.05). The STIR method contributes to further suppression of the background signal and improves visualization of the portal and hepatic veins. The results support using non-contrast-enhanced MR portography and hepatic venography in clinical practice. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Development of a soldier-portable fuel cell power system. Part I: A bread-board methanol fuel processor

    NASA Astrophysics Data System (ADS)

    Palo, Daniel R.; Holladay, Jamie D.; Rozmiarek, Robert T.; Guzman-Leong, Consuelo E.; Wang, Yong; Hu, Jianli; Chin, Ya-Huei; Dagle, Robert A.; Baker, Eddie G.

    A 15-We portable power system is being developed for the US Army that consists of a hydrogen-generating fuel reformer coupled to a proton-exchange membrane fuel cell. In the first phase of this project, a methanol steam reformer system was developed and demonstrated. The reformer system included a combustor, two vaporizers, and a steam reforming reactor. The device was demonstrated as a thermally independent unit over the range of 14-80 Wt output. Assuming a 14-day mission life and an ultimate 1-kg fuel processor/fuel cell assembly, a base case was chosen to illustrate the expected system performance. Operating at 13 We, the system yielded a fuel processor efficiency of 45% (LHV of H2 out/LHV of fuel in) and an estimated net efficiency of 22% (assuming a fuel cell efficiency of 48%). The resulting energy density of 720 Wh/kg is several times the energy density of the best lithium-ion batteries. Some immediate areas of improvement in thermal management have also been identified, and an integrated fuel processor is under development. The final system will be a hybrid, containing a fuel reformer, a fuel cell, and a rechargeable battery. The battery will provide power for start-up and added capacity for times of peak power demand.
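
    The quoted net efficiency follows directly from the two component efficiencies; a quick check (the 720 Wh/kg figure additionally depends on fuel-loading and system-mass assumptions not given in this record):

    ```python
    fuel_processor_eff = 0.45  # LHV of H2 out / LHV of methanol fuel in
    fuel_cell_eff = 0.48       # fuel cell efficiency assumed in the abstract

    net_eff = fuel_processor_eff * fuel_cell_eff
    print(f"net efficiency ~ {net_eff:.0%}")  # ~22%, matching the abstract
    ```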

  20. Characterization of High Damping Fe-Cr-Mo and Fe-Cr-Al Alloys for Naval Ships Application.

    DTIC Science & Technology

    1988-03-01

    ... austenitic, and martensitic. The high damping Fe-Cr-based alloys are closely related to ferritic stainless steels. Ferritic stainless steel consists of an Fe ... [OCR-garbled report-form fields; keywords: Damping; Ship Silencing; Ferritic Stainless Steels; Ti-Ni] ... decreased. E. METALLURGY OF THE IRON-CHROMIUM ALLOY SYSTEM. 1. Physical Properties. Stainless steels are divided into three main classes: ferritic

  1. Modeling biomass gasification in circulating fluidized beds

    NASA Astrophysics Data System (ADS)

    Miao, Qi

    In this thesis, the modeling of biomass gasification in circulating fluidized beds was studied. The hydrodynamics of a circulating fluidized bed operating on biomass particles were first investigated, both experimentally and numerically. Then a comprehensive mathematical model was presented to predict the overall performance of a 1.2 MWe biomass gasification and power generation plant. A sensitivity analysis was conducted to test its response to several gasifier operating conditions. The model was validated using the experimental results obtained from the plant and two other circulating fluidized bed biomass gasifiers (CFBBGs). Finally, an ASPEN PLUS simulation model of biomass gasification was presented based on minimization of the Gibbs free energy of the reaction system at chemical equilibrium. Hydrodynamics plays a crucial role in defining the performance of gas-solid circulating fluidized beds (CFBs). A 2-dimensional mathematical model was developed considering the hydrodynamic behavior of CFB gasifiers. In the modeling, the CFB riser was divided into two regions: a dense region at the bottom and a dilute region at the top of the riser. The model of Kunii and Levenspiel (1991) was adopted to express the vertical solids distribution, together with some additional assumptions. Radial distributions of bed voidage in the upper zone were taken into account using the correlation of Zhang et al. (1991). For model validation purposes, a cold-model CFB was employed, in which sawdust was transported with air as the fluidizing agent. A comprehensive mathematical model was developed to predict the overall performance of a 1.2 MWe biomass gasification and power generation demonstration plant in China. Hydrodynamics as well as chemical reaction kinetics were considered. The fluidized bed riser was divided into two distinct sections: (a) a dense region at the bottom of the bed, where biomass undergoes mainly heterogeneous reactions, and (b) a dilute region at the top, where most of the homogeneous gas-phase reactions occur. Each section was divided into a number of small cells, over which mass and energy balances were applied. Due to the high heating rate in a circulating fluidized bed, pyrolysis was considered instantaneous. A number of homogeneous and heterogeneous reactions were considered in the model. Mass transfer resistance was considered negligible, since the reactions were under kinetic control due to good gas-solid mixing. The model is capable of predicting the bed temperature distribution along the gasifier, the concentration and distribution of each species in the vertical direction of the bed, the composition and lower heating value (LHV) of the produced gas, the gasification efficiency, the overall carbon conversion and the gas production rate. A sensitivity analysis was performed to test its response to several gasifier operating conditions. It showed that the equivalence ratio (ER), bed temperature, fluidization velocity, biomass feed rate and moisture content all affected gasifier performance, with the model being most sensitive to variations in ER and bed temperature. The model was validated using the experimental results obtained from the demonstration plant. The reactor was operated on rice husk at various ERs, fluidization velocities and biomass feed rates. The model gave reasonable predictions. The model was also validated by comparing the simulation results with data from two other CFBBGs of different sizes using different biomass feedstocks, and it was concluded that the developed model can be applied to other CFBBGs using various biomass fuels and having comparable reactor geometries. A thermodynamic model was developed in the ASPEN PLUS environment. Using the approach of Gibbs free energy minimization, the model was essentially independent of kinetic parameters. A sensitivity analysis was performed on the model to test its response to operating variables, including ER and biomass moisture content. The results showed that ER has the greatest effect on the product gas composition and LHV. The simulation results were compared with the experimental data obtained from the demonstration plant. Keywords: Biomass gasification; Mathematical model; Circulating fluidized bed; Hydrodynamics; Kinetics; Sensitivity analysis; Validation; Equivalence ratio; Temperature; Feed rate; Moisture; Syngas composition; Lower heating value; Gasification efficiency; Carbon conversion
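
    The produced-gas LHV that such models report is, at first order, a volume-weighted sum of the heating values of the combustible species. A minimal sketch of that bookkeeping (the species LHVs are standard handbook values and the composition is illustrative, not taken from the thesis):

```python
# Minimal sketch: volume-weighted LHV of a product gas. Species LHVs are
# standard handbook values (MJ/Nm3); the example composition is
# illustrative, not a result from the thesis.
SPECIES_LHV = {"CO": 12.6, "H2": 10.8, "CH4": 35.8}  # MJ/Nm3 (approx.)

def gas_lhv(vol_frac):
    """Volume-weighted LHV of a gas mixture, MJ/Nm3.

    vol_frac maps species to volume fraction (0..1); inert species
    (N2, CO2, H2O) contribute nothing and may be omitted.
    """
    return sum(SPECIES_LHV.get(sp, 0.0) * x for sp, x in vol_frac.items())

# A dilute, air-blown producer gas:
print(gas_lhv({"CO": 0.15, "H2": 0.10, "CH4": 0.03, "N2": 0.57, "CO2": 0.15}))
# -> about 4.0 MJ/Nm3, typical of air-blown biomass gasification
```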

  2. Bio-drying and size sorting of municipal solid waste with high water content for improving energy recovery.

    PubMed

    Shao, Li-Ming; Ma, Zhong-He; Zhang, Hua; Zhang, Dong-Qing; He, Pin-Jing

    2010-07-01

    Bio-drying can enhance the sortability and heating value of municipal solid waste (MSW), consequently improving energy recovery. Bio-drying followed by size sorting was adopted for MSW with high water content to improve its combustibility and reduce potential environmental pollution during follow-up incineration. The effects of bio-drying and waste particle size on heating values and on acid gas and heavy metal emission potentials were investigated. The results show that the water content of the MSW decreased from 73.0% to 48.3% after bio-drying, whereas its lower heating value (LHV) increased by 157%. The heavy metal concentrations increased by around 60% due to the loss of dry matter, mainly resulting from biodegradation of food residues. The bio-dried waste fractions with particle size above 45 mm were mainly composed of plastics and papers and were preferable for the production of refuse derived fuel (RDF), in view of their higher LHV and lower heavy metal concentrations and emissions. However, due to their higher chlorine content and HCl emission potential, attention should be paid to acid gas and dioxin pollution control. Although the LHVs of the waste fractions smaller than 45 mm roughly doubled after bio-drying, they were still below the quality standards for RDF, and a much higher heavy metal pollution potential was observed. Different incineration strategies could therefore be adopted for the different particle size fractions of MSW, according to their combustibility and pollution properties. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
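
    The disproportionate LHV gain from water removal follows from the wet-basis LHV arithmetic: the dry matter carries the energy, while every kilogram of water both dilutes it and costs latent heat. A minimal sketch under an assumed dry-basis LHV (the 16 MJ/kg figure is a placeholder, not a value from the study):

```python
# Minimal sketch: as-received LHV of a wet fuel before and after drying.
# The dry-basis LHV (16 MJ/kg) is an assumed placeholder, not a value
# measured in the study; 2.442 MJ/kg is the latent heat of water at 25 C.
H_EVAP = 2.442  # MJ/kg water

def lhv_wet(lhv_dry, moisture):
    """As-received LHV (MJ/kg) for a given water mass fraction (0..1)."""
    return lhv_dry * (1.0 - moisture) - H_EVAP * moisture

before = lhv_wet(16.0, 0.730)  # 73.0% water before bio-drying
after = lhv_wet(16.0, 0.483)   # 48.3% water after bio-drying
print(before, after, after / before)
# ~2.5 -> ~7.1 MJ/kg: the same order as the 157% increase reported above
```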

  3. Test Report for NG Sensors GTX-1000

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manginell, Ronald P.

    2014-12-01

    This report describes initial testing of the NG Sensors GTX-1000 natural gas monitoring system. The testing showed that the retention time, peak area stability and heating value repeatability of the GTX-1000 are promising for natural gas measurements in the field or at the well head. Repeatability can be better than 0.25% for LHV and HHV for the Airgas standard tested in this report, which is very promising for a first-generation prototype. Ultimately this system should be capable of 0.1% repeatability in heating value, with significant size and power reductions compared with competing systems.

  4. Large size biogas-fed Solid Oxide Fuel Cell power plants with carbon dioxide management: Technical and economic optimization

    NASA Astrophysics Data System (ADS)

    Curletti, F.; Gandiglio, M.; Lanzini, A.; Santarelli, M.; Maréchal, F.

    2015-10-01

    This article investigates the techno-economic performance of large integrated biogas Solid Oxide Fuel Cell (SOFC) power plants. Both atmospheric and pressurized operation are analysed, with CO2 either vented or captured. The SOFC module produces a constant electrical power of 1 MWe. Sensitivity analysis and multi-objective optimization are the mathematical tools used to investigate the effects of Fuel Utilization (FU), SOFC operating temperature and pressure on the plant energy and economic performance. FU is the design variable that most affects plant performance. Pressurized SOFC operation hybridized with a gas turbine provides a notable boost in electrical efficiency. For most of the proposed plant configurations, the electrical efficiency lies in the range 50-62% (LHV biogas) when a trade-off between energy and economic performance is applied, based on Pareto charts obtained from multi-objective plant optimization. The hybrid SOFC is potentially able to reach an efficiency above 70% when FU is 90%. Carbon capture entails a penalty of more than 10 percentage points in pressurized configurations, mainly due to the extra energy burdens of CO2 pressurization and oxygen production, and to the separate handling of the anode and cathode exhausts and the associated power recovery.

  5. Combustion characteristics of an SI engine fueled with biogas fuel

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Long, Wuqiang; Song, Peng

    2017-04-01

    An experimental study of the effect of H2 substitution and CO2 dilution on CH4 combustion was carried out on a spark ignition engine. The results show that H2 addition could improve BMEP and thermal efficiency and reduce CO and THC emissions. NOX emissions increased, owing to the higher lower heating value (LHV) of H2 compared with CH4. CO2 dilution could effectively reduce the NOX emissions of H2-CH4 combustion. Although engine performance, thermal efficiency and exhaust emissions become unacceptable under high fuel dilution ratio (F.D.R.) conditions, this can be addressed by decreasing the F.D.R. and/or increasing the hydrogen substitution ratio (H.S.R.).

  6. Performance of a Small Internal Combustion Engine Using N-Heptane and Iso-Octane

    DTIC Science & Technology

    2010-03-01

    evaluate the ON effects on a FUJI BF34-EI, a small 4-stroke spark ignition engine, as a preliminary step toward using military-grade JP-8 jet turbine fuel. Reported fuel properties:

    Fuel       Formula   MW      Tboil (K)   Tcrit (K)   Pcrit (MPa)   HHV (kJ/kg)   LHV (kJ/kg)
    n-Heptane  C7H16     100.20  371.60      537.70      2.62          48,456        44,926
    i-Octane   C8H18     114.22  398.40      567.50      2.40          48,275        44,791

    The carburetor is equipped with both a high-speed and a low-speed fuel jet to meter the fuel; the engine speed at which it switches from one jet to the other is unknown.

  7. GT200 getting better than 34% efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farmer, R.

    1980-01-01

    Design features are described for the GT200, a 50-Hz machine blending high-temperature advanced aircraft rotating components with a heavy-frame industrial gas turbine structure. It includes a twin-spool gas generator with a two-stage power turbine, giving a nominal performance of 85,000 kW ISO peak output with a 10,120 Btu per kW-h heat rate on LHV distillate. It is designed for base, intermediate, or peak load operation in simple or combined cycle. Stal-Laval in Sweden developed it and sold the first unit to the Swedish State Power Board in July 1977. The unit was installed at the Stallbocka Station.

  8. Development of a 100 kW plasma torch for plasma assisted combustion of low heating value fuels

    NASA Astrophysics Data System (ADS)

    Takali, S.; Fabry, F.; Rohani, V.; Cauneau, F.; Fulcheri, L.

    2014-11-01

    Most thermal power plants need an auxiliary power source to (i) heat up the boiler during start-up phases, before self-sustaining operation is reached, and (ii) sustain combustion at low load. This supplementary power is commonly provided by high-LHV fossil fuel burners, which increases operating expenses and prevents the use of anti-pollution filters. A promising alternative under development consists of high-temperature plasma-assisted AC electro-burners. In this paper, the development of a new 100 kW three-phase plasma torch with graphite electrodes is detailed. The torch operates at atmospheric pressure with air as the plasma gas and is driven by a three-phase power supply at 680 Hz. The nominal air flow rate is 60 Nm3.h-1 and the outlet gas temperature is above 2500 K. First, graphite electrode erosion by the oxidizing medium was studied; the controlling parameters were identified through a parametric set of experiments and tuned for optimal electrode lifetime. Then, a new three-phase plasma torch design was modelled and simulated on the ANSYS platform. The characteristics of the plasma flow and its interaction with the surrounding elements of the torch are detailed.

  9. Development of Residential SOFC Cogeneration System

    NASA Astrophysics Data System (ADS)

    Ono, Takashi; Miyachi, Itaru; Suzuki, Minoru; Higaki, Katsuki

    2011-06-01

    Since 2001, Kyocera has been developing a 1 kW class Solid Oxide Fuel Cell (SOFC) power generation system. We have developed the cell, stack, module and system. Since 2004, Kyocera and Osaka Gas Co., Ltd. have been developing a residential SOFC co-generation system. From 2007, we took part in the "Demonstrative Research on Solid Oxide Fuel Cells" project conducted by the New Energy Foundation (NEF). A total of 57 units of 0.7 kW class SOFC cogeneration systems were installed at residential houses. In spite of the small residential power demand, the actual electrical efficiency was about 40% (net AC, LHV), and these systems achieved a high CO2 reduction. A new joint development among Osaka Gas, Toyota Motor, Kyocera and Aisin Seiki now aims at early commercialization of the residential SOFC CHP system.

  10. Entanglement and nonlocality in multi-particle systems

    NASA Astrophysics Data System (ADS)

    Reid, Margaret D.; He, Qiong-Yi; Drummond, Peter D.

    2012-02-01

    Entanglement, the Einstein-Podolsky-Rosen (EPR) paradox and Bell's failure of local-hidden-variable (LHV) theories are three historically famous forms of "quantum nonlocality". We give experimental criteria for these three forms of nonlocality in multi-particle systems, with the aim of better understanding the transition from microscopic to macroscopic nonlocality. We examine the nonlocality of N separated spin J systems. First, we obtain multipartite Bell inequalities that address the correlation between spin values measured at each site, and then we review spin squeezing inequalities that address the degree of reduction in the variance of collective spins. The latter have been particularly useful as a tool for investigating entanglement in Bose-Einstein condensates (BEC). We present solutions for two topical quantum states: multi-qubit Greenberger-Horne-Zeilinger (GHZ) states, and the ground state of a two-well BEC.

  11. Advanced energy system program

    NASA Astrophysics Data System (ADS)

    Trester, K.

    1987-06-01

    The objectives are to design, develop, and demonstrate a natural-gas-fueled, highly recuperated, 50 kW Brayton-cycle cogeneration system for commercial, institutional, and multifamily residential applications. Recent marketing studies have shown that the Advanced Energy System (AES), with its many cost-effective features, has the potential to offer significant reductions in annual electrical and thermal energy costs to the consumer. Specific advantages of the system that result in low-cost ownership are high electrical efficiency (34 percent, LHV), low maintenance, high reliability and long life (20 years). Significant technical features include: an integral turbogenerator with a shaft-speed permanent magnet generator; a rotating assembly supported by compliant foil air bearings; a formed-tubesheet plate/fin recuperator with 91 percent effectiveness; and a bi-directional power conditioner to utilize the generator for system startup. The planned introduction of catalytic combustion will further enhance the economic and ecological attractiveness.

  12. Natural Gas and Cellulosic Biomass: A Clean Fuel Combination? Determining the Natural Gas Blending Wall in Biofuel Production.

    PubMed

    M Wright, Mark; Seifkar, Navid; Green, William H; Román-Leshkov, Yuriy

    2015-07-07

    Natural gas has the potential to increase biofuel production output by combining gas- and biomass-to-liquids (GBTL) processes, followed by naphtha and diesel fuel synthesis via Fischer-Tropsch (FT). This study evaluates commercial-ready configurations of GBTL technologies and the environmental impact of enhancing biofuels with natural gas. The autothermal and steam-methane reforming processes for natural gas conversion and the gasification of biomass for FT fuel synthesis are modeled to estimate system well-to-wheel emissions and compare them to limits established by U.S. renewable fuel mandates. We show that natural gas can enhance FT biofuel production by reducing the need for water-gas shift (WGS) of biomass-derived syngas to achieve appropriate H2/CO ratios. Specifically, fuel yields are increased from less than 60 gallons per ton to over 100 gallons per ton with increasing natural gas input. However, GBTL facilities would need to limit natural gas use to less than 19.1% on an LHV energy basis (7.83 wt%) to avoid exceeding the emissions limits established by the Renewable Fuel Standard (RFS2) for clean, advanced biofuels. This effectively constitutes a blending limit that constrains the use of natural gas for enhancing the biomass-to-liquids (BTL) process.
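
    The 19.1% LHV-basis and 7.83 wt% figures are two expressions of the same blending limit, related through the fuels' heating values. A minimal consistency check under assumed handbook LHVs (the 50 and 18 MJ/kg values are illustrative, not those used in the study):

```python
# Minimal sketch: relating the natural-gas share on an LHV energy basis to
# its mass fraction. The LHVs are assumed handbook values, not the exact
# figures used in the study.
LHV_NG = 50.0       # MJ/kg, natural gas (approx.)
LHV_BIOMASS = 18.0  # MJ/kg, dry lignocellulosic biomass (approx.)

def energy_fraction(w_ng):
    """LHV-energy share of natural gas for mass fraction w_ng (0..1)."""
    e_ng = w_ng * LHV_NG
    e_bio = (1.0 - w_ng) * LHV_BIOMASS
    return e_ng / (e_ng + e_bio)

print(energy_fraction(0.0783))
# -> ~0.191, reproducing the 19.1% LHV-basis / 7.83 wt% pairing
```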

  13. Application of biomass pyrolytic polygeneration technology using retort reactors.

    PubMed

    Yang, Haiping; Liu, Biao; Chen, Yingquan; Chen, Wei; Yang, Qing; Chen, Hanping

    2016-01-01

    To describe the application status and illustrate the utilisation potential of biomass pyrolytic polygeneration using retort reactors, the properties of the major products and the economic viability of commercial factories were investigated. The capacity of one factory was about 3000 t of biomass per year, which was converted into 1000 t of charcoal, 950,000 Nm(3) of biogas, 270 t of woody tar, and 950 t of woody vinegar. Charcoal and fuel gas had LHVs of 31 MJ/kg and 12 MJ/m(3), respectively, indicating their potential for use as commercial fuels. The woody tar was rich in phenols, while the woody vinegar contained large quantities of water and acetic acid. The economic analysis showed that a factory using this technology could be profitable, and the initial investment could be recouped over the factory lifetime. This technology offers a promising means of converting abundant agricultural biomass into high-value products. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Lassa fever or lassa hemorrhagic fever risk to humans from rodent-borne zoonoses.

    PubMed

    El-Bahnasawy, Mamdouh M; Megahed, Laila Abdel-Mawla; Abdalla Saleh, Hala Ahmed; Morsy, Tosson A

    2015-04-01

    Viral hemorrhagic fevers (VHFs) typically manifest as rapidly progressing acute febrile syndromes with profound hemorrhagic manifestations and very high fatality rates. Lassa fever is an acute hemorrhagic fever characterized by fever, muscle aches, sore throat, nausea, vomiting, diarrhea, and chest and abdominal pain. Rodents are important reservoirs of rodent-borne zoonoses worldwide. Transmission from rodents to humans occurs by aerosol spread, either from the excreta of rodents of the genus Mastomys (the multimammate rat) or through close contact with infected patients (nosocomial infection). Other rodents of the genera Rattus, Mus, Lemniscomys, and Praomys are incriminated as rodent hosts. One may ask whether the rodents' ectoparasites play a role in Lassa virus zoonotic transmission. This paper summarizes current knowledge on Lassa hemorrhagic fever (LHV), in the hope that it will be useful to clinicians, nursing staff and laboratory personnel, as well as those concerned with zoonoses from rodents and with rodent control.

  15. Space Radar Image of the Lost City of Ubar

    NASA Image and Video Library

    1999-01-27

    This is a radar image of the region around the site of the lost city of Ubar in southern Oman, on the Arabian Peninsula. The ancient city was discovered in 1992 with the aid of remote sensing data. Archeologists believe Ubar existed from about 2800 B.C. to about 300 A.D. and was a remote desert outpost where caravans were assembled for the transport of frankincense across the desert. This image was acquired on orbit 65 of space shuttle Endeavour on April 13, 1994 by the Spaceborne Imaging Radar C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR). The SIR-C image shown is centered at 18.4 degrees north latitude and 53.6 degrees east longitude. The image covers an area about 50 by 100 kilometers (31 miles by 62 miles). The image is constructed from three of the available SIR-C channels and displays L-band, HH (horizontal transmit and receive) data as red, C-band HH as blue, and L-band HV (horizontal transmit, vertical receive) as green. The prominent magenta colored area is a region of large sand dunes, which are bright reflectors at both L- and C-band. The prominent green areas (L-HV) are rough limestone rocks, which form a rocky desert floor. A major wadi, or dry stream bed, runs across the middle of the image and is shown largely in white due to strong radar scattering in all channels displayed (L and C HH, L-HV). The actual site of the fortress of the lost city of Ubar, currently under excavation, is near the wadi close to the center of the image. The fortress is too small to be detected in this image. However, tracks leading to the site, and surrounding tracks, appear as prominent, but diffuse, reddish streaks. These tracks have been used in modern times, but field investigations show many of these tracks were in use in ancient times as well. Mapping of these tracks on regional remote sensing images was a key to recognizing the site as Ubar in 1992. This image, and ongoing field investigations, will help shed light on a little-known early civilization. http://photojournal.jpl.nasa.gov/catalog/PIA01721

  16. Santa Clara County Planar Solid Oxide Fuel Cell Demonstration Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fred Mitlitsky; Sara Mulhauser; David Chien

    2009-11-14

    The Santa Clara County Planar Solid Oxide Fuel Cell (PSOFC) project demonstrated the technical viability of pre-commercial PSOFC technology at the County 911 Communications headquarters, as well as the input fuel flexibility of the PSOFC. PSOFC operation was demonstrated on natural gas and denatured ethanol. The Santa Clara County Planar Solid Oxide Fuel Cell (PSOFC) project goals were to acquire, site, and demonstrate the technical viability of a pre-commercial PSOFC technology at the County 911 Communications headquarters. Additional goals included educating local permit approval authorities, and other governmental entities about PSOFC technology, existing fuel cell standards and specific code requirements. The project demonstrated the Bloom Energy (BE) PSOFC technology in grid parallel mode, delivering a minimum 15 kW over 8760 operational hours. The PSOFC system demonstrated greater than 81% electricity availability and 41% electrical efficiency (LHV net AC), providing reliable, stable power to a critical, sensitive 911 communications system that serves geographical boundaries of the entire Santa Clara County. The project also demonstrated input fuel flexibility. BE developed and demonstrated the capability to run its prototype PSOFC system on ethanol. BE designed the hardware necessary to deliver ethanol into its existing PSOFC system. Operational parameters were determined for running the system on ethanol, natural gas (NG), and a combination of both. Required modeling was performed to determine viable operational regimes and regimes where coking could occur.

  17. Optimal design and operation of solid oxide fuel cell systems for small-scale stationary applications

    NASA Astrophysics Data System (ADS)

    Braun, Robert Joseph

    The advent of maturing fuel cell technologies presents an opportunity to achieve significant improvements in energy conversion efficiencies at many scales, thereby simultaneously extending our finite resources and reducing "harmful" energy-related emissions to levels well below that of near-future regulatory standards. However, before realization of the advantages of fuel cells can take place, systems-level design issues regarding their application must be addressed. Using modeling and simulation, the present work offers optimal system design and operation strategies for stationary solid oxide fuel cell systems applied to single-family detached dwellings. A one-dimensional, steady-state finite-difference model of a solid oxide fuel cell (SOFC) is generated and verified against other mathematical SOFC models in the literature. Fuel cell system balance-of-plant components and costs are also modeled and used to provide an estimate of system capital and life cycle costs. The models are used to evaluate optimal cell-stack power output, the impact of cell operating and design parameters, fuel type, thermal energy recovery, system process design, and operating strategy on overall system energetic and economic performance. Optimal cell design voltage, fuel utilization, and operating temperature parameters are found using minimization of the life cycle costs. System design evaluations reveal that hydrogen-fueled SOFC systems demonstrate lower system efficiencies than methane-fueled systems. The use of recycled cell exhaust gases in process design in the stack periphery is found to produce the highest system electric and cogeneration efficiencies while achieving the lowest capital costs. Annual simulations reveal that efficiencies of 45% electric (LHV basis), 85% cogenerative, and simple economic paybacks of 5-8 years are feasible for 1-2 kW SOFC systems in residential-scale applications. Design guidelines that offer additional suggestions related to fuel cell-stack sizing and operating strategy (base-load or load-following and cogeneration or electric-only) are also presented.

  18. Development of the hybrid sulfur cycle for use with concentrated solar heat. I. Conceptual design

    DOE PAGES

    Gorensek, Maximilian B.; Corgnale, Claudio; Summers, William A.

    2017-07-27

    We propose a detailed conceptual design of a solar hybrid sulfur (HyS) cycle. Numerous design tradeoffs, including process operating conditions and strategies, methods of integration with solar energy sources, and solar design options were considered. A baseline design was selected, and process flowsheets were developed. Pinch analyses were performed to establish the limiting energy efficiency. Detailed material and energy balances were completed, and a full stream table prepared. Design assumptions include: a location in the southwest US desert, a falling-particle concentrated solar receiver, indirect heat transfer via pressurized helium, continuous operation with thermal energy storage, a liquid-fed electrolyzer with PBI membrane, and a bayonet-type acid decomposer. Thermochemical cycle efficiency for the HyS process was estimated to be 35.0%, LHV basis. The solar-to-hydrogen (STH) energy conversion ratio was 16.9%. This exceeds the Year 2015 DOE STCH target of STH >10%, and shows promise for meeting the Year 2020 target of 20%.

  19. High quality fuel gas from biomass pyrolysis with calcium oxide.

    PubMed

    Zhao, Baofeng; Zhang, Xiaodong; Chen, Lei; Sun, Laizhi; Si, Hongyu; Chen, Guanyi

    2014-03-01

    The removal of CO2 and tar from fuel gas produced by biomass thermal conversion has attracted increasing attention due to their adverse effects on subsequent fuel gas applications. High quality fuel gas production from sawdust pyrolysis with CaO was studied in this paper. The results of pyrolysis-gas chromatography/mass spectrometry (Py-GC/MS) experiments indicate that the mass ratio of CaO to sawdust (Ca/S) remarkably affects the behavior of sawdust pyrolysis. On the basis of the Py-GC/MS results, a system consisting of a moving bed pyrolyzer coupled with a fluidized bed combustor was developed to produce high quality fuel gas. The lower heating value (LHV) of the fuel gas was above 16 MJ/Nm(3) and the tar content was under 50 mg/Nm(3), making the gas suitable for gas turbine application to generate electricity and heat. Therefore, this technology may be a promising route to high quality fuel gas for biomass utilization. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Successful living donor liver transplant in a child with Abernethy malformation with biliary atresia, ventricular septal defect and intrapulmonary shunting.

    PubMed

    Singhal, Ashish; Srivastava, Ajitabh; Goyal, Neerav; Vij, Vivek; Wadhawan, Manav; Bera, Motilal; Gupta, Subash

    2009-12-01

    Congenital portosystemic shunts are anomalies in which the mesenteric venous drainage bypasses the liver and drains directly into the systemic circulation. This is a report of a rare case of LDLT in a four-yr-old male child suffering from biliary atresia (after a failed Kasai procedure) associated with (i) a large congenital CEPSh from the spleno-mesenteric confluence to the LHV, (ii) intrapulmonary shunts, and (iii) a perimembranous VSD. The left lobe graft was procured from the mother of the child. The recipient IVC and the shunt vessel were preserved during the hepatectomy, and the caval and shunt clamping times were remarkably short while performing the HV and portal anastomoses. The post-operative course was uneventful; the intrapulmonary shunts regressed within three months after transplantation, and currently, 18 months following transplant, the child is doing well with normal liver function. CEPSh is discussed extensively, and all published cases of liver transplantation for CEPSh are reviewed.

  1. Development of the hybrid sulfur cycle for use with concentrated solar heat. I. Conceptual design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorensek, Maximilian B.; Corgnale, Claudio; Summers, William A.

    We propose a detailed conceptual design of a solar hybrid sulfur (HyS) cycle. Numerous design tradeoffs, including process operating conditions and strategies, methods of integration with solar energy sources, and solar design options were considered. A baseline design was selected, and process flowsheets were developed. Pinch analyses were performed to establish the limiting energy efficiency. Detailed material and energy balances were completed, and a full stream table prepared. Design assumptions include: a location in the southwest US desert, a falling-particle concentrated solar receiver, indirect heat transfer via pressurized helium, continuous operation with thermal energy storage, a liquid-fed electrolyzer with PBI membrane, and a bayonet-type acid decomposer. Thermochemical cycle efficiency for the HyS process was estimated to be 35.0%, LHV basis. The solar-to-hydrogen (STH) energy conversion ratio was 16.9%. This exceeds the Year 2015 DOE STCH target of STH >10%, and shows promise for meeting the Year 2020 target of 20%.

  2. Beneficial synergetic effect on gas production during co-pyrolysis of sewage sludge and biomass in a vacuum reactor.

    PubMed

    Zhang, Weijiang; Yuan, Chengyong; Xu, Jiao; Yang, Xiao

    2015-05-01

    A vacuum fixed bed reactor was used to pyrolyze sewage sludge, biomass (rice husk) and their blend at high temperature (900°C). Pyrolytic products were kept in the vacuum reactor during the whole pyrolysis process, guaranteeing a long contact time (more than 2 h) for their interactions. A remarkable synergetic effect on gas production was observed: the gas yield of the blend fuel was evidently higher than that of both parent fuels, and the syngas (CO and H2) content and gas lower heating value (LHV) were clearly improved as well. It is highly likely that the sewage sludge provided additional CO2 and H2O during co-pyrolysis, promoting intense CO2-char and H2O-char gasification, which increased the gas yield and lower heating value. This beneficial synergetic effect makes the method a feasible one for gas production. Copyright © 2015. Published by Elsevier Ltd.
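
    Synergy in such co-pyrolysis studies is commonly quantified as the deviation of the blend's measured yield from the mass-weighted average of the parent fuels. A minimal sketch of that comparison (all yield numbers are placeholders, not data from the paper):

```python
# Minimal sketch: synergy as the deviation of the blend's measured gas
# yield from the additive, mass-weighted expectation. All yield numbers
# are placeholders, not data from the paper.
def synergy(y_blend, y_sludge, y_biomass, w_sludge):
    """Relative deviation from additivity (positive = synergetic effect)."""
    y_expected = w_sludge * y_sludge + (1.0 - w_sludge) * y_biomass
    return (y_blend - y_expected) / y_expected

# A 50/50 blend whose measured yield (Nm3/kg) exceeds both parent fuels:
print(synergy(y_blend=1.30, y_sludge=0.90, y_biomass=1.05, w_sludge=0.5))
# -> ~0.33, i.e. a 33% positive deviation from the additive prediction
```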

  3. Potential method for gas production: high temperature co-pyrolysis of lignite and sewage sludge with vacuum reactor and long contact time.

    PubMed

    Yang, Xiao; Yuan, Chengyong; Xu, Jiao; Zhang, Weijiang

    2015-03-01

    Lignite and sewage sludge were co-pyrolyzed in a vacuum reactor at high temperature (900°C) with a long contact time (more than 2 h). A beneficial synergetic effect on gas yield was clearly observed: the gas yield of the blend fuel was evidently higher than that of both parent fuels. The gas volume yield, gas lower heating value (LHV), fixed carbon conversion and H2/CO ratio were 1.42 Nm(3)/kg(blend fuel), 10.57 MJ/Nm(3), 96.64% and 0.88, respectively, indicating that this new method is feasible for gas production. It is likely that the sewage sludge acted as a provider of gasification agents (CO2 and H2O) and catalysts (alkali and alkaline earth metals) during co-pyrolysis, promoting CO2-char and H2O-char gasification, which improved the gas volume yield, gas lower heating value and fixed carbon conversion. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. The RDF/SRF torrefaction: An effect of temperature on characterization of the product - Carbonized Refuse Derived Fuel.

    PubMed

    Białowiec, Andrzej; Pulka, Jakub; Stępień, Paweł; Manczarski, Piotr; Gołaszewski, Janusz

    2017-12-01

    The influence of Refuse Derived Fuel (RDF)/Solid Recovered Fuel (SRF) torrefaction temperature on the product characteristics was investigated. RDF/SRF thermal treatment experiments were conducted with a 1-h residence time at temperatures of 200, 220, 240, 260, 280 and 300°C. Sawdust was used as a reference material. The following parameters of the torrefaction char from sawdust and of the Carbonized Refuse Derived Fuel (CRDF) from RDF/SRF were measured: moisture, calorific value, ash content, volatile compounds and sulfur content. Sawdust biochar was confirmed to be a good quality solid fuel, owing to a significant improvement of its fuel properties. The study also indicated that RDF torrefaction reduced the moisture significantly, from 22.9% to 1.4%, and therefore increased the lower heating value (LHV) from 19.6 to 25.3 MJ/kg. The results suggest that RDF torrefaction may be a good method for increasing the attractiveness of RDF as an energy source, and it could help unify RDF properties on the market. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Torrefaction of agriculture straws and its application on biomass pyrolysis poly-generation.

    PubMed

    Chen, Yingquan; Yang, Haiping; Yang, Qing; Hao, Hongmeng; Zhu, Bo; Chen, Hanping

    2014-03-01

    This study investigated the properties of corn stalk and cotton stalk after torrefaction, and the effects of torrefaction on the product properties obtained under the optimal conditions of biomass pyrolysis polygeneration. The color of the torrefied biomass chars darkened, and the grindability improved, with finer particles formed and grinding energy consumption reduced. The moisture and oxygen content decreased significantly, whereas the carbon content increased considerably. Torrefaction was found to have different effects on the char, liquid oil and biogas from biomass pyrolysis polygeneration. Compared with the raw straws, the output of chars from pyrolysis of torrefied straws increased while the quality of the chars as a solid fuel showed no significant change, and the output of liquid oil and biogas decreased. The liquid oil contained more concentrated phenols, with a water content below 40 wt.%, and the biogas contained more concentrated H2 and CH4, with an LHV of up to 15 MJ/Nm(3). Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. The catalytic pyrolysis of food waste by microwave heating.

    PubMed

    Liu, Haili; Ma, Xiaoqian; Li, Longjun; Hu, ZhiFeng; Guo, Pingsheng; Jiang, Yuhui

    2014-08-01

    This study describes a series of experiments that tested the use of microwave pyrolysis for treating food waste. Characteristics including the temperature rise and the three-phase products were analyzed at different microwave power levels, after adding 5% (mass basis) metal oxides and chloride salts to the food waste. The results indicated that the metal oxides MgO, Fe₂O₃ and MnO₂ and the chloride salts CuCl₂ and NaCl can lower the yield of bio-oil and enhance the yield of gas. Meanwhile, the metal oxides MgO and MnO₂ can also lower the low heating value (LHV) of the solid residues and increase the pH values of the lower-layer bio-oils, whereas the chloride salts CuCl₂ and NaCl had the opposite effects. The optimal microwave power for treating food waste was 400 W; among the tested catalysts, CuCl₂ was the best and had the largest energy ratio of production to consumption (ERPC), followed by MnO₂. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Updraft gasification of poultry litter at farm-scale--A case study.

    PubMed

    Taupe, N C; Lynch, D; Wnetrzak, R; Kwapinska, M; Kwapinski, W; Leahy, J J

    2016-04-01

    Farm and animal wastes are increasingly being investigated for thermochemical conversion, such as gasification, due to the urgent necessity of finding new waste treatment options. We report on an investigation of a farm-scale, auto-thermal gasification system for the production of a heating gas using poultry litter (PL) as the feedstock. The gasification process was robust and reliable. The PL's ash melting temperature was 639°C; therefore, the reactor temperature was kept around this value. As a result of the low reactor temperature, the process performance parameters were low, with a cold gas efficiency (CGE) of 0.26 and a carbon conversion efficiency (CCE) of 0.44. The calorific value of the clean product gas was 3.39 MJ m(-3)N (LHV). The tar was collected as an emulsion containing 87 wt.% water, and the extracted organic compounds were identified. The residual char exceeds the Zn and Cu thresholds for European biochar certification; however, it has the potential to be classified as a pyrogenic carbonaceous material (PCM), which resembles a high-nutrient biochar. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Experimental investigation on an entrained flow type biomass gasification system using coconut coir dust as powdery biomass feedstock.

    PubMed

    Senapati, P K; Behera, S

    2012-08-01

    Based on an entrained flow concept, a prototype atmospheric gasification system has been designed and developed in the laboratory for gasification of powdery biomass feedstocks such as rice husk, coconut coir dust and sawdust. The reactor was developed by adopting an L/D (height-to-diameter) ratio of 10, a residence time of about 2 s and a turndown ratio (TDR) of 1.5. The experimental investigation was carried out using coconut coir dust as the biomass feedstock with a mean operating feed rate of 40 kg/h. The effects of equivalence ratio in the range of 0.21-0.3, steam feed at a fixed flow rate of 12 kg/h, and preheating on reactor temperature, product gas yield and tar content were investigated. The gasifier was able to attain high temperatures in the range of 976-1100 °C, with a gas lower heating value (LHV) and peak cold gas efficiency (CGE) of 7.86 MJ/Nm3 and 87.6%, respectively. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Biomass waste gasification - Can be the two stage process suitable for tar reduction and power generation?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sulc, Jindrich; Stojdl, Jiri; Richter, Miroslav

    2012-04-15

    Highlights: Comparison of one-stage (co-current) and two-stage gasification of wood pellets. Original arrangement with grate-less reactor and upward-moving bed of the pellets. Two-stage gasification leads to a drastic reduction of the tar content in the gas. One-stage gasification produces gas with higher LHV at lower overall ER. The ammonia content of the gas is lower in two-stage moving bed gasification. - Abstract: A pilot-scale gasification unit with a novel co-current, updraft arrangement in the first stage and a counter-current downdraft in the second stage was developed and exploited for studying the effects of two-stage gasification, in comparison with one-stage gasification of biomass (wood pellets), on fuel gas composition and attainable gas purity. Significant producer gas parameters (gas composition, heating value, content of tar compounds, content of inorganic gas impurities) were compared for the two-stage and the one-stage gasification arrangement with only the upward-moving bed (co-current updraft). The main novel features of the gasifier conception include a grate-less reactor, an upward-moving bed of biomass particles (e.g. pellets) by means of a screw elevator with changeable rotational speed, and a gradually expanding diameter of the cylindrical reactor in the part above the upper end of the screw. The gasifier concept and arrangement are considered convenient for a thermal power range of 100-350 kW(th). The second stage of the gasifier served mainly for tar compound destruction/reforming at increased temperature (around 950 °C) and for the gasification reaction of the fuel gas with char. The second stage used additional combustion of the fuel gas with preheated secondary air to attain a higher temperature and faster gasification of the remaining char from the first stage. The measurements of gas composition and tar compound contents confirmed the superiority of the two-stage gasification system, with a drastic decrease of aromatic compounds with two or more benzene rings by 1-2 orders of magnitude. On the other hand, the two-stage gasification (with overall ER = 0.71) led to a substantial reduction of the gas heating value (LHV = 3.15 MJ/Nm3), an elevation of the gas volume and an increase of the nitrogen content in the fuel gas. The increased temperature (>950 °C) at the entrance to the char bed also caused a substantial decrease of the ammonia content in the fuel gas. The char, with a higher content of ash, leaving the second stage represented only a few mass% of the inlet biomass stream.

  10. Off-Design Performance Analysis of a Solid-Oxide Fuel Cell/Gas Turbine Hybrid for Auxiliary Aerospace Power

    NASA Technical Reports Server (NTRS)

    Freeh, Joshua E.; Steffen, J., Jr.; Larosiliere, Louis M.

    2005-01-01

    A solid-oxide fuel cell/gas turbine hybrid system for auxiliary aerospace power is analyzed using 0-D and 1-D system-level models. The system is designed to produce 440 kW of net electrical power, sized for a typical long-range 300-passenger civil airplane, at both sea level and cruise flight level (12,500 m). In addition, a part power level of 250 kW is analyzed at the cruise condition, a requirement of the operating power profile. The challenge of creating a balanced system for the three distinct conditions is presented, along with the compromises necessary for each case. A parametric analysis is described for the cruise part power operating point, in which the system efficiency is maximized by varying the air flow rate. The system is compared to an earlier version that was designed solely for cruise operation. The results show that it is necessary to size the turbomachinery, fuel cell, and heat exchangers at sea level full power rather than cruise full power. The resulting estimated mass of the system is 1912 kg, which is significantly higher than the original cruise design point mass, 1396 kg. The net thermal efficiencies with respect to the fuel LHV are calculated to be 42.4 percent at sea level full power, 72.6 percent at cruise full power, and 72.8 percent at cruise part power. The cruise conditions take advantage of pre-compressed air from the on-board Environmental Control System, which accounts for a portion of the unusually high thermal efficiency at those conditions. These results show that it is necessary to include several operating points in the overall assessment of an aircraft power system due to the variations throughout the operating profile.
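
    The efficiency figures quoted here follow the usual definition of net thermal efficiency referenced to the fuel's LHV. A minimal sketch (the jet-fuel LHV and the fuel flow rate are assumed for illustration, not taken from the report):

```python
# Minimal sketch: net thermal efficiency referenced to fuel LHV, the
# definition behind the 42.4/72.6/72.8 percent figures. The jet-fuel LHV
# and the fuel flow rate are assumed for illustration only.
LHV_JET_FUEL = 43.2e6  # J/kg (approx.)

def net_thermal_efficiency(p_net_w, mdot_fuel_kg_s):
    """Net electrical output divided by fuel chemical power (LHV basis)."""
    return p_net_w / (mdot_fuel_kg_s * LHV_JET_FUEL)

# 440 kW net at an assumed fuel flow of 24 g/s:
print(net_thermal_efficiency(440e3, 0.024))  # ~0.42, i.e. ~42%
```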

  11. Energy from poultry waste: An Aspen Plus-based approach to the thermo-chemical processes.

    PubMed

    Cavalaglio, Gianluca; Coccia, Valentina; Cotana, Franco; Gelosia, Mattia; Nicolini, Andrea; Petrozzi, Alessandro

    2018-03-01

    A particular approach to the energy conversion of a residual waste material was tested during the implementation of the nationally funded Enerpoll project. The project is a case study developed on the estate of a poultry farm located in a rural area of central Italy (Umbria Region); this farm was chosen for the research project since it is representative of many similar small-sized breeding operations in the Italian regional context. The purpose of the case study was the disposal of a waste material (i.e. poultry manure) and its energy recovery, a task in agreement with the main objectives of the new Energy Union policy. Against this background, an innovative gasification plant (300 kW thermal power) was chosen and installed for the experimentation. The novelty of the investigated technology is that thermal energy is produced by burning only the syngas, not the solid residues directly. This makes it possible to reduce the quantity of nitrogen released into the atmosphere with the exhaust flue gases by conveying it into the solid residues (ashes). A critical aspect of the research program was the optimization of the pretreatment (reduction of the water content) and the dimensional homogenization of the poultry waste before its energy recovery; this physical pretreatment reduced the complexity of the matrix to be energetically converted. In addition to the monitoring of the real-scale plant, a complete Aspen Plus v8.0 model was developed to predict the quality of the produced synthesis gas as a function of both the gasification temperature and the equivalence ratio (ER). The model is an idealized flowsheet using only the homogenized and dried material as input. On the basis of the monitored thermal power (about 200 kW, averaged over an hour), the model was used to estimate the syngas energy content (i.e. LHV), which was in the range of 3-5 MJ/m3 for an equivalence ratio (ER) of 0.2. Copyright © 2017 Elsevier Ltd. All rights reserved.
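
    The equivalence ratio that drives such gasification models is the air actually supplied relative to the stoichiometric requirement for complete combustion. A minimal sketch (the stoichiometric air demand for dried poultry manure is an assumed placeholder):

```python
# Minimal sketch: the equivalence ratio (ER) used in gasification
# modelling, i.e. air supplied over the stoichiometric air requirement.
# The stoichiometric demand for dried poultry manure is a placeholder.
def equivalence_ratio(m_air, m_fuel, stoich_air_per_fuel):
    """ER = (air/fuel supplied) / (air/fuel for complete combustion)."""
    return (m_air / m_fuel) / stoich_air_per_fuel

# e.g. 1.1 kg air per kg dried manure against ~5.5 kg/kg stoichiometric:
print(equivalence_ratio(m_air=1.1, m_fuel=1.0, stoich_air_per_fuel=5.5))
# -> 0.2, the ER at which the model predicts a syngas LHV of 3-5 MJ/m3
```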

  12. Tar-free fuel gas production from high temperature pyrolysis of sewage sludge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Leguan; Xiao, Bo; Hu, Zhiquan

    2014-01-15

    Highlights: • High temperature pyrolysis of sewage sludge was efficient for producing tar-free fuel gas. • Complete tar removal and volatile matter release occurred at an elevated temperature of 1300 °C. • Sewage sludge was converted to a residual solid with high ash content. • 72.60% energy conversion efficiency for gas production in high temperature pyrolysis. • Investment and costs for tar cleaning were reduced. - Abstract: Pyrolysis of sewage sludge was studied in a free-fall reactor at 1000-1400 °C. The results showed that the volatile matter in the sludge could be completely released to the gaseous product at 1300 °C. The high temperature favored H2 and CO in the produced gas. However, the low heating value (LHV) of the gas decreased from 15.68 MJ/Nm3 to 9.10 MJ/Nm3 with temperature increasing from 1000 °C to 1400 °C. The obtained residual solid was characterized by a high ash content. The energy balance indicated that most of the heating value in the sludge ended up in the gaseous product.

  13. Elementary analysis and energetic potential of the municipal sewage sludges from the Gdańsk and Kościerzyna WWTPs

    NASA Astrophysics Data System (ADS)

    Ostojski, Arkadiusz

    2018-01-01

    This paper presents the elemental analysis and energetic potential of municipal sewage sludge (MSS), based on measurement of the heat of combustion (higher heating value, HHV) and calculation of the calorific value (lower heating value, LHV). The analysis takes into consideration the water content of the sewage sludge at different utilization stages in the wastewater treatment plants in Gdańsk Wschód and Kościerzyna (Pomeranian Voivodeship). The study yielded the following results (in % dry matter): ash 19-31%, C 31-36%, H 5-6%, N 4-6%, O 28-32%, S 1%. The calorific value of stabilized sludges in Gdańsk was on average 13.8-15 MJ/kg. In the case of the undigested sludges from the Kościerzyna WWTP, the calorific value was at the level of 17.5 MJ/kg. Sewage sludges are thus good energy carriers. The high water content, though, is a problem, as it lowers the useful heat effect. There is no alternative to thermal sewage sludge treatment, which is in conformity with the current Polish National Waste Management Plan (KPGO 2022).
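
    The LHV calculation referred to here conventionally subtracts from the measured HHV the latent heat of the water formed from fuel-bound hydrogen plus any fuel moisture. A minimal sketch of that standard correction (the input values are illustrative, not the paper's measurements):

```python
# Minimal sketch: LHV from a measured HHV by subtracting the latent heat
# of the water formed from fuel hydrogen plus the fuel's own moisture.
# Standard correction; the input values are illustrative only.
H_EVAP = 2.442  # MJ/kg water at 25 C

def lhv_from_hhv(hhv, h_frac, moisture):
    """LHV (MJ/kg, as received) from HHV, hydrogen and water fractions."""
    water_formed = 8.94 * h_frac  # kg water per kg fuel from H combustion
    return hhv - H_EVAP * (water_formed + moisture)

# Dry sludge with 5.5% hydrogen and negligible residual moisture:
print(lhv_from_hhv(hhv=16.0, h_frac=0.055, moisture=0.0))  # ~14.8 MJ/kg
```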

  14. Simulation of co-incineration of sewage sludge with municipal solid waste in a grate furnace incinerator.

    PubMed

    Lin, Hai; Ma, Xiaoqian

    2012-03-01

    Incineration is one of the most important methods for the resource recovery disposal of sewage sludge. The combustion characteristics of sewage sludge and the increasing number of municipal solid waste (MSW) incineration plants make co-incineration of sludge with MSW possible. Computational fluid dynamics (CFD) analysis was used to verify the feasibility of co-incineration of sludge with MSW and to predict its effects. In this study, wet sludge and semi-dried sludge were separately blended with MSW as mixed fuels, at co-incineration ratios of 5 wt.% (wet basis, the same below), 10 wt.%, 15 wt.%, 20 wt.% and 25 wt.%. The results indicate that co-incineration of 10 wt.% wet sludge with MSW keeps the furnace temperature, the residence time and other vital parameters at allowable levels, while 20 wt.% of semi-dried sludge can reach the same standards. With its lower moisture content and higher low heating value (LHV), semi-dried sludge is more appropriate for co-incineration with MSW in a grate furnace incinerator. Copyright © 2011 Elsevier Ltd. All rights reserved.
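
    A first-order check behind choosing a co-incineration ratio is the mass-weighted moisture and LHV of the blend. A minimal sketch (the MSW and sludge property values are assumed placeholders, not the study's inputs):

```python
# Minimal sketch: first-order moisture and LHV of an MSW/sludge blend as
# mass-weighted averages. The MSW and sludge property values are assumed
# placeholders, not the study's inputs.
def blend(prop_msw, prop_sludge, ratio_sludge):
    """Mass-weighted property for a wet-basis sludge blending ratio."""
    return (1.0 - ratio_sludge) * prop_msw + ratio_sludge * prop_sludge

moisture = blend(prop_msw=0.50, prop_sludge=0.80, ratio_sludge=0.10)
lhv = blend(prop_msw=7.0, prop_sludge=1.5, ratio_sludge=0.10)  # MJ/kg
print(moisture, lhv)  # 10 wt.% wet sludge: ~53% moisture, ~6.5 MJ/kg
```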

  15. Automobile Shredder Residues in Italy: characterization and valorization opportunities.

    PubMed

    Fiore, S; Ruffino, B; Zanetti, M C

    2012-08-01

    At the moment, Automobile Shredder Residue (ASR) is usually landfilled worldwide, but European Directive 2000/53/EC forces the development of alternative solutions, stating that 95 wt.% of an End of Life Vehicle (ELV) weight must be recovered by 2015. This work describes two industrial tests, each involving 250-300 t of ELVs, in which different pre-shredding operations were performed. The produced ASR materials underwent an extended characterization, and post-shredding processes consisting of dimensional, magnetic, electrostatic and densimetric separation phases were tested at laboratory scale, with the main purpose of enhancing ASR recovery/recycling and minimizing the landfilled fraction. The results show that accurate depollution and dismantling operations are mandatory to obtain a high quality ASR material that may be recycled/recovered and partially landfilled in accordance with current European Union regulations, with the Lower Heating Value (LHV), heavy metals content and Dissolved Organic Carbon (DOC) as critical parameters. Moreover, post-shredding technical solutions requiring minimal economic and engineering effort, and therefore realizable in common European ELV shredding plants, may lead to multi-purpose (material recovery and thermal valorization) opportunities for ASR reuse/recovery. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. Co-pyrolysis of corn cob and waste cooking oil in a fixed bed.

    PubMed

    Chen, Guanyi; Liu, Cong; Ma, Wenchao; Zhang, Xiaoxiong; Li, Yanbin; Yan, Beibei; Zhou, Weihong

    2014-08-01

    Corn cob (CC) and waste cooking oil (WCO) were co-pyrolyzed in a fixed bed. The effects of temperatures of 500 °C, 550 °C and 600 °C and CC/WCO mass ratios of 1:0, 1:0.1, 1:0.5, 1:1 and 0:1 were investigated. The results show that co-pyrolysis of CC/WCO produces more liquid and less bio-char than pyrolysis of CC alone. Bio-oil and bio-char yields were found to depend largely on temperature and CC/WCO ratio. GC/MS of the bio-oil shows that it consists of different classes and amounts of organic compounds than the oil from CC pyrolysis. A temperature of 550 °C and a CC/WCO ratio of 1:1 appear to be the optimum, considering the high bio-oil yield (68.6 wt.%) and good bio-oil properties (HHV of 32.78 MJ/kg). In this case, bio-char of 24.96 MJ/kg appears attractive as a renewable source, while gas with an LHV of 16.06 MJ/Nm(3) can be used directly in boilers as fuel. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Sewage sludge drying process integration with a waste-to-energy power plant.

    PubMed

    Bianchini, A; Bonfiglioli, L; Pellegrini, M; Saccani, C

    2015-08-01

    Dewatered sewage sludge from Waste Water Treatment Plants (WWTPs) is encountering increasing disposal problems. Several solutions regarding energy and materials recovery from sewage sludge have been proposed in recent years. Current technological solutions have significant limits, as dewatered sewage sludge is characterized by a high water content (70-75% by weight), even if mechanically treated. A Refuse Derived Fuel (RDF) with good thermal characteristics in terms of Lower Heating Value (LHV) can be obtained if dewatered sludge is further processed, for example by a thermal drying stage. Sewage sludge thermal drying is not sustainable if the power is supplied from primary energy sources, but it can be appealing if waste heat recovered from other processes is used. A suitable integration can be realized between a WWTP and a waste-to-energy (WTE) power plant through the recovery of WTE waste heat as an energy source for sewage sludge drying. In this paper, the properties of sewage sludge from three different WWTPs are studied. On the basis of the results obtained, a facility for the integration of sewage sludge drying within a WTE power plant is developed. Furthermore, energy and mass balances are set up in order to evaluate the benefits brought by the described integration. Copyright © 2015 Elsevier Ltd. All rights reserved.
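
    The sizing question behind such an integration is how much recovered WTE heat the drying stage demands. A minimal sketch of the evaporation load (the sludge quantities are assumed placeholders; the specific and latent heats are standard values):

```python
# Minimal sketch: heat demand for drying dewatered sludge, the load that
# recovered WTE waste heat must cover. The sludge quantities are assumed
# placeholders; specific and latent heats are standard values.
C_WATER = 4.19e-3  # MJ/(kg K)
H_EVAP = 2.26      # MJ/kg at ~100 C

def drying_heat(m_sludge, w_in, w_out, dT=80.0):
    """Heat (MJ) to dry m_sludge kg from moisture w_in to w_out (wet basis)."""
    m_dry = m_sludge * (1.0 - w_in)  # dry matter is conserved
    m_out = m_dry / (1.0 - w_out)    # wet mass after drying
    m_evap = m_sludge - m_out        # water evaporated
    return m_evap * (C_WATER * dT + H_EVAP)

# Drying 1 t of sludge from 75% to 10% moisture:
print(drying_heat(1000.0, 0.75, 0.10))  # ~1.9e3 MJ, ~2.6 MJ per kg water
```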

  18. Production of charcoal briquettes from biomass for community use

    NASA Astrophysics Data System (ADS)

    Suttibak, S.; Loengbudnark, W.

    2018-01-01

    This article reports on a study of the production of charcoal briquettes from biomass for community use. The charcoal briquettes were manufactured using a briquette machine with a screw compressor. The aim of this research was to investigate the effects of biomass type on the properties and performance of the charcoal briquettes. The biomass samples used in this work were sugarcane bagasse (SB), cassava rhizomes (CR) and water hyacinth (WH) harvested in Udon Thani, Thailand. The char from the biomass samples was produced in a 200-liter biomass incinerator. The resulting charcoal briquettes were characterized by measuring their properties and performance, including moisture content, volatile matter, fixed carbon and ash contents, elemental composition, heating value, density, compressive strength and extinguishing time. The results showed that the charcoal briquettes from CR had more favorable properties and performance than those from either SB or WH. The lower heating values (LHV) of the charcoal briquettes from SB, CR and WH were 26.67, 26.84 and 16.76 MJ/kg, respectively. The compressive strengths of the charcoal briquettes from SB, CR and WH were 54.74, 80.84 and 40.99 kg/cm2, respectively. The results of this research can contribute to the promotion and development of cost-effective uses of agricultural residues. Additionally, it can assist communities in achieving sustainable self-sufficiency, in line with the late King Bhumibol's sufficiency economy philosophy.

  19. Predicting the ultimate potential of natural gas SOFC power cycles with CO2 capture - Part B: Applications

    NASA Astrophysics Data System (ADS)

    Campanari, Stefano; Mastropasqua, Luca; Gazzani, Matteo; Chiesa, Paolo; Romano, Matteo C.

    2016-09-01

    An important advantage of solid oxide fuel cells (SOFC) as future systems for large-scale power generation is the possibility of efficient integration with processes for CO2 capture. Focusing on natural gas power generation, Part A of this work assessed the performance of advanced pressurised and atmospheric plant configurations (SOFC + GT and SOFC + ST, with fuel cell integration within a gas turbine or a steam turbine cycle) without CO2 separation. This Part B paper investigates such power cycles when applied to CO2 capture, proposing two ultra-high-efficiency plant configurations based on advanced intermediate-temperature SOFCs with internal reforming and a low-temperature CO2 separation process. The power plants are simulated at the 100 MW scale with a set of realistic assumptions about FC performance, main components and auxiliaries, and show the capability of exceeding 70% LHV efficiency with high CO2 capture (above 80%) and a low specific primary energy consumption for the CO2 avoided (1.1-2.4 MJ kg-1). Detailed results are presented in terms of energy and material balances, and a sensitivity analysis of plant performance vs. FC voltage and fuel utilisation is developed to investigate possible long-term improvements. Options for further improvement of the CO2 capture efficiency are also addressed.

  20. Space Radar Image of Manaus, Brazil

    NASA Image and Video Library

    1999-01-27

    These two images were created using data from the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR). On the left is a false-color image of Manaus, Brazil acquired April 12, 1994, onboard space shuttle Endeavour. In the center of this image is the Solimoes River just west of Manaus before it combines with the Rio Negro to form the Amazon River. The scene is around 8 by 8 kilometers (5 by 5 miles) with north toward the top. The radar image was produced in L-band where red areas correspond to high backscatter at HH polarization, while green areas exhibit high backscatter at HV polarization. Blue areas show low backscatter at VV polarization. The image on the right is a classification map showing the extent of flooding beneath the forest canopy. The classification map was developed by SIR-C/X-SAR science team members at the University of California, Santa Barbara. The map uses the L-HH, L-HV, and L-VV images to classify the radar image into six categories: red, flooded forest; green, unflooded tropical rain forest; blue, open water (Amazon River); yellow, unflooded fields with some floating grasses; gray, flooded shrubs; black, floating and flooded grasses. Data like these help scientists evaluate flood damage on a global scale. Floods are highly episodic and much of the area inundated is often tree-covered. http://photojournal.jpl.nasa.gov/catalog/PIA01712

  1. Experimental study on air-stream gasification of biomass micron fuel (BMF) in a cyclone gasifier.

    PubMed

    Guo, X J; Xiao, B; Zhang, X L; Luo, S Y; He, M Y

    2009-01-01

    Based on biomass micron fuel (BMF) with a particle size of less than 250 microm, a cyclone gasifier concept has been considered in our laboratory for biomass gasification. The concept combines and integrates partial oxidation, fast pyrolysis, gasification, and tar cracking, as well as a shift reaction, with the purpose of producing a high quality gas. In this paper, experiments on BMF air-steam gasification were carried out in the gasifier, with the energy for BMF gasification produced by partial combustion of BMF within the gasifier using a hypostoichiometric amount of air. The effects of ER (0.22-0.37), S/B (0.15-0.59) and biomass particle size on BMF gasification performance and gasification temperature were studied. Under the experimental conditions, the temperature, gas yield, LHV of the gas fuel, carbon conversion efficiency, steam decomposition and gasification efficiency varied in the ranges of 586-845 °C, 1.42-2.21 Nm(3)/kg biomass, 3806-4921 kJ/m(3), 54.44-85.45%, 37.98-70.72%, and 36.35-56.55%, respectively. The experimental results showed that gasification performance was best at an ER of 0.37 and an S/B of 0.31 with the smaller particle size, which also favored the H2 content. BMF gasification by air and low-temperature steam in the energy self-sufficient cyclone gasifier proved reliable.
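
    The gasification efficiency reported above corresponds to the usual cold gas efficiency, computable directly from the quantities the abstract lists. A minimal sketch (the BMF lower heating value is an assumed placeholder):

```python
# Minimal sketch: cold gas efficiency (CGE) from the quantities reported
# in the abstract. The BMF lower heating value is an assumed placeholder.
def cold_gas_efficiency(gas_yield, lhv_gas, lhv_fuel):
    """Chemical energy in the cold gas over chemical energy in the feed.

    gas_yield: Nm3 of product gas per kg biomass
    lhv_gas:   MJ/Nm3
    lhv_fuel:  MJ/kg biomass
    """
    return gas_yield * lhv_gas / lhv_fuel

# Mid-range values from the reported spans (1.42-2.21 Nm3/kg, 3.8-4.9
# MJ/Nm3) with an assumed BMF LHV of 17 MJ/kg:
print(cold_gas_efficiency(gas_yield=1.9, lhv_gas=4.5, lhv_fuel=17.0))
# -> ~0.50, inside the reported 36.35-56.55% efficiency range
```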

  2. Increasing the electric efficiency of a fuel cell system by recirculating the anodic offgas

    NASA Astrophysics Data System (ADS)

    Heinzel, A.; Roes, J.; Brandt, H.

    The University of Duisburg-Essen and the Center for Fuel Cell Technology (ZBT Duisburg GmbH) have developed a compact multi-fuel steam reformer suitable for natural gas, propane and butane. Fuel processor prototypes based on this concept were built in the power range from 2.5 to 12.5 kW thermal hydrogen power for different applications and different industrial partners. The fuel processor concept contains all the necessary elements: a prereformer step, a primary reformer, water-gas shift reactors, a steam generator and internal heat exchangers to achieve optimised heat integration, an external burner for heat supply, and a preferential oxidation (PrOx) step for CO purification. One of these fuel processors is designed to deliver a thermal hydrogen power output of 2.5 kW, matched to a PEM fuel cell stack providing about 1 kW of electrical power, and achieves a thermal efficiency of about 75% (LHV basis, after PrOx) while the CO content of the product gas is below 20 ppm. This steam reformer has been combined with a 1 kW PEM fuel cell. Recirculating the anodic offgas results in a significant efficiency increase for the fuel processor. The gross efficiency of the combined system was already clearly above 30% during the first tests. Further improvements are currently being investigated and developed at ZBT.

  3. Competition policy for health care provision in the Netherlands.

    PubMed

    Schut, Frederik T; Varkevisser, Marco

    2017-02-01

    In the Netherlands in 2006 a major health care reform was introduced, aimed at reinforcing regulated competition in the health care sector. Health insurers were provided with strong incentives to compete and more room to negotiate and selectively contract with health care providers. Nevertheless, the bargaining position of health insurers vis-à-vis both GPs and hospitals is still relatively weak. GPs are very well organized in a powerful national interest association (LHV) and effectively exploit the long-standing trust relationship with their patients. They have been very successful in mobilizing public support against unfavorable contracting practices of health insurers and against enforcement of the competition act. The rapid establishment of multidisciplinary care groups to coordinate care for patients with chronic diseases further strengthened their position. Due to ongoing horizontal consolidation, hospital markets in the Netherlands have become highly concentrated. Only recently did the Dutch competition authority prohibit its first hospital merger. Despite the highly concentrated health insurance market, it is unclear whether insurers will have sufficient countervailing buyer power vis-à-vis GPs and hospitals to effectively fulfill their role as prudent buyers of care, as envisioned in the reform. To prevent further consolidation and anticompetitive coordination, strict enforcement of competition policy is crucially important for safeguarding the potential for effective insurer-provider negotiations about quality and price. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  4. Space Radar Image of the Lost City of Ubar

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This is a radar image of the region around the site of the lost city of Ubar in southern Oman, on the Arabian Peninsula. The ancient city was discovered in 1992 with the aid of remote sensing data. Archeologists believe Ubar existed from about 2800 B.C. to about 300 A.D. and was a remote desert outpost where caravans were assembled for the transport of frankincense across the desert. This image was acquired on orbit 65 of space shuttle Endeavour on April 13, 1994 by the Spaceborne Imaging Radar C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR). The SIR-C image shown is centered at 18.4 degrees north latitude and 53.6 degrees east longitude. The image covers an area about 50 by 100 kilometers (31 miles by 62 miles). The image is constructed from three of the available SIR-C channels and displays L-band, HH (horizontal transmit and receive) data as red, C-band HH as blue, and L-band HV (horizontal transmit, vertical receive) as green. The prominent magenta colored area is a region of large sand dunes, which are bright reflectors at both L- and C-band. The prominent green areas (L-HV) are rough limestone rocks, which form a rocky desert floor. A major wadi, or dry stream bed, runs across the middle of the image and is shown largely in white due to strong radar scattering in all channels displayed (L and C HH, L-HV). The actual site of the fortress of the lost city of Ubar, currently under excavation, is near the wadi close to the center of the image. The fortress is too small to be detected in this image. However, tracks leading to the site, and surrounding tracks, appear as prominent, but diffuse, reddish streaks. These tracks have been used in modern times, but field investigations show many of these tracks were in use in ancient times as well. Mapping of these tracks on regional remote sensing images was a key to recognizing the site as Ubar in 1992. This image, and ongoing field investigations, will help shed light on a little known early civilization. Spaceborne Imaging Radar-C and X-Band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft- und Raumfahrt e.V. (DLR), the major partner in science, operations, and data processing of X-SAR.
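
    The channel-to-color mapping described above (L-HH as red, L-HV as green, C-HH as blue) can be reproduced for any set of co-registered backscatter arrays. A sketch assuming calibrated linear-power tiles rather than actual SIR-C products:

      import numpy as np

      def false_color(l_hh, c_hh, l_hv):
          """RGB composite: L-HH -> red, L-HV -> green, C-HH -> blue.

          Inputs are 2-D backscatter arrays (linear power); each channel is
          stretched independently to the 0-1 display range.
          """
          def stretch(band):
              lo, hi = np.percentile(band, (2, 98))   # clip tails for contrast
              return np.clip((band - lo) / (hi - lo), 0.0, 1.0)
          return np.dstack([stretch(l_hh), stretch(l_hv), stretch(c_hh)])

      # Hypothetical tiles standing in for calibrated SIR-C channels
      rng = np.random.default_rng(0)
      tiles = [rng.gamma(2.0, 0.5, (64, 64)) for _ in range(3)]
      rgb = false_color(*tiles)
      print(rgb.shape, rgb.min(), rgb.max())  # (64, 64, 3), values in [0, 1]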

  5. Do bioclimate variables improve performance of climate envelope models?

    USGS Publications Warehouse

    Watling, James I.; Romañach, Stephanie S.; Bucklin, David N.; Speroterra, Carolina; Brandt, Laura A.; Pearlstine, Leonard G.; Mazzotti, Frank J.

    2012-01-01

    Climate envelope models are widely used to forecast potential effects of climate change on species distributions. A key issue in climate envelope modeling is the selection of predictor variables that most directly influence species. To determine whether model performance and spatial predictions were related to the selection of predictor variables, we compared models using bioclimate variables with models constructed from monthly climate data for twelve terrestrial vertebrate species in the southeastern USA using two different algorithms (random forests or generalized linear models), and two model selection techniques (using uncorrelated predictors or a subset of user-defined biologically relevant predictor variables). There were no differences in performance between models created with bioclimate or monthly variables, but one metric of model performance was significantly greater using the random forest algorithm compared with generalized linear models. Spatial predictions between maps using bioclimate and monthly variables were very consistent using the random forest algorithm with uncorrelated predictors, whereas we observed greater variability in predictions using generalized linear models.
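
    A minimal sketch of the comparison design, with synthetic presence/absence data standing in for the twelve-species records and default algorithm settings (all names and numbers here are illustrative):

      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      # Synthetic stand-in: 500 sites, 19 bioclim-style predictors, binary presence
      X, y = make_classification(n_samples=500, n_features=19, n_informative=6,
                                 random_state=1)
      for name, model in [("random forest", RandomForestClassifier(random_state=1)),
                          ("GLM (logistic)", LogisticRegression(max_iter=1000))]:
          auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
          print(f"{name}: mean AUC = {auc.mean():.3f}")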

  6. Promises of advanced technology realized at Martin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swanekamp, R.

    1996-09-01

    The 2,488-MW Martin station is a gas/oil-fired facility that embodies today's demand for flexible operations, technological advances, and reduced production costs. Martin station first rose up from the Everglades in the early 1980s, with the construction of two 814-MW oil-fired steam plants, Units 1 and 2. Natural-gas-firing capability was added to the balanced-draft, natural-circulation boilers in 1986, increasing the station's fuel flexibility. Martin then leaped into the headlines in the early 1990s when it added combined-cycle (CC) Units 3 and 4. With this 860-MW expansion, FP&L boldly became the fleet leader for the advanced, 2350F-class 7FA gas turbines. Further pushing the technology envelope, the CC includes a three-pressure reheat steam system that raises net plant efficiency for Units 3 and 4 to 54% on a lower-heating-value (LHV) basis. Incorporating the reheat cycle required significant redesign of the gas-turbine/heat-recovery steam generator (HRSG) train in order to maintain a rapid startup capability without exceeding metallurgical limits. Perhaps even more important than the technological achievements, Martin stands out from the crowd for its people power, which ensured that the promises of advanced technology actually came to fruition. This station's aggressive, empowered O&M team shows that you can pioneer technology, reduce operating costs, and deliver high availability--all at the same time.

  7. Development of Portable Venturi Kiln for Agricultural Waste Utilization by Carbonization Process

    NASA Astrophysics Data System (ADS)

    Agustina, S. E.; Chasanah, N.; Eris, A. P.

    2018-05-01

    Many types of kilns or carbonization equipment have been developed, but most were designed for large capacities and some have low performance. This research aims to develop a kiln, specifically a portable metal kiln, that has higher performance, is more environmentally friendly, and can be used for several kinds of biomass or agricultural waste (not exclusively one kind) as feed material. To improve kiln performance, a venturi drum type of portable kiln has been designed with an optimum capacity of 12.45 kg of coconut shells. The basic idea of the design is improved heat flow caused by the venturi effect. The performance test for coconut shell carbonization shows that the carbonization process takes about 60-90 minutes to produce average yields of 23.8%, and the highest temperature of the process was 441 °C. The optimum performance was achieved in the 4th test, which produced a 24% yield of the highest charcoal quality (represented by LHV) in a 65-minute process at an average temperature of 485 °C. For pecan shell and palm shell, the design was modified by adding 6 air inlet holes and 3 ignition columns to obtain better performance, while the operating procedure should be adapted in loading and air supply to the characteristics of each biomass. The performance tests showed that carbonization produced a 17% yield for pecan shell and a 15% yield for palm shell. Based on the Indonesian Standard (SNI), all charcoal produced in these carbonization tests reached a good quality level.

  8. Efficient electrochemical refrigeration power plant using natural gas with ∼100% CO2 capture

    NASA Astrophysics Data System (ADS)

    Al-musleh, Easa I.; Mallapragada, Dharik S.; Agrawal, Rakesh

    2015-01-01

    We propose an efficient Natural Gas (NG) based Solid Oxide Fuel Cell (SOFC) power plant equipped with ∼100% CO2 capture. The power plant uses a unique refrigeration-based process to capture and liquefy CO2 from the SOFC exhaust. The capture of CO2 is carried out via condensation and purification using two rectifying columns operating at different pressures. The uncondensed gas mixture, comprising relatively high-purity unconverted fuel, is recycled to the SOFC and found to boost the power generation of the SOFC by 22% when compared to a stand-alone SOFC. If Liquefied Natural Gas (LNG) is available at the plant gate, the refrigeration available from its evaporation is used for CO2 Capture and Liquefaction (CO2CL). If NG is utilized, a Mixed Refrigerant (MR) vapor compression cycle provides the refrigeration for CO2CL. Alternatively, the necessary refrigeration can be supplied by evaporating the captured liquid CO2 at a lower pressure; the CO2 is then compressed to supercritical pressures for pipeline transportation. From rigorous simulations, the power generation efficiency of the proposed processes is found to be 70-76% based on lower heating value (LHV). The benefit of the proposed processes is evident when the efficiency of 73% for a conventional SOFC-gas turbine power plant without CO2 capture is compared with the equivalent efficiency of 71.2% for the proposed process with CO2CL.

  9. Combustion characteristics of biodried sewage sludge.

    PubMed

    Hao, Zongdi; Yang, Benqin; Jahng, Deokjin

    2018-02-01

    In this study, the effects of biodrying on the characteristics of sewage sludge and its subsequent combustion behavior were investigated. Seven days of biodrying removed 49.78% of the water and 23.17% of the volatile solids (VS) initially contained in the sewage sludge and increased the lower heating value (LHV) by 37.87%. Meanwhile, the mass contents of C and N decreased from 36.25% and 6.12% to 32.06% and 4.82%, respectively. The surface of the biodried sewage sludge (BDSS) appeared granulated and multi-porous, which was thought to facilitate air transfer during combustion. According to thermogravimetric (TG) analysis coupled with mass spectrometry (MS) at a heating rate of 10 °C/min from 35 °C to 1000 °C, thermally-dried sewage sludge (TDSS) and BDSS lost 74.39% and 67.04% of their initial mass, respectively. In addition, the combustibility index (S) of BDSS (8.67 × 10⁻⁸ min⁻² K⁻³) was higher than that of TDSS. TG-MS analyses also showed that fewer nitrogenous gases were generated from BDSS than from TDSS. The average CO and NO concentrations in the exit gas from isothermal combustion of BDSS were likewise lower than those from TDSS, especially at low temperatures (≤800 °C). Based on these results, it was concluded that biodrying of sewage sludge is an energy-efficient water-removal method that also yields lower emissions of air pollutants when the BDSS is combusted. Copyright © 2017 Elsevier Ltd. All rights reserved.
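
    The units of S (min⁻² K⁻³) match the comprehensive combustibility index commonly used in thermogravimetric studies, S = (Rmax x Rmean) / (Ti² x Tb), with mass-loss rates in %/min and ignition/burnout temperatures in K. A sketch under that assumption (the authors' exact definition may differ):

      def combustibility_index(r_max, r_mean, t_ign, t_burnout):
          """S = (Rmax * Rmean) / (Ti^2 * Tb), a common TG-based combustibility index.

          r_max, r_mean:   maximum and mean mass-loss rates (%/min)
          t_ign, t_burnout: ignition and burnout temperatures (K)
          """
          return (r_max * r_mean) / (t_ign ** 2 * t_burnout)

      # Illustrative values only (not from the paper)
      print(combustibility_index(8.0, 3.0, 520.0, 950.0))  # ~9.3e-8 min^-2 K^-3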

  10. Potential SRF generation from a closed landfill in northern Italy.

    PubMed

    Passamani, Giorgia; Ragazzi, Marco; Torretta, Vincenzo

    2016-01-01

    The aim of this work is to assess the possibility of producing solid recovered fuel (SRF) and "combustible SRF" from a landfill located in the north of Italy, where the waste is placed in cylindrical wrapped bales. Since the use of landfills for the disposal of municipal solid waste has many technical limitations and is subject to strict regulations, and given that landfill post-closure care is very expensive, an interesting solution is to recover the bales that are stored in the landfill. The contents of the bales can then be used for energy recovery after specific treatments. Currently the landfill is closed, and the local municipal council together with an environmental agency are considering constructing a mechanical biological treatment (MBT) plant for SRF production. The municipal solid waste stored in the landfill, the bio-dried material that would be produced by treating the excavated waste in a bio-drying plant, and the SRF obtained after the post-extraction of inert materials, metals and glass from the bio-dried material were characterized according to the quality and classification criteria of Italian regulations. The analysis highlighted the need to treat the excavated waste in a bio-drying plant and later to remove the inert waste, metals and glass. With these treatments, and in compliance with Italian law, the material has a high enough LHV to be considered "combustible SRF" (i.e. an SRF with enhanced characteristics). Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Gasification: An alternative solution for energy recovery and utilization of vegetable market waste.

    PubMed

    Narnaware, Sunil L; Srivastava, Nsl; Vahora, Samir

    2017-03-01

    Vegetable waste is generally utilized through a bioconversion process or disposed of at municipal landfills or dumping sites, or dumped on open land, emitting a foul odor and causing health hazards. The present study deals with an alternative way to utilize solid vegetable waste through a thermochemical route, namely briquetting and gasification, for energy recovery and subsequent power generation. Briquettes of 50 mm diameter were produced from four different types of vegetable waste. The bulk density of the briquettes was 10 to 15 times higher than the density of the dried vegetable waste in loose form. The lower heating value (LHV) of the briquettes ranged from 10.26 MJ kg⁻¹ to 16.60 MJ kg⁻¹ depending on the type of vegetable waste. The gasification of the briquettes was carried out in an open-core downdraft gasifier, which produced syngas with a calorific value of 4.71 MJ Nm⁻³ at gasification temperatures between 889°C and 1011°C. A spark-ignition internal combustion engine was run on the syngas and could carry a maximum load of up to 10 kWe. The cold gas efficiency and the hot gas efficiency of the gasifier were measured at 74.11% and 79.87%, respectively. Energy recovery from organic vegetable waste was thus shown to be possible through a thermochemical conversion route of briquetting and subsequent gasification, with the fuel used for small-scale power generation.

  12. Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables

    ERIC Educational Resources Information Center

    Henson, Robert A.; Templin, Jonathan L.; Willse, John T.

    2009-01-01

    This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…

  13. On the explaining-away phenomenon in multivariate latent variable models.

    PubMed

    van Rijn, Peter; Rijmen, Frank

    2015-02-01

    Many probabilistic models for psychological and educational measurements contain latent variables. Well-known examples are factor analysis, item response theory, and latent class model families. We discuss what is referred to as the 'explaining-away' phenomenon in the context of such latent variable models. This phenomenon can occur when multiple latent variables are related to the same observed variable, and can elicit seemingly counterintuitive conditional dependencies between latent variables given observed variables. We illustrate the implications of explaining away for a number of well-known latent variable models by using both theoretical and real data examples. © 2014 The British Psychological Society.
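
    A toy numeric illustration of the phenomenon, with two independent binary latent causes of a single observed variable (all probabilities are made up for the example):

      from itertools import product

      p_a, p_b = 0.3, 0.3

      def p_x_given(a, b):
          # X (the observed variable) is likely when either latent cause is active
          return 0.9 if (a or b) else 0.05

      joint = {(a, b, x): (p_a if a else 1 - p_a)
                          * (p_b if b else 1 - p_b)
                          * (p_x_given(a, b) if x else 1 - p_x_given(a, b))
               for a, b, x in product([0, 1], repeat=3)}

      def posterior_a(x, b=None):
          """P(A=1 | X=x [, B=b]) by brute-force summation over the joint."""
          keep = {k: v for k, v in joint.items()
                  if k[2] == x and (b is None or k[1] == b)}
          return sum(v for k, v in keep.items() if k[0] == 1) / sum(keep.values())

      print(round(posterior_a(x=1), 3))       # 0.558: observing X raises belief in A above the 0.3 prior
      print(round(posterior_a(x=1, b=1), 3))  # 0.300: learning B=1 "explains away" X, so A falls back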

  14. Preliminary Multi-Variable Parametric Cost Model for Space Telescopes

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Hendrichs, Todd

    2010-01-01

    This slide presentation reviews the creation of a preliminary multi-variable parametric cost model for the contract costs of making a space telescope. It discusses the methodology for collecting the data, the definition of the statistical analysis methodology, single-variable model results, the testing of historical models, and an introduction to the multi-variable models.

  15. Mixed geographically weighted regression (MGWR) model with weighted adaptive bi-square for case of dengue hemorrhagic fever (DHF) in Surakarta

    NASA Astrophysics Data System (ADS)

    Astuti, H. N.; Saputro, D. R. S.; Susanti, Y.

    2017-06-01

    The MGWR model is a combination of the linear regression model and the geographically weighted regression (GWR) model; it can therefore produce estimates in which some parameters are global while others are local, in accordance with the observation location. The linkage between observation locations is expressed in a specific weighting, here the adaptive bi-square kernel. In this research, we applied the MGWR model with adaptive bi-square weighting to the case of DHF in Surakarta, based on 10 factors (variables) that are supposed to influence the number of people with DHF. The observation units in the research are 51 urban villages, and the variables are number of inhabitants, number of houses, house index, number of public places, number of healthy homes, number of Posyandu (integrated health posts), area width, population density, family welfare level, and regional altitude. Based on this research, we obtained 51 MGWR models. The models fall into 4 groups: house index is significant as a global variable, area width as a local variable, and the remaining significant variables vary in each group. Global variables are variables that significantly affect all locations, while local variables significantly affect only specific locations.
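
    The adaptive bi-square weighting can be written as w_ij = (1 - (d_ij/b_i)²)² for d_ij < b_i and 0 otherwise, with b_i the distance from location i to its chosen nearest neighbour. A sketch with hypothetical coordinates (the actual village locations are not given here):

      import numpy as np

      def adaptive_bisquare_weights(coords, i, n_neighbors):
          """GWR-style adaptive bi-square weights for regression point i.

          The bandwidth b_i is the distance from point i to its n-th nearest
          neighbour, so dense areas get narrow kernels and sparse areas wide ones.
          """
          d = np.linalg.norm(coords - coords[i], axis=1)
          b = np.sort(d)[n_neighbors]          # adaptive bandwidth for location i
          return np.where(d < b, (1.0 - (d / b) ** 2) ** 2, 0.0)

      rng = np.random.default_rng(0)
      xy = rng.uniform(size=(51, 2))           # stand-ins for 51 village centroids
      w = adaptive_bisquare_weights(xy, i=0, n_neighbors=10)
      print(w.round(3))                        # weights decay to 0 beyond the bandwidth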

  16. Evaluation of solar irradiance models for climate studies

    NASA Astrophysics Data System (ADS)

    Ball, William; Yeo, Kok-Leng; Krivova, Natalie; Solanki, Sami; Unruh, Yvonne; Morrill, Jeff

    2015-04-01

    Instruments on satellites have been observing both Total Solar Irradiance (TSI) and Spectral Solar Irradiance (SSI), mainly in the ultraviolet (UV), since 1978. Models were developed to reproduce the observed variability and to compute the variability at wavelengths that were not observed or whose uncertainty was too high to determine an accurate rotational or solar-cycle variability. However, various models and measurements show different solar-cycle SSI variability, which leads to different modelled responses of ozone and temperature in the stratosphere, mainly due to the different UV variability in each model, and to differences in the global energy balance. The NRLSSI and SATIRE-S models are the most comprehensive reconstructions of solar irradiance variability for the period from 1978 to the present day. But while NRLSSI and SATIRE-S show similar solar cycle variability below 250 nm, between 250 and 400 nm SATIRE-S typically displays 50% larger variability, which is, however, still significantly less than suggested by recent SORCE data. Due to large uncertainties and inconsistencies in some observational datasets, it is difficult to determine in a simple way which model is likely to be closer to the true solar variability. We review solar irradiance variability measurements and modelling and employ new analysis that sheds light on the causes of the discrepancies between the two models and with the observations.

  17. First-Order Model Management With Variable-Fidelity Physics Applied to Multi-Element Airfoil Optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, N. M.; Nielsen, E. J.; Lewis, R. M.; Anderson, W. K.

    2000-01-01

    First-order approximation and model management is a methodology for the systematic use of variable-fidelity models or approximations in optimization. The intent of model management is to attain convergence to high-fidelity solutions with minimal expense in high-fidelity computations. The savings in terms of computationally intensive evaluations depend on the ability of the available lower-fidelity model, or a suite of models, to predict the improvement trends for the high-fidelity problem. Variable-fidelity models can be represented by data-fitting approximations, variable-resolution models, variable-convergence models, or variable-physical-fidelity models. The present work considers the use of variable-fidelity physics models. We demonstrate the performance of model management on an aerodynamic optimization of a multi-element airfoil designed to operate in the transonic regime. The Reynolds-averaged Navier-Stokes equations represent the high-fidelity model, while the Euler equations represent the low-fidelity model. An unstructured mesh-based analysis code, FUN2D, evaluates functions and sensitivity derivatives for both models. Model management for the present demonstration problem yields fivefold savings in terms of high-fidelity evaluations compared to optimization done with high-fidelity computations alone.
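
    The heart of such a scheme is a corrected low-fidelity model forced to agree with the high-fidelity function value and gradient at the current iterate, which is what drives convergence to a high-fidelity solution. A sketch of an additive first-order correction on toy functions (the paper's exact correction scheme may differ):

      import numpy as np

      def additive_correction(f_hi, g_hi, f_lo, g_lo, x0):
          """Corrected low-fidelity model matching f_hi and its gradient at x0."""
          df = f_hi(x0) - f_lo(x0)
          dg = g_hi(x0) - g_lo(x0)
          return lambda x: f_lo(x) + df + dg @ (np.asarray(x) - x0)

      # Toy stand-ins: a "Navier-Stokes-like" f_hi and an "Euler-like" f_lo
      f_hi = lambda x: float((x ** 2).sum() + 0.1 * np.sin(5 * x).sum())
      g_hi = lambda x: 2 * x + 0.5 * np.cos(5 * x)
      f_lo = lambda x: float((x ** 2).sum())
      g_lo = lambda x: 2 * x

      x0 = np.array([0.4, -0.2])
      f_c = additive_correction(f_hi, g_hi, f_lo, g_lo, x0)
      print(np.isclose(f_c(x0), f_hi(x0)))  # True: value and gradient match at x0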

  18. Prediction of thoracic injury severity in frontal impacts by selected anatomical morphomic variables through model-averaged logistic regression approach.

    PubMed

    Zhang, Peng; Parenteau, Chantal; Wang, Lu; Holcombe, Sven; Kohoyda-Inglis, Carla; Sullivan, June; Wang, Stewart

    2013-11-01

    This study resulted in a model-averaging methodology that predicts crash injury risk using vehicle, demographic, and morphomic variables and assesses the importance of individual predictors. The effectiveness of this methodology was illustrated through analysis of occupant chest injuries in frontal vehicle crashes. The crash data were obtained from the International Center for Automotive Medicine (ICAM) database for calendar years 1996 to 2012. The morphomic data are quantitative measurements of variations in human body 3-dimensional anatomy. Morphomics are obtained from imaging records. In this study, morphomics were obtained from chest, abdomen, and spine CT using novel patented algorithms. A NASS-trained crash investigator with over thirty years of experience collected the in-depth crash data. There were 226 cases available involving occupants in frontal crashes with morphomic measurements. Only cases with complete recorded data were retained for statistical analysis. Logistic regression models were fitted using all possible configurations of vehicle, demographic, and morphomic variables. Different models were ranked by the Akaike Information Criterion (AIC). An averaged logistic regression model approach was used due to the limited sample size relative to the number of variables. This approach is helpful when addressing variable selection, building prediction models, and assessing the importance of individual variables. The final predictive results were developed using this approach, based on the top 100 models in the AIC ranking. Model-averaging minimized model uncertainty, decreased the overall prediction variance, and provided an approach to evaluating the importance of individual variables. There were 17 variables investigated: four vehicle, four demographic, and nine morphomic. More than 130,000 logistic models were investigated in total. The models were characterized into four scenarios to assess individual variable contributions to injury risk. Scenario 1 used vehicle variables; Scenario 2, vehicle and demographic variables; Scenario 3, vehicle and morphomic variables; and Scenario 4 used all variables. AIC was used to rank the models and to address over-fitting. In each scenario, the results based on the top three models and the averages of the top 100 models were presented. The AIC and the area under the receiver operating characteristic curve (AUC) were reported for each model. The models were re-fitted after removing each variable one at a time. The increases in AIC and decreases in AUC were then assessed to measure the contribution and importance of the individual variables in each model. The importance of the individual variables was also determined by their weighted frequencies of appearance in the top 100 selected models. Overall, the AUC was 0.58 in Scenario 1, 0.78 in Scenario 2, 0.76 in Scenario 3 and 0.82 in Scenario 4. The results showed that morphomic variables are as accurate at predicting injury risk as demographic variables. The results of this study emphasize the importance of including morphomic variables when assessing injury risk. The results also highlight the need for morphomic data in the development of human mathematical models when assessing restraint performance in frontal crashes, since morphomic variables are more "tangible" measurements compared to demographic variables such as age and gender. Copyright © 2013 Elsevier Ltd. All rights reserved.
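
    The core of the approach is Akaike weighting: each candidate model gets weight exp(-dAIC/2), normalized over the retained set, and predictions are averaged under those weights. A compact sketch on synthetic data (the study's 17 variables and 130,000+ models are replaced here by 5 variables and all 31 subsets):

      import itertools
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def aic_logistic(X, y, cols):
          """Fit a logistic model on a column subset; return (AIC, model, cols)."""
          m = LogisticRegression(max_iter=1000).fit(X[:, cols], y)
          p = np.clip(m.predict_proba(X[:, cols])[:, 1], 1e-12, 1 - 1e-12)
          loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
          k = len(cols) + 1                    # slopes + intercept
          return 2 * k - 2 * loglik, m, cols

      rng = np.random.default_rng(0)
      X = rng.normal(size=(226, 5))            # e.g. 226 cases, 5 illustrative predictors
      y = (X[:, 0] - X[:, 2] + rng.normal(size=226) > 0).astype(int)

      subsets = [list(c) for r in range(1, 6)
                 for c in itertools.combinations(range(5), r)]
      fits = sorted((aic_logistic(X, y, c) for c in subsets), key=lambda t: t[0])
      aic = np.array([f[0] for f in fits])
      w = np.exp(-(aic - aic.min()) / 2)
      w /= w.sum()                             # Akaike weights over the ranked models
      risk = sum(wi * m.predict_proba(X[:, c])[:, 1]
                 for wi, (_, m, c) in zip(w, fits))
      print(w[:3].round(3), risk[:5].round(2)) # top-model weights, model-averaged risks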

  19. Underestimated AMOC Variability and Implications for AMV and Predictability in CMIP Models

    NASA Astrophysics Data System (ADS)

    Yan, Xiaoqin; Zhang, Rong; Knutson, Thomas R.

    2018-05-01

    The Atlantic Meridional Overturning Circulation (AMOC) has profound impacts on various climate phenomena. Using both observations and simulations from the Coupled Model Intercomparison Project Phases 3 and 5, here we show that most models underestimate the amplitude of low-frequency AMOC variability. We further show that stronger low-frequency AMOC variability leads to stronger linkages between the AMOC and key variables associated with the Atlantic multidecadal variability (AMV), and between the subpolar AMV signal and northern hemisphere surface air temperature. Low-frequency extratropical northern hemisphere surface air temperature variability might increase with the amplitude of low-frequency AMOC variability. Atlantic decadal predictability is much higher in models with stronger low-frequency AMOC variability and much lower in models with weaker or without AMOC variability. Our results suggest that simulating realistic low-frequency AMOC variability is very important, both for simulating realistic linkages between AMOC and AMV-related variables and for achieving substantially higher Atlantic decadal predictability.

  1. The Houdini Transformation: True, but Illusory

    PubMed Central

    Bentler, Peter M.; Molenaar, Peter C. M.

    2012-01-01

    Molenaar (2003, 2011) showed that a common factor model could be transformed into an equivalent model without factors, involving only observed variables and residual errors. He called this invertible transformation the Houdini transformation. His derivation involved concepts from time series and state space theory. This paper verifies the Houdini transformation on a general latent variable model using algebraic methods. The results show that the Houdini transformation is illusory, in the sense that the Houdini transformed model remains a latent variable model. Contrary to common knowledge, a model that is a path model with only observed variables and residual errors may, in fact, be a latent variable model. PMID:23180888

  2. Hedonic price models with omitted variables and measurement errors: a constrained autoregression-structural equation modeling approach with application to urban Indonesia

    NASA Astrophysics Data System (ADS)

    Suparman, Yusep; Folmer, Henk; Oud, Johan H. L.

    2014-01-01

    Omitted variables and measurement errors in explanatory variables frequently occur in hedonic price models. Ignoring these problems leads to biased estimators. In this paper, we develop a constrained autoregression-structural equation model (ASEM) to handle both types of problems. Standard panel data models to handle omitted variables bias are based on the assumption that the omitted variables are time-invariant. ASEM allows handling of both time-varying and time-invariant omitted variables by constrained autoregression. In the case of measurement error, standard approaches require additional external information which is usually difficult to obtain. ASEM exploits the fact that panel data are repeatedly measured which allows decomposing the variance of a variable into the true variance and the variance due to measurement error. We apply ASEM to estimate a hedonic housing model for urban Indonesia. To get insight into the consequences of measurement error and omitted variables, we compare the ASEM estimates with the outcomes of (1) a standard SEM, which does not account for omitted variables, (2) a constrained autoregression model, which does not account for measurement error, and (3) a fixed effects hedonic model, which ignores measurement error and time-varying omitted variables. The differences between the ASEM estimates and the outcomes of the three alternative approaches are substantial.

  3. Analysis and modeling of wafer-level process variability in 28 nm FD-SOI using split C-V measurements

    NASA Astrophysics Data System (ADS)

    Pradeep, Krishna; Poiroux, Thierry; Scheer, Patrick; Juge, André; Gouget, Gilles; Ghibaudo, Gérard

    2018-07-01

    This work details the analysis of wafer-level global process variability in 28 nm FD-SOI using split C-V measurements. The proposed approach initially evaluates the native on-wafer process variability using efficient extraction methods on split C-V measurements. The on-wafer threshold voltage (VT) variability is first studied and modeled using a simple analytical model. Then, a statistical model based on the Leti-UTSOI compact model is proposed to describe the total C-V variability in different bias conditions. This statistical model is finally used to study the contribution of each process parameter to the total C-V variability.

  4. A Novel Information-Theoretic Approach for Variable Clustering and Predictive Modeling Using Dirichlet Process Mixtures

    PubMed Central

    Chen, Yun; Yang, Hui

    2016-01-01

    In the era of big data, there is increasing interest in clustering variables to minimize data redundancy and maximize variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges to the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information-theoretic perspective that does not require the assumption of data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among variables. Finally, orthonormalized variables in each cluster are integrated with a group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering. PMID:27966581
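
    The first step above, a matrix of pairwise mutual-information measures among variables, is easy to sketch with histogram estimates (synthetic data; the DP mixture clustering itself is beyond a few lines):

      import numpy as np
      from sklearn.metrics import mutual_info_score

      def binned_mi(a, b, bins=12):
          """Mutual information between two continuous variables via 2-D histogram bins."""
          return mutual_info_score(None, None,
                                   contingency=np.histogram2d(a, b, bins)[0])

      rng = np.random.default_rng(0)
      z = rng.normal(size=500)
      # Columns 0 and 1 are nonlinearly related; column 2 is independent noise
      X = np.column_stack([np.sin(z),
                           np.sin(z) + 0.1 * rng.normal(size=500),
                           rng.normal(size=500)])

      p = X.shape[1]
      M = np.array([[binned_mi(X[:, i], X[:, j]) for j in range(p)] for i in range(p)])
      print(M.round(2))  # high MI between columns 0 and 1, near zero with column 2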

  6. Local-scale models reveal ecological niche variability in amphibian and reptile communities from two contrasting biogeographic regions

    PubMed Central

    Santos, Xavier; Felicísimo, Ángel M.

    2016-01-01

    Ecological Niche Models (ENMs) are widely used to describe how environmental factors influence species distribution. Modelling at a local scale, compared to a large scale within a high environmental gradient, can improve our understanding of ecological species niches. The main goal of this study is to assess and compare the contribution of environmental variables to amphibian and reptile ENMs in two Spanish national parks located in contrasting biogeographic regions, i.e., the Mediterranean and the Atlantic area. The ENMs were built with maximum entropy modelling using 11 environmental variables in each territory. The contributions of these variables to the models were analysed and classified using various statistical procedures (Mann–Whitney U tests, Principal Components Analysis and General Linear Models). Distance to the hydrological network was consistently the most relevant variable for both parks and taxonomic classes. Topographic variables (i.e., slope and altitude) were the second most predictive variables, followed by climatic variables. Differences in variable contribution were observed between parks and taxonomic classes. Variables related to water availability had the larger contribution to the models in the Mediterranean park, while topography variables were decisive in the Atlantic park. Specific response curves to environmental variables were in accordance with the biogeographic affinity of species (Mediterranean and non-Mediterranean species) and taxonomy (amphibians and reptiles). Interestingly, these results were observed for species located in both parks, particularly those situated at their range limits. Our findings show that ecological niche models built at local scale reveal differences in habitat preferences within a wide environmental gradient. Therefore, modelling at local scales rather than assuming large-scale models could be preferable for the establishment of conservation strategies for herptile species in natural parks. PMID:27761304

  7. Input variable selection for data-driven models of Coriolis flowmeters for two-phase flow measurement

    NASA Astrophysics Data System (ADS)

    Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao

    2017-03-01

    Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. By eliminating irrelevant or redundant variables, input variable selection identifies a suitable subset of variables as the input of a model; at the same time, it simplifies the model structure and improves computational efficiency. This paper describes the procedures of input variable selection for data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS), are applied in this study. Typical data-driven models incorporating support vector machines (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM-based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected by the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction.

  8. Assessing the accuracy and stability of variable selection ...

    EPA Pesticide Factsheets

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used, or stepwise procedures are employed which iteratively add/remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating dataset consists of the good/poor condition of n=1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p=212) of landscape features from the StreamCat dataset. Two types of RF models are compared: a full variable set model with all 212 predictors, and a reduced variable set model selected using a backwards elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors, and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction.

  9. Empirical spatial econometric modelling of small scale neighbourhood

    NASA Astrophysics Data System (ADS)

    Gerkman, Linda

    2012-07-01

    The aim of this paper is to model small-scale neighbourhood in a house price model by implementing the newest methodology in spatial econometrics. A common problem when modelling house prices is that in practice it is seldom possible to obtain all the desired variables; variables capturing small-scale neighbourhood conditions are especially hard to find. If important explanatory variables are missing from the model, and the omitted variables are spatially autocorrelated and correlated with the explanatory variables included in the model, it can be shown that a spatial Durbin model is motivated. In the empirical application on new house price data from Helsinki in Finland, we find motivation for a spatial Durbin model, estimate the model, and interpret the estimates of the summary measures of impacts. The analysis shows that the model structure makes it possible to capture small-scale neighbourhood effects that we know exist but lack proper variables to measure.

  10. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology.

    PubMed

    Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H

    2017-07-01

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in using RF to develop predictive models with large environmental data sets.
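
    A minimal version of the backward-elimination loop evaluated above, on synthetic data; note that the OOB accuracy recorded inside the loop is itself part of the selection process, which is exactly the source of the optimistic bias the paper warns about:

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier

      # Synthetic stand-in for the stream-condition data: 400 sites, 40 predictors
      X, y = make_classification(n_samples=400, n_features=40, n_informative=8,
                                 random_state=0)
      cols = list(range(X.shape[1]))
      history = []
      while len(cols) > 5:
          rf = RandomForestClassifier(n_estimators=100, oob_score=True,
                                      random_state=0).fit(X[:, cols], y)
          history.append((len(cols), rf.oob_score_))
          drop = cols[int(np.argmin(rf.feature_importances_))]
          cols.remove(drop)                 # discard the least important predictor

      for n, oob in history[::5]:
          print(f"{n:3d} predictors -> OOB accuracy {oob:.3f}")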

  11. Modeling Hurricane Katrina's merchantable timber and wood damage in south Mississippi using remotely sensed and field-measured data

    NASA Astrophysics Data System (ADS)

    Collins, Curtis Andrew

    Ordinary and weighted least squares multiple linear regression techniques were used to derive 720 models predicting Katrina-induced storm damage in cubic foot volume (outside bark) and green weight tons (outside bark). The large number of models was dictated by the use of three damage classes, three product types, and four forest type model strata. These 36 models were then fit and reported across 10 variable sets and variable set combinations for volume and ton units. Along with large model counts, potential independent variables were created using power transforms and interactions. The basis of these variables was field measured plot data, satellite (Landsat TM and ETM+) imagery, and NOAA HWIND wind data variable types. As part of the modeling process, lone variable types as well as two-type and three-type combinations were examined. By deriving models with these varying inputs, model utility is flexible as all independent variable data are not needed in future applications. The large number of potential variables led to the use of forward, sequential, and exhaustive independent variable selection techniques. After variable selection, weighted least squares techniques were often employed using weights of one over the square root of the pre-storm volume or weight of interest. This was generally successful in improving residual variance homogeneity. Finished model fits, as represented by coefficient of determination (R2), surpassed 0.5 in numerous models with values over 0.6 noted in a few cases. Given these models, an analyst is provided with a toolset to aid in risk assessment and disaster recovery should Katrina-like weather events reoccur.

  12. Short-term to seasonal variability in factors driving primary productivity in a shallow estuary: Implications for modeling production

    NASA Astrophysics Data System (ADS)

    Canion, Andy; MacIntyre, Hugh L.; Phipps, Scott

    2013-10-01

    The inputs of primary productivity models may be highly variable on short timescales (hourly to daily) in turbid estuaries, but modeling of productivity in these environments is often implemented with data collected over longer timescales. Daily, seasonal, and spatial variability in primary productivity model parameters: chlorophyll a concentration (Chla), the downwelling light attenuation coefficient (kd), and photosynthesis-irradiance response parameters (Pmchl, αChl) were characterized in Weeks Bay, a nitrogen-impacted shallow estuary in the northern Gulf of Mexico. Variability in primary productivity model parameters in response to environmental forcing, nutrients, and microalgal taxonomic marker pigments were analysed in monthly and short-term datasets. Microalgal biomass (as Chla) was strongly related to total phosphorus concentration on seasonal scales. Hourly data support wind-driven resuspension as a major source of short-term variability in Chla and light attenuation (kd). The empirical relationship between areal primary productivity and a combined variable of biomass and light attenuation showed that variability in the photosynthesis-irradiance response contributed little to the overall variability in primary productivity, and Chla alone could account for 53-86% of the variability in primary productivity. Efforts to model productivity in similar shallow systems with highly variable microalgal biomass may benefit the most by investing resources in improving spatial and temporal resolution of chlorophyll a measurements before increasing the complexity of models used in productivity modeling.

  13. [Modelling the effect of local climatic variability on dengue transmission in Medellin (Colombia) by means of time series analysis].

    PubMed

    Rúa-Uribe, Guillermo L; Suárez-Acosta, Carolina; Chauca, José; Ventosilla, Palmira; Almanza, Rita

    2013-09-01

    Dengue fever is a vector-borne disease with a major impact on public health, and its transmission is influenced by entomological, sociocultural and economic factors. Additionally, climate variability plays an important role in the transmission dynamics. A large scientific consensus has indicated that the strong association between climatic variables and disease could be used to develop models to explain the incidence of the disease. The objective was to develop a model that provides a better understanding of dengue transmission dynamics in Medellin and predicts increases in the incidence of the disease. The incidence of dengue fever was used as the dependent variable, and weekly climatic factors (maximum, mean and minimum temperature, relative humidity and precipitation) as independent variables. Expert Modeler was used to develop a model to better explain the behavior of the disease. Climatic variables with a significant association to the dependent variable were selected through ARIMA models. The model explains 34% of the observed variability. Precipitation was the climatic variable showing a statistically significant association with the incidence of dengue fever, but with a 20-week delay. In Medellin, the transmission of dengue fever was influenced by climate variability, especially precipitation. The strong association between dengue fever and precipitation allowed the construction of a model that helps in understanding dengue transmission dynamics. This information will be useful in developing appropriate and timely strategies for dengue control.
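
    The essence of the finding is a relationship between weekly incidence and precipitation lagged by 20 weeks. A sketch on synthetic series (the study itself used ARIMA-class models via Expert Modeler; the numbers below are made up):

      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(0)
      weeks = pd.date_range("2008-01-06", periods=260, freq="W")
      precip = pd.Series(rng.gamma(2.0, 15.0, len(weeks)), index=weeks)
      # Synthetic cases driven by rainfall 20 weeks earlier, plus noise
      cases = (5 + 0.3 * precip.shift(20).fillna(precip.mean())
               + pd.Series(rng.normal(0, 3, len(weeks)), index=weeks))

      df = pd.DataFrame({"cases": cases, "precip_lag20": precip.shift(20)}).dropna()
      slope, intercept = np.polyfit(df["precip_lag20"], df["cases"], 1)
      print(f"cases ~ {intercept:.1f} + {slope:.2f} * precip(t-20)")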

  14. Population activity statistics dissect subthreshold and spiking variability in V1.

    PubMed

    Bányai, Mihály; Koman, Zsombor; Orbán, Gergő

    2017-07-01

    Response variability, as measured by fluctuating responses upon repeated performance of trials, is a major component of neural responses, and its characterization is key to interpret high dimensional population recordings. Response variability and covariability display predictable changes upon changes in stimulus and cognitive or behavioral state, providing an opportunity to test the predictive power of models of neural variability. Still, there is little agreement on which model to use as a building block for population-level analyses, and the model of variability is often treated as a matter of choice. We investigate two competing models, the doubly stochastic Poisson (DSP) model assuming stochasticity at spike generation, and the rectified Gaussian (RG) model tracing variability back to membrane potential variance, to analyze stimulus-dependent modulation of both single-neuron and pairwise response statistics. Using a pair of model neurons, we demonstrate that the two models predict similar single-cell statistics. However, DSP and RG models have contradicting predictions on the joint statistics of spiking responses. To test the models against data, we build a population model to simulate stimulus change-related modulations in pairwise response statistics. We use single-unit data from the primary visual cortex (V1) of monkeys to show that while model predictions for variance are qualitatively similar to experimental data, only the RG model's predictions are compatible with joint statistics. These results suggest that models using Poisson-like variability might fail to capture important properties of response statistics. We argue that membrane potential-level modeling of stochasticity provides an efficient strategy to model correlations. NEW & NOTEWORTHY Neural variability and covariability are puzzling aspects of cortical computations. For efficient decoding and prediction, models of information encoding in neural populations hinge on an appropriate model of variability. Our work shows that stimulus-dependent changes in pairwise but not in single-cell statistics can differentiate between two widely used models of neuronal variability. Contrasting model predictions with neuronal data provides hints on the noise sources in spiking and provides constraints on statistical models of population activity. Copyright © 2017 the American Physiological Society.

  15. Modelling fourier regression for time series data- a case study: modelling inflation in foods sector in Indonesia

    NASA Astrophysics Data System (ADS)

    Prahutama, Alan; Suparti; Wahyu Utami, Tiani

    2018-03-01

    Regression analysis models the relationship between response variables and predictor variables. The parametric approach to regression is very strict in its assumptions, whereas a nonparametric regression model requires no assumption about the functional form. Time series data are observations of a variable recorded over time, so if time series data are to be modeled by regression, the response and predictor variables must first be determined. The response variable in a time series is the value at time t (yt), while the predictor variables are its significant lags. In nonparametric regression modeling, one developing approach is the Fourier series approach. One advantage of the Fourier series approach is its ability to handle data with a trigonometric (periodic) pattern. Modeling with a Fourier series requires the parameter K, the number of harmonics, which can be determined by the Generalized Cross Validation method. In inflation modeling for the transportation, communication and financial services sector, the Fourier series yields an optimal K of 120 parameters with an R-square of 99%, whereas multiple linear regression yields an R-square of 90%.
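
    A sketch of the K-selection step: fit Fourier-basis regressions of increasing order and pick the number of harmonics K minimizing the Generalized Cross Validation score (synthetic periodic data, not the inflation series):

      import numpy as np

      def fourier_design(t, K):
          """Columns: intercept, then cos(k*t) and sin(k*t) for k = 1..K."""
          cols = [np.ones_like(t)]
          for k in range(1, K + 1):
              cols += [np.cos(k * t), np.sin(k * t)]
          return np.column_stack(cols)

      def gcv(t, y, K):
          """Generalized Cross Validation score for a K-harmonic least-squares fit."""
          X = fourier_design(t, K)
          H = X @ np.linalg.pinv(X)            # hat matrix of the OLS fit
          resid = y - H @ y
          n, tr = len(y), np.trace(H)
          return (resid @ resid / n) / (1 - tr / n) ** 2

      rng = np.random.default_rng(0)
      t = np.linspace(0, 2 * np.pi, 120, endpoint=False)   # scaled time index
      y = 2 + np.sin(3 * t) + 0.5 * np.cos(7 * t) + rng.normal(0, 0.3, t.size)
      best_K = min(range(1, 15), key=lambda K: gcv(t, y, K))
      print("K chosen by GCV:", best_K)        # ~7 for this synthetic series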

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weinstein, R.E.; Goldstein, H.N.; White, J.S.

    It is often more economical to keep existing generation capacity in operation than to build new capacity. Repowering is considered at a number of sites because of the need for added capacity, the poor condition of plant equipment (particularly the boiler), the need for improved environmental performance, the need for a shorter licensing period, and other reasons. This paper describes the results of a US Department of Energy (DOE) conceptual design evaluation of an early commercial repowering application of advanced circulating pressurized fluidized bed combustion combined cycle technology (APFBC). The paper provides a review of the DOE study and summarizes the preliminary results. This all-coal technology has a projected energy efficiency in the 42 to 46% HHV (43 to 48% LHV) range and environmental emissions superior to New Source Performance Standards (NSPS). A DOE-sponsored demonstration program will pioneer the first commercial APFBC demonstration in year 2001. That 170 MWe APFBC CCT demonstration will use all new equipment, and become the City of Lakeland's C.D. McIntosh, Jr. steam plant Unit 4. This paper's concept evaluation is for a larger implementation: a modern large-frame combustion turbine is used to produce a 300+ MWe class APFBC. At this size, APFBC has wide application for repowering many existing units in America. Here, APFBC would repower an existing generating station, the Carolina Power and Light Company's (CP&L) L.V. Sutton steam station. Repowering concepts are presented for APFBC repowering of Unit 2 (252 MWe) and of both Units 1 and 2 in combination (360 MWe total).

  17. MicroRNA profiling identifies miR-7-5p and miR-26b-5p as differentially expressed in hypertensive patients with left ventricular hypertrophy

    PubMed Central

    Kaneto, C.M.; Nascimento, J.S.; Moreira, M.C.R.; Ludovico, N.D.; Santana, A.P.; Silva, R.A.A.; Silva-Jardim, I.; Santos, J.L.; Sousa, S.M.B.; Lima, P.S.P.

    2017-01-01

    Recent evidence suggests that cell-derived circulating miRNAs may serve as biomarkers of cardiovascular diseases. However, few studies have investigated the potential of circulating miRNAs as biomarkers for left ventricular hypertrophy (LVH). In this study, we aimed to characterize the miRNA profiles that could distinguish hypertensive patients with LVH, hypertensive patients without LVH and control subjects, and to identify potential miRNAs as biomarkers of LVH. LVH was defined by left ventricular mass indexed to body surface area >125 g/m2 in men and >110 g/m2 in women, and patients were classified as hypertensive when presenting a systolic blood pressure of 140 mmHg or more, or a diastolic blood pressure of 90 mmHg or more. We employed a miRNA PCR array to screen the serum miRNA profiles of patients with LVH, patients with essential hypertension and healthy subjects. We identified 75 differentially expressed miRNAs, including 49 upregulated and 26 downregulated miRNAs, between LVH patients and controls. We chose 2 miRNAs with significant differences for further testing in 59 patients. RT-PCR analysis of serum samples confirmed that miR-7-5p and miR-26b-5p were upregulated in the serum of hypertensive patients with LVH compared with healthy subjects. Our findings suggest that these miRNAs may play a role in the pathogenesis of hypertensive LVH and may represent novel biomarkers for this disease. PMID:29069223

  18. Furniture wood wastes: Experimental property characterisation and burning tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tatano, Fabio; Barbadoro, Luca; Mangani, Giovanna

    2009-10-15

    Referring to the industrial wood waste category (dominant in the provincial district of Pesaro-Urbino, Marche Region, Italy), this paper deals with the experimental characterisation and the carrying out of non-controlled burning tests (at lab- and pilot-scale) for selected 'raw' and primarily 'engineered' ('composite') wood wastes. The property characterisation primarily revealed the following aspects: a potential influence of local weather conditions on moisture content at outdoor wood waste storage sites; generally higher ash contents in 'engineered' wood wastes as compared with 'raw' wood wastes; and relatively high energy content values of 'engineered' wood wastes (ranging on the whole from 3675 to 5105 kcal kg⁻¹ for HHV, and from 3304 to 4634 kcal kg⁻¹ for LHV). The smoke qualitative analysis of non-controlled lab-scale burning tests primarily revealed: the presence of specific organic compounds indicative of incomplete wood combustion; the presence, exclusively in 'engineered' wood burning tests, of pyrroles and amines, as well as the additional presence (as compared with 'raw' wood burning) of further phenolic and nitrogen-containing compounds; and the potential environmental impact of incomplete industrial wood burning on the photochemical smog phenomenon. Finally, non-controlled pilot-scale burning tests primarily gave the following findings: emission of carbon monoxide indicative of incomplete wood combustion; higher nitrogen oxide emission values in 'engineered' wood burning tests as compared with the 'raw' wood burning test; and considerable generation of the respirable PM₁ fraction during incomplete industrial wood burning.

  19. Pyrolysis and gasification of landfilled plastic wastes with Ni-Mg-La/Al2O3 catalyst.

    PubMed

    Kaewpengkrow, Prangtip; Atong, Duangduen; Sricharoenchaikul, Viboon

    2012-12-01

    Pyrolysis and gasification processes were utilized to study the feasibility of producing fuels from landfilled plastic wastes. These wastes were converted in a gasifier at 700-900 °C. The equivalence ratio (ER) was varied from 0.4 to 0.6, with or without addition of a Ni-Mg-La/Al2O3 catalyst. The pyrolysis and gasification of plastic wastes without catalyst resulted in relatively low H2, CO and other fuel gas products, with methane as the major gaseous species. The highest lower heating value (LHV) was obtained at 800 °C and an ER of 0.4, while the maximum cold gas efficiency occurred at 700 °C and an ER of 0.4. The presence of the Ni-Mg-La/Al2O3 catalyst significantly enhanced H2 and CO production and increased the gas energy content to 15.76-19.26 MJ/m3, which is suitable for further use as quality fuel gas. A higher temperature resulted in greater H2, CO and other product gas yields, while char and liquid (tars) decreased. The maximum gas yield, gas calorific value and cold gas efficiency were achieved when the Ni-Mg-La/Al2O3 catalyst was used at 900 °C. In general, addition of the prepared catalyst resulted in greater H2, CO and other light hydrocarbon yields from superior conversion of wastes to these gases. Thus, thermochemical treatment of these problematic wastes using pyrolysis and gasification processes is a very attractive alternative for sustainable waste management.
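
    As a rough cross-check on such gas-quality figures, the LHV of a fuel gas can be estimated as the volume-fraction-weighted sum of component heating values. The component LHVs below are standard volumetric textbook values and the composition is hypothetical, not taken from the paper:

```python
# Volumetric lower heating values (MJ/Nm^3), standard textbook values.
LHV = {"H2": 10.8, "CO": 12.6, "CH4": 35.8, "C2H4": 59.0}

def gas_lhv(vol_frac):
    """LHV of a fuel-gas mixture as the volume-fraction-weighted sum."""
    return sum(frac * LHV[gas] for gas, frac in vol_frac.items())

# hypothetical catalytic-run composition (the balance being inert N2/CO2)
syngas = {"H2": 0.30, "CO": 0.25, "CH4": 0.15, "C2H4": 0.05}
print(f"estimated LHV: {gas_lhv(syngas):.1f} MJ/m3")   # ~14.7 MJ/m3
```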

  20. Potential application of gasification to recycle food waste and rehabilitate acidic soil from secondary forests on degraded land in Southeast Asia.

    PubMed

    Yang, Zhanyu; Koh, Shun Kai; Ng, Wei Cheng; Lim, Reuben C J; Tan, Hugh T W; Tong, Yen Wah; Dai, Yanjun; Chong, Clive; Wang, Chi-Hwa

    2016-05-01

    Gasification is recognized as a green technology as it can harness energy from biomass in the form of syngas without causing severe environmental impacts, while producing valuable solid residues that can be utilized in other applications. In this study, the feasibility of co-gasification of woody biomass and food waste in different proportions was investigated using a fixed-bed downdraft gasifier. Subsequently, the capability of biochar derived from gasification of woody biomass to rehabilitate soil from tropical secondary forests on degraded land (adinandra belukar) was also explored through a water spinach cultivation study using soil-biochar mixtures of different ratios. Gasification of a 60:40 wood waste-food waste mixture (w/w) produced syngas with the highest lower heating value (LHV), 5.29 MJ/m3, approximately 0.4-4.0% higher than gasification of 70:30 or 80:20 mixtures or pure wood waste. Meanwhile, water spinach cultivated in a 2:1 soil-biochar mixture exhibited the best growth performance in terms of height (a 4-fold increment), weight (a 10-fold increment) and leaf surface area (a 5-fold increment) after 8 weeks of cultivation, owing to the high porosity, surface area, nutrient content and alkalinity of the biochar. It is concluded that gasification may be an alternative technology for food waste disposal through co-gasification with woody biomass, and that gasification-derived biochar is suitable for use as an amendment for the nutrient-poor, acidic soil of adinandra belukar. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. An outline of graphical Markov models in dentistry.

    PubMed

    Helfenstein, U; Steiner, M; Menghini, G

    1999-12-01

    In the usual multiple regression model there is one response variable and one block of several explanatory variables. In contrast, in reality there may be a block of several possibly interacting response variables one would like to explain. In addition, the explanatory variables may split into a sequence of several blocks, each block containing several interacting variables. The variables in the second block are explained by those in the first block; the variables in the third block by those in the first and the second block etc. During recent years methods have been developed allowing analysis of problems where the data set has the above complex structure. The models involved are called graphical models or graphical Markov models. The main result of an analysis is a picture, a conditional independence graph with precise statistical meaning, consisting of circles representing variables and lines or arrows representing significant conditional associations. The absence of a line between two circles signifies that the corresponding two variables are independent conditional on the presence of other variables in the model. An example from epidemiology is presented in order to demonstrate application and use of the models. The data set in the example has a complex structure consisting of successive blocks: the variable in the first block is year of investigation; the variables in the second block are age and gender; the variables in the third block are indices of calculus, gingivitis and mutans streptococci and the final response variables in the fourth block are different indices of caries. Since the statistical methods may not be easily accessible to dentists, this article presents them in an introductory form. Graphical models may be of great value to dentists in allowing analysis and visualisation of complex structured multivariate data sets consisting of a sequence of blocks of interacting variables and, in particular, several possibly interacting responses in the final block.

  2. Agricultural disturbance response models for invertebrate and algal metrics from streams at two spatial scales within the U.S.

    USGS Publications Warehouse

    Waite, Ian R.

    2014-01-01

    As part of the USGS study of nutrient enrichment of streams in agricultural regions throughout the United States, about 30 sites within each of eight study areas were selected to capture a gradient of nutrient conditions. The objective was to develop watershed disturbance predictive models for macroinvertebrate and algal metrics at the national and three regional landscape scales to obtain a better understanding of important explanatory variables. Explanatory variables in the models were generated from landscape, habitat, and chemistry data. Instream nutrient concentration and variables assessing the amount of disturbance to the riparian zone (e.g., percent row crops or percent agriculture) were selected as the most important explanatory variables in almost all boosted regression tree models, regardless of landscape scale or assemblage. TN and TP concentrations and riparian agricultural land-use variables frequently showed a threshold-type response to the biotic metrics modeled at relatively low values. Some measure of habitat condition was also commonly selected in the final invertebrate models, though the variable(s) varied across regions. Results suggest national models tended to account for more general landscape/climate differences, while regional models incorporated both broad landscape-scale and more specific local-scale variables.
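
    A minimal sketch of the boosted-regression-tree setup on synthetic data; the predictor names and the threshold-shaped response are stand-ins for the study's variables, and sklearn's GradientBoostingRegressor is one common implementation of this model class:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

# synthetic stand-in for site data: hypothetical predictor columns
rng = np.random.default_rng(0)
n = 240
X = rng.uniform(size=(n, 3))                       # [TN, pct_row_crops, habitat_index]
# threshold-type response at low nutrient values, as described above
y = np.where(X[:, 0] > 0.2, 1.0, 5.0 * X[:, 0]) + 0.5 * X[:, 1] \
    + 0.1 * rng.standard_normal(n)

model = GradientBoostingRegressor(n_estimators=300, max_depth=2, learning_rate=0.05)
model.fit(X, y)

imp = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, m in zip(["TN", "pct_row_crops", "habitat_index"], imp.importances_mean):
    print(f"{name}: {m:.3f}")                      # TN should dominate
```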

  3. Bayesian Semiparametric Structural Equation Models with Latent Variables

    ERIC Educational Resources Information Center

    Yang, Mingan; Dunson, David B.

    2010-01-01

    Structural equation models (SEMs) with latent variables are widely useful for sparse covariance structure modeling and for inferring relationships among latent variables. Bayesian SEMs are appealing in allowing for the incorporation of prior information and in providing exact posterior distributions of unknowns, including the latent variables. In…

  4. Measurement Model Specification Error in LISREL Structural Equation Models.

    ERIC Educational Resources Information Center

    Baldwin, Beatrice; Lomax, Richard

    This LISREL study examines the robustness of the maximum likelihood estimates under varying degrees of measurement model misspecification. A true model containing five latent variables (two endogenous and three exogenous) and two indicator variables per latent variable was used. Measurement model misspecification considered included errors of…

  5. Modelling the co-evolution of indirect genetic effects and inherited variability.

    PubMed

    Marjanovic, Jovana; Mulder, Han A; Rönnegård, Lars; Bijma, Piter

    2018-03-28

    When individuals interact, their phenotypes may be affected not only by their own genes but also by genes in their social partners. This phenomenon is known as Indirect Genetic Effects (IGEs). In aquaculture species and some plants, however, competition not only affects trait levels of individuals, but also inflates variability of trait values among individuals. In the field of quantitative genetics, the variability of trait values has been studied as a quantitative trait in itself, and is often referred to as inherited variability. Such studies, however, consider only the genetic effect of the focal individual on trait variability and do not make a connection to competition. Although the observed phenotypic relationship between competition and variability suggests an underlying genetic relationship, the current quantitative genetic models of IGE and inherited variability do not allow for such a relationship. The lack of quantitative genetic models that connect IGEs to inherited variability limits our understanding of the potential of variability to respond to selection, both in nature and agriculture. Models of trait levels, for example, show that IGEs may considerably change heritable variation in trait values. Currently, we lack the tools to investigate whether this result extends to variability of trait values. Here we present a model that integrates IGEs and inherited variability. In this model, the target phenotype, say growth rate, is a function of the genetic and environmental effects of the focal individual and of the difference in trait value between the social partner and the focal individual, multiplied by a regression coefficient. The regression coefficient is a genetic trait, which is a measure of cooperation; a negative value indicates competition, a positive value cooperation, and an increasing value due to selection indicates the evolution of cooperation. In contrast to the existing quantitative genetic models, our model allows for co-evolution of IGEs and variability, as the regression coefficient can respond to selection. Our simulations show that the model results in increased variability of body weight with increasing competition. When competition decreases, i.e., cooperation evolves, variability becomes significantly smaller. Hence, our model facilitates quantitative genetic studies on the relationship between IGEs and inherited variability. Moreover, our findings suggest that we may have been overlooking an entire level of genetic variation in variability, the one due to IGEs.
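
    The stated model can be sketched for a pair of interacting individuals: writing z_i = a_i + e_i + psi_i (z_j - z_i) for both members gives a 2x2 linear system per pair, and making psi more negative (stronger competition) inflates the phenotypic spread, as the abstract describes. All variances and the psi values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def pair_phenotypes(a, e, psi):
    """Solve z1 = a1+e1+psi1*(z2-z1) and z2 = a2+e2+psi2*(z1-z2)."""
    M = np.array([[1 + psi[0], -psi[0]],
                  [-psi[1], 1 + psi[1]]])
    return np.linalg.solve(M, a + e)

def phenotypic_sd(mean_psi, n_pairs=5000):
    z = []
    for _ in range(n_pairs):
        a = rng.normal(0.0, 1.0, 2)              # direct genetic effects
        e = rng.normal(0.0, 1.0, 2)              # environmental effects
        psi = rng.normal(mean_psi, 0.05, 2)      # heritable interaction coefficient
        z.extend(pair_phenotypes(a, e, psi))
    return np.std(z)

for mean_psi in (-0.4, 0.0, 0.4):                # competition .. cooperation
    print(f"mean psi = {mean_psi:+.1f}: phenotypic SD = {phenotypic_sd(mean_psi):.2f}")
```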

  6. The Effects of Model Misspecification and Sample Size on LISREL Maximum Likelihood Estimates.

    ERIC Educational Resources Information Center

    Baldwin, Beatrice

    The robustness of LISREL computer program maximum likelihood estimates under specific conditions of model misspecification and sample size was examined. The population model used in this study contains one exogenous variable; three endogenous variables; and eight indicator variables, two for each latent variable. Conditions of model…

  7. Relevance of anisotropy and spatial variability of gas diffusivity for soil-gas transport

    NASA Astrophysics Data System (ADS)

    Schack-Kirchner, Helmer; Kühne, Anke; Lang, Friederike

    2017-04-01

    Models of soil gas transport generally consider neither the direction dependence of gas diffusivity nor its small-scale variability. However, in a recent study we provided evidence for anisotropy favouring vertical gas diffusion in natural soils. We hypothesize that gas transport models based on gas diffusion data measured with soil rings are strongly influenced by both anisotropy and spatial variability, and that the use of averaged diffusivities could be misleading. To test this, we used a 2-dimensional model of soil gas transport under compacted wheel tracks to model the soil-air oxygen distribution. The model was parametrized with data obtained from soil-ring measurements, using their central tendency and variability, and includes vertical parameter variability as well as variation perpendicular to the elongated wheel track. Three parametrization types were tested: (i) averaged values for wheel track and undisturbed soil; (ii) randomly distributed soil cells with normally distributed variability within the strata; and (iii) randomly distributed soil cells with uniformly distributed variability within the strata. Each was tested for (a) isotropic gas diffusivity and (b) horizontally reduced gas diffusivity (by a constant factor), yielding six models in total. As expected, the different parametrizations had an important influence on the aeration state under wheel tracks, with the strongest oxygen depletion in the case of uniformly distributed variability and anisotropy towards higher vertical diffusivity. This simple simulation approach clearly showed the relevance of anisotropy and spatial variability given identical central-tendency measures of gas diffusivity. It does not yet consider the spatial dependency of variability, which could aggravate the effects further. To account for anisotropy and spatial variability in gas transport models, we recommend (a) measuring soil-gas transport parameters spatially explicitly, including different directions, and (b) using random-field stochastic models to assess the possible effects on gas-exchange models.
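
    A crude 2-D finite-difference sketch of this kind of oxygen model; the grid size, diffusivities, anisotropy factor and respiration sink are all invented for illustration, not taken from the study:

```python
import numpy as np

nz, nx = 30, 60
D = np.ones((nz, nx))                 # relative vertical diffusivity
D[:10, 20:40] = 0.2                   # compacted wheel-track zone near the surface
aniso = 0.5                           # horizontal diffusivity = aniso * vertical

c = np.ones((nz, nx))                 # relative O2 concentration
sink = 2e-4                           # uniform respiration sink per iteration
for _ in range(20000):                # relax towards the steady state
    lap_z = D * (np.roll(c, 1, 0) + np.roll(c, -1, 0) - 2 * c)
    lap_x = aniso * D * (np.roll(c, 1, 1) + np.roll(c, -1, 1) - 2 * c)
    c += 0.2 * (lap_z + lap_x) - sink
    c[0, :] = 1.0                     # atmosphere fixed at the surface
    c[-1, :] = c[-2, :]               # no-flux bottom boundary
    c[:, 0], c[:, -1] = c[:, 1], c[:, -2]   # no-flux side boundaries
    np.clip(c, 0.0, 1.0, out=c)
print("minimum relative O2 beneath the track centre:", round(float(c[:, 30].min()), 2))
```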

  8. A Framework for Multifaceted Evaluation of Student Models

    ERIC Educational Resources Information Center

    Huang, Yun; González-Brenes, José P.; Kumar, Rohit; Brusilovsky, Peter

    2015-01-01

    Latent variable models, such as the popular Knowledge Tracing method, are often used to enable adaptive tutoring systems to personalize education. However, finding optimal model parameters is usually a difficult non-convex optimization problem when considering latent variable models. Prior work has reported that latent variable models obtained…

  9. [Variable selection methods combined with local linear embedding theory used for optimization of near infrared spectral quantitative models].

    PubMed

    Hao, Yong; Sun, Xu-Dong; Yang, Qiang

    2012-12-01

    A variable selection strategy combined with local linear embedding (LLE) was introduced for the analysis of complex samples by near infrared spectroscopy (NIRS). Three methods, Monte Carlo uninformative variable elimination (MCUVE), the successive projections algorithm (SPA), and MCUVE combined with SPA, were used to eliminate redundant spectral variables. Partial least squares regression (PLSR) and LLE-PLSR were used to model the complex samples. The results showed that MCUVE can both extract effective informative variables and improve the precision of models. Compared with PLSR models, LLE-PLSR models achieve more accurate analysis results. MCUVE combined with LLE-PLSR is an effective modeling method for NIRS quantitative analysis.

  10. A Polychoric Instrumental Variable (PIV) Estimator for Structural Equation Models with Categorical Variables

    ERIC Educational Resources Information Center

    Bollen, Kenneth A.; Maydeu-Olivares, Albert

    2007-01-01

    This paper presents a new polychoric instrumental variable (PIV) estimator to use in structural equation models (SEMs) with categorical observed variables. The PIV estimator is a generalization of Bollen's (Psychometrika 61:109-121, 1996) 2SLS/IV estimator for continuous variables to categorical endogenous variables. We derive the PIV estimator…

  11. Multi-Wheat-Model Ensemble Responses to Interannual Climate Variability

    NASA Technical Reports Server (NTRS)

    Ruane, Alex C.; Hudson, Nicholas I.; Asseng, Senthold; Camarrano, Davide; Ewert, Frank; Martre, Pierre; Boote, Kenneth J.; Thorburn, Peter J.; Aggarwal, Pramod K.; Angulo, Carlos

    2016-01-01

    We compare 27 wheat models' yield responses to interannual climate variability, analyzed at locations in Argentina, Australia, India, and The Netherlands as part of the Agricultural Model Intercomparison and Improvement Project (AgMIP) Wheat Pilot. Each model simulated 1981-2010 grain yield, and we evaluate results against the interannual variability of growing season temperature, precipitation, and solar radiation. The amount of information used for calibration has only a minor effect on most models' climate response, and even small multi-model ensembles prove beneficial. Wheat model clusters reveal common characteristics of yield response to climate; however, models rarely share the same cluster at all four sites, indicating substantial independence. Only a weak relationship (R² = 0.24) was found between the models' sensitivities to interannual temperature variability and their response to long-term warming, suggesting that additional processes differentiate climate change impacts from observed climate variability analogs and motivating continuing analysis and model development efforts.

  12. Computational Fluid Dynamics Modeling of a Supersonic Nozzle and Integration into a Variable Cycle Engine Model

    NASA Technical Reports Server (NTRS)

    Connolly, Joseph W.; Friedlander, David; Kopasakis, George

    2015-01-01

    This paper covers the development of an integrated nonlinear dynamic simulation for a variable cycle turbofan engine and nozzle that can be integrated with an overall vehicle Aero-Propulso-Servo-Elastic (APSE) model. A previously developed variable cycle turbofan engine model is used for this study and is enhanced here to include variable guide vanes allowing for operation across the supersonic flight regime. The primary focus of this study is to improve the fidelity of the model's thrust response by replacing the simple choked flow equation convergent-divergent nozzle model with a MacCormack method based quasi-1D model. The dynamic response of the nozzle model using the MacCormack method is verified by comparing it against a model of the nozzle using the conservation element/solution element method. A methodology is also presented for the integration of the MacCormack nozzle model with the variable cycle engine.

  13. Computational Fluid Dynamics Modeling of a Supersonic Nozzle and Integration into a Variable Cycle Engine Model

    NASA Technical Reports Server (NTRS)

    Connolly, Joseph W.; Friedlander, David; Kopasakis, George

    2014-01-01

    This paper covers the development of an integrated nonlinear dynamic simulation for a variable cycle turbofan engine and nozzle that can be integrated with an overall vehicle Aero-Propulso-Servo-Elastic (APSE) model. A previously developed variable cycle turbofan engine model is used for this study and is enhanced here to include variable guide vanes allowing for operation across the supersonic flight regime. The primary focus of this study is to improve the fidelity of the model's thrust response by replacing the simple choked flow equation convergent-divergent nozzle model with a MacCormack method based quasi-1D model. The dynamic response of the nozzle model using the MacCormack method is verified by comparing it against a model of the nozzle using the conservation element/solution element method. A methodology is also presented for the integration of the MacCormack nozzle model with the variable cycle engine.
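
    The MacCormack scheme referenced in both records alternates a forward-difference predictor with a backward-difference corrector. Below is a generic sketch of that predictor-corrector skeleton, demonstrated on Burgers' equation as a scalar stand-in for the quasi-1D nozzle flux vector (which would additionally carry area-variation source terms); step sizes are illustrative:

```python
import numpy as np

def maccormack_step(u, dx, dt, flux):
    """One MacCormack predictor-corrector step for u_t + f(u)_x = 0."""
    f = flux(u)
    up = u.copy()
    up[:-1] = u[:-1] - dt / dx * (f[1:] - f[:-1])        # predictor: forward difference
    fp = flux(up)
    uc = u.copy()
    uc[1:] = 0.5 * (u[1:] + up[1:]                        # corrector: backward
                    - dt / dx * (fp[1:] - fp[:-1]))       # difference, then average
    return uc

# demo on Burgers' equation, f(u) = u^2/2
nx = 200
dx, dt = 1.0 / nx, 0.002                                  # CFL ~ 0.6 for this setup
x = np.linspace(0.0, 1.0, nx)
u = 1.0 + 0.5 * np.sin(2 * np.pi * x)
for _ in range(100):
    u = maccormack_step(u, dx, dt, lambda v: 0.5 * v * v)
print(f"mean u after 100 steps: {u.mean():.4f}")
```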

  14. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.

    PubMed

    Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating missing values followed by variable selection to forecast a reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015; the two datasets were concatenated by date into an integrated research dataset. The proposed time-series forecasting model has three main components. First, five imputation methods are applied to handle missing values, rather than directly deleting them. Second, key variables are identified via factor analysis, and unimportant variables are then deleted sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with the listing method in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model, when applied with variable selection to the full set of variables, has better forecasting performance than the listing model. In addition, the experiment shows that the proposed variable selection can help the five forecasting methods used here improve their forecasting capability.
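
    A compact sketch of the pipeline on synthetic daily data: impute, then fit a Random Forest evaluated with time-series cross-validation. The variable names and the data-generating rule are invented; only the workflow mirrors the record above:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

# hypothetical daily dataframe: water level plus atmospheric covariates
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "rain": rng.gamma(2.0, 2.0, n),
    "temp": 20 + 5 * np.sin(np.arange(n) * 2 * np.pi / 365),
})
df["level"] = 50 + 0.8 * df["rain"].rolling(7, min_periods=1).mean() - 0.1 * df["temp"]
df.loc[rng.choice(n, 25, replace=False), "rain"] = np.nan   # inject missing values

X = SimpleImputer(strategy="median").fit_transform(df[["rain", "temp"]])
y = df["level"].to_numpy()

rf = RandomForestRegressor(n_estimators=300, random_state=0)
scores = cross_val_score(rf, X, y, cv=TimeSeriesSplit(5),
                         scoring="neg_mean_absolute_error")
print("MAE per fold:", (-scores).round(2))
```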

  15. Unitary Response Regression Models

    ERIC Educational Resources Information Center

    Lipovetsky, S.

    2007-01-01

    The dependent variable in a regular linear regression is a numerical variable, and in a logistic regression it is a binary or categorical variable. In these models the dependent variable has varying values. However, there are problems yielding an identity output of a constant value which can also be modelled in a linear or logistic regression with…

  16. The Integration of Continuous and Discrete Latent Variable Models: Potential Problems and Promising Opportunities

    ERIC Educational Resources Information Center

    Bauer, Daniel J.; Curran, Patrick J.

    2004-01-01

    Structural equation mixture modeling (SEMM) integrates continuous and discrete latent variable models. Drawing on prior research on the relationships between continuous and discrete latent variable models, the authors identify 3 conditions that may lead to the estimation of spurious latent classes in SEMM: misspecification of the structural model,…

  17. Human-arm-and-hand-dynamic model with variability analyses for a stylus-based haptic interface.

    PubMed

    Fu, Michael J; Cavuşoğlu, M Cenk

    2012-12-01

    Haptic interface research benefits from accurate human arm models for control and system design. The literature contains many human arm dynamic models but lacks detailed variability analyses. Without accurate measurements, variability is modeled in a very conservative manner, leading to less than optimal controller and system designs. This paper not only presents models for human arm dynamics but also develops inter- and intrasubject variability models for a stylus-based haptic device. Data from 15 human subjects (nine male, six female, ages 20-32) were collected using a Phantom Premium 1.5a haptic device for system identification. In this paper, grip-force-dependent models were identified for 1-3-N grip forces in the three spatial axes. Also, variability due to human subjects and grip-force variation were modeled as both structured and unstructured uncertainties. For both forms of variability, the maximum variation, 95 %, and 67 % confidence interval limits were examined. All models were in the frequency domain with force as input and position as output. The identified models enable precise controllers targeted to a subset of possible human operator dynamics.

  18. Are revised models better models? A skill score assessment of regional interannual variability

    NASA Astrophysics Data System (ADS)

    Sperber, Kenneth R.; Participating AMIP Modelling Groups

    1999-05-01

    Various skill scores are used to assess the performance of revised models relative to their original configurations. The interannual variability of all-India, Sahel and Nordeste rainfall and summer monsoon windshear is examined in integrations performed under the experimental design of the Atmospheric Model Intercomparison Project. For the indices considered, the revised models exhibit greater fidelity at simulating the observed interannual variability. Interannual variability of all-India rainfall is better simulated by models that have a more realistic rainfall climatology in the vicinity of India, indicating the beneficial effect of reducing systematic model error.

  19. Are revised models better models? A skill score assessment of regional interannual variability

    NASA Astrophysics Data System (ADS)

    Participating AMIP Modelling Groups,; Sperber, Kenneth R.

    Various skill scores are used to assess the performance of revised models relative to their original configurations. The interannual variability of all-India, Sahel and Nordeste rainfall and summer monsoon windshear is examined in integrations performed under the experimental design of the Atmospheric Model Intercomparison Project. For the indices considered, the revised models exhibit greater fidelity at simulating the observed interannual variability. Interannual variability of all-India rainfall is better simulated by models that have a more realistic rainfall climatology in the vicinity of India, indicating the beneficial effect of reducing systematic model error.

  20. Sources and Impacts of Modeled and Observed Low-Frequency Climate Variability

    NASA Astrophysics Data System (ADS)

    Parsons, Luke Alexander

    Here we analyze climate variability using instrumental, paleoclimate (proxy), and the latest climate model data to understand more about the sources and impacts of low-frequency climate variability. Understanding the drivers of climate variability at interannual to century timescales is important for studies of climate change, including analyses of detection and attribution of climate change impacts. Additionally, correctly modeling the sources and impacts of variability is key to the simulation of abrupt change (Alley et al., 2003) and extended drought (Seager et al., 2005; Pelletier and Turcotte, 1997; Ault et al., 2014). In Appendix A, we employ an Earth system model (GFDL-ESM2M) simulation to study the impacts of a weakening of the Atlantic meridional overturning circulation (AMOC) on the climate of the American Tropics. The AMOC drives some degree of local and global internal low-frequency climate variability (Manabe and Stouffer, 1995; Thornalley et al., 2009) and helps control the position of the tropical rainfall belt (Zhang and Delworth, 2005). We find that a major weakening of the AMOC can cause large-scale temperature, precipitation, and carbon storage changes in Central and South America. Our results suggest that possible future changes in AMOC strength alone will not be sufficient to drive a large-scale dieback of the Amazonian forest, but this key natural ecosystem is sensitive to dry-season length and timing of rainfall (Parsons et al., 2014). In Appendix B, we compare a paleoclimate record of precipitation variability in the Peruvian Amazon to climate model precipitation variability. The paleoclimate (Lake Limon) record indicates that precipitation variability in western Amazonia is 'red' (i.e., increasing variability with timescale). By contrast, most state-of-the-art climate models indicate precipitation variability in this region is nearly 'white' (i.e., equal variability across timescales). This paleo-model disagreement in the overall structure of the variance spectrum has important consequences for the probability of multi-year drought. Our lake record suggests there is a significant background threat of multi-year, and even decade-length, drought in western Amazonia, whereas climate model simulations indicate most droughts likely last no longer than one to three years. These findings suggest climate models may underestimate the future risk of extended drought in this important region. In Appendix C, we expand our analysis of climate variability beyond South America. We use observations, well-constrained tropical paleoclimate, and Earth system model data to examine the overall shape of the climate spectrum across interannual to century frequencies. We find a general agreement among observations and models that temperature variability increases with timescale across most of the globe outside the tropics. However, as compared to paleoclimate records, climate models generate too little low-frequency variability in the tropics (e.g., Laepple and Huybers, 2014). When we compare the shape of the simulated climate spectrum to the spectrum of a simple autoregressive process, we find much of the modeled surface temperature variability in the tropics could be explained by ocean smoothing of weather noise. Importantly, modeled precipitation tends to be similar to white noise across much of the globe.
By contrast, paleoclimate records of various types from around the globe indicate that both temperature and precipitation variability should experience much more low-frequency variability than a simple autoregressive or white-noise process. In summary, state-of-the-art climate models generate some degree of dynamically driven low-frequency climate variability, especially at high latitudes. However, the latest climate models, observations, and paleoclimate data provide us with drastically different pictures of the background climate system and its associated risks. This research has important consequences for improving how we simulate climate extremes as we enter a warmer (and often drier) world in the coming centuries; if climate models underestimate low-frequency variability, we will underestimate the risk of future abrupt change and extreme events, such as megadroughts.
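
    The 'red' versus 'white' distinction used throughout this record can be made concrete with a toy spectrum comparison: an AR(1) series piles variance into low frequencies, while white noise spreads it evenly. The persistence value phi = 0.9 is arbitrary and welch's defaults are used:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
n, phi = 4096, 0.9
white = rng.standard_normal(n)
red = np.zeros(n)
for t in range(1, n):                      # AR(1), scaled to unit variance
    red[t] = phi * red[t - 1] + white[t] * np.sqrt(1 - phi**2)

for name, series in (("white", white), ("AR(1)", red)):
    f, p = welch(series, nperseg=512)
    print(f"{name}: low-freq power {p[1:9].mean():.2f} "
          f"vs high-freq power {p[-8:].mean():.2f}")
```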

  1. Random parameter models for accident prediction on two-lane undivided highways in India.

    PubMed

    Dinu, R R; Veeraragavan, A

    2011-02-01

    Generalized linear modeling (GLM), with the assumption of a Poisson or negative binomial error structure, has been widely employed in road accident modeling. A number of explanatory variables related to traffic, road geometry, and environment that contribute to accident occurrence have been identified, and accident prediction models have been proposed. The accident prediction models reported in the literature largely employ the fixed-parameter modeling approach, where the magnitude of influence of an explanatory variable is considered fixed for any observation in the population. Similar models have been proposed for Indian highways too, which include additional variables representing traffic composition. The mixed traffic on Indian highways exhibits substantial internal variability, ranging from differences in vehicle types to variability in driver behavior. This could result in variability in the effect of explanatory variables on accidents across locations. Random parameter models, which can capture some of this variability, are expected to be more appropriate for the Indian situation. The present study is an attempt to employ random parameter modeling for accident prediction on two-lane undivided rural highways in India. Three years of accident history, from nearly 200 km of highway segments, are used to calibrate and validate the models. The results of the analysis suggest that the model coefficients for traffic volume, the proportions of cars, motorized two-wheelers and trucks in traffic, driveway density, and horizontal and vertical curvature are randomly distributed across locations. The paper concludes with a discussion of the modeling results and the limitations of the present study. Copyright © 2010 Elsevier Ltd. All rights reserved.

  2. Accounting for stimulus-specific variation in precision reveals a discrete capacity limit in visual working memory

    PubMed Central

    Pratte, Michael S.; Park, Young Eun; Rademaker, Rosanne L.; Tong, Frank

    2016-01-01

    If we view a visual scene that contains many objects, then momentarily close our eyes, some details persist while others seem to fade. Discrete models of visual working memory (VWM) assume that only a few items can be actively maintained in memory, beyond which pure guessing will emerge. Alternatively, continuous resource models assume that all items in a visual scene can be stored with some precision. Distinguishing between these competing models is challenging, however, as resource models that allow for stochastically variable precision (across items and trials) can produce error distributions that resemble random guessing behavior. Here, we evaluated the hypothesis that a major source of variability in VWM performance arises from systematic variation in precision across the stimuli themselves; such stimulus-specific variability can be incorporated into both discrete-capacity and variable-precision resource models. Participants viewed multiple oriented gratings, and then reported the orientation of a cued grating from memory. When modeling the overall distribution of VWM errors, we found that the variable-precision resource model outperformed the discrete model. However, VWM errors revealed a pronounced “oblique effect”, with larger errors for oblique than cardinal orientations. After this source of variability was incorporated into both models, we found that the discrete model provided a better account of VWM errors. Our results demonstrate that variable precision across the stimulus space can lead to an unwarranted advantage for resource models that assume stochastically variable precision. When these deterministic sources are adequately modeled, human working memory performance reveals evidence of a discrete capacity limit. PMID:28004957

  3. Accounting for stimulus-specific variation in precision reveals a discrete capacity limit in visual working memory.

    PubMed

    Pratte, Michael S; Park, Young Eun; Rademaker, Rosanne L; Tong, Frank

    2017-01-01

    If we view a visual scene that contains many objects, then momentarily close our eyes, some details persist while others seem to fade. Discrete models of visual working memory (VWM) assume that only a few items can be actively maintained in memory, beyond which pure guessing will emerge. Alternatively, continuous resource models assume that all items in a visual scene can be stored with some precision. Distinguishing between these competing models is challenging, however, as resource models that allow for stochastically variable precision (across items and trials) can produce error distributions that resemble random guessing behavior. Here, we evaluated the hypothesis that a major source of variability in VWM performance arises from systematic variation in precision across the stimuli themselves; such stimulus-specific variability can be incorporated into both discrete-capacity and variable-precision resource models. Participants viewed multiple oriented gratings, and then reported the orientation of a cued grating from memory. When modeling the overall distribution of VWM errors, we found that the variable-precision resource model outperformed the discrete model. However, VWM errors revealed a pronounced "oblique effect," with larger errors for oblique than cardinal orientations. After this source of variability was incorporated into both models, we found that the discrete model provided a better account of VWM errors. Our results demonstrate that variable precision across the stimulus space can lead to an unwarranted advantage for resource models that assume stochastically variable precision. When these deterministic sources are adequately modeled, human working memory performance reveals evidence of a discrete capacity limit. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  4. Verification of models for ballistic movement time and endpoint variability.

    PubMed

    Lin, Ray F; Drury, Colin G

    2013-01-01

    A hand control movement is composed of several ballistic movements. The time required to perform a ballistic movement and its endpoint variability are two important properties in developing movement models. The purpose of this study was to test potential models for predicting these two properties. Twelve participants conducted ballistic movements of specific amplitudes using a drawing tablet, and the measured movement time and endpoint variability data were then used to verify the models. Hoffmann and Gan's movement time model (Hoffmann, 1981; Gan and Hoffmann, 1988) was successful, predicting more than 90.7% of data variance for 84 individual measurements. A new, theoretically developed ballistic movement variability model proved better than Howarth, Beggs, and Bowden's (1971) model, predicting on average 84.8% of stopping-variable error and 88.3% of aiming-variable error. These two validated models will help build solid theoretical movement models and evaluate input devices. This article provides better models for predicting end accuracy and movement time of ballistic movements, which are desirable in rapid aiming tasks such as keying in numbers on a smart phone. The models allow better design of aiming tasks, for example button sizes on mobile phones for different user populations.
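
    A small fitting sketch, assuming the ballistic movement time takes the square-root-of-amplitude form commonly associated with Gan and Hoffmann (MT = a + b*sqrt(A)); the amplitude-time pairs below are invented solely to show the fitting procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

def ballistic_mt(A, a, b):
    """Assumed ballistic movement-time law: MT = a + b * sqrt(A)."""
    return a + b * np.sqrt(A)

A = np.array([20.0, 40.0, 80.0, 160.0, 320.0])         # amplitudes, mm (hypothetical)
mt = np.array([150.0, 185.0, 230.0, 290.0, 360.0])      # movement times, ms (hypothetical)
(a, b), _ = curve_fit(ballistic_mt, A, mt)
pred = ballistic_mt(A, a, b)
r2 = 1 - np.sum((mt - pred) ** 2) / np.sum((mt - mt.mean()) ** 2)
print(f"a = {a:.1f} ms, b = {b:.1f}, R^2 = {r2:.3f}")
```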

  5. A Bayesian Semiparametric Latent Variable Model for Mixed Responses

    ERIC Educational Resources Information Center

    Fahrmeir, Ludwig; Raach, Alexander

    2007-01-01

    In this paper we introduce a latent variable model (LVM) for mixed ordinal and continuous responses, where covariate effects on the continuous latent variables are modelled through a flexible semiparametric Gaussian regression model. We extend existing LVMs with the usual linear covariate effects by including nonparametric components for nonlinear…

  6. Bayesian Adaptive Lasso for Ordinal Regression with Latent Variables

    ERIC Educational Resources Information Center

    Feng, Xiang-Nan; Wu, Hao-Tian; Song, Xin-Yuan

    2017-01-01

    We consider an ordinal regression model with latent variables to investigate the effects of observable and latent explanatory variables on the ordinal responses of interest. Each latent variable is characterized by correlated observed variables through a confirmatory factor analysis model. We develop a Bayesian adaptive lasso procedure to conduct…

  7. The GISS global climate-middle atmosphere model. II - Model variability due to interactions between planetary waves, the mean circulation and gravity wave drag

    NASA Technical Reports Server (NTRS)

    Rind, D.; Suozzo, R.; Balachandran, N. K.

    1988-01-01

    The variability which arises in the GISS Global Climate-Middle Atmosphere Model on two time scales is reviewed: interannual standard deviations, derived from the five-year control run, and intraseasonal variability as exemplified by stratospheric warmings. The model's extratropical variability for both mean fields and eddy statistics appears reasonable when compared with observations, while the tropical wind variability near the stratopause may be excessive, possibly due to inertial oscillations. Both wave 1 and wave 2 warmings develop, with connections to tropospheric forcing. Variability on both time scales results from a complex set of interactions among planetary waves, the mean circulation, and gravity wave drag. Specific examples of these interactions are presented, which imply that variability in gravity wave forcing and drag may be an important component of the variability of the middle atmosphere.

  8. Research on Zheng Classification Fusing Pulse Parameters in Coronary Heart Disease

    PubMed Central

    Guo, Rui; Wang, Yi-Qin; Xu, Jin; Yan, Hai-Xia; Yan, Jian-Jun; Li, Fu-Feng; Xu, Zhao-Xia; Xu, Wen-Jie

    2013-01-01

    This study was conducted to illustrate that nonlinear dynamic variables of the Traditional Chinese Medicine (TCM) pulse can improve the performance of TCM Zheng classification models. Pulse recordings of 334 coronary heart disease (CHD) patients and 117 normal subjects were collected in this study. Recurrence quantification analysis (RQA) was employed to acquire nonlinear dynamic variables of the pulse. TCM Zheng models in CHD were constructed, and predictions using a novel multilabel learning algorithm were carried out on different datasets. The datasets were designed as follows: dataset1, TCM inquiry information including inspection information; dataset2, time-domain variables of the pulse plus dataset1; dataset3, RQA variables of the pulse plus dataset1; and dataset4, the major principal components of the RQA variables plus dataset1. The performances of the different models for Zheng differentiation were compared. The model for Zheng differentiation based on RQA variables integrated with inquiry information had the best performance, whereas that based only on inquiry had the worst; the model based on time-domain variables of the pulse integrated with inquiry fell between the two. This result shows that RQA variables of the pulse can be used to construct models of TCM Zheng and improve the performance of Zheng differentiation models. PMID:23737839
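
    A minimal, unembedded RQA sketch (recurrence rate and determinism from a thresholded distance matrix); the threshold, minimum line length and test signals are illustrative, and a real pulse analysis would use time-delay embedding:

```python
import numpy as np

def rqa_measures(x, eps=0.2, lmin=2):
    """Recurrence rate and determinism of a normalized 1-D series."""
    x = (x - x.mean()) / x.std()
    R = np.abs(x[:, None] - x[None, :]) < eps          # recurrence matrix
    rr = R.mean()
    # determinism: fraction of recurrent points on diagonal lines >= lmin
    n, det_pts = len(x), 0
    for k in range(1, n):
        d = np.diagonal(R, offset=k).astype(int)
        runs = np.split(d, np.where(d == 0)[0])        # segments of consecutive 1s
        det_pts += 2 * sum(r.sum() for r in runs if r.sum() >= lmin)
    det = det_pts / max(R.sum() - np.trace(R), 1)      # exclude the main diagonal
    return rr, det

rng = np.random.default_rng(0)
t = np.linspace(0, 20 * np.pi, 400)
print("periodic:", rqa_measures(np.sin(t)))            # high determinism
print("noise:   ", rqa_measures(rng.standard_normal(400)))  # low determinism
```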

  9. Surgical Risk Preoperative Assessment System (SURPAS): II. Parsimonious Risk Models for Postoperative Adverse Outcomes Addressing Need for Laboratory Variables and Surgeon Specialty-specific Models.

    PubMed

    Meguid, Robert A; Bronsert, Michael R; Juarez-Colunga, Elizabeth; Hammermeister, Karl E; Henderson, William G

    2016-07-01

    To develop parsimonious prediction models for postoperative mortality, overall morbidity, and 6 complication clusters applicable to a broad range of surgical operations in adult patients. Quantitative risk assessment tools are not routinely used for preoperative patient assessment, shared decision making, informed consent, and preoperative patient optimization, likely due in part to the burden of data collection and the complexity of incorporation into routine surgical practice. Multivariable forward selection stepwise logistic regression analyses were used to develop predictive models for 30-day mortality, overall morbidity, and 6 postoperative complication clusters, using 40 preoperative variables from 2,275,240 surgical cases in the American College of Surgeons National Surgical Quality Improvement Program data set, 2005 to 2012. For the mortality and overall morbidity outcomes, prediction models were compared with and without preoperative laboratory variables, and generic models (based on all of the data from 9 surgical specialties) were compared with specialty-specific models. In each model, the cumulative c-index was used to examine the contribution of each added predictor variable. C-indexes, Hosmer-Lemeshow analyses, and Brier scores were used to compare discrimination and calibration between models. For the mortality and overall morbidity outcomes, the prediction models without the preoperative laboratory variables performed as well as the models with the laboratory variables, and the generic models performed as well as the specialty-specific models. The c-indexes were 0.938 for mortality, 0.810 for overall morbidity, and for the 6 complication clusters ranged from 0.757 for infectious to 0.897 for pulmonary complications. Across the 8 prediction models, the first 7 to 11 variables entered accounted for at least 99% of the c-index of the full model (using up to 28 nonlaboratory predictor variables). Our results suggest that it will be possible to develop parsimonious models to predict 8 important postoperative outcomes for a broad surgical population, without the need for surgeon specialty-specific models or inclusion of laboratory variables.
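
    An illustrative sketch of the parsimony point on synthetic data: discrimination (c-index, i.e., ROC AUC) saturates after the few variables that actually carry signal. The effect sizes and sklearn's logistic regression are stand-ins for the study's stepwise NSQIP models:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = rng.standard_normal((n, 10))                       # 10 candidate predictors
logit = 2.0 * X[:, 0] + 1.0 * X[:, 1] + 0.2 * X[:, 2]  # only a few truly matter
y = rng.random(n) < 1 / (1 + np.exp(-logit))

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
for k in (1, 2, 3, 10):                                # growing variable subsets
    m = LogisticRegression(max_iter=1000).fit(Xtr[:, :k], ytr)
    auc = roc_auc_score(yte, m.predict_proba(Xte[:, :k])[:, 1])
    print(f"first {k:2d} variables: c-index = {auc:.3f}")
```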

  10. Models that predict standing crop of stream fish from habitat variables: 1950-85.

    Treesearch

    K.D. Fausch; C.L. Hawkes; M.G. Parsons

    1988-01-01

    We reviewed mathematical models that predict standing crop of stream fish (number or biomass per unit area or length of stream) from measurable habitat variables and classified them by the types of independent habitat variables found significant, by mathematical structure, and by model quality. Habitat variables were of three types and were measured on different scales...

  11. Stock price forecasting for companies listed on Tehran stock exchange using multivariate adaptive regression splines model and semi-parametric splines technique

    NASA Astrophysics Data System (ADS)

    Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad

    2015-11-01

    One of the most important topics of interest to investors is stock price changes. Investors whose goals are long term are sensitive to stock prices and their changes and react to them. In this regard, we used the multivariate adaptive regression splines (MARS) model and a semi-parametric splines technique for predicting stock prices in this study. The MARS model is a nonparametric, adaptive regression method that suits high-dimensional problems with several variables; the semi-parametric technique is based on smoothing splines, a nonparametric regression method. We used 40 variables (30 accounting variables and 10 economic variables) for predicting stock price with the MARS model and with the semi-parametric splines technique. After investigating the models, we selected 4 accounting variables (book value per share, predicted earnings per share, P/E ratio and risk) as influential variables for predicting stock price using the MARS model. After fitting the semi-parametric splines technique, only 4 accounting variables (dividends, net EPS, EPS forecast and P/E ratio) were selected as variables effective in forecasting stock prices.
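
    A toy illustration of the hinge-function building blocks that MARS combines; the knot at t = 4 is fixed by hand here, whereas a real MARS fit searches knots and terms adaptively via forward/backward passes:

```python
import numpy as np

# MARS builds its fit from hinge (piecewise-linear) basis functions
# h(x) = max(0, x - t) and max(0, t - x) around knots t.
def hinge(x, t, direction):
    return np.maximum(0.0, direction * (x - t))

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
y = np.where(x < 4, 2 * x, 8 + 0.5 * (x - 4)) + 0.3 * rng.standard_normal(200)

# fixed-knot illustration of the MARS idea (knot chosen by hand, not searched)
B = np.column_stack([np.ones_like(x), hinge(x, 4.0, 1), hinge(x, 4.0, -1)])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
print("fitted coefficients:", coef.round(2))
```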

  12. Examining Parallelism of Sets of Psychometric Measures Using Latent Variable Modeling

    ERIC Educational Resources Information Center

    Raykov, Tenko; Patelis, Thanos; Marcoulides, George A.

    2011-01-01

    A latent variable modeling approach that can be used to examine whether several psychometric tests are parallel is discussed. The method consists of sequentially testing the properties of parallel measures via a corresponding relaxation of parameter constraints in a saturated model or an appropriately constructed latent variable model. The…

  13. Development and evaluation of height diameter at breast models for native Chinese Metasequoia.

    PubMed

    Liu, Mu; Feng, Zhongke; Zhang, Zhixiang; Ma, Chenghui; Wang, Mingming; Lian, Bo-Ling; Sun, Renjie; Zhang, Li

    2017-01-01

    Accurate tree height and diameter at breast height (dbh) are important input variables for growth and yield models. A total of 5503 Chinese Metasequoia trees were used in this study. We studied 53 fitted models, of which 7 were linear and 46 were non-linear. These models were divided into two groups, single models and multivariate models, according to the number of independent variables. The results show that an allometric equation of tree height with dbh as the independent variable best reflects the variation in tree height; in addition, the prediction accuracy of the multivariate composite models is higher than that of the single-variable models. Although tree age is not the most important variable in the study of the relationship between tree height and dbh, considering tree age when choosing models and parameters can make the prediction of tree height more accurate. The amount of data is also an important factor that can improve the reliability of models. Other variables, such as tree height, main dbh and altitude, can also affect the models. In this study, the method of developing the recommended models for predicting the tree height of native Metasequoias aged 50-485 years is statistically reliable and can be used as a reference in predicting the growth and production of mature native Metasequoia.

  14. Development and evaluation of height diameter at breast models for native Chinese Metasequoia

    PubMed Central

    Feng, Zhongke; Zhang, Zhixiang; Ma, Chenghui; Wang, Mingming; Lian, Bo-ling; Sun, Renjie; Zhang, Li

    2017-01-01

    Accurate tree height and diameter at breast height (dbh) are important input variables for growth and yield models. A total of 5503 Chinese Metasequoia trees were used in this study. We studied 53 fitted models, of which 7 were linear and 46 were non-linear. These models were divided into two groups, single models and multivariate models, according to the number of independent variables. The results show that an allometric equation of tree height with dbh as the independent variable best reflects the variation in tree height; in addition, the prediction accuracy of the multivariate composite models is higher than that of the single-variable models. Although tree age is not the most important variable in the study of the relationship between tree height and dbh, considering tree age when choosing models and parameters can make the prediction of tree height more accurate. The amount of data is also an important factor that can improve the reliability of models. Other variables, such as tree height, main dbh and altitude, can also affect the models. In this study, the method of developing the recommended models for predicting the tree height of native Metasequoias aged 50–485 years is statistically reliable and can be used as a reference in predicting the growth and production of mature native Metasequoia. PMID:28817600
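
    A small sketch of fitting one common height-dbh allometric form, H = 1.3 + a * dbh^b (1.3 m being breast height), with scipy; the dbh-height pairs are invented, and this particular functional form is an assumption, one of many candidates such a study would compare:

```python
import numpy as np
from scipy.optimize import curve_fit

def h_model(dbh, a, b):
    """Assumed allometric form: height = 1.3 m + a * dbh^b."""
    return 1.3 + a * dbh**b

dbh = np.array([10, 20, 30, 50, 80, 120], dtype=float)   # cm (hypothetical)
height = np.array([8.0, 14.0, 18.5, 25.0, 31.0, 36.5])   # m (hypothetical)
(a, b), _ = curve_fit(h_model, dbh, height, p0=(1.0, 0.7))
print(f"H = 1.3 + {a:.2f} * dbh^{b:.2f}")
```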

  15. Identification of solid state fermentation degree with FT-NIR spectroscopy: Comparison of wavelength variable selection methods of CARS and SCARS.

    PubMed

    Jiang, Hui; Zhang, Hang; Chen, Quansheng; Mei, Congli; Liu, Guohai

    2015-01-01

    The use of wavelength variable selection before partial least squares discriminant analysis (PLS-DA) for qualitative identification of solid state fermentation degree by the FT-NIR spectroscopy technique was investigated in this study. Two wavelength variable selection methods, competitive adaptive reweighted sampling (CARS) and stability competitive adaptive reweighted sampling (SCARS), were employed to select the important wavelengths. PLS-DA was then applied to calibrate identification models using the wavelength variables selected by CARS and SCARS. Experimental results showed that CARS and SCARS selected 58 and 47 wavelength variables, respectively, from the 1557 original wavelength variables. Compared with full-spectrum PLS-DA, both wavelength variable selection methods enhanced the performance of the identification models. Moreover, compared with the CARS-PLS-DA model, the SCARS-PLS-DA model achieved better results, with an identification rate of 91.43% in the validation process. The overall results sufficiently demonstrate that a PLS-DA model constructed using wavelength variables chosen by a proper variable selection method can identify solid state fermentation degree more accurately. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Identification of solid state fermentation degree with FT-NIR spectroscopy: Comparison of wavelength variable selection methods of CARS and SCARS

    NASA Astrophysics Data System (ADS)

    Jiang, Hui; Zhang, Hang; Chen, Quansheng; Mei, Congli; Liu, Guohai

    2015-10-01

    The use of wavelength variable selection before partial least squares discriminant analysis (PLS-DA) for qualitative identification of solid state fermentation degree by the FT-NIR spectroscopy technique was investigated in this study. Two wavelength variable selection methods, competitive adaptive reweighted sampling (CARS) and stability competitive adaptive reweighted sampling (SCARS), were employed to select the important wavelengths. PLS-DA was then applied to calibrate identification models using the wavelength variables selected by CARS and SCARS. Experimental results showed that CARS and SCARS selected 58 and 47 wavelength variables, respectively, from the 1557 original wavelength variables. Compared with full-spectrum PLS-DA, both wavelength variable selection methods enhanced the performance of the identification models. Moreover, compared with the CARS-PLS-DA model, the SCARS-PLS-DA model achieved better results, with an identification rate of 91.43% in the validation process. The overall results sufficiently demonstrate that a PLS-DA model constructed using wavelength variables chosen by a proper variable selection method can identify solid state fermentation degree more accurately.

  17. Variable Selection for Regression Models of Percentile Flows

    NASA Astrophysics Data System (ADS)

    Fouad, G.

    2017-12-01

    Percentile flows describe the flow magnitude equaled or exceeded for a given percent of time, and are widely used in water resource management. However, these statistics are normally unavailable since most basins are ungauged. Percentile flows of ungauged basins are often predicted using regression models based on readily observable basin characteristics, such as mean elevation. The number of these independent variables is too large to evaluate all possible models. A subset of models is typically evaluated using automatic procedures, like stepwise regression. This ignores a large variety of methods from the field of feature (variable) selection and physical understanding of percentile flows. A study of 918 basins in the United States was conducted to compare an automatic regression procedure to the following variable selection methods: (1) principal component analysis, (2) correlation analysis, (3) random forests, (4) genetic programming, (5) Bayesian networks, and (6) physical understanding. The automatic regression procedure only performed better than principal component analysis. Poor performance of the regression procedure was due to a commonly used filter for multicollinearity, which rejected the strongest models because they had cross-correlated independent variables. Multicollinearity did not decrease model performance in validation because of a representative set of calibration basins. Variable selection methods based strictly on predictive power (numbers 2-5 from above) performed similarly, likely indicating a limit to the predictive power of the variables. Similar performance was also reached using variables selected based on physical understanding, a finding that substantiates recent calls to emphasize physical understanding in modeling for predictions in ungauged basins. The strongest variables highlighted the importance of geology and land cover, whereas widely used topographic variables were the weakest predictors. Variables suffered from a high degree of multicollinearity, possibly illustrating the co-evolution of climatic and physiographic conditions. Given the ineffectiveness of many variables used here, future work should develop new variables that target specific processes associated with percentile flows.

  18. Statistical validity of using ratio variables in human kinetics research.

    PubMed

    Liu, Yuanlong; Schutz, Robert W

    2003-09-01

    The purposes of this study were to investigate the validity of the simple ratio and three alternative deflation models and to examine how the variation of the numerator and denominator variables affects the reliability of a ratio variable. A simple ratio and three alternative deflation models were fitted to four empirical data sets, and common criteria were applied to determine the best model for deflation. Intraclass correlation was used to examine the component effect on the reliability of a ratio variable. The results indicate that the validity of a deflation model depends on the statistical characteristics of the particular component variables used, and an optimal deflation model for all ratio variables may not exist. Therefore, it is recommended that different models be fitted to each empirical data set to determine the best deflation model. It was found that the reliability of a simple ratio is affected by the coefficients of variation and the within- and between-trial correlations between the numerator and denominator variables. Researchers should therefore compute the reliability of the derived ratio scores and not assume that strong reliabilities in the numerator and denominator measures automatically lead to high reliability in the ratio measures.
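
    The dependence of ratio reliability on component variation and correlation can be made concrete with a small simulation, a sketch assuming synthetic test-retest data: both components are measured reliably, yet because the numerator closely tracks the denominator, the true ratio varies little between subjects and its reliability collapses.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
true_den = rng.normal(50, 5, n)
true_num = 2.0 * true_den + rng.normal(0, 2, n)   # numerator tracks denominator

def two_trials(truth, error_sd):
    """Two parallel trials = true score plus independent measurement error."""
    return truth + rng.normal(0, error_sd, n), truth + rng.normal(0, error_sd, n)

num1, num2 = two_trials(true_num, 5.0)
den1, den2 = two_trials(true_den, 1.0)

def reliability(a, b):
    return np.corrcoef(a, b)[0, 1]

print("numerator reliability:   %.2f" % reliability(num1, num2))
print("denominator reliability: %.2f" % reliability(den1, den2))
print("ratio reliability:       %.2f" % reliability(num1 / den1, num2 / den2))
```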

  19. Effect of climate variables on cocoa black pod incidence in Sabah using ARIMAX model

    NASA Astrophysics Data System (ADS)

    Ling Sheng Chang, Albert; Ramba, Haya; Mohd. Jaaffar, Ahmad Kamil; Kim Phin, Chong; Chong Mun, Ho

    2016-06-01

    Cocoa black pod disease is one of the major diseases affecting cocoa production in Malaysia and around the world. Studies have shown that climate variables influence black pod disease incidence, and it is important to quantify the variation in the disease due to the effect of climate variables. Time series analysis, especially the autoregressive moving average (ARIMA) model, has been widely used in economics and can be used to quantify the effect of climate variables on black pod incidence and to forecast the right time to control it. However, an ARIMA model does not capture some turning points in cocoa black pod incidence. To improve forecasting performance, explanatory variables such as climate variables can be added to the ARIMA model, yielding an ARIMAX model. This paper therefore studies the effect of climate variables on cocoa black pod disease incidence using an ARIMAX model. The findings showed that an ARIMAX model using MA(1) and relative humidity at a lag of 7 days (RHt-7) gave a better R-squared value than an ARIMA model using MA(1) alone, and could be used to forecast black pod incidence and help farmers time fungicide spraying and cultural practices to control the disease.
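
    A hedged sketch of the ARIMAX idea using the statsmodels package: an MA(1) model for incidence with 7-day-lagged relative humidity as an exogenous regressor. The series are simulated placeholders, not the Sabah data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(3)
rh = pd.Series(70 + 10 * rng.standard_normal(200))    # daily relative humidity (%)
df = pd.DataFrame({"rh_lag7": rh.shift(7)}).dropna()  # RH at lag 7 days (RHt-7)
df["incidence"] = 5 + 0.1 * df["rh_lag7"] + rng.standard_normal(len(df))

# ARIMAX = ARIMA(0,0,1) on incidence plus the exogenous climate regressor.
fit = SARIMAX(df["incidence"], exog=df[["rh_lag7"]], order=(0, 0, 1)).fit(disp=False)
print(fit.params)
```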

  20. Context-invariant quasi hidden variable (qHV) modelling of all joint von Neumann measurements for an arbitrary Hilbert space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loubenets, Elena R.

    We prove the existence, for each Hilbert space, of two new quasi hidden variable (qHV) models, statistically noncontextual and context-invariant, reproducing all the von Neumann joint probabilities via non-negative values of real-valued measures, and all the quantum product expectations via the qHV (classical-like) average of the product of the corresponding random variables. In a context-invariant model, a quantum observable X can be represented by a variety of random variables satisfying the functional condition required in quantum foundations, but each of these random variables equivalently models X under all joint von Neumann measurements, regardless of their contexts. The proved existence of this model negates the general opinion that, in terms of random variables, the Hilbert space description of all the joint von Neumann measurements for dim H ≥ 3 can be reproduced only contextually. The existence of a statistically noncontextual qHV model, in particular, implies that every N-partite quantum state admits a local quasi hidden variable model introduced in Loubenets [J. Math. Phys. 53, 022201 (2012)]. The new results of the present paper also point to the generality of the quasi-classical probability model proposed in Loubenets [J. Phys. A: Math. Theor. 45, 185306 (2012)].

  1. Analysis of model development strategies: predicting ventral hernia recurrence.

    PubMed

    Holihan, Julie L; Li, Linda T; Askenasy, Erik P; Greenberg, Jacob A; Keith, Jerrod N; Martindale, Robert G; Roth, J Scott; Liang, Mike K

    2016-11-01

    There have been many attempts to identify variables associated with ventral hernia recurrence; however, it is unclear which statistical modeling approach results in models with greatest internal and external validity. We aim to assess the predictive accuracy of models developed using five common variable selection strategies to determine variables associated with hernia recurrence. Two multicenter ventral hernia databases were used. Database 1 was randomly split into "development" and "internal validation" cohorts. Database 2 was designated "external validation". The dependent variable for model development was hernia recurrence. Five variable selection strategies were used: (1) "clinical"-variables considered clinically relevant, (2) "selective stepwise"-all variables with a P value <0.20 were assessed in a step-backward model, (3) "liberal stepwise"-all variables were included and step-backward regression was performed, (4) "restrictive internal resampling," and (5) "liberal internal resampling." Variables were included with P < 0.05 for the Restrictive model and P < 0.10 for the Liberal model. A time-to-event analysis using Cox regression was performed using these strategies. The predictive accuracy of the developed models was tested on the internal and external validation cohorts using Harrell's C-statistic where C > 0.70 was considered "reasonable". The recurrence rate was 32.9% (n = 173/526; median/range follow-up, 20/1-58 mo) for the development cohort, 36.0% (n = 95/264, median/range follow-up 20/1-61 mo) for the internal validation cohort, and 12.7% (n = 155/1224, median/range follow-up 9/1-50 mo) for the external validation cohort. Internal validation demonstrated reasonable predictive accuracy (C-statistics = 0.772, 0.760, 0.767, 0.757, 0.763), while on external validation, predictive accuracy dipped precipitously (C-statistic = 0.561, 0.557, 0.562, 0.553, 0.560). Predictive accuracy was equally adequate on internal validation among models; however, on external validation, all five models failed to demonstrate utility. Future studies should report multiple variable selection techniques and demonstrate predictive accuracy on external data sets for model validation. Copyright © 2016 Elsevier Inc. All rights reserved.
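
    A sketch of the validation logic described above, assuming the lifelines package and synthetic cohorts: a Cox model is fitted on a development cohort and Harrell's C is computed on internal and external data. The covariates and effect sizes are invented; the point is the workflow, not the paper's model.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(4)

def make_cohort(n, effect):
    """Synthetic cohort: two covariates, exponential time to recurrence."""
    x1 = rng.normal(size=n)                  # e.g. hernia width (standardised)
    x2 = rng.integers(0, 2, size=n)          # e.g. a binary wound-class flag
    time = rng.exponential(20 / np.exp(effect * x1 + 0.5 * x2))
    event = (time < 58).astype(int)          # recurrence seen within follow-up
    return pd.DataFrame({"x1": x1, "x2": x2,
                         "time": np.minimum(time, 58), "event": event})

dev = make_cohort(526, effect=0.8)           # development cohort
ext = make_cohort(1224, effect=0.3)          # external cohort, weaker effect

cph = CoxPHFitter().fit(dev, duration_col="time", event_col="event")

# Higher partial hazard implies earlier recurrence, hence the sign flip.
for name, cohort in [("internal", dev), ("external", ext)]:
    c = concordance_index(cohort["time"],
                          -cph.predict_partial_hazard(cohort), cohort["event"])
    print(f"{name} C-statistic: {c:.3f}")
```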

  2. Cognitive ability and risk of post-traumatic stress disorder after military deployment: an observational cohort study

    PubMed Central

    Karstoft, Karen-Inge; Vedtofte, Mia S.; Nielsen, Anni B.S.; Osler, Merete; Mortensen, Erik L.; Christensen, Gunhild T.; Andersen, Søren B.

    2017-01-01

    Background Studies of the association between pre-deployment cognitive ability and post-deployment post-traumatic stress disorder (PTSD) have shown mixed results. Aims To study the influence of pre-deployment cognitive ability on PTSD symptoms 6–8 months post-deployment in a large population while controlling for pre-deployment education and deployment-related variables. Method Study linking prospective pre-deployment conscription board data with post-deployment self-reported data in 9695 Danish Army personnel deployed to different war zones in 1997–2013. The association between pre-deployment cognitive ability and post-deployment PTSD was investigated using repeated-measure logistic regression models. Two models with cognitive ability score as the main exposure variable were created (model 1 and model 2). Model 1 was only adjusted for pre-deployment variables, while model 2 was adjusted for both pre-deployment and deployment-related variables. Results When including only variables recorded pre-deployment (cognitive ability score and educational level) and gender (model 1), all variables predicted post-deployment PTSD. When deployment-related variables were added (model 2), this was no longer the case for cognitive ability score. However, when educational level was removed from the model adjusted for deployment-related variables, the association between cognitive ability and post-deployment PTSD became significant. Conclusions Pre-deployment lower cognitive ability did not predict post-deployment PTSD independently of educational level after adjustment for deployment-related variables. Declaration of interest None. Copyright and usage © The Royal College of Psychiatrists 2017. This is an open access article distributed under the terms of the Creative Commons Non-Commercial, No Derivatives (CC BY-NC-ND) license. PMID:29163983
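
    The model-1 versus model-2 contrast above, where a predictor loses significance once deployment-related variables are adjusted for, can be sketched with statsmodels on synthetic data; purely for illustration, cognitive ability is assumed here to act on PTSD only through a deployment-related variable.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(12)
n = 9695
cognitive = rng.normal(size=n)                   # pre-deployment ability score
deploy = -0.4 * cognitive + rng.normal(size=n)   # deployment-related exposure
p = 1 / (1 + np.exp(-(-2.0 + 0.8 * deploy)))     # PTSD risk via deployment only
ptsd = rng.binomial(1, p)

# Model 1: pre-deployment variable only; model 2: adds the deployment variable.
m1 = sm.Logit(ptsd, sm.add_constant(cognitive)).fit(disp=False)
m2 = sm.Logit(ptsd, sm.add_constant(np.column_stack([cognitive, deploy]))).fit(disp=False)
print("cognitive p-value, model 1: %.3g" % m1.pvalues[1])   # significant
print("cognitive p-value, model 2: %.3g" % m2.pvalues[1])   # no longer significant
```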

  3. Predicting Power Outages Using Multi-Model Ensemble Forecasts

    NASA Astrophysics Data System (ADS)

    Cerrai, D.; Anagnostou, E. N.; Yang, J.; Astitha, M.

    2017-12-01

    Every year, power outages affect millions of people in the United States, harming the economy and disrupting everyday life. An Outage Prediction Model (OPM) has been developed at the University of Connecticut to help utilities restore outages quickly and limit their adverse consequences for the population. The OPM, operational since 2015, combines several non-parametric machine learning (ML) models that use historical weather storm simulations and high-resolution weather forecasts, satellite remote sensing data, and infrastructure and land cover data to predict the number and spatial distribution of power outages. A new methodology, developed to improve outage model performance by combining weather- and soil-related variables from three different weather models (WRF 3.7, WRF 3.8 and RAMS/ICLAMS), will be presented in this study. First, we will present a performance evaluation of each model variable, comparing historical weather analyses with station data or reanalysis over the entire storm data set. Each variable of the new outage model version is then extracted from the weather model that performs best for that variable, and sensitivity tests are performed to identify the most efficient variable combination for outage prediction. Although the final variable combination is drawn from different weather models, this ensemble prediction based on multi-weather forcing and multiple statistical models outperforms the currently operational OPM version based on a single weather forcing (WRF 3.7), because each model component is the closest to the actual atmospheric state.
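
    A minimal sketch of the per-variable selection step described above, reusing the model names from the text with random placeholder skill: for each candidate predictor, keep the weather model whose historical analysis shows the lowest RMSE against observations.

```python
import numpy as np

rng = np.random.default_rng(13)
models = ["WRF3.7", "WRF3.8", "RAMS/ICLAMS"]
variables = ["wind_gust", "precip", "soil_moisture"]

obs = rng.normal(size=(3, 100))                  # station/reanalysis "truth"
# Simulated analyses: truth plus model- and variable-dependent error.
sims = obs[None] + rng.normal(scale=rng.uniform(0.5, 2.0, size=(3, 3, 1)),
                              size=(3, 3, 100))

rmse = np.sqrt(((sims - obs[None]) ** 2).mean(axis=2))   # model x variable
best = rmse.argmin(axis=0)                               # best model per variable
for v, b in zip(variables, best):
    print(f"{v}: use {models[b]}")
```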

  4. a Latent Variable Path Analysis Model of Secondary Physics Enrollments in New York State.

    NASA Astrophysics Data System (ADS)

    Sobolewski, Stanley John

    The Percentage of Enrollment in Physics (PEP) at the secondary level nationally has been approximately 20% for the past few decades. For a more scientifically literate citizenry, as well as specialists to continue scientific research and development, it is desirable that more students enroll in physics. Predictor variables for physics enrollment and physics achievement identified previously include a community's socioeconomic status, the availability of physics, the sex of the student, the curriculum, and teacher and student data. This study isolated and identified predictor variables for the PEP of secondary schools in New York, using data gathered by the State Education Department for the 1990-1991 school year, including surveys completed by teachers and administrators on student characteristics and school facilities. A data analysis similar to that done by Bryant (1974) was conducted to determine whether the relationships between a set of predictor variables related to physics enrollment had changed in the past 20 years. Variables which were isolated included community, facilities, teacher experience, number and type of science courses, school size and school science facilities. When these variables were isolated, latent variable path diagrams were proposed and verified by the Linear Structural Relations computer modeling program (LISREL). These diagrams differed from those developed by Bryant in that more manifest variables were used, including achievement scores in the form of Regents exam results. Two criterion variables were used: the percentage of students enrolled in physics (PEP) and the percentage of enrolled students passing the Regents physics exam (PPP). The first model treated school and community level variables as exogenous, while the second model treated only the community level variables as exogenous. The goodness-of-fit indices were 0.77 for the first model and 0.83 for the second model. No dramatic differences were found between the relationships of predictor variables to physics enrollment in 1972 and 1991. The new models indicated that smaller school size, enrollment in previous science and math courses and other school variables were related to high enrollment rather than to achievement, while exogenous variables such as community size were related to achievement. It was shown that achievement and enrollment were related to different sets of predictor variables.

  5. Model selection bias and Freedman's paradox

    USGS Publications Warehouse

    Lukacs, P.M.; Burnham, K.P.; Anderson, D.R.

    2010-01-01

    In situations where limited knowledge of a system exists and the ratio of data points to variables is small, variable selection methods can often be misleading. Freedman (Am Stat 37:152-155, 1983) demonstrated how common it is to select completely unrelated variables as highly "significant" when the number of data points is similar in magnitude to the number of variables. A new type of model averaging estimator based on model selection with Akaike's AIC is used with linear regression to investigate the problems of likely inclusion of spurious effects and model selection bias, the bias introduced while using the data to select a single seemingly "best" model from a (often large) set of models employing many predictor variables. The new model averaging estimator helps reduce these problems and provides confidence interval coverage at the nominal level while traditional stepwise selection has poor inferential properties. © The Institute of Statistical Mathematics, Tokyo 2009.
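
    Freedman's phenomenon is easy to reproduce; the sketch below, assuming pure-noise data, fits an ordinary least squares model with 50 unrelated predictors to 100 observations and counts how many appear "significant".

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n, p = 100, 50
X = rng.normal(size=(n, p))     # predictors unrelated to y by construction
y = rng.normal(size=n)

fit = sm.OLS(y, sm.add_constant(X)).fit()
n_sig = int((fit.pvalues[1:] < 0.05).sum())
print(f"{n_sig} of {p} pure-noise predictors have p < 0.05")
# Refitting on only the "significant" subset typically inflates their
# apparent effects further, i.e. model selection bias.
```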

  6. The Contribution of Vegetation and Landscape Configuration for Predicting Environmental Change Impacts on Iberian Birds

    PubMed Central

    Triviño, Maria; Thuiller, Wilfried; Cabeza, Mar; Hickler, Thomas; Araújo, Miguel B.

    2011-01-01

    Although climate is known to be one of the key factors, among others, determining animal species distributions, projections of global change impacts on their distributions often rely on bioclimatic envelope models. Vegetation structure and landscape configuration are also key determinants of distributions, but they are rarely considered in such assessments. We explore the consequences of using simulated vegetation structure and composition, as well as its associated landscape configuration, in models projecting global change effects on Iberian bird species distributions. Both present-day and future distributions were modelled for 168 bird species using two ensemble forecasting methods: Random Forests (RF) and Boosted Regression Trees (BRT). For each species, several models were created, differing in the predictor variables used (climate, vegetation, and landscape configuration). Discrimination ability of each model in the present day was then tested with four commonly used evaluation methods (AUC, TSS, specificity and sensitivity). The different sets of predictor variables yielded similar spatial patterns for well-modelled species, but the future projections diverged for poorly modelled species. Models using all predictor variables were not significantly better than models fitted with climate variables alone for ca. 50% of the cases. Moreover, models fitted with climate data were always better than models fitted with landscape configuration variables, and vegetation variables were found to correlate with bird species distributions in 26-40% of the cases with BRT, and in 1-18% of the cases with RF. We conclude that improvements from including vegetation and landscape configuration variables, in comparison with climate-only variables, might not always be as great as expected for future projections of Iberian bird species. PMID:22216263

  7. Comparison of climate envelope models developed using expert-selected variables versus statistical selection

    USGS Publications Warehouse

    Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romañach, Stephanie; Watling, James I.; Mazzotti, Frank J.

    2017-01-01

    Climate envelope models are widely used to describe the potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods. Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method, and there was low overlap in the variable sets (<40%) between the two methods. Despite these differences in variable sets (expert versus statistical), models had high performance metrics (>0.9 for area under the curve (AUC) and >0.7 for true skill statistic (TSS)). Spatial overlap, which compares the spatial configuration between maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. The difference in spatial overlap was even greater under future climate projections, indicating additional divergence of model outputs from different variable selection techniques. Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using statistical methods of variable selection is a useful first step, especially when there is a need to model a large number of species or expert knowledge of the species is limited. Expert input can then be used to refine models that seem unrealistic or for species that experts believe are particularly sensitive to change. It also emphasizes the importance of using multiple models to reduce uncertainty and improve map outputs for conservation planning. Where outputs overlap or show the same direction of change there is greater certainty in the predictions. Areas of disagreement can be used for learning by asking why the models do not agree, and may highlight areas where additional on-the-ground data collection could improve the models.

  8. Modified Regression Correlation Coefficient for Poisson Regression Model

    NASA Astrophysics Data System (ADS)

    Kaengthong, Nattacha; Domthong, Uthumporn

    2017-09-01

    This study considers indicators of the predictive power of the generalized linear model (GLM), which are widely used but often subject to some restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables, E(Y|X), for the Poisson regression model, where the dependent variable is Poisson distributed. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables, and in the presence of multicollinearity among the independent variables. The results show that the proposed regression correlation coefficient is better than the traditional one in terms of bias and root mean square error (RMSE).
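
    For concreteness, a sketch of the traditional regression correlation coefficient (not the paper's modified version) on simulated Poisson data: fit a Poisson GLM and correlate Y with the fitted E(Y|X).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 300
x1, x2 = rng.normal(size=n), rng.normal(size=n)
mu = np.exp(0.3 + 0.5 * x1 - 0.4 * x2)   # true conditional mean
y = rng.poisson(mu)

X = sm.add_constant(np.column_stack([x1, x2]))
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()

# Regression correlation coefficient: corr(Y, fitted E(Y|X)).
r = np.corrcoef(y, fit.fittedvalues)[0, 1]
print("regression correlation coefficient: %.3f" % r)
```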

  9. Examples of EOS Variables as compared to the UMM-Var Data Model

    NASA Technical Reports Server (NTRS)

    Cantrell, Simon; Lynnes, Chris

    2016-01-01

    In an effort to provide EOSDIS clients a way to discover and use variable data from different providers, a Unified Metadata Model for Variables (UMM-Var) is being created. This presentation gives an overview of the model and the use cases we are handling.

  10. Stochastic Time Models of Syllable Structure

    PubMed Central

    Shaw, Jason A.; Gafos, Adamantios I.

    2015-01-01

    Drawing on phonology research within the generative linguistics tradition, stochastic methods, and notions from complex systems, we develop a modelling paradigm linking phonological structure, expressed in terms of syllables, to speech movement data acquired with 3D electromagnetic articulography and X-ray microbeam methods. The essential variable in the models is syllable structure. When mapped to discrete coordination topologies, syllabic organization imposes systematic patterns of variability on the temporal dynamics of speech articulation. We simulated these dynamics under different syllabic parses and evaluated simulations against experimental data from Arabic and English, two languages claimed to parse similar strings of segments into different syllabic structures. Model simulations replicated several key experimental results, including the fallibility of past phonetic heuristics for syllable structure, and exposed the range of conditions under which such heuristics remain valid. More importantly, the modelling approach consistently diagnosed syllable structure proving resilient to multiple sources of variability in experimental data including measurement variability, speaker variability, and contextual variability. Prospects for extensions of our modelling paradigm to acoustic data are also discussed. PMID:25996153

  11. Assessment of mid-latitude atmospheric variability in CMIP5 models using a process oriented-metric

    NASA Astrophysics Data System (ADS)

    Di Biagio, Valeria; Calmanti, Sandro; Dell'Aquila, Alessandro; Ruti, Paolo

    2013-04-01

    We compare, for the period 1962-2000, an estimate of the northern hemisphere mid-latitude winter atmospheric variability according to several global climate models included in the fifth phase of the Coupled Model Intercomparison Project (CMIP5), to the models belonging to the previous CMIP3, and to the NCEP-NCAR reanalysis. We use the space-time Hayashi spectra of the 500 hPa geopotential height fields to characterize the variability of atmospheric circulation regimes, and we introduce an ad hoc integral measure of the variability observed in the Northern Hemisphere on different spectral sub-domains. The overall performance of each model is evaluated by considering the total wave variability as a global scalar measure of the statistical properties of different types of atmospheric disturbances. The variability associated with eastward-propagating baroclinic waves and with planetary waves is instead used to describe the performance of each model in terms of specific physical processes. We find that the two model ensembles (CMIP3 and CMIP5) do not show substantial differences in their description of northern hemisphere winter mid-latitude atmospheric variability, although some CMIP5 models display performance superior to their previous versions implemented in CMIP3. Preliminary results for the 21st-century RCP4.5 scenario will also be discussed for the CMIP5 models.

  12. Coupled Effects of non-Newtonian Rheology and Aperture Variability on Flow in a Single Fracture

    NASA Astrophysics Data System (ADS)

    Di Federico, V.; Felisa, G.; Lauriola, I.; Longo, S.

    2017-12-01

    Modeling of non-Newtonian flow in fractured media is essential in hydraulic fracturing and drilling operations, EOR, environmental remediation, and for understanding magma intrusions. An important step in the modeling effort is a detailed understanding of flow in a single fracture, as the fracture aperture is spatially variable. A large bibliography exists on Newtonian and non-Newtonian flow in variable aperture fractures. Ultimately, stochastic or deterministic modeling leads to the flowrate under a given pressure gradient as a function of the parameters describing the aperture variability and the fluid rheology. Typically, analytical or numerical studies are performed adopting a power-law (Ostwald-de Waele) model. Yet the power-law model, routinely used e.g. for hydro-fracturing modeling, does not characterize real fluids at low and high shear rates. A more appropriate rheological model is provided by, e.g., the four-parameter Carreau constitutive equation, which is in turn approximated by the more tractable truncated power-law model. Moreover, fluids of interest may exhibit a yield stress, which requires the Bingham or Herschel-Bulkley model. This study employs different rheological models in the context of flow in variable aperture fractures, with the aim of understanding the coupled effect of rheology and aperture spatial variability with a simplified model. The aperture variation, modeled within a stochastic or deterministic framework, is taken to be one-dimensional and either (i) perpendicular or (ii) parallel to the flow direction; for stochastic modeling, the influence of different distribution functions is examined. Results for the different rheological models are compared with those obtained for the pure power law. Adoption of the latter model leads to overestimation of the flowrate, more so for large aperture variability. The presence of a yield stress also induces significant changes in the resulting flowrate for an assigned external pressure gradient.
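
    The constitutive laws named above are easy to state in code; the sketch below implements the Ostwald-de Waele and Carreau viscosity functions with arbitrary illustrative parameters (the coupling with aperture variability is not reproduced here).

```python
import numpy as np

def power_law(gamma_dot, m=1.0, n=0.6):
    """Ostwald-de Waele: mu = m * gamma_dot**(n-1)."""
    return m * gamma_dot ** (n - 1.0)

def carreau(gamma_dot, mu0=10.0, mu_inf=0.01, lam=1.0, n=0.6):
    """Four-parameter Carreau model, bounded at low and high shear rates."""
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * gamma_dot) ** 2) ** ((n - 1.0) / 2.0)

gamma = np.logspace(-3, 3, 7)   # shear rates spanning six decades
print(np.c_[gamma, power_law(gamma), carreau(gamma)])
# For n < 1 the power law diverges as gamma_dot -> 0 and keeps thinning at
# high shear, while Carreau stays bounded between mu0 and mu_inf.
```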

  13. Contrasting determinants for the introduction and establishment success of exotic birds in Taiwan using decision trees models.

    PubMed

    Liang, Shih-Hsiung; Walther, Bruno Andreas; Shieh, Bao-Sen

    2017-01-01

    Biological invasions have become a major threat to biodiversity, and identifying determinants underlying success at different stages of the invasion process is essential for both prevention management and testing ecological theories. To investigate variables associated with different stages of the invasion process in a local region such as Taiwan, potential problems using traditional parametric analyses include too many variables of different data types (nominal, ordinal, and interval) and a relatively small data set with too many missing values. We therefore used five decision tree models instead and compared their performance. Our dataset contains 283 exotic bird species which were transported to Taiwan; of these 283 species, 95 species escaped to the field successfully (introduction success); of these 95 introduced species, 36 species reproduced in the field of Taiwan successfully (establishment success). For each species, we collected 22 variables associated with human selectivity and species traits which may determine success during the introduction stage and establishment stage. For each decision tree model, we performed three variable treatments: (I) including all 22 variables, (II) excluding nominal variables, and (III) excluding nominal variables and replacing ordinal values with binary ones. Five performance measures were used to compare models, namely, area under the receiver operating characteristic curve (AUROC), specificity, precision, recall, and accuracy. The gradient boosting models performed best overall among the five decision tree models for both introduction and establishment success and across variable treatments. The most important variables for predicting introduction success were the bird family, the number of invaded countries, and variables associated with environmental adaptation, whereas the most important variables for predicting establishment success were the number of invaded countries and variables associated with reproduction. Our final optimal models achieved relatively high performance values, and we discuss differences in performance with regard to sample size and variable treatments. Our results showed that, for both the establishment model and introduction model, the number of invaded countries was the most important or second most important determinant, respectively. Therefore, we suggest that future success for introduction and establishment of exotic birds may be gauged by simply looking at previous success in invading other countries. Finally, we found that species traits related to reproduction were more important in establishment models than in introduction models; importantly, these determinants were not averaged but either minimum or maximum values of species traits. Therefore, we suggest that in addition to averaged values, reproductive potential represented by minimum and maximum values of species traits should be considered in invasion studies.
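
    A hedged sketch of the comparison framework, assuming synthetic species-trait data: fit one of the listed decision-tree models (gradient boosting) and report AUROC alongside the other measures. The sample size echoes the establishment-stage dataset; everything else is invented.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             roc_auc_score)
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(11)
n, p = 95, 22                     # introduced species x candidate variables
X = rng.normal(size=(n, p))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0.8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
gbm = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

prob, pred = gbm.predict_proba(X_te)[:, 1], gbm.predict(X_te)
print("AUROC: %.2f" % roc_auc_score(y_te, prob))
print("precision %.2f  recall %.2f  accuracy %.2f" % (
    precision_score(y_te, pred), recall_score(y_te, pred),
    accuracy_score(y_te, pred)))
# Per-variable importances give the ranking discussed in the paper.
print("top 3 variables:", np.argsort(gbm.feature_importances_)[::-1][:3])
```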

  14. Contrasting determinants for the introduction and establishment success of exotic birds in Taiwan using decision trees models

    PubMed Central

    Liang, Shih-Hsiung; Walther, Bruno Andreas

    2017-01-01

    Background Biological invasions have become a major threat to biodiversity, and identifying determinants underlying success at different stages of the invasion process is essential for both prevention management and testing ecological theories. To investigate variables associated with different stages of the invasion process in a local region such as Taiwan, potential problems using traditional parametric analyses include too many variables of different data types (nominal, ordinal, and interval) and a relatively small data set with too many missing values. Methods We therefore used five decision tree models instead and compared their performance. Our dataset contains 283 exotic bird species which were transported to Taiwan; of these 283 species, 95 species escaped to the field successfully (introduction success); of these 95 introduced species, 36 species reproduced in the field of Taiwan successfully (establishment success). For each species, we collected 22 variables associated with human selectivity and species traits which may determine success during the introduction stage and establishment stage. For each decision tree model, we performed three variable treatments: (I) including all 22 variables, (II) excluding nominal variables, and (III) excluding nominal variables and replacing ordinal values with binary ones. Five performance measures were used to compare models, namely, area under the receiver operating characteristic curve (AUROC), specificity, precision, recall, and accuracy. Results The gradient boosting models performed best overall among the five decision tree models for both introduction and establishment success and across variable treatments. The most important variables for predicting introduction success were the bird family, the number of invaded countries, and variables associated with environmental adaptation, whereas the most important variables for predicting establishment success were the number of invaded countries and variables associated with reproduction. Discussion Our final optimal models achieved relatively high performance values, and we discuss differences in performance with regard to sample size and variable treatments. Our results showed that, for both the establishment model and introduction model, the number of invaded countries was the most important or second most important determinant, respectively. Therefore, we suggest that future success for introduction and establishment of exotic birds may be gauged by simply looking at previous success in invading other countries. Finally, we found that species traits related to reproduction were more important in establishment models than in introduction models; importantly, these determinants were not averaged but either minimum or maximum values of species traits. Therefore, we suggest that in addition to averaged values, reproductive potential represented by minimum and maximum values of species traits should be considered in invasion studies. PMID:28316893

  15. Identifying environmental variables explaining genotype-by-environment interaction for body weight of rainbow trout (Oncorhynchus mykiss): reaction norm and factor analytic models.

    PubMed

    Sae-Lim, Panya; Komen, Hans; Kause, Antti; Mulder, Han A

    2014-02-26

    Identifying the relevant environmental variables that cause GxE interaction is often difficult when they cannot be experimentally manipulated. Two statistical approaches can be applied to address this question. When data on candidate environmental variables are available, GxE interaction can be quantified as a function of specific environmental variables using a reaction norm model. Alternatively, a factor analytic model can be used to identify the latent common factor that explains GxE interaction. This factor can be correlated with known environmental variables to identify those that are relevant. Previously, we reported a significant GxE interaction for body weight at harvest in rainbow trout reared on three continents. Here we explore their possible causes. Reaction norm and factor analytic models were used to identify which environmental variables (age at harvest, water temperature, oxygen, and photoperiod) may have caused the observed GxE interaction. Data on body weight at harvest was recorded on 8976 offspring reared in various locations: (1) a breeding environment in the USA (nucleus), (2) a recirculating aquaculture system in the Freshwater Institute in West Virginia, USA, (3) a high-altitude farm in Peru, and (4) a low-water temperature farm in Germany. Akaike and Bayesian information criteria were used to compare models. The combination of days to harvest multiplied with daily temperature (Day*Degree) and photoperiod were identified by the reaction norm model as the environmental variables responsible for the GxE interaction. The latent common factor that was identified by the factor analytic model showed the highest correlation with Day*Degree. Day*Degree and photoperiod were the environmental variables that differed most between Peru and other environments. Akaike and Bayesian information criteria indicated that the factor analytical model was more parsimonious than the reaction norm model. Day*Degree and photoperiod were identified as environmental variables responsible for the strong GxE interaction for body weight at harvest in rainbow trout across four environments. Both the reaction norm and the factor analytic models can help identify the environmental variables responsible for GxE interaction. A factor analytic model is preferred over a reaction norm model when limited information on differences in environmental variables between farms is available.
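
    As an illustration of the reaction norm idea (not the authors' exact genetic model, which also fits factor analytic structures), the sketch below uses a statsmodels linear mixed model with family-specific random slopes on a Day*Degree covariate; non-zero slope variance across families is the GxE signal. All data are simulated.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(14)
n_fam = 50
envs = np.array([1800.0, 2100.0, 2600.0, 3200.0])   # Day*Degree per environment
fam = np.repeat(np.arange(n_fam), envs.size)
dd = np.tile(envs, n_fam)
slope = rng.normal(0.05, 0.02, n_fam)                # family-specific reaction norms
weight = 100 + slope[fam] * dd + rng.normal(0, 30, fam.size)

df = pd.DataFrame({"weight": weight, "dd": dd, "fam": fam})

# Random intercept plus random slope on Day*Degree per family = reaction norm
# model; the estimated slope variance quantifies GxE along this variable.
md = sm.MixedLM.from_formula("weight ~ dd", data=df, re_formula="~dd", groups="fam")
print(md.fit().summary())
```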

  16. Identifying environmental variables explaining genotype-by-environment interaction for body weight of rainbow trout (Oncorhynchus mykiss): reaction norm and factor analytic models

    PubMed Central

    2014-01-01

    Background Identifying the relevant environmental variables that cause GxE interaction is often difficult when they cannot be experimentally manipulated. Two statistical approaches can be applied to address this question. When data on candidate environmental variables are available, GxE interaction can be quantified as a function of specific environmental variables using a reaction norm model. Alternatively, a factor analytic model can be used to identify the latent common factor that explains GxE interaction. This factor can be correlated with known environmental variables to identify those that are relevant. Previously, we reported a significant GxE interaction for body weight at harvest in rainbow trout reared on three continents. Here we explore their possible causes. Methods Reaction norm and factor analytic models were used to identify which environmental variables (age at harvest, water temperature, oxygen, and photoperiod) may have caused the observed GxE interaction. Data on body weight at harvest was recorded on 8976 offspring reared in various locations: (1) a breeding environment in the USA (nucleus), (2) a recirculating aquaculture system in the Freshwater Institute in West Virginia, USA, (3) a high-altitude farm in Peru, and (4) a low-water temperature farm in Germany. Akaike and Bayesian information criteria were used to compare models. Results The combination of days to harvest multiplied with daily temperature (Day*Degree) and photoperiod were identified by the reaction norm model as the environmental variables responsible for the GxE interaction. The latent common factor that was identified by the factor analytic model showed the highest correlation with Day*Degree. Day*Degree and photoperiod were the environmental variables that differed most between Peru and other environments. Akaike and Bayesian information criteria indicated that the factor analytical model was more parsimonious than the reaction norm model. Conclusions Day*Degree and photoperiod were identified as environmental variables responsible for the strong GxE interaction for body weight at harvest in rainbow trout across four environments. Both the reaction norm and the factor analytic models can help identify the environmental variables responsible for GxE interaction. A factor analytic model is preferred over a reaction norm model when limited information on differences in environmental variables between farms is available. PMID:24571451

  17. The association between histological, macroscopic and magnetic resonance imaging assessed synovitis in end-stage knee osteoarthritis: a cross-sectional study.

    PubMed

    Riis, R G C; Gudbergsen, H; Simonsen, O; Henriksen, M; Al-Mashkur, N; Eld, M; Petersen, K K; Kubassova, O; Bay Jensen, A C; Damm, J; Bliddal, H; Arendt-Nielsen, L; Boesen, M

    2017-02-01

    To investigate the association between magnetic resonance imaging (MRI), macroscopic and histological assessments of synovitis in end-stage knee osteoarthritis (KOA). Synovitis of end-stage osteoarthritic knees was assessed using non-contrast-enhanced MRI, contrast-enhanced MRI (CE-MRI) and dynamic contrast-enhanced MRI (DCE-MRI) prior to total knee replacement (TKR) and correlated with microscopic and macroscopic assessments of synovitis obtained intraoperatively. Multiple bivariate correlations were used with a pre-specified threshold of 0.70 for significance. Multiple regression analyses were also performed with different subsets of MRI variables as explanatory variables and the histology score as the outcome variable, with the intention of finding the MRI variables that best explain the variance in histological synovitis (i.e., the highest R²). A stepped approach was taken, starting with basic characteristics and non-CE MRI variables (model 1), then adding CE-MRI variables (model 2), with the final model also including DCE-MRI variables (model 3). 39 patients (56.4% women, mean age 68 years, Kellgren-Lawrence (KL) grade 4) had complete MRI and histological data. Only the DCE-MRI variable MExNvoxel (a surrogate of the volume and degree of synovitis) and the macroscopic score showed correlations above the pre-specified threshold with histological inflammation. The maximum R² obtained in model 1 was 0.39. In model 2, where the CE-MRI variables were added, the highest R² was 0.52. In model 3, a four-variable model consisting of gender, one CE-MRI and two DCE-MRI variables yielded an R² of 0.71. DCE-MRI is correlated with histological synovitis in end-stage KOA, and the combination of CE- and DCE-MRI may be a useful, non-invasive tool for characterising synovitis in KOA. Copyright © 2016 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.

  18. Analysis of the Relationship Between Climate and NDVI Variability at Global Scales

    NASA Technical Reports Server (NTRS)

    Zeng, Fan-Wei; Collatz, G. James; Pinzon, Jorge; Ivanoff, Alvaro

    2011-01-01

    Interannual variability in modeled (CASA) carbon flux is in part caused by interannual variability in the Normalized Difference Vegetation Index (NDVI)-derived Fraction of Photosynthetically Active Radiation (FPAR). This study confirms a mechanism producing variability in modeled NPP: NDVI (FPAR) interannual variability is strongly driven by climate, and this climate-driven variability in NDVI (FPAR) can lead to much larger fluctuations in NPP than the NPP computed from FPAR climatology.

  19. Fine-scale habitat modeling of a top marine predator: do prey data improve predictive capacity?

    PubMed

    Torres, Leigh G; Read, Andrew J; Halpin, Patrick

    2008-10-01

    Predators and prey assort themselves relative to each other, the availability of resources and refuges, and the temporal and spatial scale of their interaction. Predictive models of predator distributions often rely on these relationships by incorporating data on environmental variability and prey availability to determine predator habitat selection patterns. This approach to predictive modeling holds true in marine systems where observations of predators are logistically difficult, emphasizing the need for accurate models. In this paper, we ask whether including prey distribution data in fine-scale predictive models of bottlenose dolphin (Tursiops truncatus) habitat selection in Florida Bay, Florida, U.S.A., improves predictive capacity. Environmental characteristics are often used as predictor variables in habitat models of top marine predators with the assumption that they act as proxies of prey distribution. We examine the validity of this assumption by comparing the response of dolphin distribution and fish catch rates to the same environmental variables. Next, the predictive capacities of four models, with and without prey distribution data, are tested to determine whether dolphin habitat selection can be predicted without recourse to describing the distribution of their prey. The final analysis determines the accuracy of predictive maps of dolphin distribution produced by modeling areas of high fish catch based on significant environmental characteristics. We use spatial analysis and independent data sets to train and test the models. Our results indicate that, due to high habitat heterogeneity and the spatial variability of prey patches, fine-scale models of dolphin habitat selection in coastal habitats will be more successful if environmental variables are used as predictor variables of predator distributions rather than relying on prey data as explanatory variables. However, predictive modeling of prey distribution as the response variable based on environmental variability did produce high predictive performance of dolphin habitat selection, particularly foraging habitat.

  20. Incorporating abundance information and guiding variable selection for climate-based ensemble forecasting of species' distributional shifts.

    PubMed

    Tanner, Evan P; Papeş, Monica; Elmore, R Dwayne; Fuhlendorf, Samuel D; Davis, Craig A

    2017-01-01

    Ecological niche models (ENMs) have increasingly been used to estimate the potential effects of climate change on species' distributions worldwide. Recently, predictions of species abundance have also been obtained with such models, though knowledge about the climatic variables affecting species abundance is often lacking. To address this, we used a well-studied guild (temperate North American quail) and the Maxent modeling algorithm to compare model performance of three variable selection approaches: correlation/variable contribution (CVC), biological (i.e., variables known to affect species abundance), and random. We then applied the best approach to forecast potential distributions, under future climatic conditions, and analyze future potential distributions in light of available abundance data and presence-only occurrence data. To estimate species' distributional shifts we generated ensemble forecasts using four global circulation models, four representative concentration pathways, and two time periods (2050 and 2070). Furthermore, we present distributional shifts where 75%, 90%, and 100% of our ensemble models agreed. The CVC variable selection approach outperformed our biological approach for four of the six species. Model projections indicated species-specific effects of climate change on future distributions of temperate North American quail. The Gambel's quail (Callipepla gambelii) was the only species predicted to gain area in climatic suitability across all three scenarios of ensemble model agreement. Conversely, the scaled quail (Callipepla squamata) was the only species predicted to lose area in climatic suitability across all three scenarios of ensemble model agreement. Our models projected future loss of areas for the northern bobwhite (Colinus virginianus) and scaled quail in portions of their distributions which are currently areas of high abundance. Climatic variables that influence local abundance may not always scale up to influence species' distributions. Special attention should be given to selecting variables for ENMs, and tests of model performance should be used to validate the choice of variables.
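
    The ensemble-agreement maps described above reduce to a simple computation; the sketch below, on a random placeholder stack of binary suitability projections, flags cells where at least 75%, 90%, or 100% of ensemble members agree.

```python
import numpy as np

rng = np.random.default_rng(10)
members, ny, nx = 32, 50, 80               # e.g. 4 GCMs x 4 RCPs x 2 periods
gain = rng.random((members, ny, nx)) < 0.3  # does each cell gain suitability?

agreement = gain.mean(axis=0)               # fraction of members agreeing
for thresh in (0.75, 0.90, 1.00):
    frac = (agreement >= thresh).mean()
    print(f"cells with >= {int(round(thresh * 100))}% agreement: {frac:.1%}")
```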

  1. Summer U.S. Surface Air Temperature Variability: Controlling Factors and AMIP Simulation Biases

    NASA Astrophysics Data System (ADS)

    Merrifield, A.; Xie, S. P.

    2016-02-01

    This study documents and investigates biases in simulating summer surface air temperature (SAT) variability over the continental U.S. in the Coupled Model Intercomparison Project (CMIP5) Atmospheric Model Intercomparison Project (AMIP). Empirical orthogonal function (EOF) and multivariate regression analyses are used to assess the relative importance of circulation and the land surface feedback in setting summer SAT over a 30-year period (1979-2008). In observations, regions of high SAT variability are closely associated with midtropospheric highs and subsidence, consistent with adiabatic theory (Meehl and Tebaldi 2004, Lau and Nath 2012). Preliminary analysis shows that the majority of the AMIP models feature high SAT variability over the central U.S., displaced south and/or west of the observed centers of action (COAs). SAT COAs in models tend to be concomitant with regions of high sensible heat flux variability, suggesting that an excessive land surface feedback modulates U.S. summer SAT in these models. Additionally, tropical sea surface temperatures (SSTs) play a role in forcing the leading EOF mode for summer SAT, in concert with internal atmospheric variability. There is evidence that models respond to different SST patterns than observed. Addressing issues with the bulk land surface feedback and the SST-forced component of atmospheric variability may be key to improving model skill in simulating summer SAT variability over the U.S.
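
    For readers unfamiliar with the method, EOF analysis is principal component analysis of a space-time anomaly field; a minimal sketch on a synthetic grid, assuming a 30-year record:

```python
import numpy as np

rng = np.random.default_rng(7)
n_years, n_grid = 30, 500
sat_anom = rng.normal(size=(n_years, n_grid))   # summer SAT anomalies

# EOFs via SVD of the centered anomaly matrix (time x space).
u, s, vt = np.linalg.svd(sat_anom - sat_anom.mean(axis=0), full_matrices=False)
explained = s**2 / np.sum(s**2)

eof1 = vt[0]             # leading spatial pattern (EOF1)
pc1 = u[:, 0] * s[0]     # its principal-component time series
print("variance explained by EOF1: %.1f%%" % (100 * explained[0]))
```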

  2. How Well Has Global Ocean Heat Content Variability Been Measured?

    NASA Astrophysics Data System (ADS)

    Nelson, A.; Weiss, J.; Fox-Kemper, B.; Fabienne, G.

    2016-12-01

    We introduce a new strategy that uses synthetic observations of an ensemble of model simulations to test the fidelity of an observational strategy, quantifying how well it captures the statistics of variability. We apply this test to the 0-700m global ocean heat content anomaly (OHCA) as observed with in-situ measurements by the Coriolis Dataset for Reanalysis (CORA), using the Community Climate System Model (CCSM) version 3.5. One-year running mean OHCAs for the years 2005 onward are found to faithfully capture the variability. During these years, synthetic observations of the model are strongly correlated at 0.94±0.06 with the actual state of the model. Overall, sub-annual variability and data before 2005 are significantly affected by the variability of the observing system. In contrast, the sometimes-used weighted integral of observations is not a good indicator of OHCA as variability in the observing system contaminates dynamical variability.

  3. An Efficient Variable Screening Method for Effective Surrogate Models for Reliability-Based Design Optimization

    DTIC Science & Technology

    2014-04-01

    surrogate model generation is difficult for high-dimensional problems, due to the curse of dimensionality. Variable screening methods have been...a variable screening model was developed for the quasi-molecular treatment of ion-atom collision [16]. In engineering, a confidence interval of...for high-level radioactive waste [18]. Moreover, the design sensitivity method can be extended to the variable screening method because vital

  4. Preliminary Multivariable Cost Model for Space Telescopes

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip

    2010-01-01

    Parametric cost models are routinely used to plan missions, compare concepts and justify technology investments. Previously, the authors published two single-variable cost models based on 19 flight missions. The current paper presents the development of a multi-variable space telescope cost model. The validity of the previously published models is tested, cost estimating relationships which are and are not significant cost drivers are identified, and interrelationships between variables are explored.

  5. Effects of short-term variability of meteorological variables on soil temperature in permafrost regions

    NASA Astrophysics Data System (ADS)

    Beer, Christian; Porada, Philipp; Ekici, Altug; Brakebusch, Matthias

    2018-03-01

    Effects of the short-term temporal variability of meteorological variables on soil temperature in northern high-latitude regions have been investigated. For this, a process-oriented land surface model has been driven using an artificially manipulated climate dataset. Short-term climate variability mainly impacts snow depth and the thermal diffusivity of lichens and bryophytes. Together, these impacts on insulating surface layers substantially alter the heat exchange between atmosphere and soil. As a result, soil temperature is 0.1 to 0.8 °C higher when climate variability is reduced. Earth system models project warming of the Arctic region, but also increasing variability of meteorological variables and more frequent extreme meteorological events. Our results therefore show that projected future increases in permafrost temperature and active-layer thickness in response to climate change will be lower (i) when future changes in the short-term variability of meteorological variables are taken into account and (ii) when dynamic snow and lichen and bryophyte functions are represented in land surface models.

  6. The Diffusion Model Is Not a Deterministic Growth Model: Comment on Jones and Dzhafarov (2014)

    PubMed Central

    Smith, Philip L.; Ratcliff, Roger; McKoon, Gail

    2015-01-01

    Jones and Dzhafarov (2014) claim that several current models of speeded decision making in cognitive tasks, including the diffusion model, can be viewed as special cases of other general models or model classes. The general models can be made to match any set of response time (RT) distribution and accuracy data exactly by a suitable choice of parameters and so are unfalsifiable. The implication of their claim is that models like the diffusion model are empirically testable only by artificially restricting them to exclude unfalsifiable instances of the general model. We show that Jones and Dzhafarov’s argument depends on enlarging the class of “diffusion” models to include models in which there is little or no diffusion. The unfalsifiable models are deterministic or near-deterministic growth models, from which the effects of within-trial variability have been removed or in which they are constrained to be negligible. These models attribute most or all of the variability in RT and accuracy to across-trial variability in the rate of evidence growth, which is permitted to be distributed arbitrarily and to vary freely across experimental conditions. In contrast, in the standard diffusion model, within-trial variability in evidence is the primary determinant of variability in RT. Across-trial variability, which determines the relative speed of correct responses and errors, is theoretically and empirically constrained. Jones and Dzhafarov’s attempt to include the diffusion model in a class of models that also includes deterministic growth models misrepresents and trivializes it and conveys a misleading picture of cognitive decision-making research. PMID:25347314

  7. Modelling Inter-relationships among water, governance, human development variables in developing countries with Bayesian networks.

    NASA Astrophysics Data System (ADS)

    Dondeynaz, C.; Lopez-Puga, J.; Carmona-Moreno, C.

    2012-04-01

    Improving Water and Sanitation Services (WSS) is a complex, interdisciplinary issue that requires collaboration and coordination across sectors (environment, health, economic activities, governance, and international cooperation). This interdependency has been recognised with the adoption of the "Integrated Water Resources Management" principles, which push for the integration of the various dimensions involved in WSS delivery to ensure efficient and sustainable management. Understanding these interrelations is crucial for decision makers in the water sector, particularly in developing countries where WSS still represent an important lever for livelihood improvement. In this framework, the Joint Research Centre of the European Commission has developed a coherent database (WatSan4Dev database) containing 29 environmental, socio-economic, governance and financial aid flow indicators for developing countries (Dondeynaz et al., 2011, under publication). The aim of this work is to model the WatSan4Dev dataset using probabilistic models to identify the key variables influencing, or influenced by, water supply and sanitation access levels. Bayesian network models are suitable for mapping the conditional dependencies between variables and also allow variables to be ordered by their level of influence on the dependent variable. Separate models were built for water supply and for sanitation because they behave differently. The models are validated against statistical criteria, but also against scientific knowledge and the literature. A two-step approach was adopted to build the model structure: a Bayesian network is first built for each thematic cluster of variables (e.g., governance, agricultural pressure, or human development), keeping a detailed level for later interpretation; a global model is then built from the significant indicators of each previously modelled cluster. The structure of the relationships between variables is set a priori according to the literature and/or field experience (expert knowledge). Statistical validation is assessed by the classification error rate and the significance of the variables. Sensitivity analysis has also been performed to characterise the relative influence of each variable in the model. Once validated, the models allow the impact of each variable on water supply or sanitation behaviour to be estimated, providing a useful means to test scenarios and predict variable behaviour. The choices made, the methods, and a description of the various models, for each cluster as well as the global models for water supply and sanitation, will be presented. Key results and interpretation of the relationships depicted by the models will be detailed during the conference.
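
    A hedged sketch of the approach using the pgmpy package (an assumption; the paper does not name its software), with invented three-level indicators and a hand-set structure standing in for the WatSan4Dev variables:

```python
import pandas as pd
# pgmpy's BayesianNetwork API (recent 0.x releases) is assumed here.
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

# Discretised indicator levels (low/mid/high) for a toy dataset.
df = pd.DataFrame({
    "governance":   ["low", "high", "mid", "high", "low", "mid"] * 20,
    "human_dev":    ["low", "high", "mid", "high", "low", "low"] * 20,
    "water_access": ["low", "high", "mid", "high", "low", "mid"] * 20,
})

# Structure set a priori from expert knowledge, as described above.
bn = BayesianNetwork([("governance", "human_dev"),
                      ("human_dev", "water_access"),
                      ("governance", "water_access")])
bn.fit(df, estimator=MaximumLikelihoodEstimator)

# Query: distribution of water access given strong governance.
posterior = VariableElimination(bn).query(["water_access"],
                                          evidence={"governance": "high"})
print(posterior)
```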

  8. Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    NASA Astrophysics Data System (ADS)

    Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami

    2017-06-01

    A regression model represents the relationship between independent and dependent variables. When the dependent variable is categorical, a logistic regression model is used to model the odds of its categories; when those categories are ordered, the model is an ordinal logistic regression. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation is needed to infer population values from a sample. The purpose of this research is to estimate the parameters of the GWOLR model using R software. Parameter estimation uses data on the number of dengue fever patients in Semarang City; the observation units are 144 villages in Semarang City. The research yields a local GWOLR model for each village, giving the probability of each category of the number of dengue fever patients.
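
    The paper fits the model in R; purely to illustrate the estimation idea — a proportional-odds likelihood whose observations are weighted by a spatial kernel centred on each site — here is a hedged Python sketch (the Gaussian kernel and all names are assumptions, not the authors' implementation):

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit

    def gwolr_fit(X, y, coords, site, bandwidth):
        """Kernel-weighted proportional-odds logit at one observation site.
        X: (n, p) covariates; y: (n,) ordinal responses coded 0..K-1;
        coords: (n, 2) site locations; bandwidth: Gaussian kernel width."""
        d = np.linalg.norm(coords - coords[site], axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)        # spatial weights
        K, p = y.max() + 1, X.shape[1]

        def negloglik(theta):
            beta = theta[:p]
            # strictly increasing cutpoints via cumulated exponentials
            cuts = np.cumsum(np.r_[theta[p], np.exp(theta[p + 1:])])
            eta = X @ beta
            cum = expit(cuts[None, :] - eta[:, None])  # P(y <= k)
            cum = np.hstack([np.zeros((len(y), 1)), cum, np.ones((len(y), 1))])
            prob = cum[np.arange(len(y)), y + 1] - cum[np.arange(len(y)), y]
            return -(w * np.log(np.clip(prob, 1e-12, None))).sum()

        return minimize(negloglik, np.zeros(p + K - 1), method="BFGS")

    # one local model per village, as in the paper's 144-village setting:
    # fits = [gwolr_fit(X, y, coords, i, bandwidth=5.0) for i in range(len(y))]
    ```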

  9. Factor analysis and multiple regression between topography and precipitation on Jeju Island, Korea

    NASA Astrophysics Data System (ADS)

    Um, Myoung-Jin; Yun, Hyeseon; Jeong, Chang-Sam; Heo, Jun-Haeng

    2011-11-01

    In this study, new factors that influence precipitation were extracted from geographic variables using factor analysis, allowing for an accurate estimation of orographic precipitation. Correlation analysis was also used to examine the relationship between nine topographic variables from digital elevation models (DEMs) and the precipitation on Jeju Island. In addition, a spatial analysis was performed in order to verify the validity of the regression model. From the results of the correlation analysis, it was found that all of the topographic variables had a positive correlation with the precipitation. The relations between the variables also changed in accordance with a change in the precipitation duration. However, upon examining the correlation matrix, no significant relationship between the latitude and the aspect was found. According to the factor analysis, eight topographic variables (latitude being the exception) were found to have a direct influence on the precipitation, and three factors were then extracted from these eight variables. By directly comparing the multiple regression model with the factors (model 1) to the multiple regression model with the topographic variables (model 3), it was found that model 1 did not violate the limits of statistical significance and multicollinearity. As such, model 1 was considered appropriate for estimating the precipitation when taking the topography into account, and the multiple regression model using factor analysis was found to be the best method for estimating the orographic precipitation on Jeju Island.
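
    The two-stage procedure — extract factors from the topographic variables, then regress precipitation on the factor scores (model 1) — can be sketched as follows; this uses synthetic stand-in data and is not the authors' code:

    ```python
    import numpy as np
    from sklearn.decomposition import FactorAnalysis
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 300
    X_topo = rng.normal(size=(n, 8))        # 8 standardized topographic variables
    precip = X_topo[:, :3] @ np.array([3.0, 2.0, 1.0]) + rng.normal(size=n)

    fa = FactorAnalysis(n_components=3, random_state=0)
    factors = fa.fit_transform(X_topo)      # the three extracted factors
    model1 = LinearRegression().fit(factors, precip)
    print("R^2 of model 1:", model1.score(factors, precip))
    # regressing on factors rather than raw variables sidesteps the
    # multicollinearity that the study reports as the weakness of model 3
    ```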

  10. Recent changes in county-level corn yield variability in the United States from observations and crop models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leng, Guoyong

    The United States is responsible for 35% of global corn supply and 60% of global corn exports. Enhanced supply stability through a reduction in the year-to-year variability of US corn yield would greatly benefit global food security. Important in this regard is to understand how corn yield variability has evolved geographically over history and how it relates to climatic and non-climatic factors. Results showed that year-to-year variation of US corn yield decreased significantly during 1980-2010, mainly in the Midwest Corn Belt, Nebraska and western arid regions. Despite the decreasing variability at the country scale, corn yield variability exhibited an increasing trend in South Dakota, Texas and the Southeast growing regions, indicating the importance of considering spatial scales when estimating yield variability. The observed pattern is only partly reproduced by process-based crop models, which simulate larger areas experiencing increasing variability and underestimate the magnitude of the decreasing variability; 3 out of 11 models even produced a sign of change differing from observations. Hence, a statistical model, which produces closer agreement with observations, is used to explore the contribution of climatic and non-climatic factors to the changes in yield variability. It is found that climate variability dominates the trends in corn yield variability in the Midwest Corn Belt, while the ability of climate variability to control yield variability is low in southeastern and western arid regions. Irrigation has largely reduced corn yield variability in regions (e.g. Nebraska) where separate estimates of irrigated and rain-fed corn yield exist, demonstrating the importance of non-climatic factors in governing the changes in corn yield variability. The results highlight the distinct spatial patterns of corn yield variability change, as well as its influencing factors, at the county scale. I also caution against the use of process-based crop models, which have substantially underestimated the change trend of corn yield variability, in projecting its future changes.

  11. On the Lack of Stratospheric Dynamical Variability in Low-top Versions of the CMIP5 Models

    NASA Technical Reports Server (NTRS)

    Charlton-Perez, Andrew J.; Baldwin, Mark P.; Birner, Thomas; Black, Robert X.; Butler, Amy H.; Calvo, Natalia; Davis, Nicholas A.; Gerber, Edwin P.; Gillett, Nathan; Hardiman, Steven; et al.

    2013-01-01

    We describe the main differences in simulations of stratospheric climate and variability by models within the fifth Coupled Model Intercomparison Project (CMIP5) that have a model top above the stratopause and relatively fine stratospheric vertical resolution (high-top), and those that have a model top below the stratopause (low-top). Although the simulation of mean stratospheric climate by the two model ensembles is similar, the low-top model ensemble has very weak stratospheric variability on daily and interannual time scales. The frequency of major sudden stratospheric warming events is strongly underestimated by the low-top models with less than half the frequency of events observed in the reanalysis data and high-top models. The lack of stratospheric variability in the low-top models affects their stratosphere-troposphere coupling, resulting in short-lived anomalies in the Northern Annular Mode, which do not produce long-lasting tropospheric impacts, as seen in observations. The lack of stratospheric variability, however, does not appear to have any impact on the ability of the low-top models to reproduce past stratospheric temperature trends. We find little improvement in the simulation of decadal variability for the high-top models compared to the low-top, which is likely related to the fact that neither ensemble produces a realistic dynamical response to volcanic eruptions.

  12. Multiresponse semiparametric regression for modelling the effect of regional socio-economic variables on the use of information technology

    NASA Astrophysics Data System (ADS)

    Wibowo, Wahyu; Wene, Chatrien; Budiantara, I. Nyoman; Permatasari, Erma Oktania

    2017-03-01

    Multiresponse semiparametric regression is a simultaneous-equation regression model that fuses parametric and nonparametric components. The model comprises several regression equations, each with two components: a parametric one and a nonparametric one. Here, the parametric component is a linear function and the nonparametric component is a truncated polynomial spline, so the model can handle both linear and nonlinear relationships between the responses and the set of predictor variables. The aim of this paper is to demonstrate the application of the regression model to the effect of regional socio-economic variables on the use of information technology. Specifically, the response variables are the percentage of households with internet access and the percentage of households with a personal computer, and the predictor variables are the literacy rate, the electrification rate and the economic growth rate. Based on identification of the relationship between the responses and the predictors, economic growth is treated as the nonparametric predictor and the others as parametric predictors. The results show that multiresponse semiparametric regression applies well here, as indicated by a high coefficient of determination of 90 percent.
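
    The design matrix combines ordinary linear columns with truncated-polynomial hinge terms (x − κ)+^d at each knot κ. A minimal sketch with synthetic data and hypothetical knots follows (one response shown; the multiresponse model repeats this structure per equation):

    ```python
    import numpy as np

    def truncated_spline(x, knots, degree=1):
        """Columns x, ..., x^d plus (x - k)_+^d for each knot k."""
        cols = [x ** j for j in range(1, degree + 1)]
        cols += [np.clip(x - k, 0.0, None) ** degree for k in knots]
        return np.column_stack(cols)

    rng = np.random.default_rng(0)
    n = 200
    literacy = rng.uniform(60, 100, n)
    electrification = rng.uniform(40, 100, n)
    growth = rng.uniform(0, 10, n)
    internet = (0.4 * literacy + 0.2 * electrification
                + 8 * np.sin(growth / 3) + rng.normal(0, 2, n))

    # linear (parametric) terms for literacy and electrification,
    # truncated linear spline (nonparametric) term for economic growth
    D = np.column_stack([np.ones(n), literacy, electrification,
                         truncated_spline(growth, knots=[3.0, 6.0])])
    coef, *_ = np.linalg.lstsq(D, internet, rcond=None)
    ```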

  13. Quantifying the Model-Related Variability of Biomass Stock and Change Estimates in the Norwegian National Forest Inventory

    Treesearch

    Johannes Breidenbach; Clara Antón-Fernández; Hans Petersson; Ronald E. McRoberts; Rasmus Astrup

    2014-01-01

    National Forest Inventories (NFIs) provide estimates of forest parameters for national and regional scales. Many key variables of interest, such as biomass and timber volume, cannot be measured directly in the field. Instead, models are used to predict those variables from measurements of other field variables. Therefore, the uncertainty or variability of NFI estimates...

  14. Tropical cloud feedbacks and natural variability of climate

    NASA Technical Reports Server (NTRS)

    Miller, R. L.; Del Genio, A. D.

    1994-01-01

    Simulations of natural variability by two general circulation models (GCMs) are examined. One GCM is a sector model, allowing relatively rapid integration without simplification of the model physics, which would potentially exclude mechanisms of variability. Two mechanisms are found in which tropical surface temperature and sea surface temperature (SST) vary on interannual and longer timescales. Both are related to changes in cloud cover that modulate SST through the surface radiative flux. Over the equatorial ocean, SST and surface temperature vary on an interannual timescale, which is determined by the magnitude of the associated cloud cover anomalies. Over the subtropical ocean, variations in low cloud cover drive SST variations. In the sector model, the variability has no preferred timescale, but instead is characterized by a 'red' spectrum with increasing power at longer periods. In the terrestrial GCM, SST variability associated with low cloud anomalies has a decadal timescale and is the dominant form of global temperature variability. Both GCMs are coupled to a mixed layer ocean model, where dynamical heat transports are prescribed, thus filtering out El Nino-Southern Oscillation (ENSO) and thermohaline circulation variability. The occurrence of variability in the absence of dynamical ocean feedbacks suggests that climatic variability on long timescales can arise from atmospheric processes alone.

  15. Climate, soil or both? Which variables are better predictors of the distributions of Australian shrub species?

    PubMed Central

    Esperón-Rodríguez, Manuel; Baumgartner, John B.; Beaumont, Linda J.

    2017-01-01

    Background: Shrubs play a key role in biogeochemical cycles, prevent soil and water erosion, provide forage for livestock, and are a source of food, wood and non-wood products. However, despite their ecological and societal importance, the influence of different environmental variables on shrub distributions remains unclear. We evaluated the influence of climate and soil characteristics, and whether including soil variables improved the performance of a species distribution model (SDM), Maxent. Methods: This study assessed variation in predictions of environmental suitability for 29 Australian shrub species (representing dominant members of six shrubland classes) due to the use of alternative sets of predictor variables. Models were calibrated with (1) climate variables only, (2) climate and soil variables, and (3) soil variables only. Results: The predictive power of SDMs differed substantially across species, but generally models calibrated with both climate and soil data performed better than those calibrated only with climate variables. Models calibrated solely with soil variables were the least accurate. We found regional differences in potential shrub species richness across Australia due to the use of different sets of variables. Conclusions: Our study provides evidence that predicted patterns of species richness may be sensitive to the choice of predictor set when multiple, plausible alternatives exist, and demonstrates the importance of considering soil properties when modeling availability of habitat for plants. PMID:28652933
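
    The comparison of predictor sets can be illustrated generically. The sketch below is a hedged stand-in that uses penalized logistic regression on synthetic presence data rather than Maxent itself, with cross-validated AUC as the performance measure:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 600
    climate = rng.normal(size=(n, 4))     # e.g. temperature/rainfall layers
    soil = rng.normal(size=(n, 3))        # e.g. pH, clay content, depth
    presence = (climate[:, 0] + 0.6 * soil[:, 0] + rng.normal(size=n)) > 0

    for label, X in [("climate only", climate),
                     ("climate + soil", np.hstack([climate, soil])),
                     ("soil only", soil)]:
        auc = cross_val_score(LogisticRegression(max_iter=1000), X, presence,
                              scoring="roc_auc", cv=5).mean()
        print(f"{label:15s} AUC = {auc:.3f}")
    ```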

  16. Risk models for post-endoscopic retrograde cholangiopancreatography pancreatitis (PEP): smoking and chronic liver disease are predictors of protection against PEP.

    PubMed

    DiMagno, Matthew J; Spaete, Joshua P; Ballard, Darren D; Wamsteker, Erik-Jan; Saini, Sameer D

    2013-08-01

    We investigated which variables were independently associated with protection against, or development of, post-endoscopic retrograde cholangiopancreatography (ERCP) pancreatitis (PEP) and with the severity of PEP, and subsequently derived predictive risk models for PEP. In a case-control design, 6505 patients had 8264 ERCPs; 211 patients had PEP and 22 patients had severe PEP. We randomly selected 348 non-PEP controls. We examined 7 established and 9 investigational variables. In univariate analysis, 7 variables predicted PEP: younger age, female sex, suspected sphincter of Oddi dysfunction (SOD), pancreatic sphincterotomy, moderate-difficult cannulation (MDC), pancreatic stent placement, and lower Charlson score. Protective variables were current smoking, former drinking, diabetes, and chronic liver disease (CLD, biliary/transplant complications). Multivariate analysis identified 7 independent variables for PEP, 3 protective (current smoking, CLD-biliary, CLD-transplant/hepatectomy complications) and 4 predictive (younger age, suspected SOD, pancreatic sphincterotomy, MDC). Pre- and post-ERCP risk models of 7 variables have a C-statistic of 0.74. Removing age (the seventh variable) did not significantly affect the predictive value (C-statistic of 0.73) and reduced model complexity. Severity of PEP was not associated with any variable in multivariate analysis. By combining the newly identified protective variables with the 3 predictive variables, we derived 2 risk models with a higher predictive value for PEP compared with prior studies.

  17. Multivariate dynamic Tobit models with lagged observed dependent variables: An effectiveness analysis of highway safety laws.

    PubMed

    Dong, Chunjiao; Xie, Kun; Zeng, Jin; Li, Xia

    2018-04-01

    Highway safety laws aim to influence driver behaviors so as to reduce the frequency and severity of crashes and their outcomes. A given highway safety law may have different effects on crashes of different severities, and understanding such effects can help policy makers upgrade current laws and hence improve traffic safety. To investigate the effects of highway safety laws on crashes across severities, multivariate models are needed to account for the interdependency in crash counts across severities. Based on the characteristics of the dependent variables, multivariate dynamic Tobit (MVDT) models are proposed to analyze crash counts aggregated at the state level. Lagged observed dependent variables are incorporated into the MVDT models to account for potential temporal correlation in the crash data. State highway safety law factors are used as the explanatory variables, and socio-demographic and traffic factors are used as control variables. Three models are developed and compared: a MVDT model with lagged observed dependent variables, a MVDT model with unobserved random variables, and a multivariate static Tobit (MVST) model. The results show that, among the investigated models, the MVDT model with lagged observed dependent variables has the best goodness-of-fit. The findings indicate that, compared to the MVST, the MVDT models have better explanatory power and prediction accuracy: the MVDT model with lagged observed variables better handles the stochasticity and dependency in the temporal evolution of the crash counts, and its estimated values are closer to the observed values. The results show that more lives could be saved if law enforcement agencies make a sustained effort to educate the public about the importance of motorcyclists wearing helmets. Motor vehicle crash-related deaths, injuries, and property damages could be reduced if states enact laws with stricter text messaging rules, higher speeding fines, an older licensing age, and stronger graduated licensing provisions. Injury and property-damage-only (PDO) crashes would be significantly reduced with stricter laws prohibiting the use of hand-held communication devices and higher fines for drunk driving. Copyright © 2018 Elsevier Ltd. All rights reserved.
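
    The two key ingredients — censoring at zero and a lagged observed dependent variable among the regressors — show up clearly in a univariate sketch of the Tobit log-likelihood. The paper's model is multivariate; this simplified single-equation version is illustrative only:

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def dynamic_tobit_negll(theta, y, X):
        """Tobit censored at zero with one lagged observed dependent variable:
        y*_t = X_t @ beta + rho * y_{t-1} + e_t,   y_t = max(y*_t, 0)."""
        k = X.shape[1]
        beta, rho, sigma = theta[:k], theta[k], np.exp(theta[k + 1])
        ylag = np.r_[0.0, y[:-1]]               # observed lag, not a latent one
        mu = X @ beta + rho * ylag
        ll = np.where(y <= 0,
                      norm.logcdf(-mu / sigma),  # censored observations
                      norm.logpdf(y, mu, sigma)) # uncensored observations
        return -ll.sum()

    # usage sketch, given a count series y and covariate matrix X:
    # theta0 = np.zeros(X.shape[1] + 2)
    # fit = minimize(dynamic_tobit_negll, theta0, args=(y, X), method="BFGS")
    ```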

  18. Simulating maize yield and biomass with spatial variability of soil field capacity

    USDA-ARS?s Scientific Manuscript database

    Spatial variability in field soil water and other properties is a challenge for system modelers who use only representative values for model inputs, rather than their distributions. In this study, we compared simulation results from a calibrated model with spatial variability of soil field capacity ...

  19. A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses

    ERIC Educational Resources Information Center

    Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini

    2012-01-01

    The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to be accountable for the interdependencies of the multivariate…

  20. The Robustness of LISREL Estimates in Structural Equation Models with Categorical Variables.

    ERIC Educational Resources Information Center

    Ethington, Corinna A.

    1987-01-01

    This study examined the effect of type of correlation matrix on the robustness of LISREL maximum likelihood and unweighted least squares structural parameter estimates for models with categorical variables. The analysis of mixed matrices produced estimates that closely approximated the model parameters except where dichotomous variables were…

  1. A Generalized Stability Analysis of the AMOC in Earth System Models: Implication for Decadal Variability and Abrupt Climate Change

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fedorov, Alexey V.

    2015-01-14

    The central goal of this research project was to understand the mechanisms of decadal and multi-decadal variability of the Atlantic Meridional Overturning Circulation (AMOC) as related to climate variability and abrupt climate change within a hierarchy of climate models ranging from realistic ocean models to comprehensive Earth system models. Generalized Stability Analysis, a method that quantifies the transient and asymptotic growth of perturbations in the system, is one of the main approaches used throughout this project. The topics we have explored range from physical mechanisms that control AMOC variability to the factors that determine AMOC predictability in the Earth system models, to the stability and variability of the AMOC in past climates.

  2. Variable selection and model choice in geoadditive regression models.

    PubMed

    Kneib, Thomas; Hothorn, Torsten; Tutz, Gerhard

    2009-06-01

    Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection, by a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection.
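
    For illustration, componentwise L2-boosting with linear base learners captures the selection mechanism the abstract describes: at each iteration every candidate term is fit to the current residuals and only the best one is updated, so never-updated terms are effectively deselected. The paper's base learners are penalized splines and tensor products; this simplified linear sketch assumes centered covariates:

    ```python
    import numpy as np

    def componentwise_l2_boost(X, y, n_iter=500, nu=0.1):
        """L2-boosting with componentwise linear base learners."""
        n, p = X.shape
        coef = np.zeros(p)
        intercept = y.mean()
        resid = y - intercept
        for _ in range(n_iter):
            b = X.T @ resid / (X ** 2).sum(axis=0)    # 1-covariate LS fits
            sse = ((resid[:, None] - X * b) ** 2).sum(axis=0)
            j = int(np.argmin(sse))                   # best-fitting component
            coef[j] += nu * b[j]                      # small step on that term
            resid -= nu * b[j] * X[:, j]
        return intercept, coef   # variables with coef == 0 were never selected
    ```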

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tonnemacher, G.C.; Killen, D.C.; Weinstein, R.E.

    This paper reports on the results of a US Department of Energy (DOE) conceptual design evaluation for an early commercial repowering application of advanced circulating pressurized fluidized bed combustion combined cycle technology (APFBC). Here, APFBC would repower an existing generating station, Carolina Power and Light Company's (CP&L) L.V. Sutton steam station. Repowering concepts are presented for APFBC repowering of Unit 2 (226 MWe) and of both Units 1 and 2 in combination (340 MWe total). The evaluation found that it is more economical to repower the existing coal-fired generating unit with APFBC than to build new pulverized coal capacity of equivalent output. The paper provides a review of the DOE study and summarizes the design and costs associated with the APFBC concept. A DOE-sponsored Clean Coal Technology (CCT) demonstration program will pioneer the first commercial APFBC demonstration in the year 2001. That 170 MWe APFBC CCT demonstration will use all new equipment and become Unit 4 of the City of Lakeland's C.D. McIntosh, Jr. steam plant. This all-coal technology is under development by DOE and equipment manufacturers. This paper's concept evaluation is for a larger implementation than the Lakeland McIntosh CCT project. The repowering of L.V. Sutton Unit 2 is projected to boost the energy efficiency of the existing unit from its present 32.0% HHV level to an APFBC-repowered energy efficiency of 42.2% HHV (44.1% LHV). A large frame Westinghouse W501F combustion turbine is modified for APFBC use, producing a 225+ MWe class APFBC unit. At this size, APFBC has wide application for repowering many existing units in America. The paper focuses on the design issues, shows how the APFBC power block integrates with the existing site, and gives a brief summary of the resulting system performance and costs.

  4. Dry syngas purification process for coal gas produced in oxy-fuel type integrated gasification combined cycle power generation with carbon dioxide capturing feature.

    PubMed

    Kobayashi, Makoto; Akiho, Hiroyuki

    2017-12-01

    Electricity production from coal while minimizing the efficiency penalty of carbon dioxide abatement would enable sustainable and compatible energy utilization. One promising option is oxy-fuel type Integrated Gasification Combined Cycle (oxy-fuel IGCC) power generation, which is estimated to achieve a thermal efficiency of 44% on a lower heating value (LHV) basis and to provide compressed carbon dioxide (CO2) at a concentration of 93 vol%. Proper operation of the plant is established by introducing dry syngas cleaning processes that control halide and sulfur compounds to within the contaminant levels tolerated by the gas turbine. To realize the dry process, a bench-scale test facility was planned to demonstrate the first-ever halide and sulfur removal with a fixed bed reactor using actual syngas from an O2-CO2 blown gasifier for oxy-fuel IGCC power generation. Design parameters for the test facility were required for the candidate halide-removal and sulfur-removal sorbents. Breakthrough tests were performed on two kinds of halide sorbents under accelerated conditions and on a honeycomb desulfurization sorbent under varied space velocity conditions. Both the halide and sulfur sorbents exhibited sufficient removal within a satisfactorily short sorbent bed depth, as well as superior bed conversion of the impurity removal reaction. These performance evaluations of the candidate halide and sulfur removal sorbents provided rational and affordable design parameters for the bench-scale test facility to demonstrate the dry syngas cleaning process for the oxy-fuel IGCC system as the scaled-up step of process development. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Spent coffee ground as a new bulking agent for accelerated biodrying of dewatered sludge.

    PubMed

    Hao, Zongdi; Yang, Benqin; Jahng, Deokjin

    2018-07-01

    The feasibility of using spent coffee ground (SCG) as a new bulking agent for biodrying of dewatered sludge (DS) was investigated in comparison with two other frequently used bulking agents, air-dried sludge (AS) and sawdust (SD). Results showed that the moisture contents (MC) of 16-day DS biodrying with AS (Trial A), SCG (Trial B) and SD (Trial C) decreased from 70.14 wt%, 68.25 wt% and 71.63 wt% to 59.12 wt%, 41.35 wt% and 57.69 wt%, respectively. In the case of Trial B, the MC rapidly decreased to 46.16 wt%, with the highest water removal (70.87%), within 8 days because of the longest high-temperature period (5.8 days). Further studies indicated that the abundant biodegradable volatile solids (BVS) and high dissolved organic matter (DOM) contents in SCG were the main driving forces for water removal. According to pyrosequencing data, Firmicutes, most of which were recognized as thermophiles, was rapidly enriched on Day 8 and became the dominant phylum in Trial B. Four thermophilic genera, Bacillus, Ureibacillus, Geobacillus and Thermobifida, which can produce thermostable hydrolytic extracellular enzymes, were the most abundant in Trial B, indicating that these thermophilic bacteria, enriched during the long high-temperature period, enhanced the biodegradation of BVS in SCG. The 8-day biodried product of Trial B was demonstrated to be an excellent solid fuel with a lower heating value (LHV) of 9284 kJ/kg, which was 2.1 and 1.8 times those of the biodried products with AS and SD, respectively. Thus SCG was found to be an excellent bulking agent, accelerating DS biodrying and producing a solid fuel with a high calorific value. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Fine tuning of process parameters for improving briquette production from palm kernel shell gasification waste.

    PubMed

    Bazargan, Alireza; Rough, Sarah L; McKay, Gordon

    2018-04-01

    Palm kernel shell biochars (PKSB) ejected as residues from a gasifier have been used for solid fuel briquette production. With this approach, palm kernel shells can be used for energy production twice: first, by producing rich syngas during gasification; second, by compacting the leftover residues from gasification into high calorific value briquettes. Herein, the process parameters for the manufacture of PKSB biomass briquettes via compaction are optimized. Two possible optimum process scenarios are considered. In the first, the compaction speed is increased from 0.5 to 10 mm/s, the compaction pressure is decreased from 80 MPa to 40 MPa, the retention time is reduced from 10 s to zero, and the starch binder content of the briquette is halved from 0.1 to 0.05 kg/kg. With these adjustments, the briquette production rate increases by more than 20-fold; hence capital and operational costs can be reduced and the service life of compaction equipment can be increased. The resulting product satisfactorily passes tensile (compressive) crushing strength and impact resistance tests. The second scenario involves reducing the starch weight content to 0.03 kg/kg, while reducing the compaction pressure to a value no lower than 60 MPa. Overall, in both cases, the PKSB biomass briquettes show excellent potential as a solid fuel, with calorific values on par with good-quality coal. CHNS: carbon, hydrogen, nitrogen, sulfur; FFB: fresh fruit bunch(es); HHV: higher heating value [J/kg]; LHV: lower heating value [J/kg]; PKS: palm kernel shell(s); PKSB: palm kernel shell biochar(s); POME: palm oil mill effluent; RDF: refuse-derived fuel; TGA: thermogravimetric analysis.

  7. Modeling sea-surface temperature and its variability

    NASA Technical Reports Server (NTRS)

    Sarachik, E. S.

    1985-01-01

    A brief review is presented of the temporal scales of sea surface temperature variability. Progress in modeling sea surface temperature, and remaining obstacles to understanding its variability, are discussed.

  8. Effect of Adding McKenzie Syndrome, Centralization, Directional Preference, and Psychosocial Classification Variables to a Risk-Adjusted Model Predicting Functional Status Outcomes for Patients With Lumbar Impairments.

    PubMed

    Werneke, Mark W; Edmond, Susan; Deutscher, Daniel; Ward, Jason; Grigsby, David; Young, Michelle; McGill, Troy; McClenahan, Brian; Weinberg, Jon; Davidow, Amy L

    2016-09-01

    Study Design: Retrospective cohort. Background: Patient-classification subgroupings may be important prognostic factors explaining outcomes. Objectives: To determine the effects of adding classification variables (McKenzie syndrome and pain patterns, including centralization and directional preference; the Symptom Checklist Back Pain Prediction Model [SCL BPPM]; and the Fear-Avoidance Beliefs Questionnaire subscales of work and physical activity) to a baseline risk-adjusted model predicting functional status (FS) outcomes. Methods: Consecutive patients completed a battery of questionnaires that gathered information on 11 risk-adjustment variables. Physical therapists trained in Mechanical Diagnosis and Therapy methods classified each patient by McKenzie syndrome and pain pattern. Functional status was assessed at discharge by patient-reported outcomes. Only patients with complete data were included. Risk of selection bias was assessed. Prediction of discharge FS was assessed using linear stepwise regression models, allowing 13 variables to enter the model. Significant variables were retained in subsequent models. Model power (R²) and beta coefficients for model variables were estimated. Results: Two thousand sixty-six patients with lumbar impairments were evaluated. Of those, 994 (48%), 10 (<1%), and 601 (29%) were excluded due to incomplete psychosocial data, incomplete McKenzie classification data, and missing FS at discharge, respectively. The final sample for analyses was 723 (35%). Overall R² for the baseline prediction FS model was 0.40. Adding classification variables to the baseline model did not result in significant increases in R². McKenzie syndrome or pain pattern explained 2.8% and 3.0% of the variance, respectively. When pain pattern and SCL BPPM were added simultaneously, overall model R² increased to 0.44. Although none of these increases in R² were significant, some classification variables were stronger predictors than some of the variables included in the baseline model. Conclusion: The small added prognostic capabilities identified when combining McKenzie or pain-pattern classifications with the SCL BPPM classification did not significantly improve prediction of FS outcomes in this study. Additional research is warranted to investigate the importance of classification variables compared with those used in the baseline model to maximize predictive power. Level of Evidence: Prognosis, level 4. J Orthop Sports Phys Ther 2016;46(9):726-741. Epub 31 Jul 2016. doi:10.2519/jospt.2016.6266.

  9. Uncertainty and Variability in Physiologically-Based ...

    EPA Pesticide Factsheets

    EPA announced the availability of the final report, Uncertainty and Variability in Physiologically-Based Pharmacokinetic (PBPK) Models: Key Issues and Case Studies. This report summarizes some of the recent progress in characterizing uncertainty and variability in physiologically-based pharmacokinetic models and their predictions for use in risk assessment.

  10. An instrumental variable random-coefficients model for binary outcomes

    PubMed Central

    Chesher, Andrew; Rosen, Adam M

    2014-01-01

    In this paper, we study a random-coefficients model for a binary outcome. We allow for the possibility that some or even all of the explanatory variables are arbitrarily correlated with the random coefficients, thus permitting endogeneity. We assume the existence of observed instrumental variables Z that are jointly independent with the random coefficients, although we place no structure on the joint determination of the endogenous variable X and instruments Z, as would be required for a control function approach. The model fits within the spectrum of generalized instrumental variable models, and we thus apply identification results from our previous studies of such models to the present context, demonstrating their use. Specifically, we characterize the identified set for the distribution of random coefficients in the binary response model with endogeneity via a collection of conditional moment inequalities, and we investigate the structure of these sets by way of numerical illustration. PMID:25798048

  11. Selection of latent variables for multiple mixed-outcome models

    PubMed Central

    ZHOU, LING; LIN, HUAZHEN; SONG, XINYUAN; LI, YI

    2014-01-01

    Latent variable models have been widely used for modeling the dependence structure of multiple-outcome data. However, the formulation of a latent variable model is often unknown a priori, and misspecification will distort the dependence structure and lead to unreliable model inference. Moreover, multiple outcomes of varying types present enormous analytical challenges. In this paper, we present a class of general latent variable models that can accommodate mixed types of outcomes. We propose a novel selection approach that simultaneously selects latent variables and estimates parameters. We show that the proposed estimator is consistent, asymptotically normal and has the oracle property. The practical utility of the methods is confirmed via simulations as well as an application to the analysis of the World Values Survey, a global research project that explores people's values and beliefs and the social and personal characteristics that might influence them. PMID:27642219

  12. Physiologically Based Pharmacokinetic (PBPK) Modeling of Interstrain Variability in Trichloroethylene Metabolism in the Mouse

    PubMed Central

    Campbell, Jerry L.; Clewell, Harvey J.; Zhou, Yi-Hui; Wright, Fred A.; Guyton, Kathryn Z.

    2014-01-01

    Background: Quantitative estimation of toxicokinetic variability in the human population is a persistent challenge in risk assessment of environmental chemicals. Traditionally, interindividual differences in the population are accounted for by default assumptions or, in rare cases, are based on human toxicokinetic data. Objectives: We evaluated the utility of genetically diverse mouse strains for estimating toxicokinetic population variability for risk assessment, using trichloroethylene (TCE) metabolism as a case study. Methods: We used data on oxidative and glutathione conjugation metabolism of TCE in 16 inbred and 1 hybrid mouse strains to calibrate and extend existing physiologically based pharmacokinetic (PBPK) models. We added one-compartment models for glutathione metabolites and a two-compartment model for dichloroacetic acid (DCA). We used a Bayesian population analysis of interstrain variability to quantify variability in TCE metabolism. Results: Concentration–time profiles for TCE metabolism to oxidative and glutathione conjugation metabolites varied across strains. Median predictions for the metabolic flux through oxidation were less variable (5-fold range) than that through glutathione conjugation (10-fold range). For oxidative metabolites, median predictions of trichloroacetic acid production were less variable (2-fold range) than DCA production (5-fold range), although the uncertainty bounds for DCA exceeded the predicted variability. Conclusions: Population PBPK modeling of genetically diverse mouse strains can provide useful quantitative estimates of toxicokinetic population variability. When extrapolated to lower doses more relevant to environmental exposures, mouse population-derived variability estimates for TCE metabolism closely matched population variability estimates previously derived from human toxicokinetic studies with TCE, highlighting the utility of mouse interstrain metabolism studies for addressing toxicokinetic variability. Citation: Chiu WA, Campbell JL Jr, Clewell HJ III, Zhou YH, Wright FA, Guyton KZ, Rusyn I. 2014. Physiologically based pharmacokinetic (PBPK) modeling of interstrain variability in trichloroethylene metabolism in the mouse. Environ Health Perspect 122:456–463; http://dx.doi.org/10.1289/ehp.1307623 PMID:24518055

  13. Attributing uncertainty in streamflow simulations due to variable inputs via the Quantile Flow Deviation metric

    NASA Astrophysics Data System (ADS)

    Shoaib, Syed Abu; Marshall, Lucy; Sharma, Ashish

    2018-06-01

    Every model to characterise a real world process is affected by uncertainty. Selecting a suitable model is a vital aspect of engineering planning and design. Observation or input errors make the prediction of modelled responses more uncertain. By way of a recently developed attribution metric, this study is aimed at developing a method for analysing variability in model inputs together with model structure variability to quantify their relative contributions in typical hydrological modelling applications. The Quantile Flow Deviation (QFD) metric is used to assess these alternate sources of uncertainty. The Australian Water Availability Project (AWAP) precipitation data for four different Australian catchments is used to analyse the impact of spatial rainfall variability on simulated streamflow variability via the QFD. The QFD metric attributes the variability in flow ensembles to uncertainty associated with the selection of a model structure and input time series. For the case study catchments, the relative contribution of input uncertainty due to rainfall is higher than that due to potential evapotranspiration, and overall input uncertainty is significant compared to model structure and parameter uncertainty. Overall, this study investigates the propagation of input uncertainty in a daily streamflow modelling scenario and demonstrates how input errors manifest across different streamflow magnitudes.

  14. Modelling space of spread Dengue Hemorrhagic Fever (DHF) in Central Java use spatial durbin model

    NASA Astrophysics Data System (ADS)

    Ispriyanti, Dwi; Prahutama, Alan; Taryono, Arkadina PN

    2018-05-01

    Dengue Hemorrhagic Fever (DHF) is one of the major public health problems in Indonesia. From year to year, DHF causes Extraordinary Events in most parts of Indonesia, especially Central Java. Central Java consists of 35 districts or cities, each close to the others. Spatial regression is an analysis that estimates the influence of independent variables on the dependent variable while accounting for regional effects. Spatial regression models include the spatial autoregressive (SAR) model, the spatial error model (SEM) and the spatial autoregressive moving average (SARMA) model. The spatial Durbin model (SDM) is a development of SAR in which both the dependent and the independent variables have spatial influence. In this research the dependent variable is the number of DHF sufferers. The independent variables observed are population density, number of hospitals, number of residents, number of health centers, and mean years of schooling. From the multiple regression model test, the variables that significantly affect the spread of DHF are the number of residents and mean years of schooling. Using both queen contiguity and rook contiguity, the best model produced is the SDM with queen contiguity, because it has the smallest AIC value, 494.12. The factors that generally affect the spread of DHF in Central Java Province are the number of residents and mean years of schooling.
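
    Writing the SDM as y = ρWy + Xβ + WXθ + ε with a row-standardized queen-contiguity weights matrix W, one standard estimation route is spatial two-stage least squares with [X, WX, W²X] as instruments for the spatial lag Wy. This minimal numpy sketch is illustrative only; the likelihood-based fit needed for AIC comparison, as used in the paper, differs in detail:

    ```python
    import numpy as np

    def sdm_2sls(y, X, W):
        """Spatial Durbin model  y = rho*W y + X beta + W X theta + eps,
        estimated by spatial 2SLS (instruments X, WX, W^2 X for W y).
        W: row-standardized spatial weights, e.g. queen contiguity."""
        n = len(y)
        one = np.ones((n, 1))
        WX = W @ X
        D = np.hstack([one, (W @ y)[:, None], X, WX])  # regressors incl. W y
        Z = np.hstack([one, X, WX, W @ WX])            # instrument set
        P = Z @ np.linalg.pinv(Z.T @ Z) @ Z.T          # projection on instruments
        return np.linalg.solve(D.T @ P @ D, D.T @ P @ y)  # [const, rho, beta, theta]
    ```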

  15. Multilevel Model Prediction

    ERIC Educational Resources Information Center

    Frees, Edward W.; Kim, Jee-Seon

    2006-01-01

    Multilevel models are proven tools in social research for modeling complex, hierarchical systems. In multilevel modeling, statistical inference is based largely on quantification of random variables. This paper distinguishes among three types of random variables in multilevel modeling--model disturbances, random coefficients, and future response…

  16. Generalized Processing Tree Models: Jointly Modeling Discrete and Continuous Variables.

    PubMed

    Heck, Daniel W; Erdfelder, Edgar; Kieslich, Pascal J

    2018-05-24

    Multinomial processing tree models assume that discrete cognitive states determine observed response frequencies. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. GPT models assume finite-mixture distributions, with weights determined by a processing tree structure, and continuous components modeled by parameterized distributions such as Gaussians with separate or shared parameters across states. We discuss identifiability, parameter estimation, model testing, a modeling syntax, and the improved precision of GPT estimates. Finally, a GPT version of the feature comparison model of semantic categorization is applied to computer-mouse trajectories.

  17. Whole season compared to growth-stage resolved temperature trends: implications for US maize yield

    NASA Astrophysics Data System (ADS)

    Butler, E. E.; Mueller, N. D.; Huybers, P. J.

    2014-12-01

    The effect of temperature on maize yield has generally been considered using a single value for the entire growing season. We compare the effect of temperature trends on yield between two distinct models: one with a single temperature sensitivity for the whole season and one with variable sensitivity across four distinct agronomic development stages. The more resolved variable-sensitivity model indicates roughly a factor of two greater influence of temperature on yield than that implied by the single-sensitivity model. The largest discrepancies occur in silking, which is demonstrated to be the most sensitive stage in the variable-sensitivity model. For instance, whereas median yields are observed to be only 53% of typical values during the hottest 1% of silking-stage temperatures, the single-sensitivity model overpredicts median yields of 68%, whereas the variable-sensitivity model more correctly predicts median yields of 61%. That the variable-sensitivity model is still not capable of capturing the full extent of yield losses suggests that further refinement to represent the non-linear response would be useful. Results from the variable-sensitivity model also indicate that management decisions regarding planting times, which have generally shifted toward earlier dates, have led to greater yield benefit than that implied by the single-sensitivity model. Together, the variation of both temperature trends and yield variability within growing stages calls for closer attention to how changes in management interact with changes in climate to ultimately affect yields.

  18. Impact of interannual variability (1979-1986) of transport and temperature on ozone as computed using a two-dimensional photochemical model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jackman, C.H.; Douglass, A.R.; Chandra, S.; Stolarski, R.S.

    1991-03-20

    Eight years of NMC (National Meteorological Center) temperature and SBUV (solar backscattered ultraviolet) ozone data were used to calculate the monthly mean heating rates and residual circulation for use in a two-dimensional photochemical model, in order to examine the interannual variability of modeled ozone. Fairly good correlations were found in the interannual behavior of modeled and measured SBUV ozone in the upper stratosphere at middle to low latitudes, where temperature-dependent photochemistry is thought to dominate ozone behavior. The calculated total ozone is found to be more sensitive to the interannual residual circulation changes than to the interannual temperature changes. The magnitude of the modeled ozone variability is similar to the observed variability, but the observed and modeled year-to-year deviations are mostly uncorrelated. The large component of the observed total ozone variability at low latitudes due to the quasi-biennial oscillation (QBO) is not seen in the modeled total ozone, as only a small QBO signal is present in the heating rates, temperatures, and monthly mean residual circulation. Large interannual changes in tropospheric dynamics are believed to influence the interannual variability in total ozone, especially at middle and high latitudes. Since these tropospheric changes and most of the QBO forcing are not included in the model formulation, it is not surprising that the interannual variability in total ozone is not well represented in the model computations.

  19. Towards multi-resolution global climate modeling with ECHAM6-FESOM. Part II: climate variability

    NASA Astrophysics Data System (ADS)

    Rackow, T.; Goessling, H. F.; Jung, T.; Sidorenko, D.; Semmler, T.; Barbi, D.; Handorf, D.

    2018-04-01

    This study forms part II of two papers describing ECHAM6-FESOM, a newly established global climate model with a unique multi-resolution sea ice-ocean component. While part I deals with the model description and the mean climate state, here we examine the internal climate variability of the model under constant present-day (1990) conditions. We (1) assess the internal variations in the model in terms of objective variability performance indices, (2) analyze variations in global mean surface temperature and put them in context to variations in the observed record, with particular emphasis on the recent warming slowdown, (3) analyze and validate the most common atmospheric and oceanic variability patterns, (4) diagnose the potential predictability of various climate indices, and (5) put the multi-resolution approach to the test by comparing two setups that differ only in oceanic resolution in the equatorial belt, where one ocean mesh keeps the coarse 1° resolution applied in the adjacent open-ocean regions and the other mesh is gradually refined to 0.25°. Objective variability performance indices show that, in the considered setups, ECHAM6-FESOM performs overall favourably compared to five well-established climate models. Internal variations of the global mean surface temperature in the model are consistent with observed fluctuations and suggest that the recent warming slowdown can be explained as a once-in-one-hundred-years event caused by internal climate variability; periods of strong cooling in the model ('hiatus' analogs) are mainly associated with ENSO-related variability and to a lesser degree also to PDO shifts, with the AMO playing a minor role. Common atmospheric and oceanic variability patterns are simulated largely consistent with their real counterparts. Typical deficits also found in other models at similar resolutions remain, in particular too weak non-seasonal variability of SSTs over large parts of the ocean and episodic periods of almost absent deep-water formation in the Labrador Sea, resulting in overestimated North Atlantic SST variability. Concerning the influence of locally (isotropically) increased resolution, the ENSO pattern and index statistics improve significantly with higher resolution around the equator, illustrating the potential of the novel unstructured-mesh method for global climate modeling.

  20. Effects of model spatial resolution on ecohydrologic predictions and their sensitivity to inter-annual climate variability

    Treesearch

    Kyongho Son; Christina Tague; Carolyn Hunsaker

    2016-01-01

    The effect of fine-scale topographic variability on model estimates of ecohydrologic responses to climate variability in California’s Sierra Nevada watersheds has not been adequately quantified and may be important for supporting reliable climate-impact assessments. This study tested the effect of digital elevation model (DEM) resolution on model accuracy and estimates...

  1. Variability And Uncertainty Analysis Of Contaminant Transport Model Using Fuzzy Latin Hypercube Sampling Technique

    NASA Astrophysics Data System (ADS)

    Kumar, V.; Nayagum, D.; Thornton, S.; Banwart, S.; Schuhmacher, M.; Lerner, D.

    2006-12-01

    Characterization of the uncertainty associated with groundwater quality models is often of critical importance, for example when environmental models are employed in risk assessment. Insufficient data, inherent variability and estimation errors of environmental model parameters introduce uncertainty into model predictions. However, uncertainty analysis using conventional methods such as standard Monte Carlo sampling (MCS) may not be efficient, or even suitable, for complex, computationally demanding models involving parametric variability and uncertainty of different natures. General MCS, or variants of MCS such as Latin Hypercube Sampling (LHS), treat variability and uncertainty as a single random entity, and the generated samples are treated as crisp, with vagueness represented as randomness. Also, when the models are used as purely predictive tools, uncertainty and variability lead to the need to assess the plausible range of model outputs. An improved, systematic variability and uncertainty analysis can provide insight into the level of confidence in model estimates and can aid in assessing how various possible model estimates should be weighed. The present study introduces Fuzzy Latin Hypercube Sampling (FLHS), a hybrid approach incorporating cognitive and noncognitive uncertainties. Noncognitive uncertainty, such as physical randomness or statistical uncertainty due to limited information, can be described by its own probability density function (PDF), whereas cognitive uncertainty, such as estimation error, can be described by a membership function for its fuzziness and by confidence intervals via α-cuts. An important property of this approach is its ability to merge the inexactly generated data of the LHS approach to increase the quality of information. The FLHS technique ensures that the entire range of each variable is sampled with proper incorporation of uncertainty and variability. A fuzzified statistical summary of the model results produces indices of sensitivity and uncertainty that relate the effects of heterogeneity and uncertainty of input variables to model predictions. The feasibility of the method is validated by assessing the uncertainty propagation of parameter values for estimating the contamination level of a drinking water supply well due to transport of dissolved phenolics from a contaminated site in the UK.
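
    A minimal sketch of the sampling idea follows, assuming scipy's Latin hypercube sampler, a Gaussian PDF for a noncognitive input, and a triangular fuzzy number for a cognitive one; the published FLHS procedure is more elaborate, and all parameter values here are invented:

    ```python
    import numpy as np
    from scipy.stats import norm, qmc

    sampler = qmc.LatinHypercube(d=3, seed=0)
    u = sampler.random(1000)                 # stratified uniforms in [0,1]^3

    # noncognitive input: physical randomness with a known PDF
    conductivity = norm.ppf(u[:, 0], loc=1e-5, scale=2e-6)

    # cognitive input: triangular fuzzy number (a, m, b) for a decay rate
    a, m, b = 0.5, 1.0, 2.0
    alpha = u[:, 1]                          # membership (alpha-cut) level
    lo = a + alpha * (m - a)                 # alpha-cut lower bound
    hi = b - alpha * (b - m)                 # alpha-cut upper bound
    decay = lo + u[:, 2] * (hi - lo)         # draw inside the alpha-cut

    # (conductivity, decay) pairs would then feed the transport model, and
    # outputs can be summarized per alpha level to give fuzzy prediction bounds
    ```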

  2. Modelling the vertical distribution of canopy fuel load using national forest inventory and low-density airborne laser scanning data.

    PubMed

    González-Ferreiro, Eduardo; Arellano-Pérez, Stéfano; Castedo-Dorado, Fernando; Hevia, Andrea; Vega, José Antonio; Vega-Nieva, Daniel; Álvarez-González, Juan Gabriel; Ruiz-González, Ana Daría

    2017-01-01

    The fuel complex variables canopy bulk density and canopy base height are often used to predict crown fire initiation and spread. Direct measurement of these variables is impractical, and they are usually estimated indirectly by modelling. Recent advances in predicting crown fire behaviour require accurate estimates of the complete vertical distribution of canopy fuels. The objectives of the present study were to model the vertical profile of available canopy fuel in pine stands by using data from the Spanish national forest inventory plus low-density airborne laser scanning (ALS) metrics. In a first step, the vertical distribution of the canopy fuel load was modelled using the Weibull probability density function. In a second step, two different systems of models were fitted to estimate the canopy variables defining the vertical distributions; the first system related these variables to stand variables obtained in a field inventory, and the second system related the canopy variables to airborne laser scanning metrics. The models of each system were fitted simultaneously to compensate the effects of the inherent cross-model correlation between the canopy variables. Heteroscedasticity was also analyzed, but no correction in the fitting process was necessary. The estimated canopy fuel load profiles from field variables explained 84% and 86% of the variation in canopy fuel load for maritime pine and radiata pine respectively; whereas the estimated canopy fuel load profiles from ALS metrics explained 52% and 49% of the variation for the same species. The proposed models can be used to assess the effectiveness of different forest management alternatives for reducing crown fire hazard.
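
    The first modelling step — describing the vertical fuel-load profile with a Weibull probability density function — can be sketched with scipy. The heights below are synthetic stand-ins, since the paper predicts the Weibull parameters from stand or ALS variables rather than fitting them to direct measurements:

    ```python
    import numpy as np
    from scipy.stats import weibull_min

    rng = np.random.default_rng(1)
    # synthetic stand-in: heights (m) at which increments of fuel load occur
    heights = weibull_min.rvs(c=2.3, scale=9.0, size=500, random_state=rng)

    shape, _, scale = weibull_min.fit(heights, floc=0.0)  # fitted vertical profile

    # share of the total canopy fuel load found between 4 m and 10 m:
    frac = (weibull_min.cdf(10.0, shape, 0.0, scale)
            - weibull_min.cdf(4.0, shape, 0.0, scale))
    print(f"shape={shape:.2f}, scale={scale:.2f}, fuel fraction 4-10 m = {frac:.2f}")
    ```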

  3. Assessing the influence of watershed characteristics on chlorophyll a in waterbodies at global and regional scales

    USGS Publications Warehouse

    Woelmer, Whitney; Kao, Yu-Chun; Bunnell, David B.; Deines, Andrew M.; Bennion, David; Rogers, Mark W.; Brooks, Colin N.; Sayers, Michael J.; Banach, David M.; Grimm, Amanda G.; Shuchman, Robert A.

    2016-01-01

    Prediction of primary production of lentic water bodies (i.e., lakes and reservoirs) is valuable to researchers and resource managers alike, but is very rarely done at the global scale. With the development of remote sensing technologies, it is now feasible to gather large amounts of data across the world, including understudied and remote regions. To determine which factors were most important in explaining the variation of chlorophyll a (Chl-a), an indicator of primary production in water bodies, at global and regional scales, we first developed a geospatial database of 227 water bodies and watersheds with corresponding Chl-a, nutrient, hydrogeomorphic, and climate data. Then we used a generalized additive modeling approach and developed model selection criteria to select models that most parsimoniously related Chl-a to predictor variables for all 227 water bodies and for 51 lakes in the Laurentian Great Lakes region in the data set. Our best global model contained two hydrogeomorphic variables (water body surface area and the ratio of watershed to water body surface area) and a climate variable (average temperature in the warmest quarter) and explained ~ 30% of variation in Chl-a. Our regional model contained one hydrogeomorphic variable (flow accumulation) and the same climate variable, but explained substantially more variation (58%). Our results indicate that a regional approach to watershed modeling may be more informative to predicting Chl-a, and that nearly a third of global variability in Chl-a may be explained using hydrogeomorphic and climate variables.
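
    A generalized additive model with smooth terms for the two hydrogeomorphic variables and the climate variable can be sketched with the pygam library (assumed here; synthetic stand-in data, and the log transforms are illustrative guesses, not the authors' specification):

    ```python
    import numpy as np
    from pygam import LinearGAM, s

    rng = np.random.default_rng(0)
    n = 227
    area = rng.lognormal(2.0, 1.0, n)     # water-body surface area
    ratio = rng.lognormal(1.0, 0.5, n)    # watershed : water-body area ratio
    temp = rng.normal(20.0, 5.0, n)       # mean temp. of warmest quarter
    chla = (0.5 * np.log(area) + 0.3 * np.log(ratio)
            + 0.1 * temp + rng.normal(0, 0.5, n))

    X = np.column_stack([np.log(area), np.log(ratio), temp])
    gam = LinearGAM(s(0) + s(1) + s(2)).fit(X, chla)  # one smooth per predictor
    gam.summary()           # pseudo-R^2 plays the role of explained variation
    ```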

  4. Role of Internal Variability in Surface Temperature and Precipitation Change Uncertainties over India.

    NASA Astrophysics Data System (ADS)

    Achutarao, K. M.; Singh, R.

    2017-12-01

    There are various sources of uncertainty in model projections of future climate change, including differences in the formulation of climate models, internal variability, and differences in scenarios. Internal variability in the climate system represents the unforced change due to the chaotic nature of the system and is considered irreducible (Deser et al., 2012). Internal variability becomes important at regional scales, where it can dominate forced changes, and therefore needs to be carefully assessed in future projections. In this study we segregate the role of internal variability in future temperature and precipitation projections over the Indian region. We make use of the Coupled Model Intercomparison Project - phase 5 (CMIP5; Taylor et al., 2012) database containing climate model simulations carried out by various modeling centers around the world. While the CMIP5 experimental protocol recommended producing numerous ensemble members, only a handful of the modeling groups provided multiple realizations, and having a small number of realizations is a limitation in quantifying internal variability. We therefore exploit the Community Earth System Model Large Ensemble (CESM-LE; Kay et al., 2014) dataset, which contains a 40-member ensemble of a single model, CESM1 (CAM5), to explore the role of internal variability in future projections. Surface air temperature and precipitation change projections at regional and sub-regional scales are analyzed under the IPCC emission scenario RCP8.5 for different seasons and homogeneous climatic zones over India. We analyze the spread in projections due to internal variability in the CESM-LE and CMIP5 datasets over these regions.
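
    The standard large-ensemble logic — the ensemble mean isolates the forced response, while member deviations from it estimate internal variability — reduces to a few lines; this sketch uses synthetic data with CESM-LE-like dimensions:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    members, years = 40, 80                     # CESM-LE-like ensemble size
    forced_true = np.linspace(0.0, 3.0, years)  # imposed warming trend (synthetic)
    sims = forced_true + rng.normal(0.0, 0.4, size=(members, years))

    forced = sims.mean(axis=0)                  # ensemble mean ~ forced signal
    internal = sims - forced                    # residuals ~ internal variability
    spread = internal.std(axis=0)               # irreducible spread per year
    print(spread.mean().round(2))
    ```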

  5. Balancing precision and risk: should multiple detection methods be analyzed separately in N-mixture models?

    USGS Publications Warehouse

    Graves, Tabitha A.; Royle, J. Andrew; Kendall, Katherine C.; Beier, Paul; Stetz, Jeffrey B.; Macleod, Amy C.

    2012-01-01

    Using multiple detection methods can increase the number, kind, and distribution of individuals sampled, which may increase the accuracy and precision of population abundance estimates and reduce their cost. However, when variables influencing abundance are of interest, if individuals detected via different methods are influenced by the landscape differently, separate analysis of multiple detection methods may be more appropriate. We evaluated the effects of combining two detection methods on the identification of variables important to local abundance using detections of grizzly bears with hair traps (systematic) and bear rubs (opportunistic). We used hierarchical abundance models (N-mixture models) with separate model components for each detection method. If both methods sample the same population, the use of either data set alone should (1) lead to the selection of the same variables as important and (2) provide similar estimates of relative local abundance. We hypothesized that the inclusion of two detection methods versus either method alone should (3) yield more support for variables identified in single-method analyses (i.e., fewer variables and models with greater weight), and (4) improve precision of covariate estimates for variables selected in both separate and combined analyses because sample size is larger. As expected, joint analysis of both methods increased precision as well as certainty in variable and model selection. However, the single-method analyses identified different variables and the resulting predicted abundances had different spatial distributions. We recommend comparing single-method and jointly modeled results to identify individual heterogeneity between detection methods in N-mixture models, along with consideration of detection probabilities, correlations among variables, and tolerance to the risk of failing to identify variables important to a subset of the population. The benefits of increased precision should be weighed against those risks. The analysis framework presented here will be useful for other species exhibiting heterogeneity by detection method.
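
    For readers unfamiliar with N-mixture models, the sketch below evaluates the joint likelihood of counts from two detection methods that share one latent abundance per site (a binomial mixture over a Poisson-distributed N); all parameter values are hypothetical.

        import numpy as np
        from scipy.stats import poisson, binom

        def site_likelihood(y1, y2, lam, p1, p2, n_max=100):
            """Sum over latent abundance N of Pois(N|lam)*Bin(y1|N,p1)*Bin(y2|N,p2)."""
            N = np.arange(max(y1, y2), n_max + 1)
            return np.sum(poisson.pmf(N, lam)
                          * binom.pmf(y1, N, p1) * binom.pmf(y2, N, p2))

        # Counts from hair traps (y1) and bear rubs (y2) at one site:
        print(site_likelihood(y1=3, y2=1, lam=5.0, p1=0.4, p2=0.15))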

  6. Future of endemic flora of biodiversity hotspots in India.

    PubMed

    Chitale, Vishwas Sudhir; Behera, Mukund Dev; Roy, Partha Sarthi

    2014-01-01

    India is one of the 12 mega-biodiversity countries of the world, representing 11% of the world's flora in about 2.4% of the global land mass. Approximately 28% of the total Indian flora and 33% of angiosperms occurring in India are endemic. Higher human population density in biodiversity hotspots in India puts undue pressure on these sensitive eco-regions. In the present study, we predict the future distribution of 637 endemic plant species from three biodiversity hotspots in India (Himalaya, Western Ghats, Indo-Burma), based on the A1B scenario for the years 2050 and 2080. We develop individual-variable models as well as mixed models in MaxEnt by combining the ten least correlated bioclimatic variables, two disturbance variables and one physiography variable as predictors. The projected changes suggest that the endemic flora will be adversely impacted, even under such a moderate climate scenario. The future distribution is predicted to shift in the northern and north-eastern direction in Himalaya and Indo-Burma, and in the southern and south-western direction in the Western Ghats, owing to cooler climatic conditions in these regions. In the future distribution of endemic plants, we observe a significant shift and reduction in the distribution range compared to the present distribution. The model predicts a 23.99% range reduction and a 7.70% range expansion by 2050, and a 41.34% range reduction and a 24.10% range expansion by 2080. Integration of disturbance and physiography variables along with bioclimatic variables improved the prediction accuracy. Mixed models provide the most accurate results for most combinations of climatic and non-climatic variables, compared to individual-variable models. We conclude that a) regions with cooler climates and higher moisture availability could serve as refugia for endemic plants under future climatic conditions; and b) mixed models provide more accurate results than single-variable models.

  7. Future of Endemic Flora of Biodiversity Hotspots in India

    PubMed Central

    Chitale, Vishwas Sudhir; Behera, Mukund Dev; Roy, Partha Sarthi

    2014-01-01

    India is one of the 12 mega-biodiversity countries of the world, representing 11% of the world's flora in about 2.4% of the global land mass. Approximately 28% of the total Indian flora and 33% of angiosperms occurring in India are endemic. Higher human population density in biodiversity hotspots in India puts undue pressure on these sensitive eco-regions. In the present study, we predict the future distribution of 637 endemic plant species from three biodiversity hotspots in India (Himalaya, Western Ghats, Indo-Burma), based on the A1B scenario for the years 2050 and 2080. We develop individual-variable models as well as mixed models in MaxEnt by combining the ten least correlated bioclimatic variables, two disturbance variables and one physiography variable as predictors. The projected changes suggest that the endemic flora will be adversely impacted, even under such a moderate climate scenario. The future distribution is predicted to shift in the northern and north-eastern direction in Himalaya and Indo-Burma, and in the southern and south-western direction in the Western Ghats, owing to cooler climatic conditions in these regions. In the future distribution of endemic plants, we observe a significant shift and reduction in the distribution range compared to the present distribution. The model predicts a 23.99% range reduction and a 7.70% range expansion by 2050, and a 41.34% range reduction and a 24.10% range expansion by 2080. Integration of disturbance and physiography variables along with bioclimatic variables improved the prediction accuracy. Mixed models provide the most accurate results for most combinations of climatic and non-climatic variables, compared to individual-variable models. We conclude that a) regions with cooler climates and higher moisture availability could serve as refugia for endemic plants under future climatic conditions; and b) mixed models provide more accurate results than single-variable models. PMID:25501852
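
    The range-change percentages reported above follow from simple bookkeeping on binary suitability maps; a sketch with synthetic rasters:

        import numpy as np

        rng = np.random.default_rng(2)
        present = rng.random((100, 100)) < 0.3        # presently suitable cells
        future = rng.random((100, 100)) < 0.25        # projected suitable cells

        lost = present & ~future
        gained = ~present & future
        range_reduction = 100 * lost.sum() / present.sum()
        range_expansion = 100 * gained.sum() / present.sum()
        print(f"reduction {range_reduction:.2f}%, expansion {range_expansion:.2f}%")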

  8. Representing general theoretical concepts in structural equation models: The role of composite variables

    USGS Publications Warehouse

    Grace, J.B.; Bollen, K.A.

    2008-01-01

    Structural equation modeling (SEM) holds the promise of providing natural scientists the capacity to evaluate complex multivariate hypotheses about ecological systems. Building on its predecessors, path analysis and factor analysis, SEM allows for the incorporation of both observed and unobserved (latent) variables into theoretically-based probabilistic models. In this paper we discuss the interface between theory and data in SEM and the use of an additional variable type, the composite. In simple terms, composite variables specify the influences of collections of other variables and can be helpful in modeling heterogeneous concepts of the sort commonly of interest to ecologists. While long recognized as a potentially important element of SEM, composite variables have received very limited use, in part because of a lack of theoretical consideration, but also because of difficulties that arise in parameter estimation when using conventional solution procedures. In this paper we present a framework for discussing composites and demonstrate how the use of partially-reduced-form models can help to overcome some of the parameter estimation and evaluation problems associated with models containing composites. Diagnostic procedures for evaluating the most appropriate and effective use of composites are illustrated with an example from the ecological literature. It is argued that an ability to incorporate composite variables into structural equation models may be particularly valuable in the study of natural systems, where concepts are frequently multifaceted and the influence of suites of variables is often of interest. © Springer Science+Business Media, LLC 2007.
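
    The following is an illustrative stand-in, not the authors' partially-reduced-form estimator: a composite (say, "soil condition") is formed as a weighted sum of its constituent indicators, with weights estimated by regressing the response on the indicators; all data are synthetic.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n = 200
        ph, moisture, nitrogen = rng.normal(size=(3, n))
        richness = 0.5 * ph + 0.2 * moisture + 0.8 * nitrogen + rng.normal(0, 1, n)

        X = sm.add_constant(np.column_stack([ph, moisture, nitrogen]))
        fit = sm.OLS(richness, X).fit()
        composite = X[:, 1:] @ fit.params[1:]   # composite: weighted indicator sum
        print(np.corrcoef(composite, richness)[0, 1])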

  9. Enabling intelligent copernicus services for carbon and water balance modeling of boreal forest ecosystems - North State

    NASA Astrophysics Data System (ADS)

    Häme, Tuomas; Mutanen, Teemu; Rauste, Yrjö; Antropov, Oleg; Molinier, Matthieu; Quegan, Shaun; Kantzas, Euripides; Mäkelä, Annikki; Minunno, Francesco; Atli Benediktsson, Jon; Falco, Nicola; Arnason, Kolbeinn; Storvold, Rune; Haarpaintner, Jörg; Elsakov, Vladimir; Rasinmäki, Jussi

    2015-04-01

    The objective of project North State, funded by Framework Programme 7 of the European Union, is to develop innovative data fusion methods that exploit the new generation of multi-source data from the Sentinels and other satellites in an intelligent, self-learning framework. The remote sensing outputs are interfaced with state-of-the-art carbon and water flux models for monitoring the fluxes over boreal Europe, in order to reduce current large uncertainties. This will provide a paradigm for the development of products for future Copernicus services. The models to be interfaced are a dynamic vegetation model and a light use efficiency model. We have identified four groups of variables that will be estimated with remotely sensed data: land cover variables, forest characteristics, vegetation activity, and hydrological variables. The estimates will be used as model inputs and to validate the model outputs. The earth observation variables are computed as automatically as possible, with the goal of completely automatic estimation. North State has two sites for intensive studies, in southern and northern Finland respectively, one in Iceland and one in the Komi Republic of Russia. Additionally, the model input variables will be estimated and the models applied over the European boreal and sub-arctic region from the Ural Mountains to Iceland. The accuracy assessment of the earth observation variables will follow a statistical sampling design. Model output predictions are compared to earth observation variables, and flux tower measurements are also applied in the model assessment. In the paper, results from hyperspectral, Sentinel-1, and Landsat data and their use in the models are presented, along with an example of a completely automatic land cover class prediction.

  10. The effect of modeled absolute timing variability and relative timing variability on observational learning.

    PubMed

    Grierson, Lawrence E M; Roberts, James W; Welsher, Arthur M

    2017-05-01

    There is much evidence to suggest that skill learning is enhanced by skill observation. Recent research on this phenomenon indicates a benefit of observing variable/erred demonstrations. In this study, we explore whether it is variability within the relative organization or the absolute parameterization of a movement that facilitates skill learning through observation. To do so, participants were randomly allocated into groups that observed a model with no variability, absolute timing variability, relative timing variability, or variability in both absolute and relative timing. All participants performed a four-segment movement pattern with specific absolute and relative timing goals prior to and following the observational intervention, as well as in a 24-h retention test and transfer tests that featured new relative and absolute timing goals. Absolute timing error indicated that all groups initially acquired the absolute timing, maintained their performance at 24-h retention, and exhibited performance deterioration in both transfer tests. Relative timing error revealed that the observation of no variability and of relative timing variability produced greater performance at the post-test, 24-h retention and relative timing transfer tests, but performance of the no-variability group deteriorated in the absolute timing transfer test. The results suggest that the learning of absolute timing following observation unfolds irrespective of model variability. However, the learning of relative timing benefits from holding the absolute features constant, while the observation of no variability partially fails in transfer. We suggest that learning by observing no-variability and variable/erred models unfolds via similar neural mechanisms, although the latter benefits from the additional coding of information pertaining to movements that require a correction. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. A multi-segment foot model based on anatomically registered technical coordinate systems: method repeatability in pediatric feet.

    PubMed

    Saraswat, Prabhav; MacWilliams, Bruce A; Davis, Roy B

    2012-04-01

    Several multi-segment foot models to measure the motion of intrinsic joints of the foot have been reported. Use of these models in clinical decision making is limited due to a lack of rigorous validation, including inter-clinician and inter-laboratory variability measures. A model with thoroughly quantified variability may significantly improve confidence in the results of such foot models. This study proposes a new clinical foot model with the underlying strategy of using separate anatomic and technical marker configurations and coordinate systems. Anatomical landmark and coordinate system identification is performed during a static subject calibration. Technical markers are located at optimal sites for dynamic motion tracking. The model comprises the tibia and three foot segments (hindfoot, forefoot and hallux), and inter-segmental joint angles are computed in three planes. Data collection was carried out on pediatric subjects at two sites (Site 1: ten subjects by two clinicians; Site 2: five subjects by one clinician). A plaster mold method was used to quantify static intra-clinician and inter-clinician marker placement variability by allowing direct comparisons of marker data between sessions for each subject. Intra-clinician and inter-clinician joint angle variability were less than 4°. For dynamic walking kinematics, intra-clinician, inter-clinician and inter-laboratory variability were less than 6° for the ankle and forefoot, but slightly higher for the hallux. Inter-trial variability accounted for 2-4° of the total dynamic variability. Results indicate that the proposed foot model reduces the effects of marker placement variability on computed foot kinematics during walking compared to similar measures in previous models. Copyright © 2011 Elsevier B.V. All rights reserved.

  12. The Importance of Distance to Resources in the Spatial Modelling of Bat Foraging Habitat

    PubMed Central

    Rainho, Ana; Palmeirim, Jorge M.

    2011-01-01

    Many bats are threatened by habitat loss, but opportunities to manage their habitats are now increasing. The success of management depends greatly on the capacity to determine where and how interventions should take place, so models predicting how animals use landscapes are important for planning such interventions. Bats are quite distinctive in the way they use space for foraging because (i) most are colonial central-place foragers and (ii) they exploit scattered and distant resources, although this increases flying costs. To evaluate how important distances to resources are in modelling foraging bat habitat suitability, we radio-tracked two cave-dwelling species of conservation concern (Rhinolophus mehelyi and Miniopterus schreibersii) in a Mediterranean landscape. Habitat and distance variables were evaluated using logistic regression modelling. Distance variables greatly increased the performance of the models, and distance to roost and to drinking water alone could explain 86% and 73% of the use of space by M. schreibersii and R. mehelyi, respectively. Land cover and soil productivity also contributed significantly to the final models. Habitat suitability maps generated by models with and without distance variables differed substantially, confirming the shortcomings of maps generated without distance variables. Indeed, areas shown as highly suitable in maps generated without distance variables proved poorly suitable when distance variables were also considered. We conclude that distances to resources are determinant in the way bats forage across the landscape, and that using distance variables substantially improves the accuracy of suitability maps generated with spatially explicit models. Consequently, modelling with these variables is important to guide habitat management for bats and similarly mobile animals, particularly if they are central-place foragers or depend on spatially scarce resources. PMID:21547076
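
    A sketch of the comparison made above, with simulated presence/availability data: logistic habitat-use models fitted with and without distance-to-resource covariates, scored by AUC. All variable names and coefficients are invented.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(4)
        n = 1000
        habitat = rng.normal(size=n)              # e.g. land-cover suitability
        d_roost = rng.exponential(5.0, n)         # km to roost
        d_water = rng.exponential(3.0, n)         # km to drinking water
        logit = 0.5 * habitat - 0.4 * d_roost - 0.3 * d_water
        used = rng.random(n) < 1 / (1 + np.exp(-logit))

        for name, X in [("habitat only", habitat[:, None]),
                        ("habitat + distances",
                         np.column_stack([habitat, d_roost, d_water]))]:
            probs = LogisticRegression().fit(X, used).predict_proba(X)[:, 1]
            print(name, round(roc_auc_score(used, probs), 3))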

  13. Maximum Likelihood Analysis of Nonlinear Structural Equation Models with Dichotomous Variables

    ERIC Educational Resources Information Center

    Song, Xin-Yuan; Lee, Sik-Yum

    2005-01-01

    In this article, a maximum likelihood approach is developed to analyze structural equation models with dichotomous variables that are common in behavioral, psychological and social research. To assess nonlinear causal effects among the latent variables, the structural equation in the model is defined by a nonlinear function. The basic idea of the…

  14. Personality, Cognitive, and Interpersonal Factors in Adolescent Substance Use: A Longitudinal Test of an Integrative Model.

    ERIC Educational Resources Information Center

    Barnea, Zipora; And Others

    1992-01-01

    A test with 1,446 high school students in Israel of a multidimensional model of adolescent drug use that incorporates sociodemographic variables, personality variables, cognitive variables, interpersonal factors, and the availability of drugs validated the model longitudinally. Results suggest that different legal and illegal substances share a…

  15. Group Comparisons in the Presence of Missing Data Using Latent Variable Modeling Techniques

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2010-01-01

    A latent variable modeling approach for examining population similarities and differences in observed variable relationship and mean indexes in incomplete data sets is discussed. The method is based on the full information maximum likelihood procedure of model fitting and parameter estimation. The procedure can be employed to test group identities…

  16. Measuring Variability and Change with an Item Response Model for Polytomous Variables.

    ERIC Educational Resources Information Center

    Eid, Michael; Hoffman, Lore

    1998-01-01

    An extension of the graded-response model of F. Samejima (1969) is presented for the measurement of variability and change. The model is illustrated with a longitudinal study of student interest in radioactivity conducted with about 1,200 German students in elementary school when the study began. (SLD)

  17. Regression Methods for Categorical Dependent Variables: Effects on a Model of Student College Choice

    ERIC Educational Resources Information Center

    Rapp, Kelly E.

    2012-01-01

    The use of categorical dependent variables with the classical linear regression model (CLRM) violates many of the model's assumptions and may result in biased estimates (Long, 1997; O'Connell, Goldstein, Rogers, & Peng, 2008). Many dependent variables of interest to educational researchers (e.g., professorial rank, educational attainment) are…

  18. The effects of a confidant and a peer group on the well-being of single elders.

    PubMed

    Gupta, V; Korte, C

    1994-01-01

    A study of 100 elderly people was carried out to compare the predictions of well-being derived from the confidant model with those derived from the Weiss model. The confidant model predicts that the most important feature of a person's social network for that person's well-being is whether or not the person has a confidant. The Weiss model states that different persons are needed to fulfill the different needs of the person; in particular, a confidant is important to the need for intimacy and emotional security, while a peer group of social friends is needed to fulfill sociability and identity needs. The two models were evaluated by comparing the relative influence of the confidant variable with that of the peer group variable on subjects' well-being. Regression analysis was carried out on the well-being measure using as predictor variables the confidant variable, the peer group variable, age, health, and financial status. The confidant and peer group variables were of equal importance to well-being, thus confirming the Weiss model.

  19. Bayesian Techniques for Comparing Time-dependent GRMHD Simulations to Variable Event Horizon Telescope Observations

    NASA Astrophysics Data System (ADS)

    Kim, Junhan; Marrone, Daniel P.; Chan, Chi-Kwan; Medeiros, Lia; Özel, Feryal; Psaltis, Dimitrios

    2016-12-01

    The Event Horizon Telescope (EHT) is a millimeter-wavelength, very-long-baseline interferometry (VLBI) experiment that is capable of observing black holes with horizon-scale resolution. Early observations have revealed variable horizon-scale emission in the Galactic Center black hole, Sagittarius A* (Sgr A*). Comparing such observations to time-dependent general relativistic magnetohydrodynamic (GRMHD) simulations requires statistical tools that explicitly consider the variability in both the data and the models. We develop here a Bayesian method to compare time-resolved simulation images to variable VLBI data, in order to infer model parameters and perform model comparisons. We use mock EHT data based on GRMHD simulations to explore the robustness of this Bayesian method and contrast it to approaches that do not consider the effects of variability. We find that time-independent models lead to offset values of the inferred parameters with artificially reduced uncertainties. Moreover, neglecting the variability in the data and the models often leads to erroneous model selections. We finally apply our method to the early EHT data on Sgr A*.
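
    A toy illustration of the paper's qualitative point (not its actual method): when the model's own variability is ignored, the likelihood variance is understated and the inferred parameter uncertainty shrinks artificially.

        import numpy as np

        rng = np.random.default_rng(5)
        truth, sigma_obs, sigma_model = 1.0, 0.1, 0.3
        data = truth + rng.normal(0, np.hypot(sigma_obs, sigma_model), 50)

        def log_like(mu, include_model_var=True):
            var = sigma_obs**2 + (sigma_model**2 if include_model_var else 0.0)
            return -0.5 * np.sum((data - mu) ** 2 / var + np.log(2 * np.pi * var))

        grid = np.linspace(0.5, 1.5, 501)
        for flag in (True, False):
            ll = np.array([log_like(m, flag) for m in grid])
            post = np.exp(ll - ll.max()); post /= post.sum()
            mean = np.sum(post * grid)
            width = np.sqrt(np.sum(post * (grid - mean) ** 2))
            print("model variance included:", flag, "posterior sd ~", round(width, 3))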

  20. BAYESIAN TECHNIQUES FOR COMPARING TIME-DEPENDENT GRMHD SIMULATIONS TO VARIABLE EVENT HORIZON TELESCOPE OBSERVATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Junhan; Marrone, Daniel P.; Chan, Chi-Kwan

    2016-12-01

    The Event Horizon Telescope (EHT) is a millimeter-wavelength, very-long-baseline interferometry (VLBI) experiment that is capable of observing black holes with horizon-scale resolution. Early observations have revealed variable horizon-scale emission in the Galactic Center black hole, Sagittarius A* (Sgr A*). Comparing such observations to time-dependent general relativistic magnetohydrodynamic (GRMHD) simulations requires statistical tools that explicitly consider the variability in both the data and the models. We develop here a Bayesian method to compare time-resolved simulation images to variable VLBI data, in order to infer model parameters and perform model comparisons. We use mock EHT data based on GRMHD simulations to explore the robustness of this Bayesian method and contrast it to approaches that do not consider the effects of variability. We find that time-independent models lead to offset values of the inferred parameters with artificially reduced uncertainties. Moreover, neglecting the variability in the data and the models often leads to erroneous model selections. We finally apply our method to the early EHT data on Sgr A*.

  1. Modeling temporal and spatial variability of traffic-related air pollution: Hourly land use regression models for black carbon

    NASA Astrophysics Data System (ADS)

    Dons, Evi; Van Poppel, Martine; Kochan, Bruno; Wets, Geert; Int Panis, Luc

    2013-08-01

    Land use regression (LUR) modeling is a statistical technique used to determine exposure to air pollutants in epidemiological studies. Time-activity diaries can be combined with LUR models, enabling detailed exposure estimation and limiting exposure misclassification, at both shorter and longer time lags. In this study, the traffic-related air pollutant black carbon was measured with μ-aethalometers on a 5-min time base at 63 locations in Flanders, Belgium. The measurements show that hourly concentrations vary between locations, but also over the day, and the diurnal pattern differs between street and background locations. This suggests that annual LUR models are not sufficient to capture all the variation. Hourly LUR models for black carbon are developed using different strategies: by means of dummy variables, with dynamic dependent variables and/or with dynamic and static independent variables. The LUR model with 48 dummies (weekday hours and weekend hours) does not perform as well as the annual model (explained variance of 0.44 compared to 0.77 in the annual model). The dataset with hourly concentrations of black carbon can be used to recalibrate the annual model, but this results in many of the original explanatory variables losing their statistical significance and in certain variables having the wrong direction of effect. Building new independent hourly models, with static or dynamic covariates, is proposed as the best solution to these issues. R2 values for hourly LUR models are mostly smaller than the R2 of the annual model, ranging from 0.07 to 0.8; between 6 a.m. and 10 p.m. on weekdays the R2 approximates the annual model R2. Even though models of consecutive hours are developed independently, similar variables turn out to be significant. Using dynamic covariates instead of static covariates, i.e. hourly traffic intensities and hourly population densities, did not significantly improve model performance.
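
    A sketch of the dummy-variable strategy with simulated data: hour-of-day enters as categorical intercept shifts on top of static predictors (here 24 dummies rather than the study's 48 weekday/weekend dummies; all variables are invented stand-ins).

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(6)
        n = 5000
        df = pd.DataFrame({
            "hour": rng.integers(0, 24, n),
            "traffic": rng.exponential(1.0, n),     # static traffic-proximity variable
            "background": rng.normal(1.0, 0.2, n),
        })
        diurnal = 0.5 * np.sin(2 * np.pi * (df["hour"] - 8) / 24)
        df["bc"] = (1.0 + 0.8 * df["traffic"] + df["background"]
                    + diurnal + rng.normal(0, 0.3, n))

        # C(hour) expands into 23 dummy variables relative to the reference hour.
        fit = smf.ols("bc ~ traffic + background + C(hour)", data=df).fit()
        print(fit.rsquared)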

  2. Quasar microlensing models with constraints on the Quasar light curves

    NASA Astrophysics Data System (ADS)

    Tie, S. S.; Kochanek, C. S.

    2018-01-01

    Quasar microlensing analyses implicitly generate a model of the variability of the source quasar. The implied source variability may be unrealistic, yet its likelihood is generally not evaluated. We used the damped random walk (DRW) model for quasar variability to evaluate the likelihood of the source variability and applied the revised algorithm to a microlensing analysis of the lensed quasar RX J1131-1231. We compared estimates of the size of the quasar disc and the average stellar mass of the lens galaxy with and without applying the DRW likelihoods for the source variability model and found no significant effect on the estimated physical parameters. The most likely explanation is that unrealistic source light-curve models are generally associated with poor microlensing fits that already make a negligible contribution to the probability distributions of the derived parameters.
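
    For reference, a DRW (Ornstein-Uhlenbeck) light curve can be sampled exactly at arbitrary epochs; the sketch below uses illustrative parameter values, not those fitted to RX J1131-1231.

        import numpy as np

        def simulate_drw(t, tau=200.0, sigma=0.2, mean_mag=19.0, seed=0):
            """Exact DRW sampling at times t (days); sigma is the asymptotic
            standard deviation and tau the damping timescale."""
            rng = np.random.default_rng(seed)
            mag = np.empty(t.size)
            mag[0] = mean_mag + rng.normal(0, sigma)
            for i in range(1, t.size):
                a = np.exp(-(t[i] - t[i - 1]) / tau)
                mag[i] = (mean_mag + a * (mag[i - 1] - mean_mag)
                          + rng.normal(0, sigma * np.sqrt(1 - a * a)))
            return mag

        t = np.sort(np.random.default_rng(1).uniform(0, 2000, 300))
        print(simulate_drw(t)[:5])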

  3. Ordinal probability effect measures for group comparisons in multinomial cumulative link models.

    PubMed

    Agresti, Alan; Kateri, Maria

    2017-03-01

    We consider simple ordinal model-based probability effect measures for comparing distributions of two groups, adjusted for explanatory variables. An "ordinal superiority" measure summarizes the probability that an observation from one distribution falls above an independent observation from the other distribution, adjusted for explanatory variables in a model. The measure applies directly to normal linear models and to a normal latent variable model for ordinal response variables. It equals Φ(β/2) for the corresponding ordinal model that applies a probit link function to cumulative multinomial probabilities, for standard normal cdf Φ and effect β that is the coefficient of the group indicator variable. For the more general latent variable model for ordinal responses that corresponds to a linear model with other possible error distributions and corresponding link functions for cumulative multinomial probabilities, the ordinal superiority measure equals exp(β)/[1+exp(β)] with the log-log link and equals approximately exp(β/2)/[1+exp(β/2)] with the logit link, where β is the group effect. Another ordinal superiority measure generalizes the difference of proportions from binary to ordinal responses. We also present related measures directly for ordinal models for the observed response that need not assume corresponding latent response models. We present confidence intervals for the measures and illustrate with an example. © 2016, The International Biometric Society.
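
    The closed-form measures quoted above translate directly into code; a small helper following the formulas in the abstract (exact for the probit and log-log links, approximate for the logit link):

        from math import exp
        from scipy.stats import norm

        def ordinal_superiority(beta, link):
            """Probability an observation from one group exceeds one from the
            other, given group coefficient beta under the stated link."""
            if link == "probit":
                return norm.cdf(beta / 2)                    # Phi(beta/2)
            if link == "loglog":
                return exp(beta) / (1 + exp(beta))           # exact
            if link == "logit":
                return exp(beta / 2) / (1 + exp(beta / 2))   # approximation
            raise ValueError(link)

        for link in ("probit", "loglog", "logit"):
            print(link, round(ordinal_superiority(0.8, link), 3))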

  4. Accuracy of latent-variable estimation in Bayesian semi-supervised learning.

    PubMed

    Yamazaki, Keisuke

    2015-09-01

    Hierarchical probabilistic models, such as Gaussian mixture models, are widely used for unsupervised learning tasks. These models consist of observable and latent variables, which represent the observable data and the underlying data-generation process, respectively. Unsupervised learning tasks, such as cluster analysis, are regarded as estimations of latent variables based on the observable ones. The estimation of latent variables in semi-supervised learning, where some labels are observed, will be more precise than in the unsupervised case, and one concern is to clarify the effect of the labeled data. However, there has not been sufficient theoretical analysis of the accuracy of latent-variable estimation. In a previous study, a distribution-based error function was formulated, and its asymptotic form was calculated for unsupervised learning with generative models, showing that, for the estimation of latent variables, the Bayes method is more accurate than the maximum-likelihood method. The present paper reveals the asymptotic forms of the error function in Bayesian semi-supervised learning for both discriminative and generative models. The results show that the generative model, which uses all of the given data, performs better when the model is well specified. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Arctic Ocean Freshwater: How Robust are Model Simulations

    NASA Technical Reports Server (NTRS)

    Jahn, A.; Aksenov, Y.; deCuevas, B. A.; deSteur, L.; Haekkinen, S.; Hansen, E.; Herbaut, C.; Houssais, M.-N.; Karcher, M.; Kauker, F.; hide

    2012-01-01

    The Arctic freshwater (FW) has been the focus of many modeling studies, due to the potential impact of Arctic FW on the deep water formation in the North Atlantic. A comparison of the hindcasts from ten ocean-sea ice models shows that the simulation of the Arctic FW budget is quite different in the investigated models. While they agree on the general sink and source terms of the Arctic FW budget, the long-term means as well as the variability of the FW export vary among models. The best model-to-model agreement is found for the interannual and seasonal variability of the solid FW export and the solid FW storage, which also agree well with observations. For the interannual and seasonal variability of the liquid FW export, the agreement among models is better for the Canadian Arctic Archipelago (CAA) than for Fram Strait. The reason for this is that models are more consistent in simulating volume flux anomalies than salinity anomalies and volume-flux anomalies dominate the liquid FW export variability in the CAA but not in Fram Strait. The seasonal cycle of the liquid FW export generally shows a better agreement among models than the interannual variability, and compared to observations the models capture the seasonality of the liquid FW export rather well. In order to improve future simulations of the Arctic FW budget, the simulation of the salinity field needs to be improved, so that model results on the variability of the liquid FW export and storage become more robust.

  6. An 'Observational Large Ensemble' to compare observed and modeled temperature trend uncertainty due to internal variability.

    NASA Astrophysics Data System (ADS)

    Poppick, A. N.; McKinnon, K. A.; Dunn-Sigouin, E.; Deser, C.

    2017-12-01

    Initial-condition climate model ensembles suggest that regional temperature trends can be highly variable on decadal timescales due to internal climate variability. Accounting for trend uncertainty due to internal variability is therefore necessary to contextualize recent observed temperature changes. However, while the variability of trends in a climate model ensemble can be evaluated directly (as the spread across ensemble members), internal variability simulated by a climate model may be inconsistent with observations. Observation-based methods for assessing the role of internal variability in trend uncertainty are therefore required. Here, we use a statistical resampling approach to assess trend uncertainty due to internal variability in historical 50-year (1966-2015) winter near-surface air temperature trends over North America. We compare this estimate of trend uncertainty to simulated trend variability in the NCAR CESM1 Large Ensemble (LENS), finding that uncertainty in wintertime temperature trends over North America due to internal variability is overestimated by CESM1, on average by 32%. Our observation-based resampling approach is combined with the forced signal from LENS to produce an 'Observational Large Ensemble' (OLENS). The members of OLENS indicate a range of spatially coherent fields of temperature trends resulting from different sequences of internal variability consistent with observations. The smaller trend variability in OLENS suggests that uncertainty in the historical climate change signal in observations due to internal variability is less than suggested by LENS.
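
    A minimal sketch of an observation-based resampling estimate of trend uncertainty: detrend the series, block-resample the residuals, and refit trends. The series is synthetic and the block length is an assumption, not the authors' choice.

        import numpy as np

        rng = np.random.default_rng(7)
        years = np.arange(1966, 2016)
        temps = 0.02 * (years - 1966) + rng.normal(0, 0.5, years.size)

        slope = np.polyfit(years, temps, 1)[0]
        resid = temps - np.polyval(np.polyfit(years, temps, 1), years)

        block, n_boot = 5, 2000     # assumed block length preserving autocorrelation
        trends = []
        for _ in range(n_boot):
            starts = rng.integers(0, years.size - block, years.size // block)
            boot = np.concatenate([resid[s:s + block] for s in starts])[:years.size]
            trends.append(np.polyfit(years, slope * (years - 1966) + boot, 1)[0])
        print("trend sd from internal variability:", np.std(trends))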

  7. Parametric Cost Models for Space Telescopes

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip

    2010-01-01

    A study is in process to develop a multivariable parametric cost model for space telescopes. Cost and engineering parametric data have been collected on 30 different space telescopes. Statistical correlations have been developed between 19 of the 59 variables sampled, and single-variable and multi-variable cost estimating relationships have been developed. Results are being published.
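
    A generic single-variable cost estimating relationship of the kind mentioned is a power law fit in log-log space; the data points below are invented for illustration, not the study's 30 telescopes.

        import numpy as np

        aperture_m = np.array([0.3, 0.5, 0.85, 1.1, 2.4])   # hypothetical telescopes
        cost_musd = np.array([50., 120., 300., 500., 4000.])

        # Fit cost = a * D^b by linear regression on the logs.
        b, log_a = np.polyfit(np.log(aperture_m), np.log(cost_musd), 1)
        print(f"cost ~ {np.exp(log_a):.0f} * D^{b:.2f}  (M$, D in m)")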

  8. Nonparametric model validations for hidden Markov models with applications in financial econometrics.

    PubMed

    Zhao, Zhibiao

    2011-06-01

    We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for the transition density function of the observable variables and checking whether the parametric density estimate is contained within this envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable to continuous-time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise.

  9. Solar Irradiance Variability is Caused by the Magnetic Activity on the Solar Surface.

    PubMed

    Yeo, Kok Leng; Solanki, Sami K; Norris, Charlotte M; Beeck, Benjamin; Unruh, Yvonne C; Krivova, Natalie A

    2017-09-01

    The variation in the radiative output of the Sun, described in terms of solar irradiance, is important to climatology. A common assumption is that solar irradiance variability is driven by its surface magnetism. Verifying this assumption has, however, been hampered by the fact that models of solar irradiance variability based on solar surface magnetism have to be calibrated to observed variability. Making use of realistic three-dimensional magnetohydrodynamic simulations of the solar atmosphere and state-of-the-art solar magnetograms from the Solar Dynamics Observatory, we present a model of total solar irradiance (TSI) that does not require any such calibration. In doing so, the modeled irradiance variability is entirely independent of the observational record. (The absolute level is calibrated to the TSI record from the Total Irradiance Monitor.) The model replicates 95% of the observed variability between April 2010 and July 2016, leaving little scope for alternative drivers of solar irradiance variability at least over the time scales examined (days to years).

  10. Bayesian dynamical systems modelling in the social sciences.

    PubMed

    Ranganathan, Shyam; Spaiser, Viktoria; Mann, Richard P; Sumpter, David J T

    2014-01-01

    Data arising from social systems are often highly complex, involving non-linear relationships between the macro-level variables that characterize these systems. We present a method for analyzing this type of longitudinal or panel data using differential equations. We identify the best non-linear functions that capture interactions between variables, employing Bayes factors to decide how many interaction terms should be included in the model. This method penalizes overly complicated models and identifies models with the most explanatory power. We illustrate our approach on the classic example of relating democracy and economic growth, identifying non-linear relationships between these two variables. We show how multiple variables and variable lags can be accounted for and provide a toolbox in R to implement our approach.
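
    The paper provides an R toolbox; below is a hedged Python sketch of the general idea, approximating derivatives by finite differences and using BIC as a large-sample stand-in for Bayes factors when comparing candidate right-hand sides. All data are simulated.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(8)
        t = np.linspace(0, 10, 200)
        d = np.empty_like(t); g = np.empty_like(t)     # "democracy", "growth"
        d[0], g[0] = 0.2, 0.5
        for i in range(1, t.size):
            dt = t[i] - t[i - 1]
            d[i] = d[i - 1] + dt * 0.3 * g[i - 1] * (1 - d[i - 1]) + rng.normal(0, 0.005)
            g[i] = g[i - 1] + dt * (0.1 - 0.05 * g[i - 1]) + rng.normal(0, 0.005)

        dddt = np.gradient(d, t)                        # finite-difference derivative
        candidates = {
            "linear": np.column_stack([d, g]),
            "interaction": np.column_stack([d, g, d * g]),
        }
        for name, X in candidates.items():              # lower BIC = preferred model
            print(name, sm.OLS(dddt, sm.add_constant(X)).fit().bic)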

  11. The use of auxiliary variables in capture-recapture and removal experiments

    USGS Publications Warehouse

    Pollock, K.H.; Hines, J.E.; Nichols, J.D.

    1984-01-01

    The dependence of animal capture probabilities on auxiliary variables is an important practical problem which has not been considered in the development of estimation procedures for capture-recapture and removal experiments. In this paper the linear logistic binary regression model is used to relate the probability of capture to continuous auxiliary variables. The auxiliary variables could be environmental quantities such as air or water temperature, or characteristics of individual animals, such as body length or weight. Maximum likelihood estimators of the population parameters are considered for a variety of models which all assume a closed population. Testing between models is also considered. The models can also be used when one auxiliary variable is a measure of the effort expended in obtaining the sample.
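
    The core ingredient, capture probability as a linear-logistic function of an individual covariate, can be sketched as follows; the full estimators additionally require a closed-population likelihood, and the data here are simulated.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(9)
        n = 500
        weight = rng.normal(0.0, 1.0, n)                  # standardized body weight
        p = 1 / (1 + np.exp(-(-0.5 + 0.8 * weight)))      # true capture probability
        captured = rng.random(n) < p

        fit = sm.Logit(captured, sm.add_constant(weight)).fit(disp=0)
        print(fit.params)    # estimates of the true (-0.5, 0.8)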

  12. Vulnerability in Determining the Cost of Information System Project to Avoid Loses

    NASA Astrophysics Data System (ADS)

    Haryono, Kholid; Ikhsani, Zulfa Amalia

    2018-03-01

    Context: This study discusses the prioritization of the costs required in software development projects. Objectives: To show the costing models, the variables involved, and how practitioners assess and decide the priority of each variable; to strengthen the information, the risk of ignoring each variable was also confirmed. Method: Two approaches were used. First, a systematic literature review to find the models and variables used to decide the cost of software development; second, confirmation of judgments about the level of importance and risk of each variable with software developers. Result: About 54 variables were obtained from the 10 models discussed. The variables were categorized into 15 groups based on similarity of meaning, with each group becoming a variable. Confirmation with practitioners on the level of importance and risk showed that two variables, duration and effort, are considered very important and high risk if ignored. Conclusion: The relationship between the variable ratings from the literature study and the practitioners' confirmation can help software businesses weigh project cost variables.

  13. Species distribution model transferability and model grain size - finer may not always be better.

    PubMed

    Manzoor, Syed Amir; Griffiths, Geoffrey; Lukac, Martin

    2018-05-08

    Species distribution models have been used to predict the distribution of invasive species for conservation planning. Understanding the spatial transferability of niche predictions is critical to promote species-habitat conservation and to forecast areas vulnerable to invasion. The grain size of predictor variables is an important factor affecting the accuracy and transferability of species distribution models. The choice of grain size often depends on the type of predictor variables used, and the selection of predictors sometimes relies on data availability. This study employed the MAXENT species distribution model to investigate the effect of grain size on model transferability for an invasive plant species. We modelled the distribution of Rhododendron ponticum in Wales, U.K. and tested model performance and transferability by varying grain size (50 m, 300 m, and 1 km). MAXENT-based models are sensitive to grain size and to the selection of variables. We found that over-reliance on the commonly used bioclimatic variables may lead to less accurate models, as it often compromises the finer grain size of biophysical variables, which may be more important determinants of species distribution at small spatial scales. Model accuracy is likely to increase with decreasing grain size; however, successful model transferability may require optimization of the model grain size.

  14. Uncertainty quantification of fast sodium current steady-state inactivation for multi-scale models of cardiac electrophysiology.

    PubMed

    Pathmanathan, Pras; Shotwell, Matthew S; Gavaghan, David J; Cordeiro, Jonathan M; Gray, Richard A

    2015-01-01

    Perhaps the most mature area of multi-scale systems biology is the modelling of the heart. Current models are grounded in over fifty years of research into biophysically detailed models of the electrophysiology (EP) of cardiac cells, but one aspect which is inadequately addressed is the incorporation of uncertainty and physiological variability. Uncertainty quantification (UQ) is the identification and characterisation of the uncertainty in model parameters derived from experimental data, and the computation of the resultant uncertainty in model outputs. It is a necessary tool for establishing the credibility of computational models, and will likely be expected of EP models for future safety-critical clinical applications. The focus of this paper is formal UQ of one major sub-component of cardiac EP models, the steady-state inactivation of the fast sodium current, INa. To better capture average behaviour and quantify variability across cells, we have applied for the first time an 'individual-based' statistical methodology to assess voltage clamp data. Advantages of this approach over a more traditional 'population-averaged' approach are highlighted. The method was used to characterise variability amongst cells isolated from canine epicardium and endocardium, and this variability was then 'propagated forward' through a canine model to determine the resultant uncertainty in model predictions at different scales, such as upstroke velocity and spiral wave dynamics. Statistically significant differences between epicardial and endocardial cells (greater half-inactivation and a less steep slope of the steady-state inactivation curve for endocardium) were observed, and the forward propagation revealed a lack of robustness of the model to the underlying variability, but also surprising robustness to variability at the tissue scale. Overall, the methodology can be used to: (i) better analyse voltage clamp data; (ii) characterise underlying population variability; (iii) investigate consequences of variability; and (iv) improve the ability to validate a model. To our knowledge this article is the first to quantify population variability in membrane dynamics in this manner, and the first to perform formal UQ for a component of a cardiac model. The approach is likely to find much wider applicability across systems biology as current application domains reach greater levels of maturity. Published by Elsevier Ltd.
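
    Steady-state inactivation data of this kind are commonly summarized per cell by a Boltzmann fit, h_inf(V) = 1/(1 + exp((V - V_half)/k)); a sketch with simulated voltage-clamp points (the functional form is standard, the parameter values are invented):

        import numpy as np
        from scipy.optimize import curve_fit

        def boltzmann(v, v_half, k):
            return 1.0 / (1.0 + np.exp((v - v_half) / k))

        rng = np.random.default_rng(10)
        v = np.arange(-120, -30, 10.0)                   # clamp voltages (mV)
        h = boltzmann(v, -75.0, 6.0) + rng.normal(0, 0.02, v.size)

        (v_half, k), _ = curve_fit(boltzmann, v, h, p0=(-70.0, 5.0))
        print(f"V1/2 = {v_half:.1f} mV, slope k = {k:.1f} mV")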

  15. Variability of the Martian thermospheric temperatures during the last 7 Martian Years

    NASA Astrophysics Data System (ADS)

    Gonzalez-Galindo, Francisco; Lopez-Valverde, Miguel Angel; Millour, Ehouarn; Forget, François

    2014-05-01

    The temperatures and densities in the Martian upper atmosphere have a significant influence on the different processes producing atmospheric escape. A good knowledge of the thermosphere and its variability is thus necessary to better understand and quantify the atmospheric loss to space and the evolution of the planet. Different global models have been used to study the seasonal and interannual variability of the Martian thermosphere, usually considering three solar scenarios (solar minimum, solar medium and solar maximum conditions) to take into account the solar cycle variability. However, the variability of the solar activity within the simulated period of time is not usually considered in these models. We have improved the description of the UV solar flux included in the General Circulation Model for Mars developed at the Laboratoire de Météorologie Dynamique (LMD-MGCM) in order to include its observed day-to-day variability. We have used the model to simulate the thermospheric variability during Martian Years 24 to 30, using realistic UV solar fluxes and dust opacities. The model predicts an interannual variability of the temperatures in the upper thermosphere that ranges from about 50 K around aphelion up to 150 K around perihelion. The seasonal variability of temperatures due to the eccentricity of the Martian orbit is modified by the variability of the solar flux within a given Martian year, and the solar rotation cycle produces temperature oscillations of up to 30 K. We have also studied the response of the modeled thermosphere to the global dust storms in Martian Years 25 and 28. The atmospheric dynamics are significantly modified by the global dust storms, which induce significant changes in the thermospheric temperatures. The response of the model to both global dust storms is in good agreement with previous modeling results (Medvedev et al., Journal of Geophysical Research, 2013). As expected, the simulated ionosphere is also sensitive to the variability of the solar activity. Acknowledgement: Francisco González-Galindo is funded by a CSIC JAE-Doc contract financed by the European Social Fund.

  16. Bio-inspired online variable recruitment control of fluidic artificial muscles

    NASA Astrophysics Data System (ADS)

    Jenkins, Tyler E.; Chapman, Edward M.; Bryant, Matthew

    2016-12-01

    This paper details the creation of a hybrid variable recruitment control scheme for fluidic artificial muscle (FAM) actuators with an emphasis on maximizing system efficiency and switching control performance. Variable recruitment is the process of altering a system’s active number of actuators, allowing operation in distinct force regimes. Previously, FAM variable recruitment was only quantified with offline, manual valve switching; this study addresses the creation and characterization of novel, on-line FAM switching control algorithms. The bio-inspired algorithms are implemented in conjunction with a PID and model-based controller, and applied to a simulated plant model. Variable recruitment transition effects and chatter rejection are explored via a sensitivity analysis, allowing a system designer to weigh tradeoffs in actuator modeling, algorithm choice, and necessary hardware. Variable recruitment is further developed through simulation of a robotic arm tracking a variety of spline position inputs, requiring several levels of actuator recruitment. Switching controller performance is quantified and compared with baseline systems lacking variable recruitment. The work extends current variable recruitment knowledge by creating novel online variable recruitment control schemes, and exploring how online actuator recruitment affects system efficiency and control performance. Key topics associated with implementing a variable recruitment scheme, including the effects of modeling inaccuracies, hardware considerations, and switching transition concerns are also addressed.
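
    A hedged sketch of one possible threshold-based online recruitment rule with hysteresis, layered under a PI force loop; the plant model, capacities, thresholds and gains are invented for illustration and are not the authors' controller.

        import numpy as np

        F_MAX = 100.0        # N, assumed capacity of a single FAM
        N_MUSCLES = 4
        KP, KI, DT = 2.0, 0.5, 0.01

        def recruitment_level(demand, current):
            """Smallest number of muscles whose combined capacity covers the
            demand; a hysteresis band suppresses switching chatter."""
            level = int(np.ceil(max(demand, 1e-9) / F_MAX))
            if abs(demand - current * F_MAX) < 5.0:
                level = current                  # stay put inside the band
            return int(np.clip(level, 1, N_MUSCLES))

        force, integral, level = 0.0, 0.0, 1
        for demand in np.linspace(20.0, 350.0, 300):
            level = recruitment_level(demand, level)
            err = demand - force
            integral += err * DT
            u = np.clip(KP * err + KI * integral, 0.0, F_MAX)  # per-muscle command
            force += (level * u - force) * DT                  # toy first-order plant
        print("final recruitment level:", level)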

  17. Generating Variable Wind Profiles and Modeling Their Effects on Small-Arms Trajectories

    DTIC Science & Technology

    2016-04-01

    ARL-TR-7642, April 2016, US Army Research Laboratory, Weapons and Materials Research Directorate (author: Timothy A Fargus). Technical report on generating variable wind profiles and modeling their effects on small-arms trajectories.

  18. Variable Density Effects in Stochastic Lagrangian Models for Turbulent Combustion

    DTIC Science & Technology

    2016-07-20

    The advantages of PDF methods in dealing with chemical reaction and convection are preserved irrespective of density variation. Since the density variation in a typical combustion process may be as large as a factor of seven, including variable-density effects in PDF methods is of significance. Conventionally, the strategy for modelling variable-density flows in PDF methods is similar to that used for second-moment closure models (SMCM): models are developed based on

  19. Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale

    NASA Astrophysics Data System (ADS)

    Kreibich, Heidi; Schröter, Kai; Merz, Bruno

    2016-05-01

    Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also introduce additional uncertainty, even more so in up-scaling procedures for meso-scale applications, where the parameters need to be estimated on a regional, area-wide basis. To gain more knowledge about the challenges associated with the up-scaling of multi-variable flood loss models, the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities that were affected during the 2002 flood of the River Mulde in Saxony, Germany, by comparison to official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale, case-study-based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling on the meso-scale as well. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models, like the BT-FLEMO model used in this study, which inherently provide uncertainty information, is the way forward.
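
    To make the stage-damage-function versus multi-variable contrast concrete, a sketch on synthetic loss records (the predictors and coefficients are invented; BT-FLEMO itself is not reproduced here):

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(11)
        n = 800
        depth = rng.uniform(0, 3, n)              # inundation depth (m)
        duration = rng.uniform(1, 14, n)          # inundation duration (days)
        precaution = rng.integers(0, 2, n)        # private precaution taken?
        loss_ratio = np.clip(0.2 * depth + 0.01 * duration - 0.1 * precaution
                             + rng.normal(0, 0.05, n), 0, 1)

        # Uni-variable stage-damage function: loss as a function of depth only.
        coeffs = np.polyfit(depth, loss_ratio, 2)
        sdf_mse = np.mean((np.polyval(coeffs, depth) - loss_ratio) ** 2)

        # Multi-variable model over all three predictors.
        X = np.column_stack([depth, duration, precaution])
        rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, loss_ratio)
        mv_mse = np.mean((rf.predict(X) - loss_ratio) ** 2)
        print(sdf_mse, mv_mse)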

  20. Multinomial logistic regression modelling of obesity and overweight among primary school students in a rural area of Negeri Sembilan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghazali, Amirul Syafiq Mohd; Ali, Zalila; Noor, Norlida Mohd

    Multinomial logistic regression is widely used to model the outcomes of a polytomous response variable, a categorical dependent variable with more than two categories. The model assumes that the conditional mean of the dependent categorical variables is the logistic function of an affine combination of predictor variables. Its procedure gives a number of logistic regression models that make specific comparisons of the response categories. When there are q categories of the response variable, the model consists of q-1 logit equations which are fitted simultaneously. The model is validated by variable selection procedures, tests of regression coefficients, a significant test of the overall model, goodness-of-fit measures, and validation of predicted probabilities using odds ratio. This study used the multinomial logistic regression model to investigate obesity and overweight among primary school students in a rural area on the basis of their demographic profiles, lifestyles and on the diet and food intake. The results indicated that obesity and overweight of students are related to gender, religion, sleep duration, time spent on electronic games, breakfast intake in a week, with whom meals are taken, protein intake, and also, the interaction between breakfast intake in a week with sleep duration, and the interaction between gender and protein intake.

  1. Multinomial logistic regression modelling of obesity and overweight among primary school students in a rural area of Negeri Sembilan

    NASA Astrophysics Data System (ADS)

    Ghazali, Amirul Syafiq Mohd; Ali, Zalila; Noor, Norlida Mohd; Baharum, Adam

    2015-10-01

    Multinomial logistic regression is widely used to model the outcomes of a polytomous response variable, a categorical dependent variable with more than two categories. The model assumes that the conditional mean of the dependent categorical variables is the logistic function of an affine combination of predictor variables. Its procedure gives a number of logistic regression models that make specific comparisons of the response categories. When there are q categories of the response variable, the model consists of q-1 logit equations which are fitted simultaneously. The model is validated by variable selection procedures, tests of regression coefficients, a significant test of the overall model, goodness-of-fit measures, and validation of predicted probabilities using odds ratio. This study used the multinomial logistic regression model to investigate obesity and overweight among primary school students in a rural area on the basis of their demographic profiles, lifestyles and on the diet and food intake. The results indicated that obesity and overweight of students are related to gender, religion, sleep duration, time spent on electronic games, breakfast intake in a week, with whom meals are taken, protein intake, and also, the interaction between breakfast intake in a week with sleep duration, and the interaction between gender and protein intake.
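
    A sketch of the q-1 simultaneous logit equations using statsmodels' MNLogit; the weight-status categories and covariates below are simulated stand-ins for the survey data.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(12)
        n = 600
        sleep_h = rng.normal(8, 1, n)
        screen_h = rng.normal(2, 1, n)
        X = sm.add_constant(np.column_stack([sleep_h, screen_h]))

        # Categories: 0 = normal (reference), 1 = overweight, 2 = obese.
        eta1 = -2 + 0.1 * screen_h - 0.1 * sleep_h
        eta2 = -3 + 0.3 * screen_h - 0.2 * sleep_h
        p = np.exp(np.column_stack([np.zeros(n), eta1, eta2]))
        p /= p.sum(axis=1, keepdims=True)
        y = np.array([rng.choice(3, p=pi) for pi in p])

        fit = sm.MNLogit(y, X).fit(disp=0)
        print(fit.params)    # one coefficient column per non-reference category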

  2. A downscaling scheme for atmospheric variables to drive soil-vegetation-atmosphere transfer models

    NASA Astrophysics Data System (ADS)

    Schomburg, A.; Venema, V.; Lindau, R.; Ament, F.; Simmer, C.

    2010-09-01

    For driving soil-vegetation-atmosphere transfer models or hydrological models, high-resolution atmospheric forcing data are needed. For most applications the resolution of atmospheric model output is too coarse, and to avoid biases due to non-linear processes, a downscaling system should predict the unresolved variability of the atmospheric forcing. For this purpose we derived a disaggregation system consisting of three steps: (1) a bi-quadratic spline interpolation of the low-resolution data, (2) a so-called 'deterministic' part, based on statistical rules between high-resolution surface variables and the desired atmospheric near-surface variables, and (3) an autoregressive noise-generation step. The disaggregation system has been developed and tested on high-resolution model output (400 m horizontal grid spacing). A novel automatic search algorithm has been developed for deriving the deterministic downscaling rules of step 2. When applied to the atmospheric variables of the lowest layer of the atmospheric COSMO model, the disaggregation is able to adequately reconstruct the reference fields. Applying downscaling steps 1 and 2 decreases root mean square errors, and step 3 finally leads to a close match of the subgrid variability and temporal autocorrelation with the reference fields. The scheme can be applied to the output of atmospheric models, both for stand-alone offline simulations and in a fully coupled model system.
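
    A hedged sketch of the three disaggregation steps on a 2-D temperature field: quadratic-spline interpolation, a placeholder deterministic correction from a high-resolution surface field, and AR(1) noise. The rule and coefficients are invented, not the scheme's trained rules.

        import numpy as np
        from scipy.ndimage import zoom

        rng = np.random.default_rng(13)
        coarse_t2m = rng.normal(285, 2, (10, 10))    # coarse near-surface temperature
        elev_hr = rng.normal(0, 50, (80, 80))        # high-res surface field (m, anomaly)

        # Step 1: quadratic spline interpolation to the fine grid.
        t2m_hr = zoom(coarse_t2m, 8, order=2)
        # Step 2: deterministic rule, e.g. a lapse-rate-like elevation correction.
        t2m_hr += -0.0065 * elev_hr
        # Step 3: AR(1) noise restores the unresolved small-scale variance.
        noise = np.zeros_like(t2m_hr)
        for i in range(1, noise.shape[0]):
            noise[i] = 0.8 * noise[i - 1] + rng.normal(0, 0.1, noise.shape[1])
        t2m_hr += noise
        print(t2m_hr.shape)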

  3. Methodological development for selection of significant predictors explaining fatal road accidents.

    PubMed

    Dadashova, Bahar; Arenas-Ramírez, Blanca; Mira-McWilliams, José; Aparicio-Izquierdo, Francisco

    2016-05-01

    Identification of the most relevant factors for explaining road accident occurrence is an important issue in road safety research, particularly for future decision-making processes in transport policy. However, model selection for this purpose is still an area of ongoing research. In this paper we propose a methodological development for model selection which addresses both explanatory variable selection and adequate model selection. A variable selection procedure, the TIM (two-input model) method, is carried out by combining neural network design and statistical approaches. The error structure of the fitted model is assumed to follow an autoregressive process. All models are estimated using the Markov chain Monte Carlo method, where the model parameters are assigned non-informative prior distributions. The final model is built using the results of the variable selection. For the application of the proposed methodology, the number of fatal accidents in Spain during 2000-2011 was used. This indicator experienced the largest reduction internationally during those years, making it an interesting time series from a road safety policy perspective. Hence the identification of the variables that have affected this reduction is of particular interest for future decision making. The results of the variable selection process show that the selected variables are main subjects of road safety policy measures. Published by Elsevier Ltd.

  4. European Wintertime Windstorms and its Links to Large-Scale Variability Modes

    NASA Astrophysics Data System (ADS)

    Befort, D. J.; Wild, S.; Walz, M. A.; Knight, J. R.; Lockwood, J. F.; Thornton, H. E.; Hermanson, L.; Bett, P.; Weisheimer, A.; Leckebusch, G. C.

    2017-12-01

    Winter storms associated with extreme wind speeds and heavy precipitation are the most costly natural hazard in several European countries. Improved understanding and seasonal forecast skill of winter storms will thus help society, policy-makers and the (re-)insurance industry to be better prepared for such events. We first assess the ability of three seasonal forecast ensemble suites (ECMWF System3, ECMWF System4 and GloSea5) to represent extra-tropical windstorms over the Northern Hemisphere. Our results show significant skill for inter-annual variability of windstorm frequency over parts of Europe in two of these forecast suites (ECMWF-S4 and GloSea5), indicating the potential use of current seasonal forecast systems. In a regression model we further derive windstorm variability using the forecasted NAO from the seasonal model suites, thus estimating the suitability of the NAO as the only predictor. We find that the NAO, as the main large-scale mode over Europe, can explain some of the achieved skill and is therefore an important source of variability in the seasonal models. However, our results show that the regression model fails to reproduce the skill level of the directly forecast windstorm frequency over large areas of central Europe. This suggests that the seasonal models also capture sources of variability/predictability of windstorms other than the NAO. In order to investigate which other large-scale variability modes steer the interannual variability of windstorms, we develop a statistical model using a Poisson GLM. We find that the Scandinavian Pattern (SCA) in fact explains a larger amount of variability for Central Europe during the 20th century than the NAO. This statistical model is able to skilfully reproduce the interannual variability of windstorm frequency, especially for the British Isles and Central Europe, with correlations up to 0.8.
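
    A minimal sketch of the Poisson GLM step, with synthetic NAO/SCA indices standing in for the real seasonal-mean predictors:

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      # Hypothetical winter-mean indices and storm counts per season
      nao = rng.normal(size=100)
      sca = rng.normal(size=100)
      storms = rng.poisson(lam=np.exp(0.5 + 0.2 * nao - 0.4 * sca))

      X = sm.add_constant(np.column_stack([nao, sca]))
      glm = sm.GLM(storms, X, family=sm.families.Poisson()).fit()
      print(glm.summary())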

  5. Multilevel and Latent Variable Modeling with Composite Links and Exploded Likelihoods

    ERIC Educational Resources Information Center

    Rabe-Hesketh, Sophia; Skrondal, Anders

    2007-01-01

    Composite links and exploded likelihoods are powerful yet simple tools for specifying a wide range of latent variable models. Applications considered include survival or duration models, models for rankings, small area estimation with census information, models for ordinal responses, item response models with guessing, randomized response models,…

  6. Multi-region statistical shape model for cochlear implantation

    NASA Astrophysics Data System (ADS)

    Romera, Jordi; Kjer, H. Martin; Piella, Gemma; Ceresa, Mario; González Ballester, Miguel A.

    2016-03-01

    Statistical shape models are commonly used to analyze the variability between similar anatomical structures, and their use is established as a tool for analysis and segmentation of medical images. However, using a global model to capture the variability of complex structures is not enough to achieve the best results. The complexity of a proper global model increases even more when the amount of data available is limited to a small number of datasets. Typically, the anatomical variability between structures is associated with the variability of their physiological regions. In this paper, a complete pipeline is proposed for building a multi-region statistical shape model to study the entire variability from locally identified physiological regions of the inner ear. The proposed model, which is based on an extension of the Point Distribution Model (PDM), is built for a training set of 17 high-resolution images (24.5 μm voxels) of the inner ear. The model is evaluated according to its generalization ability and specificity. The results are compared with those of a global model built directly using the standard PDM approach. The evaluation results suggest that better accuracy can be achieved using a regional modeling of the inner ear.
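
    A compact sketch of the standard PDM machinery that the multi-region model extends: PCA of aligned landmark vectors. The data are synthetic, and alignment and the regional decomposition are omitted:

      import numpy as np

      # Hypothetical training set: 17 shapes, each with 100 landmarks in 3-D,
      # already aligned and flattened to a 300-vector per shape
      shapes = np.random.default_rng(3).normal(size=(17, 300))

      mean = shapes.mean(axis=0)
      # Eigen-decomposition of the covariance gives the PDM modes of variation
      U, s, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
      modes, variances = Vt, (s ** 2) / (len(shapes) - 1)

      # A new shape instance: mean plus a weighted sum of the leading modes
      b = np.zeros(len(s)); b[0] = 2.0 * np.sqrt(variances[0])
      instance = mean + modes.T @ b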

  7. Approaches for modeling within subject variability in pharmacometric count data analysis: dynamic inter-occasion variability and stochastic differential equations.

    PubMed

    Deng, Chenhui; Plan, Elodie L; Karlsson, Mats O

    2016-06-01

    Parameter variation in pharmacometric analysis studies can be characterized as within subject parameter variability (WSV) in pharmacometric models. WSV has previously been successfully modeled using inter-occasion variability (IOV), but also stochastic differential equations (SDEs). In this study, two approaches, dynamic inter-occasion variability (dIOV) and adapted stochastic differential equations, were proposed to investigate WSV in pharmacometric count data analysis. These approaches were applied to published count models for seizure counts and Likert pain scores. Both approaches improved the model fits significantly. In addition, stochastic simulation and estimation were used to explore further the capability of the two approaches to diagnose and improve models where existing WSV is not recognized. The results of simulations confirmed the gain in introducing WSV as dIOV and SDEs when parameters vary randomly over time. Further, the approaches were also informative as diagnostics of model misspecification, when parameters changed systematically over time but this was not recognized in the structural model. The proposed approaches in this study offer strategies to characterize WSV and are not restricted to count data.

  8. On the measurement of stability in over-time data.

    PubMed

    Kenny, D A; Campbell, D T

    1989-06-01

    In this article, autoregressive models and growth curve models are compared. Autoregressive models are useful because they allow for random change, permit scores to increase or decrease, and do not require strong assumptions about the level of measurement. Three previously presented designs for estimating stability are described: (a) time-series, (b) simplex, and (c) two-wave, one-factor methods. A two-wave, multiple-factor model also is presented, in which the variables are assumed to be caused by a set of latent variables. The factor structure does not change over time and so the synchronous relationships are temporally invariant. The factors do not cause each other and have the same stability. The parameters of the model are the factor loading structure, each variable's reliability, and the stability of the factors. We apply the model to two data sets. For eight cognitive skill variables measured at four times, the 2-year stability is estimated to be .92 and the 6-year stability is .83. For nine personality variables, the 3-year stability is .68. We speculate that for many variables there are two components: one component that changes very slowly (the trait component) and another that changes very rapidly (the state component); thus each variable is a mixture of trait and state. Circumstantial evidence supporting this view is presented.

  9. Dengue: recent past and future threats

    PubMed Central

    Rogers, David J.

    2015-01-01

    This article explores four key questions about statistical models developed to describe the recent past and future of vector-borne diseases, with special emphasis on dengue: (1) How many variables should be used to make predictions about the future of vector-borne diseases? (2) Is the spatial resolution of a climate dataset an important determinant of model accuracy? (3) Does inclusion of the future distributions of vectors affect predictions of the futures of the diseases they transmit? (4) Which are the key predictor variables involved in determining the distributions of vector-borne diseases in the present and future? Examples are given of dengue models using one, five or 10 meteorological variables and at spatial resolutions of one-sixth to two degrees. Model accuracy is improved with a greater number of descriptor variables, but is surprisingly unaffected by the spatial resolution of the data. Dengue models with a reduced set of climate variables derived from the HadCM3 global circulation model predictions for the 1980s are improved when risk maps for dengue's two main vectors (Aedes aegypti and Aedes albopictus) are also included as predictor variables; disease and vector models are projected into the future using the global circulation model predictions for the 2020s, 2040s and 2080s. The Garthwaite–Koch corr-max transformation is presented as a novel way of showing the relative contribution of each of the input predictor variables to the map predictions. PMID:25688021

  10. Scales of variability of black carbon plumes and their dependence on resolution of ECHAM6-HAM

    NASA Astrophysics Data System (ADS)

    Weigum, Natalie; Stier, Philip; Schutgens, Nick; Kipling, Zak

    2015-04-01

    Prediction of the aerosol effect on climate depends on the ability of three-dimensional numerical models to accurately estimate aerosol properties. However, a limitation of traditional grid-based models is their inability to resolve variability on scales smaller than a grid box. Past research has shown that significant aerosol variability exists on scales smaller than these grid boxes, which can lead to discrepancies between observations and aerosol models. The aim of this study is to understand how a global climate model's (GCM) inability to resolve sub-grid scale variability affects simulations of important aerosol features. This problem is addressed by comparing observed black carbon (BC) plume scales from the HIPPO aircraft campaign to those simulated by the ECHAM-HAM GCM, and testing how model resolution affects these scales. This study additionally investigates how model resolution affects BC variability in remote and near-source regions. These issues are examined using three different approaches: comparison of observed and simulated along-flight-track plume scales, two-dimensional autocorrelation analysis, and three-dimensional plume analysis. We find that the degree to which GCMs resolve variability can have a significant impact on the scales of BC plumes, and it is important for models to capture the scales of aerosol plume structures, which account for a large degree of aerosol variability. In this presentation, we will provide further results from the three analysis techniques along with a summary of the implications of these results for future aerosol model development.

  11. Comparing Indirect Effects in SEM: A Sequential Model Fitting Method Using Covariance-Equivalent Specifications

    ERIC Educational Resources Information Center

    Chan, Wai

    2007-01-01

    In social science research, an indirect effect occurs when the influence of an antecedent variable on the effect variable is mediated by an intervening variable. To compare indirect effects within a sample or across different samples, structural equation modeling (SEM) can be used if the computer program supports model fitting with nonlinear…

  12. Selection of Variables in Cluster Analysis: An Empirical Comparison of Eight Procedures

    ERIC Educational Resources Information Center

    Steinley, Douglas; Brusco, Michael J.

    2008-01-01

    Eight different variable selection techniques for model-based and non-model-based clustering are evaluated across a wide range of cluster structures. It is shown that several methods have difficulties when non-informative variables (i.e., random noise) are included in the model. Furthermore, the distribution of the random noise greatly impacts the…

  13. Preliminary results of spatial modeling of selected forest health variables in Georgia

    Treesearch

    Brock Stewart; Chris J. Cieszewski

    2009-01-01

    Variables relating to forest health monitoring, such as mortality, are difficult to predict and model. We present here the results of fitting various spatial regression models to these variables. We interpolate plot-level values compiled from the Forest Inventory and Analysis National Information Management System (FIA-NIMS) data that are related to forest health....

  14. Separation of variables in anisotropic models: anisotropic Rabi and elliptic Gaudin model in an external magnetic field

    NASA Astrophysics Data System (ADS)

    Skrypnyk, T.

    2017-08-01

    We study the problem of separation of variables for classical integrable Hamiltonian systems governed by non-skew-symmetric non-dynamical so(3)⊗so(3)-valued elliptic r-matrices with spectral parameters. We consider several examples of such models, and perform separation of variables for classical anisotropic one- and two-spin Gaudin-type models in an external magnetic field, and for Jaynes-Cummings-Dicke-type models without the rotating wave approximation.

  15. A simplified model to evaluate the effect of fluid rheology on non-Newtonian flow in variable aperture fractures

    NASA Astrophysics Data System (ADS)

    Felisa, Giada; Ciriello, Valentina; Longo, Sandro; Di Federico, Vittorio

    2017-04-01

    Modeling of non-Newtonian flow in fractured media is essential in hydraulic fracturing operations, largely used for optimal exploitation of oil, gas and thermal reservoirs. Complex fluids also interact with pre-existing rock fractures during drilling operations, enhanced oil recovery and environmental remediation, and in natural phenomena such as magma and sand intrusions and mud volcanoes. A first step in the modeling effort is a detailed understanding of flow in a single fracture, as the fracture aperture is typically spatially variable. A large bibliography exists on Newtonian flow in single, variable aperture fractures. Ultimately, stochastic modeling of aperture variability at the single fracture scale leads to determination of the flowrate under a given pressure gradient as a function of the parameters describing the variability of the aperture field and the fluid rheological behaviour. From the flowrate, a flow, or 'hydraulic', aperture can then be derived. The equivalent flow aperture for non-Newtonian fluids of power-law nature in single, variable aperture fractures has been obtained in the past both for deterministic and stochastic variations. Detailed numerical modeling of power-law fluid flow in a variable aperture fracture demonstrated that pronounced channelization effects are associated with a nonlinear fluid rheology. The availability of an equivalent flow aperture as a function of the parameters describing the fluid rheology and the aperture variability is enticing, as it allows taking their interaction into account when modeling flow in fracture networks at a larger scale. A relevant issue in non-Newtonian fracture flow is the rheological nature of the fluid. The constitutive model routinely used for hydro-fracturing modeling is the simple, two-parameter power-law. Yet this model does not characterize real fluids at low and high shear rates, as it implies, for shear-thinning fluids, an apparent viscosity which becomes unbounded for zero shear rate and tends to zero for infinite shear rate. By contrast, the four-parameter Carreau constitutive equation includes asymptotic values of the apparent viscosity at those limits; in turn, the Carreau rheological equation is well approximated by the more tractable truncated power-law model. Results for flow of such fluids between parallel walls are already available. This study extends the adoption of the truncated power-law model to variable aperture fractures, with the aim of understanding the joint influence of rheology and aperture spatial variability. The aperture variation, modeled within a stochastic or deterministic framework, is taken to be one-dimensional and perpendicular to the flow direction; for stochastic modeling, the influence of different distribution functions is examined. Results are then compared with those obtained for pure power-law fluids for different combinations of model parameters. It is seen that the adoption of the pure power-law model leads to significant overestimation of the flowrate with respect to the truncated model, more so for large external pressure gradient and/or aperture variability.
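
    For concreteness, a sketch of the truncated power-law apparent viscosity (all parameter values invented): between two cutoff shear rates the fluid follows the power law, and outside that range the viscosity is held at the plateaus mu0 and mu_inf:

      import numpy as np

      def truncated_power_law_viscosity(gamma, mu0=10.0, mu_inf=1e-3, K=0.5, n=0.6):
          """Apparent viscosity of a truncated power-law fluid (shear-thinning, n < 1).

          Equivalent to mu = K * gamma**(n - 1) between the cutoff shear rates
          gamma1 = (mu0/K)**(1/(n-1)) and gamma2 = (mu_inf/K)**(1/(n-1)), with
          constant plateaus mu0 (low shear) and mu_inf (high shear) outside them.
          """
          mu = K * np.asarray(gamma, dtype=float) ** (n - 1.0)
          return np.clip(mu, mu_inf, mu0)

      print(truncated_power_law_viscosity(np.logspace(-4, 4, 5)))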

  16. Effects of environmental variables on invasive amphibian activity: Using model selection on quantiles for counts

    USGS Publications Warehouse

    Muller, Benjamin J.; Cade, Brian S.; Schwarzkoph, Lin

    2018-01-01

    Many different factors influence animal activity. Often, the value of an environmental variable may significantly influence the upper or lower tails of the activity distribution. For describing relationships with heterogeneous boundaries, quantile regressions predict a quantile of the conditional distribution of the dependent variable. A quantile count model extends linear quantile regression methods to discrete response variables, and is useful if activity is quantified by trapping, where there may be many tied (equal) values in the activity distribution over a small range of discrete values. Additionally, different environmental variables in combination may have synergistic or antagonistic effects on activity, so examining their effects together, in a modeling framework, is a useful approach. Thus, model selection on quantile counts can be used to determine the relative importance of different variables in determining activity, across the entire distribution of capture results. We conducted model selection on quantile count models to describe the factors affecting activity (numbers of captures) of cane toads (Rhinella marina) in response to several environmental variables (humidity, temperature, rainfall, wind speed, and moon luminosity) over eleven months of trapping. Environmental effects on activity are understudied in this pest animal. In the dry season, model selection on quantile count models suggested that rainfall positively affected activity, especially near the lower tails of the activity distribution. In the wet season, wind speed limited activity near the maximum of the distribution, while minimum activity increased with minimum temperature. This statistical methodology allowed us to explore, in depth, how environmental factors influenced activity across the entire distribution, and is applicable to any survey or trapping regime in which environmental variables affect activity.
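
    One common way to implement a quantile count model is the jittering approach of Machado and Santos Silva (2005), sketched here with invented covariates; the authors' exact estimator and model set may differ:

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(4)
      n = 300
      df = pd.DataFrame({"rain": rng.exponential(2.0, n),
                         "wind": rng.normal(2.0, 0.5, n)})
      df["captures"] = rng.poisson(np.exp(0.3 + 0.15 * df["rain"] - 0.2 * df["wind"]))

      # Jittering: add U(0,1) noise so quantile regression applies to the counts
      df["z"] = df["captures"] + rng.uniform(size=n)
      for q in (0.1, 0.5, 0.9):
          fit = smf.quantreg("z ~ rain + wind", df).fit(q=q)
          print(q, fit.params.round(3).to_dict())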

  17. Modelling the vertical distribution of canopy fuel load using national forest inventory and low-density airborne laser scanning data

    PubMed Central

    Castedo-Dorado, Fernando; Hevia, Andrea; Vega, José Antonio; Vega-Nieva, Daniel; Ruiz-González, Ana Daría

    2017-01-01

    The fuel complex variables canopy bulk density and canopy base height are often used to predict crown fire initiation and spread. Direct measurement of these variables is impractical, and they are usually estimated indirectly by modelling. Recent advances in predicting crown fire behaviour require accurate estimates of the complete vertical distribution of canopy fuels. The objectives of the present study were to model the vertical profile of available canopy fuel in pine stands by using data from the Spanish national forest inventory plus low-density airborne laser scanning (ALS) metrics. In a first step, the vertical distribution of the canopy fuel load was modelled using the Weibull probability density function. In a second step, two different systems of models were fitted to estimate the canopy variables defining the vertical distributions; the first system related these variables to stand variables obtained in a field inventory, and the second system related the canopy variables to airborne laser scanning metrics. The models of each system were fitted simultaneously to compensate for the effects of the inherent cross-model correlation between the canopy variables. Heteroscedasticity was also analyzed, but no correction in the fitting process was necessary. The estimated canopy fuel load profiles from field variables explained 84% and 86% of the variation in canopy fuel load for maritime pine and radiata pine respectively, whereas the estimated canopy fuel load profiles from ALS metrics explained 52% and 49% of the variation for the same species. The proposed models can be used to assess the effectiveness of different forest management alternatives for reducing crown fire hazard. PMID:28448524
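
    A small sketch of the first modelling step under stated assumptions: fitting a Weibull density to synthetic within-crown fuel heights, with the location parameter fixed at an assumed crown base height:

      import numpy as np
      from scipy.stats import weibull_min

      # Synthetic within-crown fuel heights (m), crown base assumed at 2 m
      heights = weibull_min.rvs(2.2, loc=2.0, scale=5.0, size=500, random_state=5)

      # Fit shape and scale with the location fixed at the crown base height
      c, loc, scale = weibull_min.fit(heights, floc=2.0)
      print(round(c, 2), round(scale, 2))   # parameters of the vertical profile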

  18. Unfinished Business in Clarifying Causal Measurement: Commentary on Bainter and Bollen

    ERIC Educational Resources Information Center

    Markus, Keith A.

    2014-01-01

    In a series of articles and comments, Kenneth Bollen and his collaborators have incrementally refined an account of structural equation models that (a) model a latent variable as the effect of several observed variables and (b) carry an interpretation of the observed variables as, in some sense, measures of the latent variable that they cause.…

  19. AeroPropulsoServoElasticity: Dynamic Modeling of the Variable Cycle Propulsion System

    NASA Technical Reports Server (NTRS)

    Kopasakis, George

    2012-01-01

    This presentation was made at the 2012 Fundamental Aeronautics Program Technical Conference and it covers research work for the Dynamic Modeling of the Variable Cycle Propulsion System that was done under the Supersonics Project, in the area of AeroPropulsoServoElasticity. The presentation covers the objective of the propulsion system dynamic modeling work, followed by the work done so far to model the Variable Cycle Engine, the modeling of the inlet and the nozzle, the modeling of the effects of flow distortion, and finally some concluding remarks and future plans.

  20. USING STRUCTURAL EQUATION MODELING TO INVESTIGATE RELATIONSHIPS AMONG ECOLOGICAL VARIABLES

    EPA Science Inventory

    This paper gives an introductory account of Structural Equation Modeling (SEM) and demonstrates its application using LISREL with a model utilizing environmental data. Using nine EMAP data variables, we analyzed their correlation matrix with an SEM model. The model characterized...

  1. Implementation of Kalman filter algorithm on models reduced using singular perturbation approximation method and its application to measurement of water level

    NASA Astrophysics Data System (ADS)

    Rachmawati, Vimala; Khusnul Arif, Didik; Adzkiya, Dieky

    2018-03-01

    The systems contained in the universe often have a large order. Thus, the mathematical model has many state variables that affect the computation time. In addition, generally not all variables are known, so estimation is needed to quantify parts of the system that cannot be measured directly. In this paper, we discuss model reduction and the estimation of state variables in a river system to measure the water level. Model reduction approximates a system by one of lower order that retains, without significant error, dynamic behaviour similar to the original system. The Singular Perturbation Approximation method is a model reduction method in which all state variables of the equilibrium system are partitioned into fast and slow modes. The Kalman filter algorithm is then used to estimate the state variables of stochastic dynamic systems, where estimates are computed by predicting state variables based on system dynamics and measurement data. Kalman filters are used to estimate state variables in both the original system and the reduced system. Then, we compare the state estimation results and computation times of the original and reduced systems.
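
    A minimal linear Kalman filter sketch (the singular perturbation reduction itself is omitted); the two-state "water level" model and all matrices are invented for illustration:

      import numpy as np

      A = np.array([[0.95, 0.10], [0.0, 0.90]])   # hypothetical reduced state transition
      H = np.array([[1.0, 0.0]])                   # only the level is observed
      Q = 0.01 * np.eye(2)                         # process noise covariance
      R = np.array([[0.1]])                        # measurement noise covariance

      x, P = np.zeros(2), np.eye(2)
      rng = np.random.default_rng(6)
      for _ in range(50):
          z = H @ x + rng.normal(0.0, np.sqrt(R[0, 0]), size=1)  # synthetic reading
          # Predict
          x, P = A @ x, A @ P @ A.T + Q
          # Update
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K @ (z - H @ x)
          P = (np.eye(2) - K @ H) @ P
      print(x)   # filtered state estimate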

  2. A mathematical model for Vertical Attitude Takeoff and Landing (VATOL) aircraft simulation. Volume 2: Model equations and base aircraft data

    NASA Technical Reports Server (NTRS)

    Fortenbaugh, R. L.

    1980-01-01

    Equations incorporated in a VATOL six degree of freedom off-line digital simulation program and data for the Vought SF-121 VATOL aircraft concept which served as the baseline for the development of this program are presented. The equations and data are intended to facilitate the development of a piloted VATOL simulation. The equation presentation format is to state the equations which define a particular model segment. Listings of constants required to quantify the model segment, input variables required to exercise the model segment, and output variables required by other model segments are included. In several instances a series of input or output variables are followed by a section number in parentheses which identifies the model segment of origination or termination of those variables.

  3. Seemingly unrelated intervention time series models for effectiveness evaluation of large scale environmental remediation.

    PubMed

    Ip, Ryan H L; Li, W K; Leung, Kenneth M Y

    2013-09-15

    Large scale environmental remediation projects applied to sea water always involve large amounts of capital investment. Rigorous effectiveness evaluations of such projects are, therefore, necessary and essential for policy review and future planning. This study aims at investigating the effectiveness of environmental remediation using three different Seemingly Unrelated Regression (SUR) time series models with intervention effects, including Model (1) assuming no correlation within and across variables, Model (2) assuming no correlation across variables but allowing correlations within a variable across different sites, and Model (3) allowing all possible correlations among variables (i.e., an unrestricted model). The results suggested that the unrestricted SUR model is the most reliable one, consistently having the smallest variations of the estimated model parameters. We discussed our results with reference to marine water quality management in Hong Kong while bringing managerial issues into consideration. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Identifying bird and reptile vulnerabilities to climate change in the southwestern United States

    USGS Publications Warehouse

    Hatten, James R.; Giermakowski, J. Tomasz; Holmes, Jennifer A.; Nowak, Erika M.; Johnson, Matthew J.; Ironside, Kirsten E.; van Riper, Charles; Peters, Michael; Truettner, Charles; Cole, Kenneth L.

    2016-07-06

    Current and future breeding ranges of 15 bird and 16 reptile species were modeled in the Southwestern United States. Rather than taking a broad-scale, vulnerability-assessment approach, we created a species distribution model (SDM) for each focal species incorporating climatic, landscape, and plant variables. Baseline climate (1940–2009) was characterized with Parameter-elevation Regressions on Independent Slopes Model (PRISM) data and future climate with global-circulation-model data under an A1B emission scenario. Climatic variables included monthly and seasonal temperature and precipitation; landscape variables included terrain ruggedness, soil type, and insolation; and plant variables included trees and shrubs commonly associated with a focal species. Not all species-distribution models contained a plant, but if they did, we included a built-in annual migration rate for more accurate plant-range projections in 2039 or 2099. We conducted a group meta-analysis to (1) determine how influential each variable class was when averaged across all species distribution models (birds or reptiles), and (2) identify the correlation among contemporary (2009) habitat fragmentation and biological attributes and future range projections (2039 or 2099). Projected changes in bird and reptile ranges varied widely among species, with one-third of the ranges predicted to expand and two-thirds predicted to contract. A group meta-analysis indicated that climatic variables were the most influential variable class when averaged across all models for both groups, followed by landscape and plant variables (birds), or plant and landscape variables (reptiles), respectively. The second part of the meta-analysis indicated that numerous contemporary habitat-fragmentation (for example, patch isolation) and biological-attribute (for example, clutch size, longevity) variables were significantly correlated with the magnitude of projected range changes for birds and reptiles. Patch isolation was a significant trans-specific driver of projected bird and reptile ranges, suggesting that strategic actions should focus on restoration and enhancement of habitat at local and regional scales to promote landscape connectivity and conservation of core areas.

  5. Using multivariate regression modeling for sampling and predicting chemical characteristics of mixed waste in old landfills.

    PubMed

    Brandstätter, Christian; Laner, David; Prantl, Roman; Fellner, Johann

    2014-12-01

    Municipal solid waste landfills pose a threat to the environment and human health, especially old landfills which lack facilities for collection and treatment of landfill gas and leachate. Consequently, missing information about emission flows prevents site-specific environmental risk assessments. To overcome this gap, the combination of waste sampling and analysis with statistical modeling is one option for estimating present and future emission potentials. Optimizing the tradeoff between investigation costs and reliable results requires knowledge about both the number of samples to be taken and the variables to be analyzed. This article aims to identify the optimal number of waste samples and variables in order to predict a larger set of variables. Therefore, we introduce a multivariate linear regression model and tested its applicability by means of two case studies. Landfill A was used to set up and calibrate the model based on 50 waste samples and twelve variables. The calibrated model was applied to Landfill B, comprising 36 waste samples and twelve variables, with four predictor variables. The case study results are twofold: first, reliable and accurate prediction of the twelve variables can be achieved with knowledge of four predictor variables (LOI, EC, pH and Cl). For the second Landfill B, only ten full measurements would be needed for a reliable prediction of most response variables. The four predictor variables exhibit comparably low analytical costs in comparison to the full set of measurements. This cost reduction could be used to increase the number of samples, yielding an improved understanding of the spatial waste heterogeneity in landfills. Concluding, future application of the developed model can potentially improve the reliability of predicted emission potentials. The model could become a standard screening tool for old landfills if its applicability and reliability were tested in additional case studies. Copyright © 2014 Elsevier Ltd. All rights reserved.
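
    A sketch of the core idea, assuming four predictors (such as LOI, EC, pH and Cl, where LOI is taken to mean loss on ignition) and a block of further response variables; the data here are synthetic:

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(7)
      # Hypothetical landfill data: 50 samples, 4 predictors (e.g. LOI, EC, pH, Cl)
      X = rng.normal(size=(50, 4))
      # 8 further response variables to be predicted from the 4 predictors
      Y = X @ rng.normal(size=(4, 8)) + 0.1 * rng.normal(size=(50, 8))

      model = LinearRegression().fit(X, Y)   # one multivariate (multi-output) fit
      Y_new = model.predict(rng.normal(size=(10, 4)))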

  6. Weather variability and the incidence of cryptosporidiosis: comparison of time series poisson regression and SARIMA models.

    PubMed

    Hu, Wenbiao; Tong, Shilu; Mengersen, Kerrie; Connell, Des

    2007-09-01

    Few studies have examined the relationship between weather variables and cryptosporidiosis in Australia. This paper examines the potential impact of weather variability on the transmission of cryptosporidiosis and explores the possibility of developing an empirical forecast system. Data on weather variables, notified cryptosporidiosis cases, and population size in Brisbane were supplied by the Australian Bureau of Meteorology, Queensland Department of Health, and Australian Bureau of Statistics, respectively, for the period January 1, 1996-December 31, 2004. Time series Poisson regression and seasonal autoregressive integrated moving average (SARIMA) models were used to examine the potential impact of weather variability on the transmission of cryptosporidiosis. Both the time series Poisson regression and SARIMA models show that seasonal and monthly maximum temperature at a prior moving average of 1 and 3 months were significantly associated with cryptosporidiosis disease. The models suggest that there may be about 50 more cases a year for each 1 °C increase in average maximum temperature in Brisbane. Model assessments indicated that the SARIMA model had better predictive ability than the Poisson regression model (SARIMA: root mean square error (RMSE): 0.40, Akaike information criterion (AIC): -12.53; Poisson regression: RMSE: 0.54, AIC: -2.84). Furthermore, the analysis of residuals shows that the time series Poisson regression appeared to violate a modeling assumption, in that residual autocorrelation persisted. The results of this study suggest that weather variability (particularly maximum temperature) may have played a significant role in the transmission of cryptosporidiosis. A SARIMA model may be a better predictive model than a Poisson regression model in the assessment of the relationship between weather variability and the incidence of cryptosporidiosis.
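
    A sketch of the SARIMA side of the comparison using statsmodels' SARIMAX, with synthetic monthly counts and temperature as an exogenous regressor; the orders are illustrative, not those of the paper:

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(8)
      n = 108
      # Hypothetical monthly maximum temperature and case counts
      tmax = 25 + 5 * np.sin(2 * np.pi * np.arange(n) / 12) + rng.normal(0, 1, n)
      cases = rng.poisson(np.exp(1.0 + 0.05 * tmax))

      # Seasonal ARIMA with the weather series as an exogenous regressor
      res = sm.tsa.SARIMAX(cases, exog=tmax, order=(1, 0, 1),
                           seasonal_order=(1, 0, 0, 12)).fit(disp=False)
      print(res.aic)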

  7. The relative impacts of climate and land-use change on conterminous United States bird species from 2001 to 2075

    USGS Publications Warehouse

    Sohl, Terry L.

    2014-01-01

    Species distribution models often use climate data to assess contemporary and/or future ranges for animal or plant species. Land use and land cover (LULC) data are important predictor variables for determining species range, yet are rarely used when modeling future distributions. In this study, maximum entropy modeling was used to construct species distribution maps for 50 North American bird species to determine relative contributions of climate and LULC for contemporary (2001) and future (2075) time periods. Species presence data were used as a dependent variable, while climate, LULC, and topographic data were used as predictor variables. Results varied by species, but in general, measures of model fit for 2001 indicated significantly poorer fit when either climate or LULC data were excluded from model simulations. Climate covariates provided a higher contribution to 2001 model results than did LULC variables, although both categories of variables strongly contributed. The area deemed to be "suitable" for 2001 species presence was strongly affected by the choice of model covariates, with significantly larger ranges predicted when LULC was excluded as a covariate. Changes in species ranges for 2075 indicate much larger overall range changes due to projected climate change than due to projected LULC change. However, the choice of study area impacted results for both current and projected model applications, with truncation of actual species ranges resulting in lower model fit scores and increased difficulty in interpreting covariate impacts on species range. Results indicate species-specific response to climate and LULC variables; however, both climate and LULC variables clearly are important for modeling both contemporary and potential future species ranges.

  8. Newtonian nudging for a Richards equation-based distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Paniconi, Claudio; Marrocu, Marino; Putti, Mario; Verbunt, Mark

    The objective of data assimilation is to provide physically consistent estimates of spatially distributed environmental variables. In this study a relatively simple data assimilation method has been implemented in a relatively complex hydrological model. The data assimilation technique is Newtonian relaxation or nudging, in which model variables are driven towards observations by a forcing term added to the model equations. The forcing term is proportional to the difference between simulation and observation (relaxation component) and contains four-dimensional weighting functions that can incorporate prior knowledge about the spatial and temporal variability and characteristic scales of the state variable(s) being assimilated. The numerical model couples a three-dimensional finite element Richards equation solver for variably saturated porous media and a finite difference diffusion wave approximation based on digital elevation data for surface water dynamics. We describe the implementation of the data assimilation algorithm for the coupled model and report on the numerical and hydrological performance of the resulting assimilation scheme. Nudging is shown to be successful in improving the hydrological simulation results, and it introduces little computational cost, in terms of CPU and other numerical aspects of the model's behavior, in some cases even improving numerical performance compared to model runs without nudging. We also examine the sensitivity of the model to nudging term parameters including the spatio-temporal influence coefficients in the weighting functions. Overall the nudging algorithm is quite flexible, for instance in dealing with concurrent observation datasets, gridded or scattered data, and different state variables, and the implementation presented here can be readily extended to any of these features not already incorporated. Moreover the nudging code and tests can serve as a basis for implementation of more sophisticated data assimilation techniques in a Richards equation-based hydrological model.
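
    The nudging idea reduces to one extra forcing term, shown here for a toy scalar model (the actual scheme applies four-dimensional weights inside a 3-D Richards equation solver):

      import numpy as np

      # Newtonian relaxation: the state is driven toward an observation by a
      # term G * w(t) * (obs - x) added to the model dynamics
      def step(x, obs, dt=0.1, G=0.5, w=1.0):
          dxdt = -0.2 * x + 1.0          # hypothetical model physics f(x)
          dxdt += G * w * (obs - x)      # nudging (relaxation) term
          return x + dt * dxdt

      x, obs = 0.0, 3.0
      for _ in range(100):
          x = step(x, obs)
      print(x)   # state relaxed toward the observed value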

  9. The Relative Impacts of Climate and Land-Use Change on Conterminous United States Bird Species from 2001 to 2075

    PubMed Central

    Sohl, Terry L.

    2014-01-01

    Species distribution models often use climate data to assess contemporary and/or future ranges for animal or plant species. Land use and land cover (LULC) data are important predictor variables for determining species range, yet are rarely used when modeling future distributions. In this study, maximum entropy modeling was used to construct species distribution maps for 50 North American bird species to determine relative contributions of climate and LULC for contemporary (2001) and future (2075) time periods. Species presence data were used as a dependent variable, while climate, LULC, and topographic data were used as predictor variables. Results varied by species, but in general, measures of model fit for 2001 indicated significantly poorer fit when either climate or LULC data were excluded from model simulations. Climate covariates provided a higher contribution to 2001 model results than did LULC variables, although both categories of variables strongly contributed. The area deemed to be “suitable” for 2001 species presence was strongly affected by the choice of model covariates, with significantly larger ranges predicted when LULC was excluded as a covariate. Changes in species ranges for 2075 indicate much larger overall range changes due to projected climate change than due to projected LULC change. However, the choice of study area impacted results for both current and projected model applications, with truncation of actual species ranges resulting in lower model fit scores and increased difficulty in interpreting covariate impacts on species range. Results indicate species-specific response to climate and LULC variables; however, both climate and LULC variables clearly are important for modeling both contemporary and potential future species ranges. PMID:25372571

  10. Influence of variable selection on partial least squares discriminant analysis models for explosive residue classification

    NASA Astrophysics Data System (ADS)

    De Lucia, Frank C., Jr.; Gottfried, Jennifer L.

    2011-02-01

    Using a series of thirteen organic materials that includes novel high-nitrogen energetic materials, conventional organic military explosives, and benign organic materials, we have demonstrated the importance of variable selection for maximizing residue discrimination with partial least squares discriminant analysis (PLS-DA). We built several PLS-DA models using different variable sets based on laser-induced breakdown spectroscopy (LIBS) spectra of the organic residues on an aluminum substrate under an argon atmosphere. The model classification results for each sample are presented and the influence of the variables on these results is discussed. We found that using the whole spectra as the data input for the PLS-DA model gave the best results. However, variables due to the surrounding atmosphere and the substrate contribute to discrimination when the whole spectra are used, indicating this may not be the most robust model. Further iterative testing with additional validation data sets is necessary to determine the most robust model.
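
    A minimal PLS-DA sketch in the spirit described: regress one-hot class indicators on (synthetic) spectra and classify by the largest predicted score. The component count and array sizes are arbitrary assumptions:

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.preprocessing import label_binarize

      rng = np.random.default_rng(9)
      # Hypothetical LIBS spectra: 130 residues x 2048 wavelength channels
      X = rng.normal(size=(130, 2048))
      y = rng.integers(0, 13, size=130)          # 13 material classes

      # PLS-DA: regress one-hot class indicators on spectra, classify by argmax
      Y = label_binarize(y, classes=range(13))
      pls = PLSRegression(n_components=10).fit(X, Y)
      pred = pls.predict(X).argmax(axis=1)
      print((pred == y).mean())                  # training accuracy (illustrative)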

  11. A latent class distance association model for cross-classified data with a categorical response variable.

    PubMed

    Vera, José Fernando; de Rooij, Mark; Heiser, Willem J

    2014-11-01

    In this paper we propose a latent class distance association model for clustering in the predictor space of large contingency tables with a categorical response variable. The rows of such a table are characterized as profiles of a set of explanatory variables, while the columns represent a single outcome variable. In many cases such tables are sparse, with many zero entries, which makes traditional models problematic. By clustering the row profiles into a few specific classes and representing these together with the categories of the response variable in a low-dimensional Euclidean space using a distance association model, a parsimonious prediction model can be obtained. A generalized EM algorithm is proposed to estimate the model parameters and the adjusted Bayesian information criterion statistic is employed to test the number of mixture components and the dimensionality of the representation. An empirical example highlighting the advantages of the new approach and comparing it with traditional approaches is presented. © 2014 The British Psychological Society.

  12. Observations and Models of Highly Intermittent Phytoplankton Distributions

    PubMed Central

    Mandal, Sandip; Locke, Christopher; Tanaka, Mamoru; Yamazaki, Hidekatsu

    2014-01-01

    The measurement of phytoplankton distributions in ocean ecosystems provides the basis for elucidating the influences of physical processes on plankton dynamics. Technological advances allow for measurement of phytoplankton data at greater resolution, revealing high spatial variability. In conventional mathematical models, the mean value of the measured variable is approximated to compare with the model output, which may misrepresent the reality of planktonic ecosystems, especially at the microscale level. To account for the intermittency of variables, in this work a new modelling approach to the planktonic ecosystem is applied, called the closure approach. Using this approach for a simple nutrient-phytoplankton model, we have shown how consideration of the fluctuating parts of model variables can affect system dynamics. Also, we have found a critical value of the variance of the overall fluctuating terms below which the conventional non-closure model and the mean value from the closure model exhibit the same result. This analysis gives an idea of the importance of the fluctuating parts of model variables and of when to use the closure approach. Comparisons of plots of mean versus standard deviation of phytoplankton at different depths obtained using this new approach show good agreement with real observations. PMID:24787740

  13. Using Instrumental Variable (IV) Tests to Evaluate Model Specification in Latent Variable Structural Equation Models*

    PubMed Central

    Kirby, James B.; Bollen, Kenneth A.

    2009-01-01

    Structural Equation Modeling with latent variables (SEM) is a powerful tool for social and behavioral scientists, combining many of the strengths of psychometrics and econometrics into a single framework. The most common estimator for SEM is the full-information maximum likelihood estimator (ML), but there is continuing interest in limited information estimators because of their distributional robustness and their greater resistance to structural specification errors. However, the literature discussing model fit for limited information estimators for latent variable models is sparse compared to that for full information estimators. We address this shortcoming by providing several specification tests based on the 2SLS estimator for latent variable structural equation models developed by Bollen (1996). We explain how these tests can be used to not only identify a misspecified model, but to help diagnose the source of misspecification within a model. We present and discuss results from a Monte Carlo experiment designed to evaluate the finite sample properties of these tests. Our findings suggest that the 2SLS tests successfully identify most misspecified models, even those with modest misspecification, and that they provide researchers with information that can help diagnose the source of misspecification. PMID:20419054

  14. Simulation of South-Asian Summer Monsoon in a GCM

    NASA Astrophysics Data System (ADS)

    Ajayamohan, R. S.

    2007-10-01

    Major characteristics of Indian summer monsoon climate are analyzed using simulations from the upgraded version of Florida State University Global Spectral Model (FSUGSM). The Indian monsoon has been studied in terms of mean precipitation and low-level and upper-level circulation patterns and compared with observations. In addition, the model's fidelity in simulating observed monsoon intraseasonal variability, interannual variability and teleconnection patterns is examined. The model is successful in simulating the major rainbelts over the Indian monsoon region. However, the model exhibits bias in simulating the precipitation bands over the South China Sea and the West Pacific region. Seasonal mean circulation patterns of low-level and upper-level winds are consistent with the model's precipitation pattern. Basic features like onset and peak phase of monsoon are realistically simulated. However, model simulation indicates an early withdrawal of monsoon. Northward propagation of rainbelts over the Indian continent is simulated fairly well, but the propagation is weak over the ocean. The model simulates the meridional dipole structure associated with the monsoon intraseasonal variability realistically. The model is unable to capture the observed interannual variability of monsoon and its teleconnection patterns. Estimate of potential predictability of the model reveals the dominating influence of internal variability over the Indian monsoon region.

  15. Input variable selection and calibration data selection for storm water quality regression models.

    PubMed

    Sun, Siao; Bertrand-Krajewski, Jean-Luc

    2013-01-01

    Storm water quality models are useful tools in storm water management. Interest has been growing in analyzing existing data for developing models for urban storm water quality evaluations. It is important to select appropriate model inputs when many candidate explanatory variables are available. Model calibration and verification are essential steps in any storm water quality modeling. This study investigates input variable selection and calibration data selection in storm water quality regression models. The two selection problems interact with each other, so a procedure is developed to carry out the two selection tasks in sequence. The procedure first selects model input variables using a cross validation method. An appropriate number of variables is identified as model inputs to ensure that a model is neither overfitted nor underfitted. Based on the model input selection results, calibration data selection is studied. Uncertainty of model performance due to calibration data selection is investigated with a random selection method. An approach using the cluster method is applied in order to enhance model calibration practice, based on the principle of selecting representative data for calibration. The comparison between results from the cluster selection method and random selection shows that the former can significantly improve the performance of calibrated models. It is found that the information content in calibration data is important in addition to the size of the calibration data set.
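
    A compact sketch of the two selection steps on synthetic data: greedy input selection scored by cross validation, followed by cluster-based choice of representative calibration events. Details such as the clustering variables and cluster count are assumptions, not the paper's settings:

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(10)
      X = rng.normal(size=(120, 6))          # candidate explanatory variables
      y = 2.0 * X[:, 0] + X[:, 2] - 0.5 * X[:, 4] + rng.normal(0, 0.3, 120)

      # Greedy forward selection scored by cross validation
      selected, remaining = [], list(range(6))
      while remaining:
          scores = {j: cross_val_score(LinearRegression(), X[:, selected + [j]],
                                       y, cv=5).mean() for j in remaining}
          best = max(scores, key=scores.get)
          if selected and scores[best] <= best_score:
              break                          # no further improvement
          selected.append(best); remaining.remove(best); best_score = scores[best]

      # Cluster-based calibration selection: one representative event per cluster
      km = KMeans(n_clusters=30, n_init=10, random_state=0).fit(X[:, selected])
      calib_idx = [int(np.argmin(np.linalg.norm(X[:, selected] - c, axis=1)))
                   for c in km.cluster_centers_]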

  16. Determination of riverbank erosion probability using Locally Weighted Logistic Regression

    NASA Astrophysics Data System (ADS)

    Ioannidou, Elena; Flori, Aikaterini; Varouchakis, Emmanouil A.; Giannakis, Georgios; Vozinaki, Anthi Eirini K.; Karatzas, George P.; Nikolaidis, Nikolaos

    2015-04-01

    Riverbank erosion is a natural geomorphologic process that affects the fluvial environment. The most important issue concerning riverbank erosion is the identification of the vulnerable locations. An alternative to the usual hydrodynamic models to predict vulnerable locations is to quantify the probability of erosion occurrence. This can be achieved by identifying the underlying relations between riverbank erosion and the geomorphological or hydrological variables that prevent or stimulate erosion. Thus, riverbank erosion can be determined by a regression model using independent variables that are considered to affect the erosion process. The impact of such variables may vary spatially, and therefore a non-stationary regression model is preferred to a stationary equivalent. Locally Weighted Regression (LWR) is proposed as a suitable choice. This method can be extended to predict the binary presence or absence of erosion based on a series of independent local variables by using the logistic regression model; the combination is referred to as Locally Weighted Logistic Regression (LWLR). Logistic regression is a type of regression analysis used for predicting the outcome of a categorical dependent variable (e.g. a binary response) based on one or more predictor variables, converting the dependent variable to probability scores via a logistic function. The method can be combined with LWR to assign weights to local independent variables of the dependent one, allowing model parameters to vary over space in order to reflect spatial heterogeneity. The resulting regression predicts success or failure of a given binary variable (e.g. erosion presence or absence) for any value of the independent variables. The erosion occurrence probability can be calculated in conjunction with the model deviance regarding the independent variables tested. The most straightforward measure of goodness of fit is the G statistic, a simple and effective way to evaluate the efficiency of the logistic regression model and the reliability of each independent variable. The developed statistical model is applied to the Koiliaris River Basin on the island of Crete, Greece. Two datasets of river bank slope, river cross-section width and indications of erosion were available for the analysis (12 and 8 locations). Two different types of spatial dependence functions, exponential and tricubic, were examined to determine the local spatial dependence of the independent variables at the measurement locations. The results show a significant improvement when the tricubic function is applied, as the erosion probability is accurately predicted at all eight validation locations. Results for the model deviance show that cross-section width is more important than bank slope in the estimation of erosion probability along the Koiliaris riverbanks. The proposed statistical model is a useful tool that quantifies the erosion probability along the riverbanks and can be used to assist in managing erosion and flooding events. Acknowledgements: This work is part of an ongoing THALES project (CYBERSENSORS - High Frequency Monitoring System for Integrated Water Resources Management of Rivers). The project has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: THALES. Investing in knowledge society through the European Social Fund.
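
    A sketch of locally weighted logistic regression under the tricubic weighting described, via a weighted GLM fit; the bandwidth, variables and data are invented:

      import numpy as np
      import statsmodels.api as sm

      def lwlr_probability(x0, X, y, tau=2.0):
          """Local erosion probability at location x0 (illustrative sketch)."""
          d = np.linalg.norm(X - x0, axis=1) / tau
          w = np.clip((1 - d ** 3) ** 3, 0.0, None) + 1e-9   # tricube weights
          fit = sm.GLM(y, sm.add_constant(X),
                       family=sm.families.Binomial(), freq_weights=w).fit()
          return fit.predict(np.r_[1.0, x0].reshape(1, -1))[0]

      rng = np.random.default_rng(11)
      X = rng.normal(size=(40, 2))     # e.g. bank slope, cross-section width (scaled)
      y = (X[:, 0] + rng.normal(0, 0.5, 40) > 0).astype(int)
      print(lwlr_probability(np.array([0.5, -0.2]), X, y))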

  17. Simulation of crop yield variability by improved root-soil-interaction modelling

    NASA Astrophysics Data System (ADS)

    Duan, X.; Gayler, S.; Priesack, E.

    2009-04-01

    Understanding the processes and factors that govern the within-field variability in crop yield has attracted great interest due to applications in precision agriculture. Crop response to the environment at field scale is a complex dynamic process involving the interactions of soil characteristics, weather conditions and crop management. The numerous static factors combined with temporal variations make it very difficult to identify and manage the variability pattern. Therefore, crop simulation models are considered to be useful tools for analyzing separately the effects of change in soil or weather conditions on the spatial variability, in order to identify the cause of yield variability and to quantify the spatial and temporal variation. However, tests showed that common crop models such as CERES-Wheat and CERES-Maize were not able to quantify the observed within-field yield variability, while their performance on crop growth simulation under more homogeneous and mainly non-limiting conditions was sufficient to simulate average yields at the field scale. On a study site in South Germany, within-field variability in crop growth has been documented for years. After detailed analysis and classification of the soil patterns, two site-specific factors, plant-available water and O2 deficiency, were considered the main causes of the crop growth variability in this field. Based on our measurements of root distribution in the soil profile, we hypothesize that in our case the insufficiency of the applied crop models to simulate the yield variability may be due to the oversimplification of the involved root models, which fail to be sensitive to different soil conditions. In this study, the root growth model described by Jones et al. (1991) was adapted by using data on root distributions in the field and linking the adapted root model to the CERES crop model. The ability of the new root model to increase the sensitivity of the CERES crop models to different environmental conditions was then evaluated by comparing the simulation results with measured data and by scenario calculations.

  18. A canonical neural mechanism for behavioral variability

    NASA Astrophysics Data System (ADS)

    Darshan, Ran; Wood, William E.; Peters, Susan; Leblois, Arthur; Hansel, David

    2017-05-01

    The ability to generate variable movements is essential for learning and adjusting complex behaviours. This variability has been linked to the temporal irregularity of neuronal activity in the central nervous system. However, how neuronal irregularity actually translates into behavioural variability is unclear. Here we combine modelling, electrophysiological and behavioural studies to address this issue. We demonstrate that a model circuit comprising topographically organized and strongly recurrent neural networks can autonomously generate irregular motor behaviours. Simultaneous recordings of neurons in singing finches reveal that neural correlations increase across the circuit driving song variability, in agreement with the model predictions. Analysing behavioural data, we find remarkable similarities in the babbling statistics of 5-6-month-old human infants and juveniles from three songbird species and show that our model naturally accounts for these `universal' statistics.

  19. Nonparametric model validations for hidden Markov models with applications in financial econometrics

    PubMed Central

    Zhao, Zhibiao

    2011-01-01

    We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for the transition density function of the observable variables and checking whether the parametric density estimate is contained within this envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable to continuous-time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise. PMID:21750601
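
    A minimal sketch of the functional connection the abstract relies on: the transition density of a scalar observable can be estimated nonparametrically as the ratio of a joint kernel density estimate of (X_t, X_{t+1}) to the marginal estimate of X_t. The toy series and default bandwidths are assumptions; the paper's simultaneous confidence envelope is not reproduced here.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)
    x = np.cumsum(rng.normal(size=2000)) * 0.1   # toy observable series

    pairs = np.vstack([x[:-1], x[1:]])           # (X_t, X_{t+1}) pairs
    joint = gaussian_kde(pairs)                  # estimate of f(x, y)
    marginal = gaussian_kde(x[:-1])              # estimate of f(x)

    def transition_density(xt, xt1):
        """Nonparametric estimate of the transition density f(x_{t+1} | x_t)."""
        return joint([[xt], [xt1]]) / marginal([xt])

    print(transition_density(0.0, 0.1))
    ```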

  20. Variability aware compact model characterization for statistical circuit design optimization

    NASA Astrophysics Data System (ADS)

    Qiao, Ying; Qian, Kun; Spanos, Costas J.

    2012-03-01

    Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose an efficient variability-aware compact model characterization methodology based on the linear propagation of variance. Hierarchical spatial variability patterns of selected compact model parameters are directly calculated from transistor array test structures. This methodology has been implemented and tested using transistor I-V measurements and the EKV-EPFL compact model. Calculation results compare well to full-wafer direct model parameter extractions. Further studies are done on the proper selection of both compact model parameters and electrical measurement metrics used in the method.
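
    A minimal numpy sketch of the core of the proposed characterization, linear (first-order) propagation of variance: the variance of a performance metric is approximated by g'Σg, where g is the vector of metric sensitivities to the compact model parameters and Σ their covariance. All numbers below are hypothetical.

    ```python
    import numpy as np

    # Hypothetical sensitivities of a performance metric (e.g., drain current)
    # to three compact-model parameters, obtained by finite differences
    g = np.array([2.1e-4, -8.5e-5, 3.0e-6])     # d(metric)/d(parameter)

    # Hypothetical covariance of the parameter variations extracted from
    # transistor array test structures
    Sigma = np.array([[1.0e-4, 2.0e-5, 0.0],
                      [2.0e-5, 4.0e-4, 1.0e-5],
                      [0.0,    1.0e-5, 9.0e-5]])

    var_metric = g @ Sigma @ g                  # first-order variance estimate
    print(var_metric, np.sqrt(var_metric))
    ```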

  1. On the Asymptotic Relative Efficiency of Planned Missingness Designs.

    PubMed

    Rhemtulla, Mijke; Savalei, Victoria; Little, Todd D

    2016-03-01

    In planned missingness (PM) designs, certain data are set a priori to be missing. PM designs can increase validity and reduce cost; however, little is known about the loss of efficiency that accompanies these designs. The present paper compares PM designs to reduced sample (RN) designs that have the same total number of data points concentrated in fewer participants. In 4 studies, we consider models for both observed and latent variables, designs that do or do not include an "X set" of variables with complete data, and a full range of between- and within-set correlation values. All results are obtained using asymptotic relative efficiency formulas, and thus no data are generated; this novel approach allows us to examine whether PM designs have theoretical advantages over RN designs, removing the impact of sampling error. Our primary findings are that (a) in manifest variable regression models, estimates of regression coefficients have much lower relative efficiency in PM designs than in RN designs, (b) the relative efficiency of factor correlation or latent regression coefficient estimates is maximized when the indicators of each latent variable come from different sets, and (c) the addition of an X set improves efficiency in manifest variable regression models only for the parameters that directly involve the X-set variables, but it substantially improves the efficiency of most parameters in latent variable models. We conclude that PM designs can be beneficial when the model of interest is a latent variable model; recommendations are made for how to optimize such a design.

  2. Modeling Psychological Attributes in Psychology – An Epistemological Discussion: Network Analysis vs. Latent Variables

    PubMed Central

    Guyon, Hervé; Falissard, Bruno; Kop, Jean-Luc

    2017-01-01

    Network Analysis is considered as a new method that challenges Latent Variable models in inferring psychological attributes. With Network Analysis, psychological attributes are derived from a complex system of components without the need to call on any latent variables. But the ontological status of psychological attributes is not adequately defined with Network Analysis, because a psychological attribute is both a complex system and a property emerging from this complex system. The aim of this article is to reappraise the legitimacy of latent variable models by engaging in an ontological and epistemological discussion on psychological attributes. Psychological attributes relate to the mental equilibrium of individuals embedded in their social interactions, as robust attractors within complex dynamic processes with emergent properties, distinct from physical entities located in precise areas of the brain. Latent variables thus possess legitimacy, because the emergent properties can be conceptualized and analyzed on the sole basis of their manifestations, without exploring the upstream complex system. However, in opposition to the usual Latent Variable models, this article is in favor of the integration of a dynamic system of manifestations. Latent Variable models and Network Analysis thus appear as complementary approaches. New approaches combining Latent Network Models and Network Residuals are certainly a promising way to infer psychological attributes, placing psychological attributes in an inter-subjective dynamic approach. Pragmatism-realism appears as the epistemological framework required if we are to use latent variables as representations of psychological attributes. PMID:28572780

  3. ON JOINT DETERMINISTIC GRID MODELING AND SUB-GRID VARIABILITY CONCEPTUAL FRAMEWORK FOR MODEL EVALUATION

    EPA Science Inventory

    The general situation (exemplified here in urban areas) where a significant degree of sub-grid variability (SGV) exists in grid models poses problems when comparing grid-based air quality modeling results with observations. Typically, grid models ignore or parameterize processes ...

  4. A site specific model and analysis of the neutral somatic mutation rate in whole-genome cancer data.

    PubMed

    Bertl, Johanna; Guo, Qianyun; Juul, Malene; Besenbacher, Søren; Nielsen, Morten Muhlig; Hornshøj, Henrik; Pedersen, Jakob Skou; Hobolth, Asger

    2018-04-19

    Detailed modelling of the neutral mutational process in cancer cells is crucial for identifying driver mutations and understanding the mutational mechanisms that act during cancer development. The neutral mutational process is very complex: whole-genome analyses have revealed that the mutation rate differs between cancer types, between patients and along the genome, depending on the genetic and epigenetic context. Therefore, methods that predict the number of different types of mutations in regions or specific genomic elements must consider local genomic explanatory variables. A major drawback of most methods is the need to average the explanatory variables across the entire region or genomic element. This procedure is particularly problematic if the explanatory variable varies dramatically in the element under consideration. To take the fine scale of the explanatory variables into account, we model the probabilities of different types of mutations for each position in the genome by multinomial logistic regression. We analyse 505 cancer genomes from 14 different cancer types and compare the performance in predicting mutation rate of region-based and site-specific models. We show that for 1000 randomly selected genomic positions, the site-specific model predicts the mutation rate much better than region-based models. We use a forward selection procedure to identify the most important explanatory variables. The procedure identifies site-specific conservation (phyloP), replication timing, and expression level as the best predictors of the mutation rate. Finally, our model confirms and quantifies certain well-known mutational signatures. We find that our site-specific multinomial regression model outperforms the region-based models. The possibility of including genomic variables on different scales and patient-specific variables makes it a versatile framework for studying different mutational mechanisms. Our model can serve as the neutral null model for the mutational process; regions that deviate from the null model are candidates for elements that drive cancer development.
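
    A hedged sketch of the modeling approach named in the abstract: multinomial logistic regression of per-site mutation class on site-level covariates. The feature names (phyloP, replication timing, expression) follow the abstract; the data and coefficients below are synthetic.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 5000
    # Hypothetical site-level covariates: phyloP conservation, replication
    # timing, expression level
    X = rng.normal(size=(n, 3))

    # Toy generative model for the mutation class at each site:
    # 0 = no mutation, 1 = C>T substitution, 2 = other substitution
    B = np.array([[0.0, 0.0, 0.0],      # class 0 (baseline)
                  [0.8, -0.5, 0.2],     # class 1 coefficients
                  [-0.3, 0.6, 0.4]])    # class 2 coefficients
    p = np.exp(X @ B.T)
    p /= p.sum(axis=1, keepdims=True)
    y = np.array([rng.choice(3, p=pi) for pi in p])

    # Multinomial logistic regression of mutation type on site-level covariates
    # (the default lbfgs solver fits a multinomial model for multiclass y)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    print(model.predict_proba(X[:3]))   # per-site mutation-type probabilities
    ```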

  5. Reassessing regime shifts in the North Pacific: incremental climate change and commercial fishing are necessary for explaining decadal-scale biological variability.

    PubMed

    Litzow, Michael A; Mueter, Franz J; Hobday, Alistair J

    2014-01-01

    In areas of the North Pacific that are largely free of overfishing, climate regime shifts - abrupt changes in modes of low-frequency climate variability - are seen as the dominant drivers of decadal-scale ecological variability. We assessed the ability of leading modes of climate variability [Pacific Decadal Oscillation (PDO), North Pacific Gyre Oscillation (NPGO), Arctic Oscillation (AO), Pacific-North American Pattern (PNA), North Pacific Index (NPI), El Niño-Southern Oscillation (ENSO)] to explain decadal-scale (1965-2008) patterns of climatic and biological variability across two North Pacific ecosystems (Gulf of Alaska and Bering Sea). Our response variables were the first principal component (PC1) of four regional climate parameters [sea surface temperature (SST), sea level pressure (SLP), freshwater input, ice cover], and PCs 1-2 of 36 biological time series [production or abundance for populations of salmon (Oncorhynchus spp.), groundfish, herring (Clupea pallasii), shrimp, and jellyfish]. We found that the climate modes alone could not explain ecological variability in the study region. Both linear models (for climate PC1) and generalized additive models (for biology PC1-2) invoking only the climate modes produced residuals with significant temporal trends, indicating that the models failed to capture coherent patterns of ecological variability. However, when the residual climate trend and a time series of commercial fishery catches were used as additional candidate variables, the resulting models of biology PC1-2 satisfied assumptions of independent residuals and outperformed models constructed from the climate modes alone in terms of predictive power. As measured by effect size and Akaike weights, the residual climate trend was the most important variable for explaining biology PC1 variability, and commercial catch the most important variable for biology PC2. Patterns of climate sensitivity and exploitation history for taxa strongly associated with biology PC1-2 suggest plausible mechanistic explanations for these modeling results. Our findings suggest that, even in the absence of overfishing and in areas strongly influenced by internal climate variability, climate regime shift effects can only be understood in the context of other ecosystem perturbations. © 2013 John Wiley & Sons Ltd.
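
    A sketch of the model-comparison logic described, assuming hypothetical series for biology PC1, two climate modes, a residual climate trend, and commercial catch; candidate linear models are ranked by AIC and Akaike weights. (The paper also uses generalized additive models, not shown here.)

    ```python
    import numpy as np
    import statsmodels.api as sm

    def akaike_weights(aics):
        d = np.asarray(aics) - np.min(aics)
        w = np.exp(-0.5 * d)
        return w / w.sum()

    rng = np.random.default_rng(2)
    n = 44                                   # e.g., years 1965-2008
    pdo, npgo, trend, catch = rng.normal(size=(4, n))
    pc1 = 0.3*pdo + 0.8*trend + 0.5*catch + rng.normal(size=n)   # toy response

    candidates = {
        "modes only":            np.column_stack([pdo, npgo]),
        "modes + trend":         np.column_stack([pdo, npgo, trend]),
        "modes + trend + catch": np.column_stack([pdo, npgo, trend, catch]),
    }
    aics = {name: sm.OLS(pc1, sm.add_constant(X)).fit().aic
            for name, X in candidates.items()}
    weights = akaike_weights(list(aics.values()))
    for (name, aic), w in zip(aics.items(), weights):
        print(f"{name:24s} AIC={aic:7.1f} weight={w:.2f}")
    ```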

  6. North Atlantic simulations in Coordinated Ocean-ice Reference Experiments phase II (CORE-II). Part II: Inter-annual to decadal variability

    NASA Astrophysics Data System (ADS)

    Danabasoglu, Gokhan; Yeager, Steve G.; Kim, Who M.; Behrens, Erik; Bentsen, Mats; Bi, Daohua; Biastoch, Arne; Bleck, Rainer; Böning, Claus; Bozec, Alexandra; Canuto, Vittorio M.; Cassou, Christophe; Chassignet, Eric; Coward, Andrew C.; Danilov, Sergey; Diansky, Nikolay; Drange, Helge; Farneti, Riccardo; Fernandez, Elodie; Fogli, Pier Giuseppe; Forget, Gael; Fujii, Yosuke; Griffies, Stephen M.; Gusev, Anatoly; Heimbach, Patrick; Howard, Armando; Ilicak, Mehmet; Jung, Thomas; Karspeck, Alicia R.; Kelley, Maxwell; Large, William G.; Leboissetier, Anthony; Lu, Jianhua; Madec, Gurvan; Marsland, Simon J.; Masina, Simona; Navarra, Antonio; Nurser, A. J. George; Pirani, Anna; Romanou, Anastasia; Salas y Mélia, David; Samuels, Bonita L.; Scheinert, Markus; Sidorenko, Dmitry; Sun, Shan; Treguier, Anne-Marie; Tsujino, Hiroyuki; Uotila, Petteri; Valcke, Sophie; Voldoire, Aurore; Wang, Qiang; Yashayaev, Igor

    2016-01-01

    Simulated inter-annual to decadal variability and trends in the North Atlantic for the 1958-2007 period from twenty global ocean - sea-ice coupled models are presented. These simulations are performed as contributions to the second phase of the Coordinated Ocean-ice Reference Experiments (CORE-II). The study is Part II of our companion paper (Danabasoglu et al., 2014) which documented the mean states in the North Atlantic from the same models. A major focus of the present study is the representation of Atlantic meridional overturning circulation (AMOC) variability in the participating models. Relationships between AMOC variability and those of some other related variables, such as subpolar mixed layer depths, the North Atlantic Oscillation (NAO), and the Labrador Sea upper-ocean hydrographic properties, are also investigated. In general, AMOC variability shows three distinct stages. During the first stage that lasts until the mid- to late-1970s, AMOC is relatively steady, remaining lower than its long-term (1958-2007) mean. Thereafter, AMOC intensifies with maximum transports achieved in the mid- to late-1990s. This enhancement is then followed by a weakening trend until the end of our integration period. This sequence of low frequency AMOC variability is consistent with previous studies. Regarding strengthening of AMOC between about the mid-1970s and the mid-1990s, our results support a previously identified variability mechanism where AMOC intensification is connected to increased deep water formation in the subpolar North Atlantic, driven by NAO-related surface fluxes. The simulations tend to show general agreement in their temporal representations of, for example, AMOC, sea surface temperature (SST), and subpolar mixed layer depth variabilities. In particular, the observed variability of the North Atlantic SSTs is captured well by all models. These findings indicate that simulated variability and trends are primarily dictated by the atmospheric datasets which include the influence of ocean dynamics from nature superimposed onto anthropogenic effects. Despite these general agreements, there are many differences among the model solutions, particularly in the spatial structures of variability patterns. For example, the location of the maximum AMOC variability differs among the models between Northern and Southern Hemispheres.

  7. North Atlantic Simulations in Coordinated Ocean-Ice Reference Experiments Phase II (CORE-II). Part II: Inter-Annual to Decadal Variability

    NASA Technical Reports Server (NTRS)

    Danabasoglu, Gokhan; Yeager, Steve G.; Kim, Who M.; Behrens, Erik; Bentsen, Mats; Bi, Daohua; Biastoch, Arne; Bleck, Rainer; Boening, Claus; Bozec, Alexandra; et al.

    2015-01-01

    Simulated inter-annual to decadal variability and trends in the North Atlantic for the 1958-2007 period from twenty global ocean - sea-ice coupled models are presented. These simulations are performed as contributions to the second phase of the Coordinated Ocean-ice Reference Experiments (CORE-II). The study is Part II of our companion paper (Danabasoglu et al., 2014) which documented the mean states in the North Atlantic from the same models. A major focus of the present study is the representation of Atlantic meridional overturning circulation (AMOC) variability in the participating models. Relationships between AMOC variability and those of some other related variables, such as subpolar mixed layer depths, the North Atlantic Oscillation (NAO), and the Labrador Sea upper-ocean hydrographic properties, are also investigated. In general, AMOC variability shows three distinct stages. During the first stage that lasts until the mid- to late-1970s, AMOC is relatively steady, remaining lower than its long-term (1958-2007) mean. Thereafter, AMOC intensifies with maximum transports achieved in the mid- to late-1990s. This enhancement is then followed by a weakening trend until the end of our integration period. This sequence of low frequency AMOC variability is consistent with previous studies. Regarding strengthening of AMOC between about the mid-1970s and the mid-1990s, our results support a previously identified variability mechanism where AMOC intensification is connected to increased deep water formation in the subpolar North Atlantic, driven by NAO-related surface fluxes. The simulations tend to show general agreement in their representations of, for example, AMOC, sea surface temperature (SST), and subpolar mixed layer depth variabilities. In particular, the observed variability of the North Atlantic SSTs is captured well by all models. These findings indicate that simulated variability and trends are primarily dictated by the atmospheric datasets which include the influence of ocean dynamics from nature superimposed onto anthropogenic effects. Despite these general agreements, there are many differences among the model solutions, particularly in the spatial structures of variability patterns. For example, the location of the maximum AMOC variability differs among the models between Northern and Southern Hemispheres.

  8. A Latent Variable Approach to the Simple View of Reading

    ERIC Educational Resources Information Center

    Kershaw, Sarah; Schatschneider, Chris

    2012-01-01

    The present study utilized a latent variable modeling approach to examine the Simple View of Reading in a sample of students from 3rd, 7th, and 10th grades (N = 215, 188, and 180, respectively). Latent interaction modeling and other latent variable models were employed to investigate (a) the functional form of the relationship between decoding and…

  9. Variable Complexity Optimization of Composite Structures

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.

    2002-01-01

    The use of several levels of modeling in design has been dubbed variable complexity modeling. The work under the grant focused on developing variable complexity modeling strategies with emphasis on response surface techniques. Applications included design of stiffened composite plates for improved damage tolerance, the use of response surfaces for fitting weights obtained by structural optimization, and design against uncertainty using response surface techniques.
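
    A minimal sketch of the response surface idea: fit a cheap quadratic surrogate to responses obtained from a handful of expensive structural optimizations, then query the surrogate during design. Variable names and data are illustrative.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(3)
    # Hypothetical design variables (e.g., ply thicknesses) and an expensive
    # response (e.g., optimized weight) sampled at a handful of design points
    X = rng.uniform(0, 1, size=(30, 2))
    w = 5 + 2*X[:, 0] - X[:, 1] + 3*X[:, 0]*X[:, 1] + rng.normal(0, 0.05, 30)

    # Quadratic response surface: a cheap surrogate for the expensive analysis
    surface = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    surface.fit(X, w)
    print(surface.predict([[0.5, 0.5]]))   # surrogate prediction at a new design
    ```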

  10. Functionally relevant climate variables for arid lands: A climatic water deficit approach for modelling desert shrub distributions

    Treesearch

    Thomas E. Dilts; Peter J. Weisberg; Camie M. Dencker; Jeanne C. Chambers

    2015-01-01

    We have three goals. (1) To develop a suite of functionally relevant climate variables for modelling vegetation distribution on arid and semi-arid landscapes of the Great Basin, USA. (2) To compare the predictive power of vegetation distribution models based on mechanistically proximate factors (water deficit variables) and factors that are more mechanistically removed...

  11. Rebuilding DEMATEL threshold value: an example of a food and beverage information system.

    PubMed

    Hsieh, Yi-Fang; Lee, Yu-Cheng; Lin, Shao-Bin

    2016-01-01

    This study demonstrates how a decision-making trial and evaluation laboratory (DEMATEL) threshold value can be quickly and reasonably determined in the process of combining DEMATEL and decomposed theory of planned behavior (DTPB) models. The models are combined to identify the key factors of a complex problem. This paper presents a case study of a food and beverage information system as an example. The analysis of the example indicates that, given the direct and indirect relationships among variables, if a traditional DTPB model simulates only the effects of the variables, without considering that the variables also alter the original cause-and-effect relationships among themselves, then the original DTPB model variables cannot represent the complete set of relationships. For the food and beverage example, the DEMATEL method was employed to reconstruct the DTPB model and, more importantly, to calculate a reasonable DEMATEL threshold value for determining additional relationships among variables in the original DTPB model. This study is method-oriented, and the depth of investigation into any individual case is limited. Therefore, the methods proposed in various fields of study should ideally be used to identify deeper and more practical implications.
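
    For illustration, a minimal numpy sketch of the standard DEMATEL computation, with the threshold taken as the mean plus one standard deviation of the total-relation matrix, one common rule of thumb; the paper's procedure for rebuilding the threshold may differ.

    ```python
    import numpy as np

    # Hypothetical direct-relation matrix: expert ratings of how strongly
    # factor i influences factor j (0 = none ... 4 = very strong)
    A = np.array([[0, 3, 2, 1],
                  [1, 0, 3, 2],
                  [2, 1, 0, 3],
                  [1, 2, 1, 0]], dtype=float)

    D = A / A.sum(axis=1).max()             # one common normalization
    T = D @ np.linalg.inv(np.eye(4) - D)    # total-relation matrix T = D(I-D)^-1

    threshold = T.mean() + T.std()          # mean + 1 s.d. threshold rule
    print("threshold =", round(threshold, 3))
    print("significant links:\n", T > threshold)
    ```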

  12. Fatigue and crashes: the case of freight transport in Colombia.

    PubMed

    Torregroza-Vargas, Nathaly M; Bocarejo, Juan Pablo; Ramos-Bonilla, Juan P

    2014-11-01

    Truck drivers have been involved in a significant number of road fatalities in Colombia. To identify variables that could be associated with crashes involving truck drivers, a logistic regression model was constructed. The response variable was dichotomous, indicating the presence or absence of a crash during a specific trip. As independent variables, the model included information regarding the driver's work shift, variables that could be associated with driver fatigue, and potential confounders related to road conditions. With the model, it was possible to determine the odds ratio of a crash in relation to several variables, adjusting for confounding. To collect the information about the trips included in the model, a survey of truck drivers was conducted. The results suggest strong associations (some of them statistically significant) between crashes and both the number of stops made during the trip and the average duration of each stop. The survey analysis allowed us to identify practices that contribute to fatigue and unhealthy conditions on the road among professional drivers. A review of national regulations confirmed the lack of legislation on this topic. Copyright © 2014 Elsevier Ltd. All rights reserved.
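
    A hedged sketch of the analysis described: a logistic regression of crash occurrence on trip-level variables, with adjusted odds ratios obtained by exponentiating the coefficients. Variable names and data are synthetic stand-ins for the survey variables.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 400
    # Hypothetical trip-level predictors: number of stops, mean stop duration
    # (hours), and a road-condition confounder
    df = pd.DataFrame({
        "n_stops":   rng.poisson(3, n),
        "stop_time": rng.gamma(2, 0.5, n),
        "bad_road":  rng.integers(0, 2, n),
    })
    logit = -2 + 0.25*df.n_stops - 0.6*df.stop_time + 0.5*df.bad_road
    df["crash"] = (rng.random(n) < 1/(1 + np.exp(-logit))).astype(int)  # toy outcome

    model = sm.Logit(df["crash"],
                     sm.add_constant(df[["n_stops", "stop_time", "bad_road"]])).fit()
    print(np.exp(model.params))        # odds ratios, adjusted for confounding
    print(np.exp(model.conf_int()))    # 95% confidence intervals for the ORs
    ```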

  13. Bayesian dynamic modeling of time series of dengue disease case counts.

    PubMed

    Martínez-Bello, Daniel Adyro; López-Quílez, Antonio; Torres-Prieto, Alexander

    2017-07-01

    The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the models' short-term performance in predicting dengue cases. The methodology uses dynamic Poisson log-link models with constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random-walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random-walk time-varying coefficients. We applied Markov chain Monte Carlo simulations for parameter estimation and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The best model included first-order random-walk time-varying coefficients for both the calendar trend and the meteorological variables. Besides the computational challenges, interpreting the results implies a complete analysis of the dengue time series with respect to the parameter estimates of the meteorological effects. We found small mean absolute percentage errors for one- and two-week out-of-sample predictions at most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in modeling time series of dengue disease, producing useful models for decision-making in public health.
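
    The out-of-sample evaluation metric is simple enough to state directly; a minimal sketch of the mean absolute percentage error for k-week-ahead count forecasts (the dynamic Poisson models themselves are not reproduced here):

    ```python
    import numpy as np

    def mape(observed, predicted):
        """Mean absolute percentage error for k-week-ahead count forecasts."""
        observed = np.asarray(observed, dtype=float)
        predicted = np.asarray(predicted, dtype=float)
        return 100 * np.mean(np.abs((observed - predicted) / observed))

    # e.g., one- and two-week-ahead dengue predictions at one evaluation point
    print(mape([120, 135], [112, 150]))
    ```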

  14. Understanding the context of healthcare utilization: assessing environmental and provider-related variables in the behavioral model of utilization.

    PubMed Central

    Phillips, K A; Morrison, K R; Andersen, R; Aday, L A

    1998-01-01

    BACKGROUND: The behavioral model of utilization, developed by Andersen, Aday, and others, is one of the most frequently used frameworks for analyzing the factors that are associated with patient utilization of healthcare services. However, the use of the model for examining the context within which utilization occurs, that is, the role of the environment and provider-related factors, has been largely neglected. OBJECTIVE: To conduct a systematic review and analysis to determine whether studies of medical care utilization that have used the behavioral model during the last 20 years have included environmental and provider-related variables, and to examine the methods used to analyze these variables. We discuss barriers to the use of these contextual variables and potential solutions. DATA SOURCES: The Social Science Citation Index and Science Citation Index. We included all articles from 1975-1995 that cited any of three key articles on the behavioral model, were empirical analyses of formal medical care utilization, and specifically stated their use of the behavioral model (n = 139). STUDY DESIGN: Systematic literature review. DATA ANALYSIS: We used a structured review process to code articles on whether they included contextual variables: (1) environmental variables (characteristics of the healthcare delivery system, external environment, and community-level enabling factors); and (2) provider-related variables (patient factors that may be influenced by providers and provider characteristics that interact with patient characteristics to influence utilization). We also examined the methods used in studies that included contextual variables. PRINCIPAL FINDINGS: Forty-five percent of the studies included environmental variables and 51 percent included provider-related variables. Few studies examined specific measures of the healthcare system or provider characteristics or used methods other than simple regression analysis with hierarchical entry of variables. Only 14 percent of studies analyzed the context of healthcare by including both environmental and provider-related variables as well as using relevant methods. CONCLUSIONS: By assessing whether and how contextual variables are used, we are able to highlight the contributions made by studies using these approaches, to identify variables and methods that have been relatively underused, and to suggest solutions to barriers in using contextual variables. PMID:9685123

  15. Refinement of regression models to estimate real-time concentrations of contaminants in the Menomonee River drainage basin, southeast Wisconsin, 2008-11

    USGS Publications Warehouse

    Baldwin, Austin K.; Robertson, Dale M.; Saad, David A.; Magruder, Christopher

    2013-01-01

    In 2008, the U.S. Geological Survey and the Milwaukee Metropolitan Sewerage District initiated a study to develop regression models to estimate real-time concentrations and loads of chloride, suspended solids, phosphorus, and bacteria in streams near Milwaukee, Wisconsin. To collect monitoring data for calibration of models, water-quality sensors and automated samplers were installed at six sites in the Menomonee River drainage basin. The sensors continuously measured four potential explanatory variables: water temperature, specific conductance, dissolved oxygen, and turbidity. Discrete water-quality samples were collected and analyzed for five response variables: chloride, total suspended solids, total phosphorus, Escherichia coli bacteria, and fecal coliform bacteria. Using the first year of data, regression models were developed to continuously estimate the response variables on the basis of the continuously measured explanatory variables. Those models were published in a previous report. In this report, those models are refined using 2 years of additional data, and the relative improvement in model predictability is discussed. In addition, a set of regression models is presented for a new site in the Menomonee River Basin, Underwood Creek at Wauwatosa. The refined models use the same explanatory variables as the original models. The chloride models all used specific conductance as the explanatory variable, except for the model for the Little Menomonee River near Freistadt, which used both specific conductance and turbidity. Total suspended solids and total phosphorus models used turbidity as the only explanatory variable, and bacteria models used water temperature and turbidity as explanatory variables. An analysis of covariance (ANCOVA), used to compare the coefficients in the original models to those in the refined models calibrated using all of the data, showed that only 3 of the 25 original models changed significantly. Root-mean-squared errors (RMSEs) calculated for both the original and refined models using the entire dataset showed a median improvement in RMSE of 2.1 percent, with a range of 0.0–13.9 percent. Therefore most of the original models did almost as well at estimating concentrations during the validation period (October 2009–September 2011) as the refined models, which were calibrated using those data. Application of these refined models can produce continuously estimated concentrations of chloride, total suspended solids, total phosphorus, E. coli bacteria, and fecal coliform bacteria that may assist managers in quantifying the effects of land-use changes and improvement projects, establishing total maximum daily loads, and making better informed decisions in the future.
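
    A hedged sketch of one surrogate regression of the kind described, assuming a log-log linear relation between turbidity and total suspended solids (a common convention for such sensor-to-concentration models, though not necessarily the exact published form), with RMSE computed for comparison:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    turbidity = rng.lognormal(2.0, 0.8, 200)      # toy sensor readings (FNU)
    tss = np.exp(0.5 + 0.9*np.log(turbidity)      # toy lab TSS values (mg/L)
                 + rng.normal(0, 0.3, 200))

    # log-log regression: log(TSS) = b0 + b1 * log(turbidity)
    fit = sm.OLS(np.log(tss), sm.add_constant(np.log(turbidity))).fit()
    pred = np.exp(fit.predict(sm.add_constant(np.log(turbidity))))
    # (a retransformation bias correction is usually applied; omitted here)
    rmse = np.sqrt(np.mean((tss - pred) ** 2))
    print(fit.params, "RMSE =", round(rmse, 1), "mg/L")
    ```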

  16. Can Geostatistical Models Represent Nature's Variability? An Analysis Using Flume Experiments

    NASA Astrophysics Data System (ADS)

    Scheidt, C.; Fernandes, A. M.; Paola, C.; Caers, J.

    2015-12-01

    The limited understanding of the geological and physical processes governing sediment deposition renders subsurface modeling subject to large uncertainty. Geostatistics is often used to model uncertainty because of its capability to stochastically generate spatially varying realizations of the subsurface. These methods can generate a range of realizations of a given pattern, but how representative are these of the full natural variability? And how can we identify the minimum set of images that represent this natural variability? Here we use this minimum set to define the geostatistical prior model: a set of training images that represent the range of patterns generated by autogenic variability in the sedimentary environment under study. The proper definition of the prior model is essential in capturing the variability of the depositional patterns. This work starts with a set of overhead images from an experimental basin that showed ongoing autogenic variability. We use the images to analyze the essential characteristics of this suite of patterns. In particular, our goal is to define a prior model (a minimal set of selected training images) such that geostatistical algorithms, when applied to this set, can reproduce the full measured variability. A necessary prerequisite is to define a measure of variability. In this study, we measure variability using a dissimilarity distance between the images. The distance indicates whether two snapshots contain similar depositional patterns. To reproduce the variability in the images, we apply an MPS algorithm to the set of selected snapshots of the sedimentary basin that serve as training images. The training images are chosen from among the initial set by using the distance measure to ensure that only dissimilar images are chosen. Preliminary investigations show that MPS can reproduce the natural variability of the experimental depositional system fairly accurately. Furthermore, the selected training images provide process information. They fall into three basic patterns: a channelized end member, a sheet flow end member, and one intermediate case. These represent the continuum between autogenic bypass or erosion, and net deposition.
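
    A minimal sketch of the training-image selection idea: given a dissimilarity distance between snapshots, greedily pick images that are far from those already chosen. The pixel-difference distance used here is a stand-in for the paper's dissimilarity measure.

    ```python
    import numpy as np

    def select_training_images(images, k):
        """Greedy farthest-point selection: pick k mutually dissimilar
        snapshots. `images` is an (n, h, w) array; the distance here is
        Euclidean pixel difference, a stand-in for the paper's measure."""
        flat = images.reshape(len(images), -1).astype(float)
        dist = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=2)
        chosen = [0]                                 # arbitrary starting snapshot
        while len(chosen) < k:
            d_to_set = dist[:, chosen].min(axis=1)   # distance to nearest chosen
            chosen.append(int(np.argmax(d_to_set)))  # add most dissimilar image
        return chosen

    rng = np.random.default_rng(6)
    snapshots = rng.random((50, 32, 32))             # toy overhead images
    print(select_training_images(snapshots, 3))
    ```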

  17. Assessing geotechnical centrifuge modelling in addressing variably saturated flow in soil and fractured rock.

    PubMed

    Jones, Brendon R; Brouwers, Luke B; Van Tonder, Warren D; Dippenaar, Matthys A

    2017-05-01

    The vadose zone typically comprises soil underlain by fractured rock. Often, surface water and groundwater parameters are readily available, but variably saturated flow through soil and rock is oversimplified or estimated as input for hydrological models. In this paper, a series of geotechnical centrifuge experiments is conducted to address knowledge gaps in: (i) variably saturated flow and dispersion in soil and (ii) variably saturated flow in discrete vertical and horizontal fractures. Findings from the research show that the hydraulic gradient, and not the hydraulic conductivity, is scaled for seepage flow in the geotechnical centrifuge. Furthermore, geotechnical centrifuge modelling has proven to be a viable experimental tool for modelling hydrodynamic dispersion and for replicating the flow mechanisms for unsaturated fracture flow previously observed in the literature. Despite the inherent challenges of modelling variable saturation in the vadose zone, the geotechnical centrifuge offers a powerful experimental tool with which to physically model and observe variably saturated flow. This can give valuable insight into mechanisms associated with solid-fluid interaction problems under these conditions. Findings from future research can be used to validate current numerical modelling techniques and address the subsequent influence on aquifer recharge and vulnerability, contaminant transport, waste disposal, dam construction, slope stability and seepage into subsurface excavations.

  18. Robust check loss-based variable selection of high-dimensional single-index varying-coefficient model

    NASA Astrophysics Data System (ADS)

    Song, Yunquan; Lin, Lu; Jian, Ling

    2016-07-01

    The single-index varying-coefficient model is an important mathematical modeling method for nonlinear phenomena in science and engineering. In this paper, we develop a variable selection method for high-dimensional single-index varying-coefficient models using a shrinkage idea. The proposed procedure can simultaneously select significant nonparametric and parametric components. Under defined regularity conditions, and with appropriate selection of the tuning parameters, the consistency of the variable selection procedure and the oracle property of the estimators are established. Moreover, owing to the robustness of the check loss function to outliers in finite samples, our variable selection method is more robust than those based on the least squares criterion. Finally, the method is illustrated with numerical simulations.
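
    The check loss underlying the method is the pinball loss rho_tau(u) = u(tau - 1{u<0}); minimizing it with an L1 penalty yields robust, sparse estimates. The sketch below uses scikit-learn's linear QuantileRegressor as a simplified stand-in for the paper's single-index varying-coefficient procedure.

    ```python
    import numpy as np
    from sklearn.linear_model import QuantileRegressor

    rng = np.random.default_rng(7)
    n, p = 300, 10
    X = rng.normal(size=(n, p))
    # Only the first two predictors matter; heavy-tailed noise adds outliers
    y = 2*X[:, 0] - 1.5*X[:, 1] + rng.standard_t(df=2, size=n)

    # Median (tau = 0.5) regression with L1 shrinkage: robust variable selection
    model = QuantileRegressor(quantile=0.5, alpha=0.05, solver="highs").fit(X, y)
    print(np.round(model.coef_, 2))    # irrelevant coefficients shrink toward 0
    ```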

  19. Crop status evaluations and yield predictions

    NASA Technical Reports Server (NTRS)

    Haun, J. R.

    1975-01-01

    A model was developed for predicting the day on which 50 percent of the wheat crop is planted in North Dakota. This model incorporates location as an independent variable. The Julian date when 50 percent of the crop was planted in each of the nine divisions of North Dakota over seven years was regressed on the 49 variables through the step-down multiple regression procedure. This procedure begins with all of the independent variables and sequentially removes variables that fall below a predetermined level of significance after each step. The prediction equation was tested on daily data. The accuracy of the model is considered satisfactory for finding the historic dates on which to initiate the yield prediction model. Growth prediction models were also developed for spring wheat.
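
    A minimal sketch of the step-down (backward elimination) procedure on synthetic data: fit with all predictors, then repeatedly drop the least significant one until every remaining p-value clears the significance level.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def step_down(y, X, alpha=0.05):
        """Step-down (backward) elimination: start with all predictors and
        repeatedly drop the least significant until all p-values < alpha."""
        X = sm.add_constant(X)
        while True:
            fit = sm.OLS(y, X).fit()
            pvals = fit.pvalues.drop("const")     # never drop the intercept
            if pvals.empty or pvals.max() < alpha:
                return fit
            X = X.drop(columns=[pvals.idxmax()])  # remove the worst predictor

    rng = np.random.default_rng(8)
    X = pd.DataFrame(rng.normal(size=(63, 6)), columns=[f"v{i}" for i in range(6)])
    y = 3*X.v0 - 2*X.v3 + rng.normal(0, 0.5, 63)  # only v0 and v3 matter
    print(step_down(y, X).params)
    ```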

  20. Dynamic rupture modeling with laboratory-derived constitutive relations

    USGS Publications Warehouse

    Okubo, P.G.

    1989-01-01

    A laboratory-derived state variable friction constitutive relation is used in the numerical simulation of the dynamic growth of an in-plane (mode II) shear crack. According to this formulation, originally presented by J.H. Dieterich, frictional resistance varies with the logarithm of the slip rate and with the logarithm of the frictional state variable as identified by A.L. Ruina. Under conditions of steady sliding, the state variable is inversely proportional to the slip rate. Following suddenly introduced increases in slip rate, the rate and state dependencies combine to produce behavior which resembles slip weakening. When rupture nucleation is artificially forced at fixed rupture velocity, rupture models calculated with the state variable friction law in a uniformly distributed initial stress field closely resemble earlier rupture models calculated with a slip-weakening fault constitutive relation. Model calculations suggest that dynamic rupture following a state variable friction relation is similar to that following a simpler fault slip-weakening law. However, when modeling the full cycle of fault motions, the rate-dependent frictional responses included in the state variable formulation are important at the low slip rates associated with rupture nucleation.
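
    A hedged sketch of the rate- and state-dependent friction law described, mu = mu0 + a ln(V/V0) + b ln(V0*theta/Dc) with the Dieterich "aging" evolution law d(theta)/dt = 1 - V*theta/Dc, integrated through a sudden ten-fold velocity step; all parameter values are illustrative, not the paper's.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    mu0, a, b = 0.6, 0.010, 0.015   # illustrative rate-and-state parameters
    V0, Dc = 1e-6, 1e-5             # reference slip rate (m/s), critical slip (m)

    def V(t):                       # imposed slip-rate history: sudden 10x step
        return V0 if t < 50 else 10 * V0

    def aging_law(t, theta):        # Dieterich ("aging") state evolution
        return 1 - V(t) * theta / Dc

    sol = solve_ivp(aging_law, [0, 200], [Dc / V0],  # start at steady state
                    dense_output=True, max_step=0.5)
    t = np.linspace(0, 200, 2000)
    theta = sol.sol(t)[0]
    mu = mu0 + a*np.log([V(ti)/V0 for ti in t]) + b*np.log(V0*theta/Dc)
    # After the velocity step, mu jumps up by ~a*ln(10) and then decays by
    # ~b*ln(10) over a slip distance ~Dc: the slip-weakening-like transient.
    print(mu[:3], mu[-3:])
    ```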

  1. Hierarchical Bayesian models to assess between- and within-batch variability of pathogen contamination in food.

    PubMed

    Commeau, Natalie; Cornu, Marie; Albert, Isabelle; Denis, Jean-Baptiste; Parent, Eric

    2012-03-01

    Assessing within-batch and between-batch variability is of major interest for risk assessors and risk managers in the context of microbiological contamination of food. For example, the ratio between the within-batch variability and the between-batch variability has a large impact on the results of a sampling plan. Here, we designed hierarchical Bayesian models to represent such variability. Compatible priors were built mathematically to obtain sound model comparisons. A numeric criterion is proposed to assess the contamination structure, comparing the ability of the models to replicate grouped data at the batch level using a posterior predictive loss approach. Models were applied to two case studies: contamination by Listeria monocytogenes of pork breast used to produce diced bacon and contamination by the same microorganism on cold smoked salmon at the end of the process. In the first case study, a contamination structure clearly exists and is located at the batch level, that is, between-batch variability is relatively strong, whereas in the second case a structure also exists but is less marked. © 2012 Society for Risk Analysis.
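
    A hedged PyMC sketch (v5-style API assumed) of a hierarchical normal model for log-scale contamination that splits variance into between-batch and within-batch components, a simplified stand-in for the paper's models:

    ```python
    import numpy as np
    import pymc as pm

    rng = np.random.default_rng(9)
    n_batches, per_batch = 20, 5
    batch_idx = np.repeat(np.arange(n_batches), per_batch)
    true_means = rng.normal(2.0, 0.8, n_batches)     # toy log10 CFU/g levels
    y = rng.normal(true_means[batch_idx], 0.3)       # toy within-batch samples

    with pm.Model():
        mu = pm.Normal("mu", 0, 5)                   # overall mean level
        sigma_b = pm.HalfNormal("sigma_between", 2)  # between-batch s.d.
        sigma_w = pm.HalfNormal("sigma_within", 2)   # within-batch s.d.
        batch_mean = pm.Normal("batch_mean", mu, sigma_b, shape=n_batches)
        pm.Normal("obs", batch_mean[batch_idx], sigma_w, observed=y)
        idata = pm.sample(1000, tune=1000, chains=2)

    # The sigma_between / sigma_within ratio drives sampling-plan performance
    print(idata.posterior[["sigma_between", "sigma_within"]].mean())
    ```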

  2. Modelling Pseudocalanus elongatus stage-structured population dynamics embedded in a water column ecosystem model for the northern North Sea

    NASA Astrophysics Data System (ADS)

    Moll, Andreas; Stegert, Christoph

    2007-01-01

    This paper outlines an approach to couple a structured zooplankton population model, with state variables for eggs, nauplii, two copepodite stages and adults, adapted to Pseudocalanus elongatus, to the complex marine ecosystem model ECOHAM2, whose 13 state variables resolve the carbon and nitrogen cycles. Different temperature and food scenarios derived from laboratory culture studies were examined to improve the process parameterisation of copepod stage-dependent development. To study annual cycles under realistic weather and hydrographic conditions, the coupled ecosystem-zooplankton model is applied to a water column in the northern North Sea. The main ecosystem state variables were validated against observed monthly mean values. Vertical profiles of selected state variables were then compared to the physical forcing to study the differences between representing zooplankton as a single biomass state variable and partitioning it into five population state variables. Simulated generation times are more affected by temperature than by food conditions, except during the spring phytoplankton bloom. Up to six generations within the annual cycle can be discerned in the simulation.
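
    A toy stage-structured update of the kind described, with stages egg, nauplius, two copepodites, and adult, and a development rate that increases with temperature and is limited by food; all functional forms and parameter values are invented for illustration.

    ```python
    import numpy as np

    def development_rate(T, a=0.1, b=0.05):
        """Hypothetical increase of stage development rate with temperature."""
        return a + b * T / 10

    def step(pop, T, food=1.0, mort=0.05):
        """One daily update of the five stage state variables E, N, C1, C2, A."""
        E, N, C1, C2, A = pop
        d = development_rate(T) * min(food, 1.0)   # low food slows development
        s = 1 - d - mort                           # fraction remaining in stage
        eggs = 0.5 * A                             # egg production per adult
        return np.array([E*s + eggs, N*s + E*d, C1*s + N*d, C2*s + C1*d,
                         A*(1 - mort) + C2*d])

    pop = np.array([100.0, 50.0, 30.0, 20.0, 10.0])
    for day in range(30):
        pop = step(pop, T=12.0)
    print(np.round(pop, 1))
    ```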

  3. A non-linear data mining parameter selection algorithm for continuous variables

    PubMed Central

    Razavi, Marianne; Brady, Sean

    2017-01-01

    In this article, we propose a new data mining algorithm by which one can both capture the non-linearity in data and find the best subset model. To produce an enhanced subset of the original variables, a preferred selection method should have the potential of adding a supplementary level of regression analysis that would capture complex relationships in the data via mathematical transformation of the predictors and exploration of synergistic effects of combined variables. The method that we present here has the potential to produce an optimal subset of variables, rendering the overall process of model selection more efficient. This algorithm introduces interpretable parameters by transforming the original inputs while providing a faithful fit to the data. The core objective of this paper is to introduce a new estimation technique for the classical least squares regression framework. This new automatic variable transformation and model selection method could offer an optimal and stable model that minimizes the mean square error and variability, while combining all possible subset selection methodology with the inclusion of variable transformations and interactions. Moreover, this method controls multicollinearity, leading to an optimal set of explanatory variables. PMID:29131829

  4. Effect of occupational mobility and health status on life satisfaction of Chinese residents of different occupations: logistic diagonal mobility models analysis of cross-sectional data on eight Chinese provinces.

    PubMed

    Liang, Ying; Lu, Peiyi

    2014-02-08

    Life satisfaction research in China is in development, requiring new perspectives for enrichment. In China, occupational mobility is accompanied by changes in economic liberalization and the emergence of occupational stratification. On the whole, however, occupational mobility has rarely been used as an independent variable. Health status is always used as the observed or dependent variable in studies of the phenomenon and its influencing factors. A research gap still exists in this field. The data used in this study were obtained from the China Health and Nutrition Survey (CHNS). The study included nine provinces in China. The survey was conducted from 1989 to 2009. Every survey involved approximately 4400 families, or 19,000 individual samples, and parts of community data. First, we built a 5 × 5 social mobility table and calculated the life satisfaction of Chinese residents of different occupations in each cell of the table. Second, gender, age, marital status, education level, annual income, hukou, health status, and occupational mobility were used as independent variables. Lastly, we used logistic diagonal mobility models to analyze the relationship between life satisfaction and the variables. Model 1 was the basic model, which consisted of the standard model and controlled variables and excluded drift variables. Model 2 was the total model, which consisted of all variables of interest in this study. Model 3 was the screening model, which excluded the insignificant drift effect index in Model 2. From the perspective of the analysis of the controlled variables, health conditions and the direction and distance of occupational mobility significantly affected the life satisfaction of Chinese residents of different occupations. (1) From the perspective of health status, respondents who had not been sick or injured had better life satisfaction than those who had been sick or injured. (2) From the perspective of occupational mobility direction, the coefficients of occupational mobility in the models are less than 0, which means that upward mobility negatively affects life satisfaction. (3) From the perspective of distance, when analyzing mobility distance in Models 2 and 3, a greater distance indicates better life satisfaction.

  5. Effect of occupational mobility and health status on life satisfaction of Chinese residents of different occupations: logistic diagonal mobility models analysis of cross-sectional data on eight Chinese provinces

    PubMed Central

    2014-01-01

    Background Life satisfaction research in China is in development, requiring new perspectives for enrichment. In China, occupational mobility is accompanied by changes in economic liberalization and the emergence of occupational stratification. On the whole, however, occupational mobility has rarely been used as an independent variable. Health status is always used as the observed or dependent variable in studies of the phenomenon and its influencing factors. A research gap still exists in this field. Methods The data used in this study were obtained from the China Health and Nutrition Survey (CHNS). The study included nine provinces in China. The survey was conducted from 1989 to 2009. Every survey involved approximately 4400 families, or 19,000 individual samples, and parts of community data. Results First, we built a 5 × 5 social mobility table and calculated the life satisfaction of Chinese residents of different occupations in each cell of the table. Second, gender, age, marital status, education level, annual income, hukou, health status, and occupational mobility were used as independent variables. Lastly, we used logistic diagonal mobility models to analyze the relationship between life satisfaction and the variables. Model 1 was the basic model, which consisted of the standard model and controlled variables and excluded drift variables. Model 2 was the total model, which consisted of all variables of interest in this study. Model 3 was the screening model, which excluded the insignificant drift effect index in Model 2. Conclusion From the perspective of the analysis of the controlled variables, health conditions and the direction and distance of occupational mobility significantly affected the life satisfaction of Chinese residents of different occupations. (1) From the perspective of health status, respondents who had not been sick or injured had better life satisfaction than those who had been sick or injured. (2) From the perspective of occupational mobility direction, the coefficients of occupational mobility in the models are less than 0, which means that upward mobility negatively affects life satisfaction. (3) From the perspective of distance, when analyzing mobility distance in Models 2 and 3, a greater distance indicates better life satisfaction. PMID:24506976

  6. A novel model incorporating two variability sources for describing motor evoked potentials

    PubMed Central

    Goetz, Stefan M.; Luber, Bruce; Lisanby, Sarah H.; Peterchev, Angel V.

    2014-01-01

    Objective Motor evoked potentials (MEPs) play a pivotal role in transcranial magnetic stimulation (TMS), e.g., for determining the motor threshold and probing cortical excitability. Sampled across the range of stimulation strengths, MEPs outline an input–output (IO) curve, which is often used to characterize the corticospinal tract. More detailed understanding of the signal generation and variability of MEPs would provide insight into the underlying physiology and aid correct statistical treatment of MEP data. Methods A novel regression model is tested using measured IO data of twelve subjects. The model splits MEP variability into two independent contributions, acting on both sides of a strong sigmoidal nonlinearity that represents neural recruitment. Traditional sigmoidal regression with a single variability source after the nonlinearity is used for comparison. Results The distribution of MEP amplitudes varied across different stimulation strengths, violating statistical assumptions in traditional regression models. In contrast to the conventional regression model, the dual variability source model better described the IO characteristics including phenomena such as changing distribution spread and skewness along the IO curve. Conclusions MEP variability is best described by two sources that most likely separate variability in the initial excitation process from effects occurring later on. The new model enables more accurate and sensitive estimation of the IO curve characteristics, enhancing its power as a detection tool, and may apply to other brain stimulation modalities. Furthermore, it extracts new information from the IO data concerning the neural variability—information that has previously been treated as noise. PMID:24794287
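
    A hedged sketch of the dual-source idea: Gaussian variability added to the stimulation strength before a sigmoidal recruitment curve, and multiplicative (log-normal) variability after it. The sigmoid and noise parameters are illustrative, not fitted values from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    def mep_amplitude(x, sd_in=0.05, sd_out=0.4):
        """Toy dual-source IO model: noise added to stimulation strength
        *before* a sigmoidal recruitment curve, multiplicative (log-normal)
        noise *after* it. All parameters are illustrative."""
        x_eff = x + rng.normal(0, sd_in, np.shape(x))           # input-side
        recruited = 5.0 / (1 + np.exp(-(x_eff - 0.6) / 0.05))   # sigmoid (mV)
        return recruited * rng.lognormal(0, sd_out, np.shape(x))  # output-side

    x = np.repeat(np.linspace(0.3, 1.0, 15), 40)    # stimulation strengths
    y = mep_amplitude(x)
    # Near threshold the input-side noise dominates (skewed, spread-out MEPs);
    # on the plateau the output-side log-normal noise sets the spread.
    print(y.reshape(15, 40).std(axis=1).round(2))
    ```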

  7. The Interface Between Theory and Data in Structural Equation Models

    USGS Publications Warehouse

    Grace, James B.; Bollen, Kenneth A.

    2006-01-01

    Structural equation modeling (SEM) holds the promise of providing natural scientists the capacity to evaluate complex multivariate hypotheses about ecological systems. Building on its predecessors, path analysis and factor analysis, SEM allows for the incorporation of both observed and unobserved (latent) variables into theoretically based probabilistic models. In this paper we discuss the interface between theory and data in SEM and the use of an additional variable type, the composite, for representing general concepts. In simple terms, composite variables specify the influences of collections of other variables and can be helpful in modeling general relationships of the sort commonly of interest to ecologists. While long recognized as a potentially important element of SEM, composite variables have received very limited use, in part because of a lack of theoretical consideration, but also because of difficulties that arise in parameter estimation when using conventional solution procedures. In this paper we present a framework for discussing composites and demonstrate how the use of partially reduced form models can help to overcome some of the parameter estimation and evaluation problems associated with models containing composites. Diagnostic procedures for evaluating the most appropriate and effective use of composites are illustrated with an example from the ecological literature. It is argued that an ability to incorporate composite variables into structural equation models may be particularly valuable in the study of natural systems, where concepts are frequently multifaceted and the influences of suites of variables are often of interest.

  8. Enhanced future variability during India's rainy season

    NASA Astrophysics Data System (ADS)

    Menon, Arathy; Levermann, Anders; Schewe, Jacob

    2013-04-01

    The Indian summer monsoon shapes the livelihood of a large share of the world's population. About 80% of annual precipitation over India occurs during the monsoon season from June through September. Next to its seasonal mean rainfall, the day-to-day variability is crucial for the risk of flooding, national water supply and agricultural productivity. Here we show that the latest ensemble of climate model simulations, prepared for the IPCC AR5, consistently projects significant increases in day-to-day rainfall variability under unmitigated climate change. While all models show an increase in day-to-day variability, some models are more realistic in capturing the observed seasonal mean rainfall over India than others. While no model's monsoon rainfall exceeds the observed value by more than two standard deviations, half of the models simulate a significantly weaker monsoon than observed. The relative increase in day-to-day variability by the year 2100 ranges from 15% to 48% under the strongest scenario (RCP-8.5) in the ten models that capture seasonal mean rainfall closest to observations. The variability increase per degree of global warming is independent of the scenario in most models, and is 8% ± 4% per K on average. This consistent projection across 20 comprehensive climate models provides confidence in the results and suggests the necessity of profound adaptation measures in the case of unmitigated climate change.

  9. How does spatial variability of climate affect catchment streamflow predictions?

    EPA Science Inventory

    Spatial variability of climate can negatively affect catchment streamflow predictions if it is not explicitly accounted for in hydrologic models. In this paper, we examine the changes in streamflow predictability when a hydrologic model is run with spatially variable (distribute...

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poyer, D.A.

    In this report, tests of statistical significance of five sets of variables with household energy consumption (at the point of end-use) are described. Five models, in sequence, were empirically estimated and tested for statistical significance by using the Residential Energy Consumption Survey of the US Department of Energy, Energy Information Administration. Each model incorporated additional information, embodied in a set of variables not previously specified in the energy demand system. The variable sets were generally labeled as economic variables, weather variables, household-structure variables, end-use variables, and housing-type variables. The tests of statistical significance showed each of the variable sets to be highly significant in explaining the overall variance in energy consumption. The findings imply that the contemporaneous interaction of different types of variables, and not just one exclusive set of variables, determines the level of household energy consumption.
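
    A sketch of the sequential significance testing described, assuming nested OLS models in which each model adds one variable set; statsmodels' anova_lm then compares successive models with F-tests. Variable names and data are invented.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    rng = np.random.default_rng(11)
    n = 500
    df = pd.DataFrame({
        "income":  rng.normal(50, 15, n),       # economic variable
        "hdd":     rng.normal(4500, 900, n),    # weather: heating degree days
        "members": rng.integers(1, 7, n),       # household structure
    })
    df["energy"] = (20 + 0.3*df.income + 0.01*df.hdd + 8*df.members
                    + rng.normal(0, 15, n))

    m1 = smf.ols("energy ~ income", df).fit()                  # economic set
    m2 = smf.ols("energy ~ income + hdd", df).fit()            # + weather set
    m3 = smf.ols("energy ~ income + hdd + members", df).fit()  # + structure set
    print(anova_lm(m1, m2, m3))      # F-test for each added variable set
    ```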

  11. Replicates in high dimensions, with applications to latent variable graphical models.

    PubMed

    Tan, Kean Ming; Ning, Yang; Witten, Daniela M; Liu, Han

    2016-12-01

    In classical statistics, much thought has been put into experimental design and data collection. In the high-dimensional setting, however, experimental design has been less of a focus. In this paper, we stress the importance of collecting multiple replicates for each subject in this setting. We consider learning the structure of a graphical model with latent variables, under the assumption that these variables take a constant value across replicates within each subject. By collecting multiple replicates for each subject, we are able to estimate the conditional dependence relationships among the observed variables given the latent variables. To test the null hypothesis of conditional independence between two observed variables, we propose a pairwise decorrelated score test. Theoretical guarantees are established for parameter estimation and for this test. We show that our proposal is able to estimate latent variable graphical models more accurately than some existing proposals, and apply the proposed method to a brain imaging dataset.
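
    A sketch of the key idea under the stated assumption that latent variables are constant across replicates within a subject: within-subject centering cancels the latent contribution, after which a graphical lasso estimates conditional dependencies among the observed variables given the latents. The proposed pairwise decorrelated score test itself is not reproduced here.

    ```python
    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(12)
    n_subj, n_rep, p = 100, 3, 6
    latent = rng.normal(size=(n_subj, 1, 1))    # constant across replicates
    X = latent + rng.multivariate_normal(np.zeros(p), np.eye(p), (n_subj, n_rep))

    # Within-subject centering cancels the shared latent effect
    Xc = (X - X.mean(axis=1, keepdims=True)).reshape(-1, p)

    gl = GraphicalLasso(alpha=0.05).fit(Xc)
    print(np.round(gl.precision_, 2))  # zeros ~ conditional independence
    ```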

  12. Habitat models to predict wetland bird occupancy influenced by scale, anthropogenic disturbance, and imperfect detection

    USGS Publications Warehouse

    Glisson, Wesley J.; Conway, Courtney J.; Nadeau, Christopher P.; Borgmann, Kathi L.

    2017-01-01

    Understanding species–habitat relationships for endangered species is critical for their conservation. However, many studies have limited value for conservation because they fail to account for habitat associations at multiple spatial scales, anthropogenic variables, and imperfect detection. We addressed these three limitations by developing models for an endangered wetland bird, Yuma Ridgway's rail (Rallus obsoletus yumanensis), that examined how the spatial scale of environmental variables, inclusion of anthropogenic disturbance variables, and accounting for imperfect detection in validation data influenced model performance. These models identified associations between environmental variables and occupancy. We used bird survey and spatial environmental data at 2473 locations throughout the species' U.S. range to create and validate occupancy models and produce predictive maps of occupancy. We compared habitat-based models at three spatial scales (100, 224, and 500 m radii buffers) with and without anthropogenic disturbance variables using validation data adjusted for imperfect detection and an unadjusted validation dataset that ignored imperfect detection. The inclusion of anthropogenic disturbance variables improved the performance of habitat models at all three spatial scales, and the 224-m-scale model performed best. All models exhibited greater predictive ability when imperfect detection was incorporated into validation data. Yuma Ridgway's rail occupancy was negatively associated with ephemeral and slow-moving riverine features and high-intensity anthropogenic development, and positively associated with emergent vegetation, agriculture, and low-intensity development. Our modeling approach accounts for common limitations in modeling species–habitat relationships and creating predictive maps of occupancy probability and, therefore, provides a useful framework for other species.
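
    A minimal sketch of accounting for imperfect detection: the single-season occupancy likelihood treats never-detected sites as a mixture of unoccupied and occupied-but-missed, and maximum likelihood recovers occupancy psi and detection probability p. Covariates and anthropogenic variables are omitted for brevity.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit

    rng = np.random.default_rng(13)
    n_sites, K = 300, 4                      # sites, repeat surveys per site
    z = rng.random(n_sites) < 0.6            # true occupancy (psi = 0.6)
    dets = rng.random((n_sites, K)) < (0.4 * z[:, None])   # detections (p = 0.4)
    y = dets.sum(axis=1)                     # number of detections per site

    def neg_log_lik(params):
        psi, p = expit(params)               # parameters on the logit scale
        # Sites with >=1 detection (binomial coefficient omitted: constant
        # in the parameters); never-detected sites mix two possibilities
        lik_det = psi * p**y * (1-p)**(K-y)
        lik_none = psi * (1-p)**K + (1-psi)
        return -np.sum(np.log(np.where(y > 0, lik_det, lik_none)))

    fit = minimize(neg_log_lik, [0.0, 0.0], method="Nelder-Mead")
    print("psi, p =", np.round(expit(fit.x), 2))
    ```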

  13. Selection of climate change scenario data for impact modelling.

    PubMed

    Sloth Madsen, M; Maule, C Fox; MacKellar, N; Olesen, J E; Christensen, J Hesselbjerg

    2012-01-01

    Impact models investigating climate change effects on food safety often need detailed climate data. The aim of this study was to select climate change projection data for selected crop phenology and mycotoxin impact models. Using the ENSEMBLES database of climate model output, this study illustrates how the projected climate change signal of important variables such as temperature, precipitation and relative humidity depends on the choice of the climate model. Using climate change projections from at least two different climate models is recommended to account for model uncertainty. To make the climate projections suitable for impact analysis at the local scale, a weather generator approach was adopted. As the weather generator did not treat all the necessary variables, an ad hoc statistical method was developed to synthesise realistic values of missing variables. The method is presented in this paper, applied to relative humidity, but it could be adapted to other variables if needed.
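
    A hedged stand-in for the synthesis step (the paper's actual ad hoc method may differ): fit a simple regression of relative humidity on temperature and rain occurrence in observations, then apply it, with resampled residuals, to weather-generator output. All data and coefficients below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic 'observations' of temperature, rain occurrence and relative humidity.
T_obs = rng.normal(15, 8, 1000)
wet_obs = rng.random(1000) < 0.3
rh_obs = np.clip(70 - 0.8 * T_obs + 15 * wet_obs + rng.normal(0, 5, 1000), 5, 100)

# Fit RH ~ temperature + wet/dry, keep residuals for stochastic resampling.
A = np.column_stack([np.ones(1000), T_obs, wet_obs])
beta, *_ = np.linalg.lstsq(A, rh_obs, rcond=None)
resid = rh_obs - A @ beta

def synth_rh(T_gen, wet_gen):
    """Attach synthetic RH values to weather-generator output."""
    draw = rng.choice(resid, size=len(T_gen))
    return np.clip(beta[0] + beta[1] * T_gen + beta[2] * wet_gen + draw, 0, 100)
```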

  14. Preliminary Multi-Variable Cost Model for Space Telescopes

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Hendrichs, Todd

    2010-01-01

    Parametric cost models are routinely used to plan missions, compare concepts and justify technology investments. This paper reviews the methodology used to develop space telescope cost models; summarizes recently published single variable models; and presents preliminary results for two and three variable cost models. Some of the findings are that increasing mass reduces cost; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and technology development as a function of time reduces cost at the rate of 50% per 17 years.
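
    A minimal sketch of how the stated 50%-per-17-years technology trend can enter a parametric power-law cost model; the aperture exponent and reference values here are hypothetical, not the paper's fitted coefficients.

```python
import numpy as np

def telescope_cost(d_m, year, c0=100.0, a=1.7, ref_year=2010):
    """Hypothetical power-law cost model (cost units arbitrary): an aperture
    diameter term D**a times the abstract's technology trend, which halves
    cost every 17 years."""
    return c0 * d_m**a * 0.5 ** ((year - ref_year) / 17.0)

# Same aperture, built 34 years apart: two halvings, so a quarter of the cost.
print(telescope_cost(2.4, 1990) / telescope_cost(2.4, 2024))  # -> 4.0
```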

  15. A Bayesian methodological framework for accommodating interannual variability of nutrient loading with the SPARROW model

    NASA Astrophysics Data System (ADS)

    Wellen, Christopher; Arhonditsis, George B.; Labencki, Tanya; Boyd, Duncan

    2012-10-01

    Regression-type, hybrid empirical/process-based models (e.g., SPARROW, PolFlow) have assumed a prominent role in efforts to estimate the sources and transport of nutrient pollution at river basin scales. However, almost no attempts have been made to explicitly accommodate interannual nutrient loading variability in their structure, despite empirical and theoretical evidence indicating that the associated source/sink processes are quite variable at annual timescales. In this study, we present two methodological approaches to accommodate interannual variability with the Spatially Referenced Regressions on Watershed attributes (SPARROW) nonlinear regression model. The first strategy uses the SPARROW model to estimate a static baseline load and climatic variables (e.g., precipitation) to drive the interannual variability. The second approach allows the source/sink processes within the SPARROW model to vary at annual timescales using dynamic parameter estimation techniques akin to those used in dynamic linear models. Model parameterization is founded upon Bayesian inference techniques that explicitly consider calibration data and model uncertainty. Our case study is the Hamilton Harbor watershed, a mixed agricultural and urban residential area located at the western end of Lake Ontario, Canada. Our analysis suggests that dynamic parameter estimation is the more parsimonious of the two strategies tested and can offer insights into the temporal structural changes associated with watershed functioning. Consistent with empirical and theoretical work, model estimated annual in-stream attenuation rates varied inversely with annual discharge. Estimated phosphorus source areas were concentrated near the receiving water body during years of high in-stream attenuation and dispersed along the main stems of the streams during years of low attenuation, suggesting that nutrient source areas are subject to interannual variability.

  16. Survival of white-tailed deer neonates in Minnesota and South Dakota

    USGS Publications Warehouse

    Grovenburg, T.W.; Swanson, C.C.; Jacques, C.N.; Klaver, R.W.; Brinkman, T.J.; Burris, B.M.; Deperno, C.S.; Jenks, J.A.

    2011-01-01

    Understanding the influence of intrinsic (e.g., age, birth mass, and sex) and habitat factors on survival of neonate white-tailed deer improves understanding of population ecology. During 2002–2004, we captured and radiocollared 78 neonates in eastern South Dakota and southwestern Minnesota, of which 16 died before 1 September. Predation accounted for 80% of mortality; the remaining 20% was attributed to starvation. Canids (coyotes [Canis latrans], domestic dogs) accounted for 100% of predation on neonates. We used known fate analysis in Program MARK to estimate survival rates and investigate the influence of intrinsic and habitat variables on survival. We developed 2 a priori model sets, including intrinsic variables (model set 1) and habitat variables (model set 2; forested cover, wetlands, grasslands, and croplands). For model set 1, model {Sage-interval} had the lowest AICc (Akaike's information criterion for small sample size) value, indicating that age at mortality (3-stage age-interval: 0–2 weeks, 2–8 weeks, and >8 weeks) best explained survival. Model set 2 indicated that habitat variables did not further influence survival in the study area; β-estimates and 95% confidence intervals for habitat variables in competing models encompassed zero; thus, we excluded these models from consideration. Overall survival rate using model {Sage-interval} was 0.87 (95% CI = 0.83–0.91); 61% of mortalities occurred at 0–2 weeks of age, 26% at 2–8 weeks of age, and 13% at >8 weeks of age. Our results indicate that variables influencing survival may be area specific. Region-specific data are needed to determine influences of intrinsic and habitat variables on neonate survival before wildlife managers can determine which habitat management activities influence neonate populations.
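
    For reference, the small-sample information criterion used to rank these models is

```latex
\mathrm{AIC}_c \;=\; -2\ln\mathcal{L} \;+\; 2K \;+\; \frac{2K(K+1)}{n - K - 1},
```

    where K is the number of estimated parameters and n the effective sample size; the model with the lowest AICc, here {Sage-interval}, is preferred.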

  17. Variable cycle control model for intersection based on multi-source information

    NASA Astrophysics Data System (ADS)

    Sun, Zhi-Yuan; Li, Yue; Qu, Wen-Cong; Chen, Yan-Yan

    2018-05-01

    In order to improve the efficiency of traffic control systems in the era of big data, this paper presents a new variable cycle control model for intersections based on multi-source information. Firstly, with consideration of multi-source information, a unified framework based on a cyber-physical system is proposed. Secondly, taking into account variable cell lengths, the hysteresis of traffic flow and the characteristics of lane groups, a Lane-group-based Cell Transmission Model is established to describe the physical properties of traffic flow under different traffic signal control schemes. Thirdly, the variable cycle control problem is abstracted into a bi-level programming model. The upper level model is put forward for cycle length optimization considering traffic capacity and delay. The lower level model is a dynamic signal control decision model based on fairness analysis. Then, a Hybrid Intelligent Optimization Algorithm is proposed to solve the model. Finally, a case study shows the efficiency and applicability of the proposed model and algorithm.

  18. Sensitivity of the interannual variability of mineral aerosol simulations to meteorological forcing dataset

    DOE PAGES

    Smith, Molly B.; Mahowald, Natalie M.; Albani, Samuel; ...

    2017-03-07

    Interannual variability in desert dust is widely observed and simulated, yet the sensitivity of these desert dust simulations to a particular meteorological dataset, as well as a particular model construction, is not well known. Here we use version 4 of the Community Atmospheric Model (CAM4) with the Community Earth System Model (CESM) to simulate dust forced by three different reanalysis meteorological datasets for the period 1990–2005. We then contrast the results of these simulations with dust simulated using online winds dynamically generated from sea surface temperatures, as well as with simulations conducted using other modeling frameworks but the same meteorological forcings, in order to determine the sensitivity of climate model output to the specific reanalysis dataset used. For the seven cases considered in our study, the different model configurations are able to simulate the annual mean of the global dust cycle, seasonality and interannual variability approximately equally well (or poorly) at the limited observational sites available. Altogether, aerosol dust-source strength has remained fairly constant during the time period from 1990 to 2005, although there is strong seasonal and some interannual variability simulated in the models and seen in the observations over this time period. Model interannual variability comparisons to observations, as well as comparisons between models, suggest that interannual variability in dust is still difficult to simulate accurately, with averaged correlation coefficients of 0.1 to 0.6. Because of the large variability, at least 1 year of observations at most sites are needed to correctly observe the mean, but in some regions, particularly the remote oceans of the Southern Hemisphere, where interannual variability may be larger than in the Northern Hemisphere, 2–3 years of data are likely to be needed.

  19. Seasonal precipitation forecasting for the Melbourne region using a Self-Organizing Maps approach

    NASA Astrophysics Data System (ADS)

    Pidoto, Ross; Wallner, Markus; Haberlandt, Uwe

    2017-04-01

    The Melbourne region experiences highly variable inter-annual rainfall. For close to a decade during the 2000s, below average rainfall seriously affected the environment, water supplies and agriculture. A seasonal rainfall forecasting model for the Melbourne region based on the novel approach of a Self-Organizing Map has been developed and tested for its prediction performance. Predictor variables at varying lead times were first assessed for inclusion within the model by calculating their importance via Random Forests. Predictor variables tested include the climate indices SOI, DMI and N3.4, in addition to gridded global sea surface temperature data. Five forecasting models were developed: an annual model and four seasonal models, each individually optimized for performance through Pearson's correlation (r) and the Nash-Sutcliffe efficiency (NSE). The annual model showed a prediction performance of r = 0.54 and NSE = 0.14. The best seasonal model was for spring, with r = 0.61 and NSE = 0.31. Autumn was the worst performing seasonal model. The sea surface temperature data contributed fewer predictor variables than the climate indices. Most predictor variables entered at the minimum lead time; however, some predictors were found at lead times of up to a year.
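
    A minimal sketch of the variable-screening step described above, assuming scikit-learn's random forest as the implementation; the predictor matrix and response are synthetic stand-ins for the climate indices and SST grid cells.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# 60 seasons x 8 candidate predictors (stand-ins for SOI, DMI, N3.4, SST cells).
X = rng.normal(size=(60, 8))
y = 0.8 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=60)

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]
print("predictors ranked by importance:", ranking)  # 0 and 2 should rank highly
```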

  20. Towards a Stochastic Predictive Understanding of Ecosystem Functioning and Resilience to Environmental Changes

    NASA Astrophysics Data System (ADS)

    Pappas, C.

    2017-12-01

    Terrestrial ecosystem processes respond differently to hydrometeorological variability across timescales, and so does our scientific understanding of the underlying mechanisms. Process-based modeling of ecosystem functioning is therefore challenging, especially when long-term predictions are envisioned. Here we analyze the statistical properties of hydrometeorological and ecosystem variability, i.e., the variability of ecosystem process related to vegetation carbon dynamics, from hourly to decadal timescales. 23 extra-tropical forest sites, covering different climatic zones and vegetation characteristics, are examined. Micrometeorological and reanalysis data of precipitation, air temperature, shortwave radiation and vapor pressure deficit are used to describe hydrometeorological variability. Ecosystem variability is quantified using long-term eddy covariance flux data of hourly net ecosystem exchange of CO2 between land surface and atmosphere, monthly remote sensing vegetation indices, annual tree-ring widths and above-ground biomass increment estimates. We find that across sites and timescales ecosystem variability is confined within a hydrometeorological envelope that describes the range of variability of the available resources, i.e., water and energy. Furthermore, ecosystem variability demonstrates long-term persistence, highlighting ecological memory and slow ecosystem recovery rates after disturbances. We derive an analytical model, combining deterministic harmonics and stochastic processes, that represents major mechanisms and uncertainties and mimics the observed pattern of hydrometeorological and ecosystem variability. This stochastic framework offers a parsimonious and mathematically tractable approach for modelling ecosystem functioning and for understanding its response and resilience to environmental changes. Furthermore, this framework reflects well the observed ecological memory, an inherent property of ecosystem functioning that is currently not captured by simulation results with process-based models. Our analysis offers a perspective for terrestrial ecosystem modelling, combining current process understanding with stochastic methods, and paves the way for new model-data integration opportunities in Earth system sciences.
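
    A hedged illustration of the "deterministic harmonics plus stochastic process" idea: an annual harmonic plus AR(1) noise as a simple stand-in for persistence. True long-range persistence would need something like fractional Gaussian noise, and the authors' model is surely richer.

```python
import numpy as np

# A toy surrogate: deterministic annual harmonic plus AR(1) 'memory' noise.
n, phi = 12 * 50, 0.8                        # 50 years of monthly steps; persistence
t = np.arange(n)
seasonal = 2.0 * np.sin(2 * np.pi * t / 12)  # deterministic annual cycle

rng = np.random.default_rng(1)
noise = np.zeros(n)
for i in range(1, n):                        # anomalies decay slowly over time
    noise[i] = phi * noise[i - 1] + rng.normal(scale=0.5)

signal = seasonal + noise                    # surrogate 'ecosystem variable'
print(np.corrcoef(signal[:-1], signal[1:])[0, 1])  # strong lag-1 autocorrelation
```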

  1. Average inactivity time model, associated orderings and reliability properties

    NASA Astrophysics Data System (ADS)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called the 'average inactivity time model'. This model is specifically suited to handling heterogeneity in the failure time of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses certain aging properties. Based on the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are discussed.
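
    For orientation, the standard mean inactivity time on which such a model builds is

```latex
m(t) \;=\; \mathbb{E}\left[\,t - X \mid X \le t\,\right] \;=\; \frac{\int_0^{t} F(u)\,\mathrm{d}u}{F(t)},
```

    where F is the distribution function of the lifetime X; per the abstract, the paper's averaged version additionally involves a mixing variable.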

  2. Estimating an area-level socioeconomic status index and its association with colonoscopy screening adherence.

    PubMed

    Wheeler, David C; Czarnota, Jenna; Jones, Resa M

    2017-01-01

    Socioeconomic status (SES) is often considered a risk factor for health outcomes. SES is typically measured using individual variables of educational attainment, income, housing, and employment variables or a composite of these variables. Approaches to building the composite variable include using equal weights for each variable or estimating the weights with principal components analysis or factor analysis. However, these methods do not consider the relationship between the outcome and the SES variables when constructing the index. In this project, we used weighted quantile sum (WQS) regression to estimate an area-level SES index and its effect in a model of colonoscopy screening adherence in the Minnesota-Wisconsin Metropolitan Statistical Area. We considered several specifications of the SES index including using different spatial scales (e.g., census block group-level, tract-level) for the SES variables. We found a significant positive association (odds ratio = 1.17, 95% CI: 1.15-1.19) between the SES index and colonoscopy adherence in the best fitting model. The model with the best goodness-of-fit included a multi-scale SES index with 10 variables at the block group-level and one at the tract-level, with home ownership, race, and income among the most important variables. Contrary to previous index construction, our results were not consistent with an assumption of equal importance of variables in the SES index when explaining colonoscopy screening adherence. Our approach is applicable in any study where an SES index is considered as a variable in a regression model and the weights for the SES variables are not known in advance.
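
    A minimal sketch of the weighted quantile sum idea (not the authors' exact estimator): variables are scored into quantiles, combined with non-negative weights that sum to one, and the resulting index enters a logistic model. The data below are synthetic.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, softmax

rng = np.random.default_rng(2)
Q = rng.integers(0, 4, size=(500, 5)).astype(float)  # 5 SES variables in quartiles 0-3
y = rng.integers(0, 2, size=500)                     # hypothetical adherence indicator

def neg_log_lik(theta):
    w = softmax(theta[:5])          # weights >= 0 and summing to 1
    b0, b1 = theta[5], theta[6]
    p = expit(b0 + b1 * Q @ w)      # the WQS index enters a logistic model
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)).sum()

fit = minimize(neg_log_lik, x0=np.zeros(7))
print("weights:", softmax(fit.x[:5]), "index effect:", fit.x[6])
```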

  3. University of North Carolina Caries Risk Assessment Study: comparisons of high risk prediction, any risk prediction, and any risk etiologic models.

    PubMed

    Beck, J D; Weintraub, J A; Disney, J A; Graves, R C; Stamm, J W; Kaste, L M; Bohannan, H M

    1992-12-01

    The purpose of this analysis is to compare three different statistical models for predicting children likely to be at risk of developing dental caries over a 3-yr period. Data are based on 4117 children who participated in the University of North Carolina Caries Risk Assessment Study, a longitudinal study conducted in the Aiken, South Carolina, and Portland, Maine areas. The three models differed with respect to either the types of variables included or the definition of disease outcome. The two "Prediction" models included both risk factor variables thought to cause dental caries and indicator variables that are associated with dental caries but are not thought to be causal for the disease. The "Etiologic" model included only etiologic factors as variables. A dichotomous outcome measure (none versus any 3-yr increment) was used in the "Any Risk Etiologic Model" and the "Any Risk Prediction Model". Another outcome, based on a gradient measure of disease, was used in the "High Risk Prediction Model". The variables that are significant in these models vary across grades and sites, but are more consistent in the Etiologic model than in the Prediction models. However, among the three sets of models, the Any Risk Prediction Models have the highest sensitivity and positive predictive values, whereas the High Risk Prediction Models have the highest specificity and negative predictive values. Considerations in determining model preference are discussed.

  4. Development of a mathematical model of the human cardiovascular system: An educational perspective

    NASA Astrophysics Data System (ADS)

    Johnson, Bruce Allen

    A mathematical model of the human cardiovascular system will be a useful educational tool in biological sciences and bioengineering classrooms. The goal of this project is to develop a mathematical model of the human cardiovascular system that responds appropriately to variations of significant physical variables. Model development is based on standard fluid statics and dynamics principles, pressure-volume characteristics of the cardiac cycle, and compliant behavior of blood vessels. Cardiac cycle phases provide the physical and logical model structure, and Boolean algebra links model sections. The model is implemented using VisSim, a highly intuitive and easily learned block diagram modeling software package. Comparisons of model predictions of key variables to published values suggest that the model reasonably approximates expected behavior of those variables. The model responds plausibly to variations of independent variables. Projected usefulness of the model as an educational tool is threefold: independent variables which determine heart function may be easily varied to observe cause and effect; the model is used in an interactive setting; and the relationship of governing equations to model behavior is readily viewable and intuitive. Future use of this model in classrooms may give a more reasonable indication of its value as an educational tool. (This dissertation includes a multimedia CD containing text and other applications not available in printed format. The CD requires the following applications: CorelPhotoHouse, CorelWordPerfect, VisSim Viewer (included on CD), and Internet access.)

  5. Bootstrap investigation of the stability of a Cox regression model.

    PubMed

    Altman, D G; Andersen, P K

    1989-07-01

    We describe a bootstrap investigation of the stability of a Cox proportional hazards regression model resulting from the analysis of a clinical trial of azathioprine versus placebo in patients with primary biliary cirrhosis. We have considered stability to refer both to the choice of variables included in the model and, more importantly, to the predictive ability of the model. In stepwise Cox regression analyses of 100 bootstrap samples using 17 candidate variables, the most frequently selected variables were those selected in the original analysis, and no other important variable was identified. Thus there was no reason to doubt the model obtained in the original analysis. For each patient in the trial, bootstrap confidence intervals were constructed for the estimated probability of surviving two years. It is shown graphically that these intervals are markedly wider than those obtained from the original model.
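
    A hedged sketch of the selection-stability part of such an analysis, assuming the lifelines package and hypothetical column names ("time", "event"); full stepwise selection is replaced by a simple per-variable p-value screen for brevity.

```python
from lifelines import CoxPHFitter

def bootstrap_selection(df, candidates, n_boot=100, alpha=0.05):
    """Count how often each candidate variable reaches p < alpha in Cox fits
    to bootstrap resamples; frequent selection suggests a stable model."""
    counts = {v: 0 for v in candidates}
    for b in range(n_boot):
        boot = df.sample(len(df), replace=True, random_state=b)
        cph = CoxPHFitter().fit(boot[candidates + ["time", "event"]],
                                duration_col="time", event_col="event")
        for v in candidates:
            if cph.summary.loc[v, "p"] < alpha:
                counts[v] += 1
    return counts
```

    In the original analysis, the frequencies over 100 resamples were compared with the variables chosen from the full dataset, which is the comparison this helper would support.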

  6. The unusual suspect: Land use is a key predictor of biodiversity patterns in the Iberian Peninsula

    NASA Astrophysics Data System (ADS)

    Martins, Inês Santos; Proença, Vânia; Pereira, Henrique Miguel

    2014-11-01

    Although land use change is a key driver of biodiversity change, related variables such as habitat area and habitat heterogeneity are seldom considered in modeling approaches at larger extents. To address this knowledge gap we tested the contribution of land use related variables to models describing richness patterns of amphibians, reptiles and passerines in the Iberian Peninsula. We analyzed the relationship between species richness and habitat heterogeneity at two spatial resolutions (i.e., 10 km × 10 km and 50 km × 50 km). Using both ordinary least square and simultaneous autoregressive models, we assessed the relative importance of land use variables, climate variables and topographic variables. We also compare the species-area relationship with a multi-habitat model, the countryside species-area relationship, to assess the role of the area of different types of habitats on species diversity across scales. The association between habitat heterogeneity and species richness varied with the taxa and spatial resolution. A positive relationship was detected for all taxa at a grain size of 10 km × 10 km, but only passerines responded at a grain size of 50 km × 50 km. Species richness patterns were well described by abiotic predictors, but habitat predictors also explained a considerable portion of the variation. Moreover, species richness patterns were better described by a multi-habitat species-area model, incorporating land use variables, than by the classic power model, which only includes area as the single explanatory variable. Our results suggest that the role of land use in shaping species richness patterns goes beyond the local scale and persists at larger spatial scales. These findings call for the need of integrating land use variables in models designed to assess species richness response to large scale environmental changes.
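
    Schematically, the two species-area forms being compared are the classic power model and a multi-habitat (countryside) version; the notation below is one common way of writing them, not necessarily the authors' exact parameterization:

```latex
S = c\,A^{z}
\qquad\text{vs.}\qquad
S_g = c_g \Big( \sum_i h_{gi}\, A_i \Big)^{z_g},
```

    where A_i is the area of habitat i and h_{gi} is the affinity of species group g for habitat i.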

  7. Incorporation of expert variability into breast cancer treatment recommendation in designing clinical protocol guided fuzzy rule system models.

    PubMed

    Garibaldi, Jonathan M; Zhou, Shang-Ming; Wang, Xiao-Ying; John, Robert I; Ellis, Ian O

    2012-06-01

    It has often been demonstrated that clinicians exhibit both inter-expert and intra-expert variability when making difficult decisions. In contrast, the vast majority of computerized models that aim to provide automated support for such decisions do not explicitly recognize or replicate this variability. Furthermore, the perfect consistency of computerized models is often presented as a de facto benefit. In this paper, we describe a novel approach to incorporate variability within a fuzzy inference system using non-stationary fuzzy sets in order to replicate human variability. We apply our approach to a decision problem concerning the recommendation of post-operative breast cancer treatment; specifically, whether or not to administer chemotherapy based on assessment of five clinical variables: NPI (the Nottingham Prognostic Index), estrogen receptor status, vascular invasion, age and lymph node status. In doing so, we explore whether such explicit modeling of variability provides any performance advantage over a more conventional fuzzy approach, when tested on a set of 1310 unselected cases collected over a fourteen year period at the Nottingham University Hospitals NHS Trust, UK. The experimental results show that the standard fuzzy inference system (that does not model variability) achieves overall agreement to clinical practice around 84.6% (95% CI: 84.1-84.9%), while the non-stationary fuzzy model can significantly increase performance to around 88.1% (95% CI: 88.0-88.2%), p<0.001. We conclude that non-stationary fuzzy models provide a valuable new approach that may be applied to clinical decision support systems in any application domain.

  8. Effects of lidar pulse density and sample size on a model-assisted approach to estimate forest inventory variables

    Treesearch

    Jacob Strunk; Hailemariam Temesgen; Hans-Erik Andersen; James P. Flewelling; Lisa Madsen

    2012-01-01

    Using lidar in an area-based model-assisted approach to forest inventory has the potential to increase estimation precision for some forest inventory variables. This study documents the bias and precision of a model-assisted (regression estimation) approach to forest inventory with lidar-derived auxiliary variables relative to lidar pulse density and the number of...

  9. A canonical neural mechanism for behavioral variability

    PubMed Central

    Darshan, Ran; Wood, William E.; Peters, Susan; Leblois, Arthur; Hansel, David

    2017-01-01

    The ability to generate variable movements is essential for learning and adjusting complex behaviours. This variability has been linked to the temporal irregularity of neuronal activity in the central nervous system. However, how neuronal irregularity actually translates into behavioural variability is unclear. Here we combine modelling, electrophysiological and behavioural studies to address this issue. We demonstrate that a model circuit comprising topographically organized and strongly recurrent neural networks can autonomously generate irregular motor behaviours. Simultaneous recordings of neurons in singing finches reveal that neural correlations increase across the circuit driving song variability, in agreement with the model predictions. Analysing behavioural data, we find remarkable similarities in the babbling statistics of 5–6-month-old human infants and juveniles from three songbird species and show that our model naturally accounts for these ‘universal' statistics. PMID:28530225

  10. A gentle introduction to quantile regression for ecologists

    USGS Publications Warehouse

    Cade, B.S.; Noon, B.R.

    2003-01-01

    Quantile regression is a way to estimate the conditional quantiles of a response variable distribution in the linear model that provides a more complete view of possible causal relationships between variables in ecological processes. Typically, all the factors that affect ecological processes are not measured and included in the statistical models used to investigate relationships between variables associated with those processes. As a consequence, there may be a weak or no predictive relationship between the mean of the response variable (y) distribution and the measured predictive factors (X). Yet there may be stronger, useful predictive relationships with other parts of the response variable distribution. This primer relates quantile regression estimates to prediction intervals in parametric error distribution regression models (e.g., least squares), and discusses the ordering characteristics, interval nature, sampling variation, weighting, and interpretation of the estimates for homogeneous and heterogeneous regression models.
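
    A minimal example of fitting several conditional quantiles, assuming statsmodels; the heteroscedastic data are synthetic, chosen so that upper and lower quantile slopes differ, which is exactly the situation the primer targets.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 200)                       # e.g., a measured habitat factor
y = 2 + 0.5 * x + rng.normal(scale=1 + 0.3 * x)   # response variance grows with x

X = sm.add_constant(x)
for q in (0.1, 0.5, 0.9):                         # lower, median and upper quantiles
    print(q, QuantReg(y, X).fit(q=q).params)      # slopes differ across quantiles
```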

  11. Skill of ENSEMBLES seasonal re-forecasts for malaria prediction in West Africa

    NASA Astrophysics Data System (ADS)

    Jones, A. E.; Morse, A. P.

    2012-12-01

    This study examines the performance of malaria-relevant climate variables from the ENSEMBLES seasonal ensemble re-forecasts for sub-Saharan West Africa, using a dynamic malaria model to transform temperature and rainfall forecasts into simulated malaria incidence and verifying these forecasts against simulations obtained by driving the malaria model with General Circulation Model-derived reanalysis. Two subregions of forecast skill are identified: the highlands of Cameroon, where low temperatures limit simulated malaria during the forecast period and interannual variability in simulated malaria is closely linked to variability in temperature, and northern Nigeria/southern Niger, where simulated malaria variability is strongly associated with rainfall variability during the peak rain months.

  12. A Priori Subgrid Scale Modeling for a Droplet Laden Temporal Mixing Layer

    NASA Technical Reports Server (NTRS)

    Okongo, Nora; Bellan, Josette

    2000-01-01

    Subgrid analysis of a transitional temporal mixing layer with evaporating droplets has been performed using a direct numerical simulation (DNS) database. The DNS is for a Reynolds number (based on initial vorticity thickness) of 600, with droplet mass loading of 0.2. The gas phase is computed using a Eulerian formulation, with Lagrangian droplet tracking. Since Large Eddy Simulation (LES) of this flow requires the computation of unfiltered gas-phase variables at droplet locations from filtered gas-phase variables at the grid points, it is proposed to model these by assuming the gas-phase variables to be given by the filtered variables plus a correction based on the filtered standard deviation, which can be computed from the sub-grid scale (SGS) standard deviation. This model predicts unfiltered variables at droplet locations better than simply interpolating the filtered variables. Three methods are investigated for modeling the SGS standard deviation: Smagorinsky, gradient and scale-similarity. When properly calibrated, the gradient and scale-similarity methods give results in excellent agreement with the DNS.

  13. Plausible Effect of Weather on Atlantic Meridional Overturning Circulation with a Coupled General Circulation Model

    NASA Astrophysics Data System (ADS)

    Liu, Zedong; Wan, Xiuquan

    2018-04-01

    The Atlantic meridional overturning circulation (AMOC) is a vital component of the global ocean circulation and the heat engine of the climate system. Through the use of a coupled general circulation model, this study examines the role of synoptic systems on the AMOC and presents evidence that internally generated high-frequency, synoptic-scale weather variability in the atmosphere could play a significant role in maintaining the overall strength and variability of the AMOC, thereby affecting climate variability and change. Results of a novel coupling technique show that the strength and variability of the AMOC are greatly reduced once the synoptic weather variability is suppressed in the coupled model. The strength and variability of the AMOC are closely linked to deep convection events at high latitudes, which could be strongly affected by the weather variability. Our results imply that synoptic weather systems are important in driving the AMOC and its variability. Thus, interactions between atmospheric weather variability and AMOC may be an important feedback mechanism of the global climate system and need to be taken into consideration in future climate change studies.

  14. Impact of Turbine Modulation on Variable-Cycle Engine Performance. Phase 4. Additional Hardware Design and Fabrication, Engine Modification, and Altitude Test. Part 3 B

    DTIC Science & Technology

    1974-12-01

    Turbofan engine performance: an AiResearch Model TFE731-2 Turbofan Engine was modified to incorporate production-type variable-geometry hardware... Reliability was shown for the variable-geometry components. The TFE731, modified to include variable geometry, proved to be an inexpensive...

  15. A hybrid machine learning model to estimate nitrate contamination of production zone groundwater in the Central Valley, California

    NASA Astrophysics Data System (ADS)

    Ransom, K.; Nolan, B. T.; Faunt, C. C.; Bell, A.; Gronberg, J.; Traum, J.; Wheeler, D. C.; Rosecrans, C.; Belitz, K.; Eberts, S.; Harter, T.

    2016-12-01

    A hybrid, non-linear, machine learning statistical model was developed within a statistical learning framework to predict nitrate contamination of groundwater to depths of approximately 500 m below ground surface in the Central Valley, California. A database of 213 predictor variables representing well characteristics, historical and current field and county scale nitrogen mass balance, historical and current landuse, oxidation/reduction conditions, groundwater flow, climate, soil characteristics, depth to groundwater, and groundwater age were assigned to over 6,000 private supply and public supply wells measured previously for nitrate and located throughout the study area. The machine learning method, gradient boosting machine (GBM) was used to screen predictor variables and rank them in order of importance in relation to the groundwater nitrate measurements. The top five most important predictor variables included oxidation/reduction characteristics, historical field scale nitrogen mass balance, climate, and depth to 60 year old water. Twenty-two variables were selected for the final model and final model errors for log-transformed hold-out data were R squared of 0.45 and root mean square error (RMSE) of 1.124. Modeled mean groundwater age was tested separately for error improvement in the model and when included decreased model RMSE by 0.5% compared to the same model without age and by 0.20% compared to the model with all 213 variables. 1D and 2D partial plots were examined to determine how variables behave individually and interact in the model. Some variables behaved as expected: log nitrate decreased with increasing probability of anoxic conditions and depth to 60 year old water, generally decreased with increasing natural landuse surrounding wells and increasing mean groundwater age, generally increased with increased minimum depth to high water table and with increased base flow index value. Other variables exhibited much more erratic or noisy behavior in the model making them more difficult to interpret but highlighting the usefulness of the non-linear machine learning method. 2D interaction plots show probability of anoxic groundwater conditions largely control estimated nitrate concentrations compared to the other predictors.
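
    A hedged sketch of the GBM screening-and-error workflow, with scikit-learn's gradient boosting standing in for the study's implementation and a synthetic predictor matrix standing in for the 213 variables.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
# Synthetic stand-ins: 6000 wells, 20 candidate predictors, log-nitrate response.
X = rng.normal(size=(6000, 20))
y = X[:, 0] - 0.7 * X[:, 3] + rng.normal(scale=1.0, size=6000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gbm = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05).fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, gbm.predict(X_te)) ** 0.5
top = np.argsort(gbm.feature_importances_)[::-1][:5]
print("hold-out RMSE:", round(rmse, 3), "top predictors:", top)
```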

  16. Modeling soybean canopy resistance from micrometeorological and plant variables for estimating evapotranspiration using one-step Penman-Monteith approach

    NASA Astrophysics Data System (ADS)

    Irmak, Suat; Mutiibwa, Denis; Payero, Jose; Marek, Thomas; Porter, Dana

    2013-12-01

    Canopy resistance (rc) is one of the most important variables in evapotranspiration, agronomy, hydrology and climate change studies that link vegetation response to changing environmental and climatic variables. This study investigates the concept of generalized nonlinear/linear modeling approach of rc from micrometeorological and plant variables for soybean [Glycine max (L.) Merr.] canopy at different climatic zones in Nebraska, USA (Clay Center, Geneva, Holdrege and North Platte). Eight models estimating rc as a function of different combination of micrometeorological and plant variables are presented. The models integrated the linear and non-linear effects of regulating variables (net radiation, Rn; relative humidity, RH; wind speed, U3; air temperature, Ta; vapor pressure deficit, VPD; leaf area index, LAI; aerodynamic resistance, ra; and solar zenith angle, Za) to predict hourly rc. The most complex rc model has all regulating variables and the simplest model has only Rn, Ta and RH. The rc models were developed at Clay Center in the growing season of 2007 and applied to other independent sites and years. The predicted rc for the growing seasons at four locations were then used to estimate actual crop evapotranspiration (ETc) as a one-step process using the Penman-Monteith model and compared to the measured data at all locations. The models were able to account for 66-93% of the variability in measured hourly ETc across locations. Models without LAI generally underperformed and underestimated due to overestimation of rc, especially during full canopy cover stage. Using vapor pressure deficit or relative humidity in the models had similar effect on estimating rc. The root squared error (RSE) between measured and estimated ETc was about 0.07 mm h-1 for most of the models at Clay Center, Geneva and Holdrege. At North Platte, RSE was above 0.10 mm h-1. The results at different sites and different growing seasons demonstrate the robustness and consistency of the models in estimating soybean rc, which is encouraging towards the general application of one-step estimation of soybean canopy ETc in practice using the Penman-Monteith model and could aid in enhancing the utilization of the approach by irrigation and water management community.
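
    The one-step approach feeds the modeled rc directly into the standard Penman-Monteith combination equation,

```latex
\lambda ET \;=\; \frac{\Delta\,(R_n - G) + \rho_a c_p\,\mathrm{VPD}/r_a}{\Delta + \gamma\left(1 + r_c/r_a\right)},
```

    where Delta is the slope of the saturation vapor pressure curve, G the soil heat flux, rho_a and c_p the density and specific heat of air, and gamma the psychrometric constant.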

  17. Partitioning the impacts of spatial and climatological rainfall variability in urban drainage modeling

    NASA Astrophysics Data System (ADS)

    Peleg, Nadav; Blumensaat, Frank; Molnar, Peter; Fatichi, Simone; Burlando, Paolo

    2017-03-01

    The performance of urban drainage systems is typically examined using hydrological and hydrodynamic models where rainfall input is uniformly distributed, i.e., derived from a single or very few rain gauges. When models are fed with a single uniformly distributed rainfall realization, the response of the urban drainage system to the rainfall variability remains unexplored. The goal of this study was to understand how climate variability and spatial rainfall variability, jointly or individually considered, affect the response of a calibrated hydrodynamic urban drainage model. A stochastic spatially distributed rainfall generator (STREAP - Space-Time Realizations of Areal Precipitation) was used to simulate many realizations of rainfall for a 30-year period, accounting for both climate variability and spatial rainfall variability. The generated rainfall ensemble was used as input into a calibrated hydrodynamic model (EPA SWMM - the US EPA's Storm Water Management Model) to simulate surface runoff and channel flow in a small urban catchment in the city of Lucerne, Switzerland. The variability of peak flows in response to rainfall of different return periods was evaluated at three different locations in the urban drainage network and partitioned among its sources. The main contribution to the total flow variability was found to originate from the natural climate variability (on average over 74 %). In addition, the relative contribution of the spatial rainfall variability to the total flow variability was found to increase with longer return periods. This suggests that while the use of spatially distributed rainfall data can supply valuable information for sewer network design (typically based on rainfall with return periods from 5 to 15 years), there is a more pronounced relevance when conducting flood risk assessments for larger return periods. The results show the importance of using multiple distributed rainfall realizations in urban hydrology studies to capture the total flow variability in the response of the urban drainage systems to heavy rainfall events.

  18. Including long-range dependence in integrate-and-fire models of the high interspike-interval variability of cortical neurons.

    PubMed

    Jackson, B Scott

    2004-10-01

    Many different types of integrate-and-fire models have been designed in order to explain how it is possible for a cortical neuron to integrate over many independent inputs while still producing highly variable spike trains. Within this context, the variability of spike trains has been almost exclusively measured using the coefficient of variation of interspike intervals. However, another important statistical property that has been found in cortical spike trains and is closely associated with their high firing variability is long-range dependence. We investigate the conditions, if any, under which such models produce output spike trains with both interspike-interval variability and long-range dependence similar to those that have previously been measured from actual cortical neurons. We first show analytically that a large class of high-variability integrate-and-fire models is incapable of producing such outputs based on the fact that their output spike trains are always mathematically equivalent to renewal processes. This class of models subsumes a majority of previously published models, including those that use excitation-inhibition balance, correlated inputs, partial reset, or nonlinear leakage to produce outputs with high variability. Next, we study integrate-and-fire models that have (non-Poissonian) renewal point process inputs instead of the Poisson point process inputs used in the preceding class of models. The confluence of our analytical and simulation results implies that the renewal-input model is capable of producing high variability and long-range dependence comparable to that seen in spike trains recorded from cortical neurons, but only if the interspike intervals of the inputs have infinite variance, a physiologically unrealistic condition. Finally, we suggest a new integrate-and-fire model that does not suffer any of the previously mentioned shortcomings. By analyzing simulation results for this model, we show that it is capable of producing output spike trains with interspike-interval variability and long-range dependence that match empirical data from cortical spike trains. This model is similar to the other models in this study, except that its inputs are fractional-Gaussian-noise-driven Poisson processes rather than renewal point processes. In addition to this model's success in producing realistic output spike trains, its inputs have long-range dependence similar to that found in most subcortical neurons in sensory pathways, including the inputs to cortex. Analysis of output spike trains from simulations of this model also shows that a tight balance between the amounts of excitation and inhibition at the inputs to cortical neurons is not necessary for high interspike-interval variability at their outputs. Furthermore, in our analysis of this model, we show that the superposition of many fractional-Gaussian-noise-driven Poisson processes does not approximate a Poisson process, which challenges the common assumption that the total effect of a large number of inputs on a neuron is well represented by a Poisson process.

  19. Discriminant analysis of cardiovascular and respiratory variables for classification of road cyclists by specialty.

    PubMed

    Nikolić, Biljana; Martinović, Jelena; Matić, Milan; Stefanović, Đorđe

    2018-05-29

    Different variables determine the performance of cyclists, which raises the question of how these parameters may help classify cyclists by specialty. The aim of the study was to determine differences in cardiorespiratory parameters of male cyclists according to their specialty, flat rider (N=21), hill rider (N=35) and sprinter (N=20), and to obtain a multivariate model for the further classification of cyclists by specialty, based on selected variables. Seventeen variables were measured at submaximal and maximum load on the cycle ergometer Cosmed E 400HK (Cosmed, Rome, Italy) (initial 100W with 25W increase, 90-100 rpm). Multivariate discriminant analysis was used to determine which variables group cyclists within their specialty, and to predict which variables can direct cyclists to a particular specialty. Among the nine variables that statistically contribute to the discriminant power of the model, power achieved at the anaerobic threshold and CO2 production had the biggest impact. The obtained discriminatory model correctly classified 91.43% of flat riders and 85.71% of hill riders, while sprinters were classified completely correctly (100%); i.e. 92.10% of examinees were correctly classified, which points to the strength of the discriminatory model. Respiratory indicators contribute most to the discriminant power of the model, which may significantly contribute to training practice and laboratory tests in the future.
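
    A minimal sketch of the classification step, assuming scikit-learn's linear discriminant analysis and synthetic cardiorespiratory features (the study reports its own per-specialty classification rates).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(8)
# Hypothetical stand-ins: 76 cyclists, 9 cardiorespiratory variables, 3 specialties.
X = rng.normal(size=(76, 9))
y = rng.integers(0, 3, size=76)   # 0 = flat rider, 1 = hill rider, 2 = sprinter

lda = LinearDiscriminantAnalysis().fit(X, y)
print("re-classification rate:", lda.score(X, y))
```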

  20. Using a Functional Simulation of Crisis Management to Test the C2 Agility Model Parameters on Key Performance Variables

    DTIC Science & Technology

    2013-06-01

    18th ICCRTS... command in crisis management. C2 Agility Model: agility can be conceptualized at a number of different levels, for instance at the team...

  1. The Nature of Global Large-scale Sea Level Variability in Relation to Atmospheric Forcing: A Modeling Study

    NASA Technical Reports Server (NTRS)

    Fukumori, I.; Raghunath, R.; Fu, L. L.

    1996-01-01

    The relation between large-scale sea level variability and ocean circulation is studied using a numerical model. A global primitive equation model of the ocean is forced by daily winds and climatological heat fluxes corresponding to the period from January 1992 to February 1996. The physical nature of the temporal variability, over periods from days to a year, is examined based on spectral analyses of model results and comparisons with satellite altimetry and tide gauge measurements.

  2. The SYSGEN user package

    NASA Technical Reports Server (NTRS)

    Carlson, C. R.

    1981-01-01

    The user documentation of the SYSGEN model and its links with other simulations are described. SYSGEN is a production costing and reliability model of electric utility systems. Hydroelectric, storage, and time-dependent generating units are modeled in addition to conventional generating plants. Input variables, modeling options, output variables, and report formats are explained. SYSGEN can also be run interactively by using a program called FEPS (Front End Program for SYSGEN). A format for SYSGEN input variables which is designed for use with FEPS is presented.

  3. Single stage queueing/manufacturing system model that involves emission variable

    NASA Astrophysics Data System (ADS)

    Murdapa, P. S.; Pujawan, I. N.; Karningsih, P. D.; Nasution, A. H.

    2018-04-01

    Queueing occurs in virtually every industry, and basic queueing theory provides a foundation for modeling manufacturing systems. Carbon emission is now an important and unavoidable issue because of its impact on the environment, yet existing queueing models of single stage manufacturing systems have not taken carbon emissions into consideration; applied in a manufacturing context, they may therefore lead to improper decisions. Taking emission variables into account not only makes the model more comprehensive but also raises awareness of the issue among the parties involved in the system. This paper discusses a single stage M/M/1 queueing model that incorporates an emission variable, intended as a starting point for more complex models. Its main objective is to determine how carbon emissions fit into basic queueing theory. Incorporating emission variables modifies the traditional single stage queueing model into a model for calculating the production lot quantity allowed per period.
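
    A minimal numeric sketch of attaching an emission term to standard M/M/1 quantities; the linear emission form and all rates below are assumptions for illustration, not the paper's formulation.

```python
# Standard M/M/1 metrics with an assumed emission term (not the paper's model).
lam, mu = 8.0, 10.0          # arrival and service rates (jobs per hour)
e_busy, e_job = 2.0, 0.5     # assumed kg CO2 per busy hour and per job processed

rho = lam / mu               # server utilization (must be < 1 for stability)
L   = rho / (1 - rho)        # mean number of jobs in the system
W   = 1 / (mu - lam)         # mean time in system (hours)
emissions_per_hour = e_busy * rho + e_job * lam  # assumed linear emission model

print(f"rho={rho:.2f}, L={L:.2f}, W={W:.2f} h, CO2={emissions_per_hour:.2f} kg/h")
```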

  4. Assessing performance and seasonal bias of pollen-based climate reconstructions in a perfect model world

    NASA Astrophysics Data System (ADS)

    Rehfeld, Kira; Trachsel, Mathias; Telford, Richard J.; Laepple, Thomas

    2016-12-01

    Reconstructions of summer, winter or annual mean temperatures based on the species composition of bio-indicators such as pollen, foraminifera or chironomids are routinely used in climate model-proxy data comparison studies. Most reconstruction algorithms exploit the joint distribution of modern spatial climate and species distribution for the development of the reconstructions. They rely on the space-for-time substitution and the specific assumption that environmental variables other than those reconstructed are not important or that their relationship with the reconstructed variable(s) should be the same in the past as in the modern spatial calibration dataset. Here we test the implications of this "correlative uniformitarianism" assumption on climate reconstructions in an ideal model world, in which climate and vegetation are known at all times. The alternate reality is a climate simulation of the last 6000 years with dynamic vegetation. Transient changes of plant functional types are considered as surrogate pollen counts and allow us to establish, apply and evaluate transfer functions in the modeled world. We find that in our model experiments the transfer function cross validation r2 is of limited use to identify reconstructible climate variables, as it only relies on the modern spatial climate-vegetation relationship. However, ordination approaches that assess the amount of fossil vegetation variance explained by the reconstructions are promising. We furthermore show that correlations between climate variables in the modern climate-vegetation relationship are systematically extended into the reconstructions. Summer temperatures, the most prominent driving variable for modeled vegetation change in the Northern Hemisphere, are accurately reconstructed. However, the amplitude of the model winter and mean annual temperature cooling between the mid-Holocene and present day is overestimated and similar to the summer trend in magnitude. This effect occurs because temporal changes of a dominant climate variable, such as summer temperatures in the model's Arctic, are imprinted on a less important variable, leading to reconstructions biased towards the dominant variable's trends. Our results, although based on a model vegetation that is inevitably simpler than reality, indicate that reconstructions of multiple climate variables based on modern spatial bio-indicator datasets should be treated with caution. Expert knowledge on the ecophysiological drivers of the proxies, as well as statistical methods that go beyond the cross validation on modern calibration datasets, are crucial to avoid misinterpretation.
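
    A hedged sketch of one classic transfer-function approach of the kind such studies test, weighted averaging (real implementations add a deshrinking step omitted here); the "pollen" matrix and climate values are synthetic.

```python
import numpy as np

def wa_fit(Y, x):
    """Species optima: abundance-weighted means of the training climate x."""
    return (Y * x[:, None]).sum(axis=0) / Y.sum(axis=0)

def wa_predict(Y, optima):
    """Reconstruction: abundance-weighted mean of the species optima."""
    return (Y * optima).sum(axis=1) / Y.sum(axis=1)

rng = np.random.default_rng(6)
Y_modern = rng.random((40, 6))         # surrogate 'pollen': 40 samples x 6 taxa
x_modern = rng.uniform(0, 20, 40)      # modern summer temperature (degrees C)
optima = wa_fit(Y_modern, x_modern)
x_hat = wa_predict(Y_modern, optima)   # compare with x_modern to cross-validate
print(np.corrcoef(x_modern, x_hat)[0, 1])
```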

  5. Using structural equation modeling to investigate relationships among ecological variables

    USGS Publications Warehouse

    Malaeb, Z.A.; Summers, J. Kevin; Pugesek, B.H.

    2000-01-01

    Structural equation modeling is an advanced multivariate statistical process with which a researcher can construct theoretical concepts, test their measurement reliability, hypothesize and test a theory about their relationships, take into account measurement errors, and consider both direct and indirect effects of variables on one another. Latent variables are theoretical concepts that unite phenomena under a single term, e.g., ecosystem health, environmental condition, and pollution (Bollen, 1989). Latent variables are not measured directly but can be expressed in terms of one or more directly measurable variables called indicators. For some researchers, defining, constructing, and examining the validity of latent variables may be an end in itself. For others, testing hypothesized relationships among latent variables may be of interest. We analyzed the correlation matrix of eleven environmental variables from the U.S. Environmental Protection Agency's (USEPA) Environmental Monitoring and Assessment Program for Estuaries (EMAP-E) using methods of structural equation modeling. We hypothesized and tested a conceptual model to characterize the interdependencies between four latent variables: sediment contamination, natural variability, biodiversity, and growth potential. In particular, we were interested in measuring the direct, indirect, and total effects of sediment contamination and natural variability on biodiversity and growth potential. The model fit the data well and accounted for 81% of the variability in biodiversity and 69% of the variability in growth potential. It revealed a positive total effect of natural variability on growth potential that otherwise would have been judged negative had we not considered indirect effects. That is, natural variability had a negative direct effect on growth potential of magnitude -0.3251 and a positive indirect effect mediated through biodiversity of magnitude 0.4509, yielding a net positive total effect of 0.1258. Natural variability had a positive direct effect on biodiversity of magnitude 0.5347 and a negative indirect effect mediated through growth potential of magnitude -0.1105, yielding a positive total effect of magnitude 0.4242. Sediment contamination had a negative direct effect on biodiversity of magnitude -0.1956 and a negative indirect effect on growth potential via biodiversity of magnitude -0.067. Biodiversity had a positive effect on growth potential of magnitude 0.8432, and growth potential had a positive effect on biodiversity of magnitude 0.3398. The correlation between biodiversity and growth potential was estimated at 0.7658 and that between sediment contamination and natural variability at -0.3769.
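
    As a quick check of the path-analysis bookkeeping above, a total effect is simply the sum of the direct effect and the indirect effect routed through the mediator:

```python
# Values taken from the abstract: natural variability's effects.
direct_nv_gp, indirect_nv_gp = -0.3251, 0.4509    # on growth potential
direct_nv_bio, indirect_nv_bio = 0.5347, -0.1105  # on biodiversity
print(direct_nv_gp + indirect_nv_gp)    # 0.1258, the reported total effect
print(direct_nv_bio + indirect_nv_bio)  # 0.4242, the reported total effect
```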

  6. Modeling of an Adjustable Beam Solid State Light Project

    NASA Technical Reports Server (NTRS)

    Clark, Toni

    2015-01-01

    This proposal is for the development of a computational model of a prototype variable beam light source using optical modeling software, Zemax Optics Studio. The variable beam light source would be designed to generate flood, spot, and directional beam patterns, while maintaining the same average power usage. The optical model would demonstrate the possibility of such a light source and its ability to address several issues: commonality of design, human task variability, and light source design process improvements. An adaptive lighting solution that utilizes the same electronics footprint and power constraints while addressing variability of lighting needed for the range of exploration tasks can save costs and allow for the development of common avionics for lighting controls.

  7. A Poisson regression approach to model monthly hail occurrence in Northern Switzerland using large-scale environmental variables

    NASA Astrophysics Data System (ADS)

    Madonna, Erica; Ginsbourger, David; Martius, Olivia

    2018-05-01

    In Switzerland, hail regularly causes substantial damage to agriculture, cars and infrastructure; however, little is known about its long-term variability. To study the variability, the monthly number of days with hail in northern Switzerland is modeled in a regression framework using large-scale predictors derived from ERA-Interim reanalysis. The model is developed and verified using radar-based hail observations for the extended summer season (April-September) in the period 2002-2014. The seasonality of hail is explicitly modeled with a categorical predictor (month) and monthly anomalies of several large-scale predictors are used to capture the year-to-year variability. Several regression models are applied and their performance tested with respect to standard scores and cross-validation. The chosen model includes four predictors: the monthly anomaly of the two meter temperature, the monthly anomaly of the logarithm of the convective available potential energy (CAPE), the monthly anomaly of the wind shear and the month. This model captures the intra-annual variability well but slightly underestimates the inter-annual variability. The regression model is applied to the reanalysis data back in time to 1980. The resulting hail day time series shows an increase in the number of hail days per month, which is (in the model) related to an increase in temperature and CAPE. The trend corresponds to approximately 0.5 days per month per decade. The results of the regression model have been compared to two independent data sets. All data sets agree on the sign of the trend, but the trend is weaker in the other data sets.
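
    A minimal sketch of the regression setup described above, assuming statsmodels and hypothetical column names; the month enters as a categorical factor and the anomalies as continuous predictors, with synthetic data standing in for the radar-based counts.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 6 * 13                                   # April-September over 13 seasons
df = pd.DataFrame({
    "month": np.tile(np.arange(4, 10), 13),  # categorical seasonality term
    "t2m_anom": rng.normal(size=n),          # hypothetical monthly anomalies
    "logcape_anom": rng.normal(size=n),
    "shear_anom": rng.normal(size=n),
})
df["hail_days"] = rng.poisson(lam=np.exp(0.5 + 0.3 * df["t2m_anom"]))

model = smf.glm("hail_days ~ C(month) + t2m_anom + logcape_anom + shear_anom",
                data=df, family=sm.families.Poisson()).fit()
print(model.params.filter(like="anom"))      # anomaly coefficients (log rate ratios)
```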

  8. Cross-country transferability of multi-variable damage models

    NASA Astrophysics Data System (ADS)

    Wagenaar, Dennis; Lüdtke, Stefan; Kreibich, Heidi; Bouwer, Laurens

    2017-04-01

    Flood damage assessment is often done with simple damage curves based only on flood water depth. Additionally, damage models are often transferred in space and time, e.g. from region to region or from one flood event to another. Validation has shown that depth-damage curve estimates are associated with high uncertainties, particularly when applied in regions outside the area where the data for curve development were collected. Recently, progress has been made with multi-variable damage models created with data-mining techniques, i.e. Bayesian networks and random forests. However, it is still unknown to what extent and under which conditions model transfers are possible and reliable. Model validations in different countries will provide valuable insights into the transferability of multi-variable damage models. In this study we compare multi-variable models developed on the basis of flood damage datasets from Germany as well as from the Netherlands. Data from several German floods were collected using computer-aided telephone interviews. Data from the 1993 Meuse flood in the Netherlands are available, based on compensations paid by the government. The Bayesian network and random forest based models are applied and validated in both countries on the basis of the individual datasets. A major challenge was the harmonization of the variables between the two datasets due to factors such as differences in variable definitions and regional and temporal differences in flood hazard and exposure characteristics. Results of model validations and comparisons in both countries are discussed, particularly with respect to the challenges encountered and possible solutions for improving model transferability.
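    A minimal sketch of a multi-variable damage model of the kind compared above: a random forest trained on several hazard and building variables rather than on water depth alone. All data and feature names below are synthetic, illustrative assumptions.

    ```python
    # Multi-variable flood damage model sketch (random forest regressor).
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n = 500
    X = np.column_stack([
        rng.uniform(0, 3, n),      # water depth [m]
        rng.uniform(0, 72, n),     # inundation duration [h]
        rng.uniform(50, 300, n),   # building footprint [m^2]
        rng.integers(0, 2, n),     # precautionary measures taken (0/1)
    ])
    # Synthetic "true" damage ratio in [0, 1]:
    y = np.clip(0.2 * X[:, 0] + 0.002 * X[:, 1] - 0.1 * X[:, 3]
                + rng.normal(0, 0.05, n), 0, 1)

    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    print(cross_val_score(rf, X, y, cv=5, scoring="neg_mean_absolute_error").mean())
    ```

    Cross-validating on one country's dataset and then scoring on the other's (after harmonizing variables) is the kind of transfer test the study describes.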

  9. Modeling variably saturated multispecies reactive groundwater solute transport with MODFLOW-UZF and RT3D

    USGS Publications Warehouse

    Bailey, Ryan T.; Morway, Eric D.; Niswonger, Richard G.; Gates, Timothy K.

    2013-01-01

    A numerical model was developed that is capable of simulating multispecies reactive solute transport in variably saturated porous media. This model consists of a modified version of the reactive transport model RT3D (Reactive Transport in 3 Dimensions) that is linked to the Unsaturated-Zone Flow (UZF1) package and MODFLOW. Referred to as UZF-RT3D, the model is tested against published analytical benchmarks as well as other published contaminant transport models, including HYDRUS-1D, VS2DT, and SUTRA, and the coupled flow and transport modeling system of CATHY and TRAN3D. Comparisons in one-dimensional, two-dimensional, and three-dimensional variably saturated systems are explored. While several test cases are included to verify the correct implementation of variably saturated transport in UZF-RT3D, other cases are included to demonstrate the usefulness of the code in terms of model run-time and handling the reaction kinetics of multiple interacting species in variably saturated subsurface systems. As UZF1 relies on a kinematic-wave approximation for unsaturated flow that neglects the diffusive terms in Richards equation, UZF-RT3D can be used for large-scale aquifer systems for which the UZF1 formulation is reasonable, that is, capillary-pressure gradients can be neglected and soil parameters can be treated as homogeneous. Decreased model run-time and the ability to include site-specific chemical species and chemical reactions make UZF-RT3D an attractive model for efficient simulation of multispecies reactive transport in variably saturated large-scale subsurface systems.

  10. Improved spectral comparisons of paleoclimate models and observations via proxy system modeling: Implications for multi-decadal variability

    NASA Astrophysics Data System (ADS)

    Dee, S. G.; Parsons, L. A.; Loope, G. R.; Overpeck, J. T.; Ault, T. R.; Emile-Geay, J.

    2017-10-01

    The spectral characteristics of paleoclimate observations spanning the last millennium suggest the presence of significant low-frequency (multi-decadal to centennial scale) variability in the climate system. Since this low-frequency climate variability is critical for climate predictions on societally-relevant scales, it is essential to establish whether General Circulation Models (GCMs) are able to simulate it faithfully. Recent studies find large discrepancies between models and paleoclimate data at low frequencies, prompting concerns surrounding the ability of GCMs to predict long-term, high-magnitude variability under greenhouse forcing (Laepple and Huybers, 2014a, 2014b). However, efforts to ground climate model simulations directly in paleoclimate observations are impeded by fundamental differences between models and the proxy data: proxy systems often record a multivariate and/or nonlinear response to climate, precluding a direct comparison to GCM output. In this paper we bridge this gap via a forward proxy modeling approach, coupled to an isotope-enabled GCM. This allows us to disentangle the various contributions to signals embedded in ice cores, speleothem calcite, coral aragonite, tree-ring width, and tree-ring cellulose. The paper addresses the following questions: (1) do forward-modeled "pseudoproxies" exhibit variability comparable to proxy data? (2) if not, which processes alter the shape of the spectrum of simulated climate variability, and are these processes broadly distinguishable from climate? We apply our method to representative case studies, and broaden these insights with an analysis of the PAGES2k database (PAGES2K Consortium, 2013). We find that current proxy system models (PSMs) can help resolve model-data discrepancies on interannual to decadal timescales, but cannot account for the mismatch in variance on multi-decadal to centennial timescales. We conclude that, specific to this set of PSMs and isotope-enabled model, the paleoclimate record may exhibit larger low-frequency variability than GCMs currently simulate, indicative of incomplete physics and/or forcings.
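    The core of such a spectral comparison can be sketched as follows: estimate the power spectra of an observed proxy record and a forward-modeled pseudoproxy and inspect their ratio by frequency band. Both series below are synthetic AR(1) stand-ins, not PAGES2k data; the persistence parameters are chosen only to mimic a proxy with excess low-frequency variance.

    ```python
    # Spectral comparison sketch: proxy vs. pseudoproxy power spectra.
    import numpy as np
    from scipy.signal import welch

    rng = np.random.default_rng(2)
    n_years = 1000

    def ar1(phi, n, rng):
        """AR(1) series; larger phi -> more low-frequency power."""
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + rng.normal()
        return x

    proxy = ar1(0.9, n_years, rng)        # stand-in for an observed record
    pseudoproxy = ar1(0.6, n_years, rng)  # stand-in for the forward-modeled series

    f, p_proxy = welch(proxy, fs=1.0, nperseg=256)   # frequency in cycles/year
    _, p_pseudo = welch(pseudoproxy, fs=1.0, nperseg=256)
    ratio = p_proxy[1:] / p_pseudo[1:]    # > 1 where the proxy has excess variance
    print(ratio[:5])
    ```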

  11. Flexible Strategies for Coping with Rainfall Variability: Seasonal Adjustments in Cropped Area in the Ganges Basin

    PubMed Central

    Siderius, Christian; Biemans, Hester; van Walsum, Paul E. V.; van Ierland, Ekko C.; Kabat, Pavel; Hellegers, Petra J. G. J.

    2016-01-01

    One of the main manifestations of climate change will be increased rainfall variability. How to deal with this in agriculture will be a major societal challenge. In this paper we explore flexibility in land use, through deliberate seasonal adjustments in cropped area, as a specific strategy for coping with rainfall variability. Such adjustments are not incorporated in hydro-meteorological crop models commonly used for food security analyses. Our paper contributes to the literature by making a comprehensive model assessment of inter-annual variability in crop production, including both variations in crop yield and cropped area. The Ganges basin is used as a case study. First, we assessed the contribution of cropped area variability to overall variability in rice and wheat production by applying hierarchical partitioning on time-series of agricultural statistics. We then introduced cropped area as an endogenous decision variable in a hydro-economic optimization model (WaterWise), coupled to a hydrology-vegetation model (LPJmL), and analyzed to what extent its performance in the estimation of inter-annual variability in crop production improved. From the statistics, we found that in the period 1999–2009 seasonal adjustment in cropped area can explain almost 50% of variability in wheat production and 40% of variability in rice production in the Indian part of the Ganges basin. Our improved model was well capable of mimicking existing variability at different spatial aggregation levels, especially for wheat. The value of flexibility, i.e. the foregone costs of choosing not to crop in years when water is scarce, was quantified at 4% of gross margin of wheat in the Indian part of the Ganges basin and as high as 34% of gross margin of wheat in the drought-prone state of Rajasthan. We argue that flexibility in land use is an important coping strategy to rainfall variability in water stressed regions. PMID:26934389
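    The attribution question at the heart of this study, how much production variability comes from cropped area versus yield given production = area x yield, can be sketched with a simple log-variance decomposition. This is an illustrative substitute for the paper's hierarchical partitioning, applied to synthetic series.

    ```python
    # Variance attribution sketch: production = area * yield, so
    # var(log production) = var(log area) + var(log yield) + 2*cov.
    import numpy as np

    rng = np.random.default_rng(3)
    years = 11
    area = 100 * np.exp(rng.normal(0, 0.10, years))  # cropped area, ~10% swings
    yld = 3.0 * np.exp(rng.normal(0, 0.08, years))   # yield, ~8% swings
    prod = area * yld

    v_area, v_yld = np.var(np.log(area)), np.var(np.log(yld))
    cov = np.cov(np.log(area), np.log(yld), bias=True)[0, 1]
    v_prod = np.var(np.log(prod))  # equals v_area + v_yld + 2*cov
    # Split the covariance term evenly between the two contributors:
    print(f"area share: {(v_area + cov) / v_prod:.0%}, "
          f"yield share: {(v_yld + cov) / v_prod:.0%}")
    ```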

  12. Multilayer Joint Gait-Pose Manifolds for Human Gait Motion Modeling.

    PubMed

    Ding, Meng; Fan, Guolian

    2015-11-01

    We present new multilayer joint gait-pose manifolds (multilayer JGPMs) for complex human gait motion modeling, where three latent variables are defined jointly in a low-dimensional manifold to represent a variety of body configurations. Specifically, the pose variable (along the pose manifold) denotes a specific stage in a walking cycle; the gait variable (along the gait manifold) represents different walking styles; and the linear scale variable characterizes the maximum stride in a walking cycle. We discuss two kinds of topological priors for coupling the pose and gait manifolds, i.e., cylindrical and toroidal, to examine their effectiveness and suitability for motion modeling. We resort to a topologically-constrained Gaussian process (GP) latent variable model to learn the multilayer JGPMs where two new techniques are introduced to facilitate model learning under limited training data. First is training data diversification that creates a set of simulated motion data with different strides. Second is the topology-aware local learning to speed up model learning by taking advantage of the local topological structure. The experimental results on the Carnegie Mellon University motion capture data demonstrate the advantages of our proposed multilayer models over several existing GP-based motion models in terms of the overall performance of human gait motion modeling.

  13. Prediction of the birch pollen season characteristics in Cracow, Poland using an 18-year data series.

    PubMed

    Myszkowska, Dorota

    2013-03-01

    The aim of the study was to construct a model forecasting the characteristics of the birch pollen season in Cracow on the basis of an 18-year data series. The study was performed using the volumetric method (Lanzoni/Burkard trap). The 98/95 % method was used to calculate the pollen season. Spearman's correlation test was applied to find the relationship between the meteorological parameters and pollen season characteristics. To construct the predictive model, backward stepwise multiple regression analysis was used, accounting for the multi-collinearity of variables. The predictive models best fitted the pollen season start and end, especially models containing two independent variables. The peak concentration value was predicted with a higher prediction error. The accuracy of the models predicting the pollen season characteristics was also higher in 2009 than in 2010. Both the multi-variable and the one-variable model for the beginning of the pollen season included air temperature during the last 10 days of February, while the multi-variable model also included humidity at the beginning of April. The models forecasting the end of the pollen season were based on temperature in March-April, while the peak day was predicted using the temperature during the last 10 days of March.
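    A minimal sketch of the backward stepwise selection used above: fit the full model, then repeatedly drop the least significant predictor until all remaining p-values fall below a threshold. The 18-year data and predictor names below are synthetic placeholders, not the study's records.

    ```python
    # Backward stepwise multiple regression sketch (p-value elimination).
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 18  # one row per year, as in an 18-year series
    df = pd.DataFrame({
        "t_feb_late": rng.normal(3, 2, n),  # temperature, last 10 days of Feb
        "hum_apr": rng.normal(70, 8, n),    # humidity, beginning of April
        "t_mar": rng.normal(6, 2, n),       # March mean temperature
    })
    # Synthetic season-start day driven only by late-February temperature:
    df["season_start"] = 110 - 2.0 * df.t_feb_late + rng.normal(0, 3, n)

    def backward_select(y, X, alpha=0.05):
        X = sm.add_constant(X)
        while True:
            fit = sm.OLS(y, X).fit()
            pvals = fit.pvalues.drop("const")
            if pvals.max() < alpha or len(pvals) == 1:
                return fit
            X = X.drop(columns=pvals.idxmax())  # drop least significant term

    fit = backward_select(df["season_start"], df[["t_feb_late", "hum_apr", "t_mar"]])
    print(fit.params)
    ```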

  14. Effects and detection of raw material variability on the performance of near-infrared calibration models for pharmaceutical products.

    PubMed

    Igne, Benoit; Shi, Zhenqi; Drennen, James K; Anderson, Carl A

    2014-02-01

    The impact of raw material variability on the prediction ability of a near-infrared calibration model was studied. Calibrations, developed from a quaternary mixture design comprising theophylline anhydrous, lactose monohydrate, microcrystalline cellulose, and soluble starch, were challenged by intentional variation of raw material properties. A design with two theophylline physical forms, three lactose particle sizes, and two starch manufacturers was created to test model robustness. Further challenges to the models were accomplished through environmental conditions. Along with full-spectrum partial least squares (PLS) modeling, variable selection by dynamic backward PLS and genetic algorithms was utilized in an effort to mitigate the effects of raw material variability. In addition to evaluating models based on their prediction statistics, prediction residuals were analyzed by analyses of variance and model diagnostics (Hotelling's T² and Q residuals). Full-spectrum models were significantly affected by lactose particle size. Models developed by selecting variables gave lower prediction errors and proved to be a good approach to limit the effect of changing raw material characteristics. Hotelling's T² and Q residuals provided valuable information that was not detectable when studying only prediction trends. Diagnostic statistics were demonstrated to be critical in the appropriate interpretation of the prediction of quality parameters. © 2013 Wiley Periodicals, Inc. and the American Pharmacists Association.
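    The diagnostics named above can be computed from any fitted PLS model: Hotelling's T² measures distance within the model plane (samples with unusual but model-consistent score combinations), while the Q residual measures distance from the model plane (samples the model cannot reconstruct). The sketch below uses one standard formulation of T² and Q on synthetic "spectra"; it is not necessarily the authors' exact implementation.

    ```python
    # PLS calibration with Hotelling's T^2 and Q residual (SPE) diagnostics.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(5)
    n, p = 60, 200                    # samples x wavelengths
    basis = rng.normal(size=(3, p))   # three latent spectral components
    scores = rng.normal(size=(n, 3))
    X = scores @ basis + rng.normal(0, 0.05, (n, p))
    y = scores @ np.array([1.0, 0.5, -0.3]) + rng.normal(0, 0.1, n)

    Xs = StandardScaler().fit_transform(X)
    pls = PLSRegression(n_components=3, scale=False).fit(Xs, y)

    T = pls.transform(Xs)                                 # X-scores
    E = (Xs - Xs.mean(axis=0)) - T @ pls.x_loadings_.T    # off-plane residual
    Q = (E ** 2).sum(axis=1)                              # Q residuals (SPE)
    T2 = ((T / T.std(axis=0, ddof=1)) ** 2).sum(axis=1)   # Hotelling's T^2
    print(Q.max(), T2.max())
    ```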

  15. Soft sensor modeling based on variable partition ensemble method for nonlinear batch processes

    NASA Astrophysics Data System (ADS)

    Wang, Li; Chen, Xiangguang; Yang, Kai; Jin, Huaiping

    2017-01-01

    Batch processes are typically characterized by nonlinearity and system uncertainty, so a conventional single model may be ill-suited. A soft sensor based on a local learning strategy with a variable partition ensemble method is developed for quality prediction in nonlinear and non-Gaussian batch processes. A set of input variable sets is obtained by bootstrapping and the PMI criterion. Multiple local GPR models are then developed, one for each local input variable set. When a new test sample arrives, the posterior probability of each best-performing local model is estimated based on Bayesian inference and used to combine the local GPR models into the final prediction. The proposed soft sensor is demonstrated by application to an industrial fed-batch chlortetracycline fermentation process.
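    A sketch of the ensemble structure described above: several local GPR models, each trained on its own input-variable subset, combined per test point. For the combination weights the sketch uses normalized predictive precision (1/variance), a simple stand-in for the paper's Bayesian posterior estimation; all data and variable subsets below are synthetic.

    ```python
    # Variable-partition GPR ensemble sketch with precision-weighted combination.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(6)
    n, p = 200, 6
    X = rng.normal(size=(n, p))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.2 * X[:, 2] + rng.normal(0, 0.1, n)

    variable_sets = [[0, 1, 2], [0, 2, 4], [1, 2, 5]]  # e.g. from bootstrapping
    models = []
    for vs in variable_sets:
        gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
        models.append(gpr.fit(X[:, vs], y))

    x_new = rng.normal(size=(1, p))
    mu, var = np.empty(len(models)), np.empty(len(models))
    for i, (vs, m) in enumerate(zip(variable_sets, models)):
        m_pred, sd = m.predict(x_new[:, vs], return_std=True)
        mu[i], var[i] = m_pred[0], sd[0] ** 2

    w = (1 / var) / (1 / var).sum()   # precision weights, sum to 1
    print("combined prediction:", float(w @ mu))
    ```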

  16. A numerical solution for a variable-order reaction-diffusion model by using fractional derivatives with non-local and non-singular kernel

    NASA Astrophysics Data System (ADS)

    Coronel-Escamilla, A.; Gómez-Aguilar, J. F.; Torres, L.; Escobar-Jiménez, R. F.

    2018-02-01

    A reaction-diffusion system can be represented by the Gray-Scott model. The reaction-diffusion dynamics are described by a pair of time- and space-dependent Partial Differential Equations (PDEs). In this paper, a generalization of the Gray-Scott model using variable-order fractional differential equations is proposed. The variable orders were set as smooth functions bounded in (0, 1] and, specifically, the Liouville-Caputo and the Atangana-Baleanu-Caputo fractional derivatives were used to express the time differentiation. In order to find a numerical solution of the proposed model, the finite difference method together with the Adams method was applied. The simulation results showed the chaotic behavior of the proposed model when different variable orders are applied.
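    For reference, the classical (integer-order) Gray-Scott system has the form below; in the generalization described above, the time derivatives are replaced by variable-order Liouville-Caputo or Atangana-Baleanu-Caputo operators of order α(t) bounded in (0, 1].

    ```latex
    % Classical Gray-Scott reaction-diffusion system for species u and v,
    % with diffusivities D_u, D_v, feed rate F and removal rate k:
    \begin{aligned}
    \frac{\partial u}{\partial t} &= D_u \nabla^2 u - u v^2 + F(1 - u),\\
    \frac{\partial v}{\partial t} &= D_v \nabla^2 v + u v^2 - (F + k)\, v.
    \end{aligned}
    % In the variable-order model, \partial/\partial t is replaced by a
    % fractional operator D_t^{\alpha(t)} with \alpha(t) \in (0, 1].
    ```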

  17. Factors Determining Success in Youth Judokas

    PubMed Central

    Krstulović, Saša; Caput, Petra Đapić

    2017-01-01

    The aim of this study was to compare two models of determining factors for success in judo. The first model (Model A) involved testing the motor abilities of high-level Croatian judokas in the cadet age category. The sample in Model A consisted of 71 male and female judokas aged 16 ± 0.6 years who were divided into four subsamples according to sex and weight category. The second model (Model B) consisted of interviewing 40 top-level judo experts on the importance of motor abilities for cadets' success in judo. According to Model A, the variables with the greatest impact on the criterion variable of success in males and females of the heavier weight categories were those assessing maximum strength, coordination and jumping ability. In the lighter-weight male categories, the variable assessing agility had the highest correlation with the criterion variable of success, whereas in the lighter-weight female categories the variable assessing muscular endurance had the greatest impact on success. In Model B, specific endurance was crucial for success in judo, while flexibility was the least important, regardless of sex and weight category. Spearman's rank correlation coefficients showed that there were no significant correlations between the results obtained in Models A and B for any of the observed subsamples. Although no significant correlations between the factors for success obtained through Models A and B were found, common determinants of success, regardless of the applied model, were identified. PMID:28469759

  18. Uncertainty and Variability in Physiologically-Based Pharmacokinetic (PBPK) Models: Key Issues and Case Studies (Final Report)

    EPA Science Inventory

    EPA announced the availability of the final report, Uncertainty and Variability in Physiologically-Based Pharmacokinetic (PBPK) Models: Key Issues and Case Studies. This report summarizes some of the recent progress in characterizing uncertainty and variability in physi...

  19. Collaborative Research: Process-resolving Decomposition of the Global Temperature Response to Modes of Low Frequency Variability in a Changing Climate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Ming; Deng, Yi

    2015-02-06

    El Niño-Southern Oscillation (ENSO) and Annular Modes (AMs) represent respectively the most important modes of low frequency variability in the tropical and extratropical circulations. The future projection of the ENSO and AM variability, however, remains highly uncertain with the state-of-the-art coupled general circulation models. A comprehensive understanding of the factors responsible for the inter-model discrepancies in projecting future changes in the ENSO and AM variability, in terms of multiple feedback processes involved, has yet to be achieved. The proposed research aims to identify sources of such uncertainty and establish a set of process-resolving quantitative evaluations of the existing predictions of the future ENSO and AM variability. The proposed process-resolving evaluations are based on a feedback analysis method formulated in Lu and Cai (2009), which is capable of partitioning 3D temperature anomalies/perturbations into components linked to 1) radiation-related thermodynamic processes such as cloud and water vapor feedbacks, 2) local dynamical processes including convection and turbulent/diffusive energy transfer and 3) non-local dynamical processes such as the horizontal energy transport in the oceans and atmosphere. Taking advantage of the high-resolution, multi-model ensemble products from the Coupled Model Intercomparison Project Phase 5 (CMIP5) soon to be available at the Lawrence Livermore National Lab, we will conduct a process-resolving decomposition of the global three-dimensional (3D) temperature (including SST) response to the ENSO and AM variability in the preindustrial, historical and future climate simulated by these models. Specific research tasks include 1) identifying the model-observation discrepancies in the global temperature response to ENSO and AM variability and attributing such discrepancies to specific feedback processes, 2) delineating the influence of anthropogenic radiative forcing on the key feedback processes operating on ENSO and AM variability and quantifying their relative contributions to the changes in the temperature anomalies associated with different phases of ENSO and AMs, and 3) investigating the linkages between model feedback processes that lead to inter-model differences in time-mean temperature projection and model feedback processes that cause inter-model differences in the simulated ENSO and AM temperature response. Through a thorough model-observation and inter-model comparison of the multiple energetic processes associated with ENSO and AM variability, the proposed research serves to identify key uncertainties in model representation of ENSO and AM variability, and investigate how the model uncertainty in predicting time-mean response is related to the uncertainty in predicting response of the low-frequency modes. The proposal is thus a direct response to the first topical area of the solicitation: Interaction of Climate Change and Low Frequency Modes of Natural Climate Variability. It ultimately supports the accomplishment of the BER climate science activity Long Term Measure (LTM): "Deliver improved scientific data and models about the potential response of the Earth's climate and terrestrial biosphere to increased greenhouse gas levels for policy makers to determine safe levels of greenhouse gases in the atmosphere."

  20. A Causal Model of Faculty Research Productivity.

    ERIC Educational Resources Information Center

    Bean, John P.

    A causal model of faculty research productivity was developed through a survey of the literature. Models of organizational behavior, organizational effectiveness, and motivation were synthesized into a causal model of productivity. Two general types of variables were assumed to affect individual research productivity: institutional variables and…

  1. Causal relationship model between variables using linear regression to improve professional commitment of lecturer

    NASA Astrophysics Data System (ADS)

    Setyaningsih, S.

    2017-01-01

    The main element in building a leading university is lecturer commitment in a professional manner. Commitment is measured through willpower, loyalty, pride, and integrity as a professional lecturer. A total of 135 of 337 university lecturers were sampled to collect data. Data were analyzed using validity and reliability tests and multiple linear regression. Many studies have found links to lecturer commitment, but the underlying cause of the causal relationship is generally neglected. The results indicate that the professional commitment of lecturers is affected by the variables empowerment, academic culture, and trust. The relationship model between the variables is composed of three substructures. The first substructure consists of the endogenous variable professional commitment and three exogenous variables, namely academic culture, empowerment and trust, as well as the residual variable ε_y. The second substructure consists of one endogenous variable, trust, and two exogenous variables, empowerment and academic culture, with the residual variable ε_3. The third substructure consists of one endogenous variable, academic culture, and one exogenous variable, empowerment, with the residual variable ε_2. Multiple linear regression was used in the path model for each substructure. The results showed that the hypotheses were supported, and these findings provide empirical evidence that increasing these variables will have an impact on increasing the professional commitment of lecturers.

  2. Developing a Model for Forecasting Road Traffic Accident (RTA) Fatalities in Yemen

    NASA Astrophysics Data System (ADS)

    Karim, Fareed M. A.; Abdo Saleh, Ali; Taijoobux, Aref; Ševrović, Marko

    2017-12-01

    The aim of this paper is to develop a model for forecasting RTA fatalities in Yemen. Yearly fatalities were modeled as the dependent variable, while the candidate independent variables included population, number of vehicles, GNP, GDP and real GDP per capita. All of these variables were found to be highly correlated with fatalities (correlation coefficient r ≈ 0.9); in order to avoid multicollinearity in the model, the single variable with the highest r value was selected (real GDP per capita). A simple regression model was developed; the fit was very good (R² = 0.916); however, the residuals were serially correlated. The Prais-Winsten procedure was used to overcome this violation of the regression assumptions. Data for the 20-year period 1991-2010 were analyzed to build the model, and the model was validated using data for the years 2011-2013; the historical fit for the period 1991-2011 was very good, and the validation for 2011-2013 also proved accurate.
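    The two-step procedure described above can be sketched with statsmodels: an OLS fit whose Durbin-Watson statistic reveals serial correlation, followed by an AR(1) correction. GLSAR's iterative fit is a Cochrane-Orcutt-type estimator, a close relative of the Prais-Winsten procedure used in the paper; all data below are synthetic.

    ```python
    # OLS with AR(1)-correlated errors, then an iterative AR(1)-corrected fit.
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.stattools import durbin_watson

    rng = np.random.default_rng(7)
    years = np.arange(1991, 2011)
    gdp_pc = np.linspace(400, 900, years.size) + rng.normal(0, 20, years.size)

    # AR(1) errors, mimicking the serially correlated residuals reported:
    e = np.zeros(years.size)
    for t in range(1, years.size):
        e[t] = 0.6 * e[t - 1] + rng.normal(0, 30)
    fatalities = 500 + 2.5 * gdp_pc + e

    X = sm.add_constant(gdp_pc)
    ols = sm.OLS(fatalities, X).fit()
    print("Durbin-Watson:", durbin_watson(ols.resid))  # far from 2 => AR errors

    ar1 = sm.GLSAR(fatalities, X, rho=1).iterative_fit(maxiter=10)
    print(ar1.params, "rho =", ar1.model.rho)
    ```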

  3. Effects of Ensemble Configuration on Estimates of Regional Climate Uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldenson, N.; Mauger, G.; Leung, L. R.

    Internal variability in the climate system can contribute substantial uncertainty in climate projections, particularly at regional scales. Internal variability can be quantified using large ensembles of simulations that are identical but for perturbed initial conditions. Here we compare methods for quantifying internal variability. Our study region spans the west coast of North America, which is strongly influenced by El Niño and other large-scale dynamics through their contribution to large-scale internal variability. Using a statistical framework to simultaneously account for multiple sources of uncertainty, we find that internal variability can be quantified consistently using a large ensemble or an ensemble of opportunity that includes small ensembles from multiple models and climate scenarios. The latter also produce estimates of uncertainty due to model differences. We conclude that projection uncertainties are best assessed using small single-model ensembles from as many model-scenario pairings as computationally feasible, which has implications for ensemble design in large modeling efforts.

  4. Solar array model corrections from Mars Pathfinder lander data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ewell, R.C.; Burger, D.R.

    1997-12-31

    The MESUR solar array power model initially assumed values for input variables. After landing, early surface variables such as array tilt and azimuth or early environmental variables such as array temperature can be corrected. Correction of later environmental variables such as tau versus time, spectral shift, dust deposition, and UV darkening is dependent upon time, on-board science instruments, and the ability to separate the effects of variables. Engineering estimates had to be made for additional shadow losses and Voc sensor temperature corrections. Some variations had not been expected, such as tau versus time of day, and spectral shift versus time of day. Additions needed to the model are the thermal mass of the lander petal and a correction between the Voc sensor and temperature sensor. Conclusions are: the model works well; good battery predictions are difficult; inclusion of Isc and Voc sensors was valuable; and the IMP and MAE science experiments greatly assisted the data analysis and model correction.

  5. Integrating models that depend on variable data

    NASA Astrophysics Data System (ADS)

    Banks, A. T.; Hill, M. C.

    2016-12-01

    Models of human-Earth systems are often developed with the goal of predicting the behavior of one or more dependent variables from multiple independent variables, processes, and parameters. Often dependent variable values range over many orders of magnitude, which complicates evaluation of the fit of the dependent variable values to observations. Many metrics and optimization methods have been proposed to address dependent variable variability, with little consensus being achieved. In this work, we evaluate two such methods: log transformation (based on the dependent variable being log-normally distributed with a constant variance) and error-based weighting (based on a multi-normal distribution with variances that tend to increase as the dependent variable value increases). Error-based weighting has the advantage of encouraging model users to carefully consider data errors, such as measurement and epistemic errors, while log transformations can be a black box for typical users. Placing the log transformation into the statistical perspective of error-based weighting has not formerly been considered, to the best of our knowledge. To make the evaluation as clear and reproducible as possible, we use multiple linear regression (MLR). Simulations are conducted with MATLAB. The example represents stream transport of nitrogen with up to eight independent variables. The single dependent variable in our example has values that range over 4 orders of magnitude. Results are applicable to any problem for which individual or multiple data types produce a large range of dependent variable values. For this problem, the log transformation produced good model fit, while some formulations of error-based weighting worked poorly. Results support previous suggestions that error-based weighting derived from a constant coefficient of variation overemphasizes low values and degrades model fit to high values. Applying larger weights to the high values is inconsistent with the log transformation. Greater consistency is obtained by imposing smaller (by up to a factor of 1/35) weights on the smaller dependent-variable values. From an error-based perspective, the small weights are consistent with large standard deviations. This work considers the consequences of these two common ways of addressing variable data.
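    The contrast between the two approaches can be sketched on synthetic data with a constant coefficient of variation and a dependent variable spanning four orders of magnitude. This is a toy single-regressor stand-in for the study's MLR setup, written in Python rather than MATLAB.

    ```python
    # Unweighted OLS vs. constant-CV weighted least squares vs. log-space OLS.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(8)
    x = 10 ** rng.uniform(0, 4, 300)               # spans 4 orders of magnitude
    y = 2.0 * x * np.exp(rng.normal(0, 0.4, 300))  # constant coefficient of variation

    ols = sm.OLS(y, x).fit()                       # unweighted: high values dominate
    wls = sm.WLS(y, x, weights=1.0 / y**2).fit()   # constant-CV error-based weights
    logfit = sm.OLS(np.log(y), sm.add_constant(np.log(x))).fit()

    print("OLS slope:", ols.params[0])
    print("WLS slope:", wls.params[0])
    print("log-fit exponent:", logfit.params[1])   # ~1 for y proportional to x
    ```

    The constant-CV weights give every observation the same relative influence, which is what makes this weighting behave similarly to the log transformation; the paper's point is that other weight choices can break this correspondence.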

  6. Variable pixel size ionospheric tomography

    NASA Astrophysics Data System (ADS)

    Zheng, Dunyong; Zheng, Hongwei; Wang, Yanjun; Nie, Wenfeng; Li, Chaokui; Ao, Minsi; Hu, Wusheng; Zhou, Wei

    2017-06-01

    A novel ionospheric tomography technique based on variable pixel size was developed for the tomographic reconstruction of the ionospheric electron density (IED) distribution. In the variable pixel size computerized ionospheric tomography (VPSCIT) model, the IED distribution is parameterized by decomposing the lower and upper ionosphere with different pixel sizes; the lower and upper IED distributions may therefore be determined quite differently by the available data. In most other respects, variable pixel size and constant pixel size tomography are similar. Two differences remain between the two kinds of models: first, the segments of the GPS signal path must be assigned to the different kinds of pixels in the inversion; second, the smoothness constraint factor needs to be modified appropriately where pixels change in size. For a real dataset, the variable pixel size method distinguishes different electron density distribution zones better than the constant pixel size method, particularly when effort is spent identifying the regions of the model with the best data coverage. The variable pixel size method can not only greatly improve the efficiency of the inversion, but also produce IED images with fidelity comparable to those of a uniform pixel size method. In addition, variable pixel size tomography can reduce the underdetermination of the ill-posed inverse problem when data coverage is irregular or sparse, by adjusting the proportions of pixels with different sizes. In comparison with constant pixel size models, the variable pixel size technique achieved relatively good results in a numerical simulation, and a careful validation of its reliability and advantages was performed. Finally, according to the statistical analysis and quantitative comparison, the proposed method offers an improvement of 8% in the forward modeling compared with conventional constant pixel size tomography models.

  7. LES/PDF studies of joint statistics of mixture fraction and progress variable in piloted methane jet flames with inhomogeneous inlet flows

    NASA Astrophysics Data System (ADS)

    Zhang, Pei; Barlow, Robert; Masri, Assaad; Wang, Haifeng

    2016-11-01

    The mixture fraction and progress variable are often used as independent variables for describing turbulent premixed and non-premixed flames. There is a growing interest in using these two variables for describing partially premixed flames. The joint statistical distribution of the mixture fraction and progress variable is of great interest in developing models for partially premixed flames. In this work, we conduct predictive studies of the joint statistics of mixture fraction and progress variable in a series of piloted methane jet flames with inhomogeneous inlet flows. The employed models combine large eddy simulations with the Monte Carlo probability density function (PDF) method. The joint PDFs and marginal PDFs are examined in detail by comparing the model predictions and the measurements. Different presumed shapes of the joint PDFs are also evaluated.

  8. The influence of global sea surface temperature variability on the large-scale land surface temperature

    NASA Astrophysics Data System (ADS)

    Tyrrell, Nicholas L.; Dommenget, Dietmar; Frauen, Claudia; Wales, Scott; Rezny, Mike

    2015-04-01

    In global warming scenarios, global land surface temperatures warm with greater amplitude than sea surface temperatures (SSTs), leading to a land/sea warming contrast even in equilibrium. Similarly, the interannual variability of land surface temperature is larger than the covariant interannual SST variability, leading to a land/sea contrast in natural variability. This work investigates the land/sea contrast in natural variability based on global observations, coupled general circulation model simulations and idealised atmospheric general circulation model simulations with different SST forcings. The land/sea temperature contrast in interannual variability is found to exist in observations and models to a varying extent in global, tropical and extra-tropical bands. There is agreement between models and observations in the tropics but not the extra-tropics. Causality in the land-sea relationship is explored with modelling experiments forced with prescribed SSTs, where an amplification of the imposed SST variability is seen over land. The amplification of the land surface temperature response to tropical SST anomalies is due to the enhanced upper-level atmospheric warming that corresponds with tropical moist convection over oceans, leading to upper-level temperature variations that are larger in amplitude than the source SST anomalies. This mechanism is similar to that proposed for explaining the equilibrium global warming land/sea warming contrast. The link of the land surface temperature to the dominant mode of tropical and global interannual climate variability, the El Niño Southern Oscillation (ENSO), is found to be an indirect and delayed connection. ENSO SST variability affects the oceans outside the tropical Pacific, which in turn leads to a further, amplified and delayed response of the land surface temperature.

  9. Evaluation of Stochastic Rainfall Models in Capturing Climate Variability for Future Drought and Flood Risk Assessment

    NASA Astrophysics Data System (ADS)

    Chowdhury, A. F. M. K.; Lockart, N.; Willgoose, G. R.; Kuczera, G. A.; Kiem, A.; Nadeeka, P. M.

    2016-12-01

    One of the key objectives of stochastic rainfall modelling is to capture the full variability of the climate system for future drought and flood risk assessment. However, it is not clear how well these models can capture future climate variability when they are calibrated to Global/Regional Climate Model (GCM/RCM) data, as these datasets are usually available only for short future periods (e.g. 20 years). This study has assessed the ability of two stochastic daily rainfall models to capture climate variability by calibrating them to a dynamically downscaled RCM dataset in an east Australian catchment for the 1990-2010, 2020-2040, and 2060-2080 epochs. The two stochastic models are: (1) a hierarchical Markov Chain (MC) model, which we developed in a previous study, and (2) a semi-parametric MC model developed by Mehrotra and Sharma (2007). Our hierarchical model uses stochastic parameters of the MC and a Gamma distribution, while the semi-parametric model uses a modified MC process with memory of past periods and kernel density estimation. This study has generated multiple realizations of rainfall series by using the parameters of each model calibrated to the RCM dataset for each epoch. The generated rainfall series are used to generate synthetic streamflow with the SimHyd hydrology model. Assessing the synthetic rainfall and streamflow series, this study has found that both stochastic models can incorporate a range of variability in rainfall as well as streamflow generation for both current and future periods. However, the hierarchical model tends to overestimate the multiyear variability of wet spell lengths (and is therefore less likely to simulate long periods of drought and flood), while the semi-parametric model tends to overestimate the mean annual rainfall depths and streamflow volumes (hence, simulated droughts are likely to be less severe). The implications of these limitations of both stochastic models for future drought and flood risk assessment will be discussed.
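    The hierarchical and semi-parametric models above are elaborations of a classic template: a two-state first-order Markov chain for wet/dry occurrence with gamma-distributed wet-day amounts. A minimal sketch of that template follows; the parameter values are illustrative, not calibrated to any RCM dataset.

    ```python
    # Two-state Markov chain rainfall occurrence with gamma wet-day depths.
    import numpy as np

    rng = np.random.default_rng(9)
    p_wd, p_ww = 0.25, 0.65   # P(wet | dry yesterday), P(wet | wet yesterday)
    shape, scale = 0.8, 8.0   # gamma parameters for wet-day depth [mm]

    def generate(n_days):
        rain = np.zeros(n_days)
        wet = False
        for t in range(n_days):
            wet = rng.random() < (p_ww if wet else p_wd)
            if wet:
                rain[t] = rng.gamma(shape, scale)
        return rain

    annual = [generate(365).sum() for _ in range(100)]   # 100 synthetic years
    print("mean annual rainfall:", np.mean(annual), "+/-", np.std(annual))
    ```

    Generating many such realizations and pushing them through a rainfall-runoff model is exactly the rainfall-to-streamflow workflow the study describes.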

  10. Report of the Defense Science Board Task Force on Defense Biometrics

    DTIC Science & Technology

    2007-03-01

    certificates, crypto variables, and encoded biometric indices. The Department of Defense has invested prestige and resources in its Common Access Card (CAC...in turn, could be used to unlock an otherwise secret key or crypto variable which would support the remote authentication. A new key variable...The PSA for biometrics should commission development of appropriate threat model(s) and assign responsibility for maintaining currency of the model

  11. Factor Models for Ordinal Variables With Covariate Effects on the Manifest and Latent Variables: A Comparison of LISREL and IRT Approaches

    ERIC Educational Resources Information Center

    Moustaki, Irini; Joreskog, Karl G.; Mavridis, Dimitris

    2004-01-01

    We consider a general type of model for analyzing ordinal variables with covariate effects and 2 approaches for analyzing data for such models, the item response theory (IRT) approach and the PRELIS-LISREL (PLA) approach. We compare these 2 approaches on the basis of 2 examples, 1 involving only covariate effects directly on the ordinal variables…

  12. Impacts of Considering Climate Variability on Investment Decisions in Ethiopia

    NASA Astrophysics Data System (ADS)

    Strzepek, K.; Block, P.; Rosegrant, M.; Diao, X.

    2005-12-01

    In Ethiopia, climate extremes, inducing droughts or floods, are not unusual. Monitoring the effects of these extremes, and climate variability in general, is critical for economic prediction and assessment of the country's future welfare. The focus of this study involves adding climate variability to a deterministic, mean climate-driven agro-economic model, in an attempt to understand its effects and degree of influence on general economic prediction indicators for Ethiopia. Four simulations are examined, including a baseline simulation and three investment strategies: simulations of irrigation investment, roads investment, and a combination investment of both irrigation and roads. The deterministic model is transformed into a stochastic model by dynamically adding year-to-year climate variability through climate-yield factors. Nine sets of actual, historic, variable climate data are individually assembled and implemented into the 12-year stochastic model simulation, producing an ensemble of economic prediction indicators. This ensemble allows for a probabilistic approach to planning and policy making, allowing decision makers to consider risk. The economic indicators from the deterministic and stochastic approaches, including rates of return to investments, are significantly different. The predictions of the deterministic model appreciably overestimate the future welfare of Ethiopia; the predictions of the stochastic model, utilizing actual climate data, tend to give a better semblance of what may be expected. Inclusion of climate variability is vital for proper analysis of the predictor values from this agro-economic model.

  13. Evaluation of Deep Learning Models for Predicting CO2 Flux

    NASA Astrophysics Data System (ADS)

    Halem, M.; Nguyen, P.; Frankel, D.

    2017-12-01

    Artificial neural networks have been employed to calculate surface flux measurements from station data because they are able to fit highly nonlinear relations between input and output variables without knowing the detailed relationships between the variables. However, the accuracy of neural-net estimates of CO2 flux from observations of CO2 and other atmospheric variables is influenced by the architecture of the neural model, the availability of data, and the complexity of interactions between physical variables such as wind and temperature and indirect variables such as latent and sensible heat. We evaluate two deep learning models, a feed-forward and a recurrent neural network model, to learn how each responds to the physical measurements and to the time dependency of the measurements of CO2 concentration, humidity, pressure, temperature, wind speed, etc. for predicting the CO2 flux. In this paper, we focus on a) building neural network models for estimating CO2 flux based on DOE tower Atmospheric Radiation Measurement data; b) evaluating the impact of the choice of surface variables and model hyper-parameters on the accuracy of surface flux predictions; c) assessing the applicability of the neural network models to estimating CO2 flux using OCO-2 satellite data; d) studying the efficiency of GPU acceleration for neural network performance using IBM Power AI deep learning software and packages on the IBM Minsky system.
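    A minimal sketch of the simpler of the two architectures discussed, a feed-forward network regressing CO2 flux on concurrent surface variables. The data are synthetic stand-ins; for the recurrent variant, the inputs would instead be short windows of past measurements.

    ```python
    # Feed-forward network for CO2 flux regression (synthetic stand-in data).
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(10)
    n = 2000
    X = np.column_stack([
        rng.normal(400, 10, n),   # CO2 concentration [ppm]
        rng.normal(15, 8, n),     # air temperature [C]
        rng.uniform(0, 10, n),    # wind speed [m/s]
        rng.uniform(20, 90, n),   # relative humidity [%]
    ])
    flux = (0.05 * (X[:, 0] - 400) - 0.1 * X[:, 1] + 0.2 * X[:, 2]
            + rng.normal(0, 0.5, n))

    X_tr, X_te, y_tr, y_te = train_test_split(X, flux, random_state=0)
    scaler = StandardScaler().fit(X_tr)
    net = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
    net.fit(scaler.transform(X_tr), y_tr)
    print("R^2 on held-out data:", net.score(scaler.transform(X_te), y_te))
    ```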

  14. Spectral irradiance variations: comparison between observations and the SATIRE model on solar rotation time scales

    NASA Astrophysics Data System (ADS)

    Unruh, Y. C.; Krivova, N. A.; Solanki, S. K.; Harder, J. W.; Kopp, G.

    2008-07-01

    Aims: We test the reliability of the observed and calculated spectral irradiance variations between 200 and 1600 nm over a time span of three solar rotations in 2004. Methods: We compare our model calculations to spectral irradiance observations taken with SORCE/SIM, SoHO/VIRGO, and UARS/SUSIM. The calculations assume LTE and are based on the SATIRE (Spectral And Total Irradiance REconstruction) model. We analyse the variability as a function of wavelength and present time series in a number of selected wavelength regions covering the UV to the NIR. We also show the facular and spot contributions to the total calculated variability. Results: In most wavelength regions, the variability agrees well between all sets of observations and the model calculations. The model does particularly well between 400 and 1300 nm, but fails below 220 nm, as well as for some of the strong NUV lines. Our calculations clearly show the shift from faculae-dominated variability in the NUV to spot-dominated variability above approximately 400 nm. We also discuss some of the remaining problems, such as the low sensitivity of SUSIM and SORCE for wavelengths between approximately 310 and 350 nm, where currently the model calculations still provide the best estimates of solar variability.

  15. RMS Spectral Modelling - a powerful tool to probe the origin of variability in Active Galactic Nuclei

    NASA Astrophysics Data System (ADS)

    Mallick, Labani; Dewangan, Gulab chand; Misra, Ranjeev

    2016-07-01

    The broadband energy spectra of Active Galactic Nuclei (AGN) are very complex in nature, with contributions from many ingredients: the accretion disk, corona, jets, broad-line region (BLR), narrow-line region (NLR) and a Compton-thick absorbing cloud or torus. The complexity of broadband AGN spectra gives rise to mean spectral model degeneracy; e.g., there are competing models for the broad feature near 5-7 keV in terms of blurred reflection and complex absorption. In order to overcome this energy spectral model degeneracy, the most reliable approach is to study the RMS (root mean square) variability spectrum, which connects the energy spectrum with temporal variability. The origin of the variability could be pivoting of the primary continuum, reflection and/or absorption. In this work, we study the energy-dependent variability of AGN by developing a theoretical RMS spectral model in ISIS (Interactive Spectral Interpretation System) for different input energy spectra. In this talk, I present results of RMS spectral modelling for a few radio-loud and radio-quiet AGN observed by XMM-Newton, Suzaku, NuSTAR and ASTROSAT, and probe the dichotomy between these two classes of AGN.
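    As a concrete anchor for the method, one standard estimator of the RMS variability in an energy band is the fractional variability amplitude (excess-variance form) shown below. This is a common definition in the X-ray timing literature (e.g. Vaughan et al. 2003), not necessarily the exact model developed in this work.

    ```latex
    % Fractional rms variability amplitude in an energy band, from N fluxes
    % x_i with mean \bar{x}, sample variance S^2, and mean squared
    % measurement error \overline{\sigma_{\mathrm{err}}^2}:
    F_{\mathrm{var}}(E) =
      \sqrt{\frac{S^2 - \overline{\sigma_{\mathrm{err}}^2}}{\bar{x}^{\,2}}}
    ```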

  16. The relative roles of environment, history and local dispersal in controlling the distributions of common tree and shrub species in a tropical forest landscape, Panama

    USGS Publications Warehouse

    Svenning, J.-C.; Engelbrecht, B.M.J.; Kinner, D.A.; Kursar, T.A.; Stallard, R.F.; Wright, S.J.

    2006-01-01

    We used regression models and information-theoretic model selection to assess the relative importance of environment, local dispersal and historical contingency as controls of the distributions of 26 common plant species in tropical forest on Barro Colorado Island (BCI), Panama. We censused eighty-eight 0.09-ha plots scattered across the landscape. Environmental control, local dispersal and historical contingency were represented by environmental variables (soil moisture, slope, soil type, distance to shore, old-forest presence), a spatial autoregressive parameter (ρ), and four spatial trend variables, respectively. We built regression models, representing all combinations of the three hypotheses, for each species. The probability that the best model included the environmental variables, spatial trend variables and ρ averaged 33%, 64% and 50% across the study species, respectively. The environmental variables, spatial trend variables, ρ, and a simple intercept model received the strongest support for 4, 15, 5 and 2 species, respectively. Comparing the model results to information on species traits showed that species with strong spatial trends produced few and heavy diaspores, while species with strong soil moisture relationships were particularly drought-sensitive. In conclusion, history and local dispersal appeared to be the dominant controls of the distributions of common plant species on BCI. Copyright © 2006 Cambridge University Press.

  17. Natural variability of marine ecosystems inferred from a coupled climate to ecosystem simulation

    NASA Astrophysics Data System (ADS)

    Le Mézo, Priscilla; Lefort, Stelly; Séférian, Roland; Aumont, Olivier; Maury, Olivier; Murtugudde, Raghu; Bopp, Laurent

    2016-01-01

    This modeling study analyzes the simulated natural variability of pelagic ecosystems in the North Atlantic and North Pacific. Our model system includes a global Earth System Model (IPSL-CM5A-LR), the biogeochemical model PISCES and the ecosystem model APECOSM that simulates upper trophic level organisms using a size-based approach and three interactive pelagic communities (epipelagic, migratory and mesopelagic). Analyzing an idealized (e.g., no anthropogenic forcing) 300-yr long pre-industrial simulation, we find that low and high frequency variability is dominant for the large and small organisms, respectively. Our model shows that the size-range exhibiting the largest variability at a given frequency, defined as the resonant range, also depends on the community. At a given frequency, the resonant range of the epipelagic community includes larger organisms than that of the migratory community and similarly, the latter includes larger organisms than the resonant range of the mesopelagic community. This study shows that the simulated temporal variability of marine pelagic organisms' abundance is not only influenced by natural climate fluctuations but also by the structure of the pelagic community. As a consequence, the size- and community-dependent response of marine ecosystems to climate variability could impact the sustainability of fisheries in a warming world.

  18. Neural network models - a novel tool for predicting the efficacy of growth hormone (GH) therapy in children with short stature.

    PubMed

    Smyczynska, Joanna; Hilczer, Maciej; Smyczynska, Urszula; Stawerska, Renata; Tadeusiewicz, Ryszard; Lewinski, Andrzej

    2015-01-01

    The leading methods for predicting the effectiveness of growth hormone (GH) therapy are multiple linear regression (MLR) models. To the best of our knowledge, we are the first to apply artificial neural networks (ANN) to this problem. For ANNs there is no need to assume the form of the functions linking independent and dependent variables. The aim of the study is to compare ANN and MLR models of GH therapy effectiveness. The analysis comprised data from 245 GH-deficient children (170 boys) treated with GH up to final height (FH). Independent variables included: patients' height, pre-treatment height velocity, chronological age, bone age, gender, pubertal status, parental heights, GH peak in 2 stimulation tests, and IGF-I concentration. The output variable was FH. For the testing dataset, the MLR model predicted FH SDS with an average error (RMSE) of 0.64 SD, explaining 34.3% of its variability; the ANN model derived on the same pre-processed data predicted FH SDS with an RMSE of 0.60 SD, explaining 42.0% of its variability; and the ANN model derived on raw data predicted FH with an RMSE of 3.9 cm (0.63 SD), explaining 78.7% of its variability. ANNs seem to be a valuable tool in predicting GH treatment effectiveness, especially since they can be applied to raw clinical data.

  19. Influential input classification in probabilistic multimedia models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.

    1999-05-01

    Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.
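    The screening step described above can be sketched as follows: sample each input from its distribution, run the model, and rank inputs by their rank correlation with the outcome; inputs with negligible correlation can then be fixed at point values. The "model" below is a toy stand-in for a multimedia fate model, and the input names and distributions are illustrative assumptions.

    ```python
    # Monte Carlo input screening via Spearman rank correlation.
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(11)
    n = 10_000
    inputs = {
        "partition_coeff": rng.lognormal(0, 1.0, n),
        "degradation_rate": rng.lognormal(-2, 0.5, n),
        "soil_depth": rng.uniform(0.1, 1.0, n),
        "rainfall": rng.normal(800, 100, n),
    }

    def toy_fate_model(p):  # stand-in for a multimedia fate model
        return p["partition_coeff"] / p["degradation_rate"] * p["soil_depth"] ** 0.2

    out = toy_fate_model(inputs)
    for name, vals in inputs.items():
        rho, _ = spearmanr(vals, out)
        print(f"{name:>18s}: rho = {rho:+.2f}")
    ```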

  20. Bayesian Model Comparison for the Order Restricted RC Association Model

    ERIC Educational Resources Information Center

    Iliopoulos, G.; Kateri, M.; Ntzoufras, I.

    2009-01-01

    Association models constitute an attractive alternative to the usual log-linear models for modeling the dependence between classification variables. They impose special structure on the underlying association by assigning scores on the levels of each classification variable, which can be fixed or parametric. Under the general row-column (RC)…

  1. Constrained and Unconstrained Partial Adjacent Category Logit Models for Ordinal Response Variables

    ERIC Educational Resources Information Center

    Fullerton, Andrew S.; Xu, Jun

    2018-01-01

    Adjacent category logit models are ordered regression models that focus on comparisons of adjacent categories. These models are particularly useful for ordinal response variables with categories that are of substantive interest. In this article, we consider unconstrained and constrained versions of the partial adjacent category logit model, which…

  2. Modeling and forecasting US presidential election using learning algorithms

    NASA Astrophysics Data System (ADS)

    Zolghadr, Mohammad; Niaki, Seyed Armin Akhavan; Niaki, S. T. A.

    2017-09-01

    The primary objective of this research is to obtain an accurate forecasting model for the US presidential election. To identify a reliable model, artificial neural networks (ANN) and support vector regression (SVR) models are compared based on some specified performance measures. Moreover, six independent variables such as GDP, unemployment rate, the president's approval rate, and others are considered in a stepwise regression to identify significant variables. The president's approval rate is identified as the most significant variable, based on which eight other variables are identified and considered in the model development. Preprocessing methods are applied to prepare the data for the learning algorithms. The proposed procedure significantly increases the accuracy of the model by 50%. The learning algorithms (ANN and SVR) proved to be superior to linear regression based on each method's calculated performance measures. The SVR model is identified as the most accurate model among the other models as this model successfully predicted the outcome of the election in the last three elections (2004, 2008, and 2012). The proposed approach significantly increases the accuracy of the forecast.

  3. Working memory and intraindividual variability as neurocognitive indicators in ADHD: examining competing model predictions.

    PubMed

    Kofler, Michael J; Alderson, R Matt; Raiker, Joseph S; Bolden, Jennifer; Sarver, Dustin E; Rapport, Mark D

    2014-05-01

    The current study examined competing predictions of the default mode, cognitive neuroenergetic, and functional working memory models of attention-deficit/hyperactivity disorder (ADHD) regarding the relation between neurocognitive impairments in working memory and intraindividual variability. Twenty-two children with ADHD and 15 typically developing children were assessed on multiple tasks measuring intraindividual reaction time (RT) variability (ex-Gaussian: tau, sigma) and central executive (CE) working memory. Latent factor scores based on multiple, counterbalanced tasks were created for each construct of interest (CE, tau, sigma) to reflect reliable variance associated with each construct and remove task-specific, test-retest, and random error. Bias-corrected, bootstrapped mediation analyses revealed that CE working memory accounted for 88% to 100% of ADHD-related RT variability across models, and between-group differences in RT variability were no longer detectable after accounting for the mediating role of CE working memory. In contrast, RT variability accounted for 10% to 29% of between-group differences in CE working memory, and large magnitude CE working memory deficits remained after accounting for this partial mediation. Statistical comparison of effect size estimates across models suggests directionality of effects, such that the mediation effects of CE working memory on RT variability were significantly greater than the mediation effects of RT variability on CE working memory. The current findings question the role of RT variability as a primary neurocognitive indicator in ADHD and suggest that ADHD-related RT variability may be secondary to underlying deficits in CE working memory.

  4. Constraining land carbon cycle process understanding with observations of atmospheric CO2 variability

    NASA Astrophysics Data System (ADS)

    Collatz, G. J.; Kawa, S. R.; Liu, Y.; Zeng, F.; Ivanoff, A.

    2013-12-01

    We evaluate our understanding of the land biospheric carbon cycle by benchmarking a model and its variants to atmospheric CO2 observations and to an atmospheric CO2 inversion. Though the seasonal cycle in CO2 observations is well simulated by the model (RMSE/standard deviation of observations <0.5 at most sites north of 15N and <1 for Southern Hemisphere sites) different model setups suggest that the CO2 seasonal cycle provides some constraint on gross photosynthesis, respiration, and fire fluxes revealed in the amplitude and phase at northern latitude sites. CarbonTracker inversions (CT) and model show similar phasing of the seasonal fluxes but agreement in the amplitude varies by region. We also evaluate interannual variability (IAV) in the measured atmospheric CO2 which, in contrast to the seasonal cycle, is not well represented by the model. We estimate the contributions of biospheric and fire fluxes, and atmospheric transport variability to explaining observed variability in measured CO2. Comparisons with CT show that modeled IAV has some correspondence to the inversion results >40N though fluxes match poorly at regional to continental scales. Regional and global fire emissions are strongly correlated with variability observed at northern flask sample sites and in the global atmospheric CO2 growth rate though in the latter case fire emissions anomalies are not large enough to account fully for the observed variability. We discuss remaining unexplained variability in CO2 observations in terms of the representation of fluxes by the model. This work also demonstrates the limitations of the current network of CO2 observations and the potential of new denser surface measurements and space based column measurements for constraining carbon cycle processes in models.

  5. A fuzzy adaptive network approach to parameter estimation in cases where independent variables come from an exponential distribution

    NASA Astrophysics Data System (ADS)

    Dalkilic, Turkan Erbay; Apaydin, Aysen

    2009-11-01

    In a regression analysis, it is assumed that the observations come from a single class in a data cluster and that the simple functional relationship between the dependent and independent variables can be expressed using the general model Y = f(X) + ε. However, a data cluster may consist of a combination of observations that have different distributions derived from different clusters. When a regression model must be estimated for fuzzy inputs that have been derived from different distributions, the model is termed a 'switching regression model'; here l_i indicates the class number of each independent variable and p is the number of independent variables [J.R. Jang, ANFIS: Adaptive-network-based fuzzy inference system, IEEE Transactions on Systems, Man and Cybernetics 23 (3) (1993) 665-685; M. Michel, Fuzzy clustering and switching regression models using ambiguity and distance rejects, Fuzzy Sets and Systems 122 (2001) 363-399; E.Q. Richard, A new approach to estimating switching regressions, Journal of the American Statistical Association 67 (338) (1972) 306-310]. In this study, adaptive networks are used to construct a model formed by gathering the models obtained for each class. Some methods suggest the class numbers of the independent variables heuristically; here, we instead aim to define the optimal class number of the independent variables using a suggested validity criterion for fuzzy clustering. For the case in which the independent variables have an exponential distribution, an algorithm is suggested for defining the unknown parameters of the switching regression model and for obtaining the estimated values once an optimal membership function suitable for the exponential distribution has been obtained.
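
    As a rough illustration of the switching-regression idea (and not the paper's adaptive-network algorithm or its exponential membership functions), one can alternate fuzzy c-means memberships with one weighted least-squares fit per class; all data below are synthetic:

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.exponential(scale=1.0, size=200)          # exponentially distributed input
    y = np.where(x < 1, 2 * x + 1, -x + 4) + rng.normal(0, 0.2, 200)

    c = np.array([0.5, 2.5])                          # class centers (l = 2 classes)
    m = 2.0                                           # fuzzifier
    for _ in range(20):                               # alternate memberships and centers
        d = np.abs(x[:, None] - c[None, :]) + 1e-12   # distances to centers
        ratio = d[:, :, None] / d[:, None, :]
        u = 1.0 / np.sum(ratio ** (2 / (m - 1)), axis=2)   # fuzzy memberships per class
        c = np.sum((u ** m) * x[:, None], axis=0) / np.sum(u ** m, axis=0)

    X = np.column_stack([np.ones_like(x), x])
    for k in range(c.size):                           # weighted least-squares line per class
        w = u[:, k] ** m
        beta = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * y))
        print(f"class {k}: intercept {beta[0]:.2f}, slope {beta[1]:.2f}")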

  6. A Study on the Effects of Spatial Scale on Snow Process in Hyper-Resolution Hydrological Modelling over Mountainous Areas

    NASA Astrophysics Data System (ADS)

    Garousi Nejad, I.; He, S.; Tang, Q.; Ogden, F. L.; Steinke, R. C.; Frazier, N.; Tarboton, D. G.; Ohara, N.; Lin, H.

    2017-12-01

    Spatial scale is one of the main considerations in hydrological modeling of snowmelt in mountainous areas. The size of model elements controls the degree to which variability can be explicitly represented versus what needs to be parameterized using effective properties such as averages or other subgrid variability parameterizations that may degrade the quality of model simulations. For snowmelt modeling, terrain parameters such as slope, aspect, vegetation, and elevation play an important role in the timing and quantity of the snowmelt that serves as an input to hydrologic runoff generation processes. In general, higher resolution enhances the accuracy of the simulation, since fine meshes represent and preserve the spatial variability of atmospheric and surface characteristics better than coarse meshes. However, higher resolution also increases computational cost, and there may be a scale beyond which the model response no longer improves, owing to diminishing sensitivity to variability and the irreducible uncertainty associated with the spatial interpolation of inputs. This paper examines the influence of spatial resolution on the snowmelt process using simulations of, and data from, the Animas River watershed, an alpine mountainous area in Colorado, USA, with ADHydro, an unstructured, distributed, physically based hydrological model developed for a parallel computing environment. Five spatial resolutions (30 m, 100 m, 250 m, 500 m, and 1 km) were used to investigate the variations in hydrologic response. This study demonstrated the importance of choosing the appropriate spatial scale in the implementation of ADHydro to obtain a balance between representing spatial variability and the computational cost. According to the results, variation in the input variables and parameters due to using different spatial resolutions resulted in changes in the obtained hydrological variables, especially snowmelt, both at the basin scale and distributed across the model mesh.

  7. Variabilities in probabilistic seismic hazard maps for natural and induced seismicity in the central and eastern United States

    USGS Publications Warehouse

    Mousavi, S. Mostafa; Beroza, Gregory C.; Hoover, Susan M.

    2018-01-01

    Probabilistic seismic hazard analysis (PSHA) characterizes ground-motion hazard from earthquakes. Typically, the time horizon of a PSHA forecast is long, but in response to induced seismicity related to hydrocarbon development, the USGS developed one-year PSHA models. In this paper, we present a display of the variability in USGS hazard curves due to epistemic uncertainty in its informed submodel using a simple bootstrapping approach. We find that variability is highest in low-seismicity areas. On the other hand, areas of high seismic hazard, such as the New Madrid seismic zone or Oklahoma, exhibit relatively lower variability simply because of more available data and a better understanding of the seismicity. Comparing areas of high hazard, New Madrid, which has a history of large naturally occurring earthquakes, has lower forecast variability than Oklahoma, where the hazard is driven mainly by suspected induced earthquakes since 2009. Overall, the mean hazard obtained from bootstrapping is close to the published model, and variability increased in the 2017 one-year model relative to the 2016 model. Comparing the relative variations caused by individual logic-tree branches, we find that the highest hazard variation (as measured by the 95% confidence interval of bootstrapping samples) in the final model is associated with different ground-motion models and maximum magnitudes used in the logic tree, while the variability due to the smoothing distance is minimal. It should be pointed out that this study is not looking at the uncertainty in the hazard in general, but only as it is represented in the USGS one-year models.
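
    The bootstrap display described above reduces to resampling hazard curves across logic-tree branches and summarizing the spread; the curves below are synthetic stand-ins, not the USGS submodel:

    import numpy as np

    rng = np.random.default_rng(0)
    pga = np.logspace(-2, 0, 20)                     # peak ground acceleration (g)
    branches = np.exp(-pga[None, :] / rng.uniform(0.05, 0.2, (100, 1)))  # 100 branch hazard curves

    idx = rng.integers(0, 100, size=(2000, 100))     # bootstrap resamples of branches
    boot_means = branches[idx].mean(axis=1)          # mean hazard curve per resample
    mean_curve = boot_means.mean(axis=0)             # close to the published model, per the paper
    lo, hi = np.percentile(boot_means, [2.5, 97.5], axis=0)
    variability = hi - lo                            # 95% band used as the variability measure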

  8. Linking Inflammation, Cardiorespiratory Variability, and Neural Control in Acute Inflammation via Computational Modeling

    PubMed Central

    Dick, Thomas E.; Molkov, Yaroslav I.; Nieman, Gary; Hsieh, Yee-Hsee; Jacono, Frank J.; Doyle, John; Scheff, Jeremy D.; Calvano, Steve E.; Androulakis, Ioannis P.; An, Gary; Vodovotz, Yoram

    2012-01-01

    Acute inflammation leads to organ failure by engaging catastrophic feedback loops in which stressed tissue evokes an inflammatory response and, in turn, inflammation damages tissue. Manifestations of this maladaptive inflammatory response include cardio-respiratory dysfunction that may be reflected in reduced heart rate and ventilatory pattern variabilities. We have developed signal-processing algorithms that quantify non-linear deterministic characteristics of variability in biologic signals. Now, coalescing under the aegis of the NIH Computational Biology Program and the Society for Complexity in Acute Illness, two research teams performed iterative experiments and computational modeling on inflammation and cardio-pulmonary dysfunction in sepsis as well as on neural control of respiration and ventilatory pattern variability. These teams, with additional collaborators, have recently formed a multi-institutional, interdisciplinary consortium, whose goal is to delineate the fundamental interrelationship between the inflammatory response and physiologic variability. Multi-scale mathematical modeling and complementary physiological experiments will provide insight into autonomic neural mechanisms that may modulate the inflammatory response to sepsis and simultaneously reduce heart rate and ventilatory pattern variabilities associated with sepsis. This approach integrates computational models of neural control of breathing and cardio-respiratory coupling with models that combine inflammation, cardiovascular function, and heart rate variability. The resulting integrated model will provide mechanistic explanations for the phenomena of respiratory sinus-arrhythmia and cardio-ventilatory coupling observed under normal conditions, and the loss of these properties during sepsis. This approach holds the potential of modeling cross-scale physiological interactions to improve both basic knowledge and clinical management of acute inflammatory diseases such as sepsis and trauma. PMID:22783197

  9. AGN Variability: Probing Black Hole Accretion

    NASA Astrophysics Data System (ADS)

    Moreno, Jackeline; O'Brien, Jack; Vogeley, Michael S.; Richards, Gordon T.; Kasliwal, Vishal P.

    2017-01-01

    We combine the long temporal baseline of Sloan Digital Sky Survey (SDSS) for quasars in Stripe 82 with the high precision photometry of the Kepler/K2 Satellite to study the physics of optical variability in the accretion disk and supermassive black hole engine. We model the lightcurves directly as Continuous-time Auto Regressive Moving Average processes (C-ARMA) with the Kali analysis package (Kasliwal et al. 2016). These models are extremely robust to irregular sampling and can capture aperiodic variability structure on various timescales. We also estimate the power spectral density and structure function of both the model family and the data. A Green's function kernel may also be estimated for the resulting C-ARMA parameter fit, which may be interpreted as the response to driving impulses such as hotspots in the accretion disk. We also examine available spectra for our AGN sample to relate observed and modelled behavior to spectral properties. The objective of this work is twofold: to explore the proper physical interpretation of different families of C-ARMA models applied to AGN optical flux variability and to relate empirical characteristic timescales of our AGN sample to physical theory or to properties estimated from spectra or simulations like the disk viscosity and temperature. We find that AGN with strong variability features on timescales resolved by K2 are well modelled by a low order C-ARMA family while K2 lightcurves with weak amplitude variability are dominated by outliers and measurement errors which force higher order model fits. This work explores a novel approach to combining SDSS and K2 data sets and presents recovered characteristic timescales of AGN variability.
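
    Of the diagnostics mentioned, the structure function is the simplest to sketch for irregularly sampled data (the C-ARMA fitting itself is done with the Kali package cited above). A synthetic example with illustrative binning choices:

    import numpy as np

    rng = np.random.default_rng(2)
    t = np.sort(rng.uniform(0, 1000, 500))           # irregular observation times (days)
    flux = 0.01 * np.cumsum(rng.normal(size=t.size)) # random-walk-like light curve

    dt = np.abs(t[:, None] - t[None, :])             # all pairwise time lags
    df2 = (flux[:, None] - flux[None, :]) ** 2       # squared flux differences
    mask = dt > 0

    bins = np.logspace(0, 3, 15)
    which = np.digitize(dt[mask], bins)
    sf = [np.sqrt(df2[mask][which == k].mean()) if np.any(which == k) else np.nan
          for k in range(1, bins.size)]              # SF(tau) per lag bin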

  10. Review of Factors, Methods, and Outcome Definition in Designing Opioid Abuse Predictive Models.

    PubMed

    Alzeer, Abdullah H; Jones, Josette; Bair, Matthew J

    2018-05-01

    Several opioid risk assessment tools are available to prescribers to evaluate opioid analgesic abuse among patients with chronic pain. The objectives of this study are to 1) identify variables available in the literature to predict opioid abuse; 2) explore and compare the methods (population, database, and analysis) used to develop statistical models that predict opioid abuse; and 3) understand how outcomes were defined in each statistical model predicting opioid abuse. The OVID database was searched for this study. The search was limited to articles written in English and published from January 1990 to April 2016, and generated 1,409 articles. Only seven studies and nine models met our inclusion and exclusion criteria; from these nine models we identified 75 distinct variables. Three studies used administrative claims data, and four studies used electronic health record data. The majority, four of seven articles (six of nine models), depended primarily on the presence or absence of an opioid abuse or dependence diagnosis (ICD-9 code) to define opioid abuse, while two articles used a predefined list of opioid-related aberrant behaviors. We identified variables used to predict opioid abuse from electronic health records and administrative data. Medication variables are the most recurrent variables in the articles reviewed (33 variables), and age and gender are the most consistent demographic variables in predicting opioid abuse. Overall, the studies are similar in sampling method and inclusion/exclusion criteria (age, number of prescriptions, follow-up period, and data analysis methods). Future research that utilizes unstructured data may increase the accuracy of opioid abuse models.

  12. The EPOCH Project. I. Periodic variable stars in the EROS-2 LMC database

    NASA Astrophysics Data System (ADS)

    Kim, Dae-Won; Protopapas, Pavlos; Bailer-Jones, Coryn A. L.; Byun, Yong-Ik; Chang, Seo-Won; Marquette, Jean-Baptiste; Shin, Min-Su

    2014-06-01

    The EPOCH (EROS-2 periodic variable star classification using machine learning) project aims to detect periodic variable stars in the EROS-2 light curve database. In this paper, we present the first result of the classification of periodic variable stars in the EROS-2 LMC database. To classify these variables, we first built a training set by compiling known variables in the Large Magellanic Cloud area from the OGLE and MACHO surveys. We crossmatched these variables with the EROS-2 sources and extracted 22 variability features from 28 392 light curves of the corresponding EROS-2 sources. We then used the random forest method to classify the EROS-2 sources in the training set. We designed the model to separate not only δ Scuti stars, RR Lyraes, Cepheids, eclipsing binaries, and long-period variables, the superclasses, but also their subclasses, such as RRab, RRc, RRd, and RRe for RR Lyraes, and similarly for the other variable types. The model trained using only the superclasses shows 99% recall and precision, while the model trained on all subclasses shows 87% recall and precision. We applied the trained model to the entire EROS-2 LMC database, which contains about 29 million sources, and found 117 234 periodic variable candidates. Of these 117 234 periodic variables, 55 285 were not discovered by either the OGLE or MACHO variability studies. This set comprises 1906 δ Scuti stars, 6607 RR Lyraes, 638 Cepheids, 178 Type II Cepheids, 34 562 eclipsing binaries, and 11 394 long-period variables. A catalog of these EROS-2 LMC periodic variable stars is available at http://stardb.yonsei.ac.kr and at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/566/A43
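
    The classification step is standard supervised learning; a hedged sketch with scikit-learn and invented features (the project used 22 variability features and reports ~99% recall and precision for superclasses on the real data):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 1000
    X = np.column_stack([
        rng.lognormal(size=n),   # stand-in for period
        rng.normal(size=n),      # stand-in for amplitude
        rng.normal(size=n),      # stand-in for light-curve skewness
    ])
    y = rng.integers(0, 5, n)    # superclass labels (Cepheid, RR Lyrae, ...)

    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    print(cross_val_score(clf, X, y, cv=5).mean())   # meaningless on random labels; shown for shape only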

  13. Modeling longitudinal data, I: principles of multivariate analysis.

    PubMed

    Ravani, Pietro; Barrett, Brendan; Parfrey, Patrick

    2009-01-01

    Statistical models are used to study the relationship between exposure and disease while accounting for the potential impact of other factors on outcomes. This adjustment is useful to obtain unbiased estimates of true effects or to predict future outcomes. Statistical models include a systematic component and an error component. The systematic component explains the variability of the response variable as a function of the predictors and is summarized in the effect estimates (model coefficients). The error component of the model represents the variability in the data unexplained by the model and is used to build measures of precision around the point estimates (confidence intervals).
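
    In the common linear case, the two components can be written explicitly (a generic textbook form, not a model from the article):

    Y_i = \underbrace{\beta_0 + \beta_1 X_{i1} + \cdots + \beta_p X_{ip}}_{\text{systematic component}}
        + \underbrace{\varepsilon_i}_{\text{error component}},
    \qquad \varepsilon_i \sim N(0, \sigma^2)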

  14. Attributing runoff changes to climate variability and human activities: uncertainty analysis using four monthly water balance models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Shuai; Xiong, Lihua; Li, Hong-Yi

    2015-05-26

    Hydrological simulations to delineate the impacts of climate variability and human activities are subject to uncertainties related to both the parameters and the structure of the hydrological models. To analyze the impact of these uncertainties on model performance and to yield more reliable simulation results, a global calibration and multimodel combination method that integrates the Shuffled Complex Evolution Metropolis (SCEM) and Bayesian Model Averaging (BMA) of four monthly water balance models was proposed. The method was applied to the Weihe River Basin (WRB), the largest tributary of the Yellow River, to determine the contribution of climate variability and human activities to runoff changes. The change point, which was used to determine the baseline period (1956-1990) and human-impacted period (1991-2009), was derived using both a cumulative curve and Pettitt’s test. Results show that the combination method from SCEM provides more skillful deterministic predictions than the best calibrated individual model, resulting in the smallest uncertainty interval of runoff changes attributed to climate variability and human activities. This combination methodology provides a practical and flexible tool for attribution of runoff changes to climate variability and human activities by hydrological models.
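
    The BMA combination itself is a weighted average of member predictions. A minimal sketch with made-up runoff numbers and crude Gaussian-likelihood weights (the paper estimates weights jointly with SCEM, which is not reproduced here):

    import numpy as np

    obs = np.array([10.0, 12.0, 9.0, 14.0, 11.0])     # observed monthly runoff (mm)
    sims = np.array([[11, 12, 10, 13, 11],            # four water balance models
                     [ 9, 13,  8, 15, 12],
                     [14, 10, 12, 12,  9],
                     [10, 12,  9, 14, 12]], dtype=float)

    loglik = -0.5 * np.sum((sims - obs) ** 2, axis=1) # Gaussian errors, sigma = 1
    w = np.exp(loglik - loglik.max())
    w /= w.sum()                                      # BMA weights
    bma = w @ sims                                    # combined deterministic prediction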

  15. Classification tree models for predicting distributions of michigan stream fish from landscape variables

    USGS Publications Warehouse

    Steen, P.J.; Zorn, T.G.; Seelbach, P.W.; Schaeffer, J.S.

    2008-01-01

    Traditionally, fish habitat requirements have been described from local-scale environmental variables. However, recent studies have shown that studying landscape-scale processes improves our understanding of what drives species assemblages and distribution patterns across the landscape. Our goal was to learn more about constraints on the distribution of Michigan stream fish by examining landscape-scale habitat variables. We used classification trees and landscape-scale habitat variables to create and validate presence-absence models and relative abundance models for Michigan stream fishes. We developed 93 presence-absence models that on average were 72% correct in making predictions for an independent data set, and we developed 46 relative abundance models that were 76% correct in making predictions for independent data. The models were used to create statewide predictive distribution and abundance maps that have the potential to be used for a variety of conservation and scientific purposes. © 2008 American Fisheries Society.
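
    A presence-absence classification tree in the same spirit can be sketched with scikit-learn; the predictors and the rule generating "presence" are invented, not the Michigan data:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    n = 500
    X = np.column_stack([
        rng.uniform(0, 30, n),   # catchment slope
        rng.uniform(5, 25, n),   # mean July stream temperature
        rng.uniform(0, 1, n),    # forested fraction of catchment
    ])
    present = (X[:, 1] < 18) & (X[:, 0] > 5)          # synthetic coldwater-species rule

    tree = DecisionTreeClassifier(max_depth=3).fit(X, present)
    prob = tree.predict_proba(X)[:, 1]                # per-site presence probability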

  16. State variable modeling of the integrated engine and aircraft dynamics

    NASA Astrophysics Data System (ADS)

    Rotaru, Constantin; Sprinţu, Iuliana

    2014-12-01

    This study explores the dynamic characteristics of the combined aircraft-engine system, based on the general theory of the state variables for linear and nonlinear systems, with details leading first to the separate formulation of the longitudinal and the lateral directional state variable models, followed by the merging of the aircraft and engine models into a single state variable model. The linearized equations were expressed in a matrix form and the engine dynamics was included in terms of variation of thrust following a deflection of the throttle. The linear model of the shaft dynamics for a two-spool jet engine was derived by extending the one-spool model. The results include the discussion of the thrust effect upon the aircraft response when the thrust force associated with the engine has a sizable moment arm with respect to the aircraft center of gravity for creating a compensating moment.
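
    The matrix form referred to above is the generic state-variable representation, with x the state vector, u the control inputs (including throttle), and y the outputs; the particular A, B, C, D entries depend on the linearization:

    \dot{x} = A\,x + B\,u, \qquad y = C\,x + D\,u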

  17. An analysis of input errors in precipitation-runoff models using regression with errors in the independent variables

    USGS Publications Warehouse

    Troutman, Brent M.

    1982-01-01

    Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas illustrates the problems of model input errors.
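
    The inflation and bias described above are easy to demonstrate by simulation: with measurement error in the rainfall regressor, the least-squares slope shrinks by roughly var(x)/(var(x) + var(error)). Synthetic numbers, not the Turtle Creek data:

    import numpy as np

    rng = np.random.default_rng(3)
    true_rain = rng.gamma(2.0, 10.0, 10000)               # true storm rainfall
    runoff = 0.5 * true_rain + rng.normal(0, 2, 10000)    # linear runoff response
    meas_rain = true_rain + rng.normal(0, 10, 10000)      # rainfall measured with error

    print(np.polyfit(true_rain, runoff, 1)[0])   # ~0.50, the true slope
    print(np.polyfit(meas_rain, runoff, 1)[0])   # biased toward zero (~0.33 here)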

  18. Mapping and monitoring Mount Graham red squirrel habitat with Lidar and Landsat imagery

    USGS Publications Warehouse

    Hatten, James R.

    2014-01-01

    The Mount Graham red squirrel (Tamiasciurus hudsonicus grahamensis) is an endemic subspecies located in the Pinaleño Mountains of southeast Arizona. Living in a conifer forest on a sky-island surrounded by desert, the Mount Graham red squirrel is one of the rarest mammals in North America. Over the last two decades, drought, insect infestations, and fire destroyed much of its habitat. A federal recovery team is working on a plan to recover the squirrel and detailed information is necessary on its habitat requirements and population dynamics. Toward that goal I developed and compared three probabilistic models of Mount Graham red squirrel habitat with a geographic information system and logistic regression. Each model contained the same topographic variables (slope, aspect, elevation), but the Landsat model contained a greenness variable (Normalized Difference Vegetation Index) extracted from Landsat, the Lidar model contained three forest-inventory variables extracted from lidar, while the Hybrid model contained Landsat and lidar variables. The Hybrid model produced the best habitat classification accuracy, followed by the Landsat and Lidar models, respectively. Landsat-derived forest greenness was the best predictor of habitat, followed by topographic (elevation, slope, aspect) and lidar (tree height, canopy bulk density, and live basal area) variables, respectively. The Landsat model's probabilities were significantly correlated with all 12 lidar variables, indicating its utility for habitat mapping. While the Hybrid model produced the best classification results, only the Landsat model was suitable for creating a habitat time series or habitat–population function between 1986 and 2013. The techniques I highlight should prove valuable in the development of Landsat- or lidar-based habitat models range wide.
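
    The GIS-plus-logistic-regression workflow reduces to fitting presence probabilities from per-cell predictors; a hedged sketch with invented predictors and a synthetic truth rule, not the Pinaleño data:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000
    X = np.column_stack([
        rng.uniform(2400, 3300, n),   # elevation (m)
        rng.uniform(0, 40, n),        # slope (degrees)
        rng.uniform(0.1, 0.9, n),     # NDVI greenness
    ])
    habitat = (X[:, 0] > 2900) & (X[:, 2] > 0.5)      # synthetic truth

    Xs = (X - X.mean(axis=0)) / X.std(axis=0)         # standardize for stable fitting
    prob = LogisticRegression().fit(Xs, habitat).predict_proba(Xs)[:, 1]  # probability surface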

  19. Affected States Soft Independent Modeling by Class Analogy from the Relation Between Independent Variables, Number of Independent Variables and Sample Size

    PubMed Central

    Kanık, Emine Arzu; Temel, Gülhan Orekici; Erdoğan, Semra; Kaya, İrem Ersöz

    2013-01-01

    Objective: The aim of this study is to introduce the method of Soft Independent Modeling of Class Analogy (SIMCA) and to determine whether the method is affected by the number of independent variables, the relationship between the variables, and the sample size. Study Design: Simulation study. Material and Methods: The SIMCA model is performed in two stages. Simulations were run to determine whether the method is influenced by the number of independent variables, the relationship between the variables, and the sample size. The conditions considered were equal sample sizes of 30, 100, and 1000 in both groups; 2, 3, 5, 10, 50, and 100 variables; and relationships between variables that were quite high, medium, and quite low. Results: The average classification accuracies of the simulations, which were run 1000 times for each condition of the trial plan, are given as tables. Conclusion: Diagnostic accuracy increases as the number of independent variables increases. SIMCA is a method suited to conditions in which the relationship between variables is quite high, the number of independent variables is large, and the data contain outlier values. PMID:25207065

  1. NAO and its relationship with the Northern Hemisphere mean surface temperature in CMIP5 simulations

    NASA Astrophysics Data System (ADS)

    Wang, Xiaofan; Li, Jianping; Sun, Cheng; Liu, Ting

    2017-04-01

    The North Atlantic Oscillation (NAO) is one of the most prominent teleconnection patterns in the Northern Hemisphere and has recently been found to be both an internal source and a useful predictor of the multidecadal variability of the Northern Hemisphere mean surface temperature (NHT). In this study, we examine how well the variability of the NAO and NHT is reproduced in historical simulations generated by the 40 models that constitute Phase 5 of the Coupled Model Intercomparison Project (CMIP5). All of the models are able to capture the basic characteristics of the interannual NAO pattern reasonably well, whereas the simulated decadal NAO patterns show less consistency with the observations. The NAO fluctuations over multidecadal time scales are underestimated by almost all models. Regarding the NHT multidecadal variability, the models generally represent the externally forced variations well but tend to underestimate the internally generated NHT variability. With respect to the performance of the models in reproducing the NAO-NHT relationship, 14 models capture the observed decadal lead of the NAO, and model discrepancies in the representation of this linkage derive mainly from their different interpretations of the underlying physical processes associated with the Atlantic Multidecadal Oscillation (AMO) and the Atlantic meridional overturning circulation (AMOC). This study suggests that one way to improve the simulation of the multidecadal variability of the internally generated NHT lies in better simulation of the multidecadal variability of the NAO and its delayed effect on the NHT variability via slow ocean processes.

  2. Hydrological excitation of polar motion by different variables of the GLDAS models

    NASA Astrophysics Data System (ADS)

    Wińska, Małgorzata; Nastula, Jolanta

    Continental hydrological loading by land water, snow, and ice is an element that is strongly needed for a full understanding of the excitation of polar motion. In this study we compute different estimations of the hydrological excitation functions of polar motion (hydrological angular momentum, HAM) using various variables from the Global Land Data Assimilation System (GLDAS) models of the land hydrosphere. The main aim of this study is to show the influence of different variables (for example, total evapotranspiration, runoff, snowmelt, and soil moisture) on polar motion excitation at annual and shorter time scales. We employ several realizations of the GLDAS model: the GLDAS Common Land Model (CLM), the GLDAS Mosaic model, the GLDAS National Centers for Environmental Prediction/Oregon State University/Air Force/Hydrologic Research Lab model (Noah), and the GLDAS Variable Infiltration Capacity (VIC) model. Hydrological excitation functions of polar motion, both global and regional, are determined using selected variables of these GLDAS realizations. First, we compare the timing, spectra, and phase diagrams of the different regional and global HAMs with each other. Next, we estimate the hydrological signal in geodetically observed polar motion excitation by subtracting the atmospheric (pressure + wind, AAM) and oceanic (bottom pressure + currents, OAM) contributions. Finally, the hydrological excitations are compared to this hydrological signal in the observed polar motion excitation series. The results help us understand which variables of the considered hydrological models are the most important for polar motion excitation and how well we can close the polar motion excitation budget in the seasonal and inter-annual spectral ranges.

  3. The CESM Large Ensemble Project: Inspiring New Ideas and Understanding

    NASA Astrophysics Data System (ADS)

    Kay, J. E.; Deser, C.

    2016-12-01

    While internal climate variability is known to affect climate projections, its influence is often underappreciated and confused with model error. Why? In general, modeling centers contribute a small number of realizations to international climate model assessments [e.g., phase 5 of the Coupled Model Intercomparison Project (CMIP5)]. As a result, model error and internal climate variability are difficult, and at times impossible, to disentangle. In response, the Community Earth System Model (CESM) community designed the CESM Large Ensemble (CESM-LE) with the explicit goal of enabling assessment of climate change in the presence of internal climate variability. All CESM-LE simulations use a single CMIP5 model (CESM with the Community Atmosphere Model, version 5). The core simulations replay the twentieth to twenty-first century (1920-2100) 40+ times under historical and representative concentration pathway 8.5 external forcing with small initial-condition differences. Two companion 2000+-yr-long preindustrial control simulations (one fully coupled, one with prognostic atmosphere and land only) allow assessment of internal climate variability in the absence of climate change. Comprehensive outputs, including many daily fields, are available as single-variable time series on the Earth System Grid for anyone to use. The presentation will highlight examples of scientists and stakeholders who are using the CESM-LE outputs to help interpret the observational record, to understand projection spread, and to plan for a range of possible futures influenced by both internal climate variability and forced climate change.

  4. Learning abstract visual concepts via probabilistic program induction in a Language of Thought.

    PubMed

    Overlan, Matthew C; Jacobs, Robert A; Piantadosi, Steven T

    2017-11-01

    The ability to learn abstract concepts is a powerful component of human cognition. It has been argued that variable binding is the key element enabling this ability, but the computational aspects of variable binding remain poorly understood. Here, we address this shortcoming by formalizing the Hierarchical Language of Thought (HLOT) model of rule learning. Given a set of data items, the model uses Bayesian inference to infer a probability distribution over stochastic programs that implement variable binding. Because the model makes use of symbolic variables as well as Bayesian inference and programs with stochastic primitives, it combines many of the advantages of both symbolic and statistical approaches to cognitive modeling. To evaluate the model, we conducted an experiment in which human subjects viewed training items and then judged which test items belong to the same concept as the training items. We found that the HLOT model provides a close match to human generalization patterns, significantly outperforming two variants of the Generalized Context Model, one variant based on string similarity and the other based on visual similarity using features from a deep convolutional neural network. Additional results suggest that variable binding happens automatically, implying that binding operations do not add complexity to peoples' hypothesized rules. Overall, this work demonstrates that a cognitive model combining symbolic variables with Bayesian inference and stochastic program primitives provides a new perspective for understanding people's patterns of generalization. Copyright © 2017 Elsevier B.V. All rights reserved.
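
    The Bayesian core of such program induction is just prior times likelihood over a hypothesis space of programs; a toy version with three hand-written "rules" standing in for the HLOT grammar (the real model searches a much larger space of stochastic programs):

    import numpy as np

    data = ["ab", "ab", "ab"]                          # observed training items

    # candidate rules: (name, probability assigned to an item, description length)
    rules = [
        ("memorize 'ab'", lambda s: 1.0 if s == "ab" else 0.0, 3),
        ("any 2 letters", lambda s: (1 / 26) ** 2 if len(s) == 2 else 0.0, 2),
        ("x then 'b'",    lambda s: 1 / 26 if len(s) == 2 and s[1] == "b" else 0.0, 2),
    ]

    prior = np.array([2.0 ** -size for _, _, size in rules])        # description-length prior
    lik = np.array([np.prod([f(s) for s in data]) for _, f, _ in rules])
    post = prior * lik / np.sum(prior * lik)                        # Bayes' rule
    for (name, _, _), p in zip(rules, post):
        print(f"{name}: {p:.3f}")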

  5. A method for simplifying the analysis of traffic accidents injury severity on two-lane highways using Bayesian networks.

    PubMed

    Mujalli, Randa Oqab; de Oña, Juan

    2011-10-01

    This study describes a method for reducing the number of variables frequently considered in modeling the severity of traffic accidents. The method's efficiency is assessed by constructing Bayesian networks (BN). It is based on a two stage selection process. Several variable selection algorithms, commonly used in data mining, are applied in order to select subsets of variables. BNs are built using the selected subsets and their performance is compared with the original BN (with all the variables) using five indicators. The BNs that improve the indicators' values are further analyzed for identifying the most significant variables (accident type, age, atmospheric factors, gender, lighting, number of injured, and occupant involved). A new BN is built using these variables, where the results of the indicators indicate, in most of the cases, a statistically significant improvement with respect to the original BN. It is possible to reduce the number of variables used to model traffic accidents injury severity through BNs without reducing the performance of the model. The study provides the safety analysts a methodology that could be used to minimize the number of variables used in order to determine efficiently the injury severity of traffic accidents without reducing the performance of the model. Copyright © 2011 Elsevier Ltd. All rights reserved.
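
    The two-stage idea (select a variable subset, rebuild the classifier, verify that performance holds) can be sketched as follows; a Gaussian naive Bayes classifier, the simplest Bayesian network, stands in for the paper's BNs, and the data are synthetic:

    import numpy as np
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)
    n = 1000
    X = rng.normal(size=(n, 12))                  # 12 candidate accident variables
    severity = (X[:, 0] + X[:, 3] + rng.normal(size=n) > 1).astype(int)

    full = cross_val_score(GaussianNB(), X, severity, cv=5).mean()
    X_sel = SelectKBest(mutual_info_classif, k=4).fit_transform(X, severity)
    reduced = cross_val_score(GaussianNB(), X_sel, severity, cv=5).mean()
    print(f"12 variables: {full:.3f}; 4 selected: {reduced:.3f}")   # comparable accuracy expected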

  6. [Application of characteristic NIR variables selection in portable detection of soluble solids content of apple by near infrared spectroscopy].

    PubMed

    Fan, Shu-Xiang; Huang, Wen-Qian; Li, Jiang-Bo; Guo, Zhi-Ming; Zhaq, Chun-Jiang

    2014-10-01

    In order to detect the soluble solids content (SSC) of apple conveniently and rapidly, a ring fiber probe and a portable spectrometer were applied to obtain the spectroscopy of apple. Different wavelength variable selection methods, including uninformative variable elimination (UVE), competitive adaptive reweighted sampling (CARS), and genetic algorithm (GA), were proposed to select effective wavelength variables of the NIR spectroscopy of the SSC in apple based on PLS. The back interval LS-SVM (BiLS-SVM) and GA were used to select effective wavelength variables based on LS-SVM. Selected wavelength variables and the full wavelength range were set as input variables of the PLS model and LS-SVM model, respectively. The results indicated that the PLS model built using GA-CARS on 50 characteristic variables selected from the full spectrum, which had 1512 wavelengths, achieved the optimal performance. The correlation coefficient (Rp) and root mean square error of prediction (RMSEP) for the prediction set were 0.962 and 0.403°Brix, respectively, for SSC. The proposed GA-CARS method could effectively simplify the portable detection model of SSC in apple based on near infrared spectroscopy and enhance the predictive precision. The study can provide a reference for the development of a portable apple soluble solids content spectrometer.
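
    The final modeling step, PLS regression on the selected wavelengths, looks roughly like this in scikit-learn; the spectra, the SSC response, and the three "selected" wavelength indices are simulated stand-ins for the GA-CARS output:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n, p = 200, 1512
    spectra = rng.normal(size=(n, p))
    ssc = spectra[:, 100] - 0.5 * spectra[:, 700] + rng.normal(0, 0.1, n)  # degrees Brix

    selected = [100, 700, 900]                    # stand-in for GA-CARS picks
    Xtr, Xte, ytr, yte = train_test_split(spectra[:, selected], ssc, random_state=0)
    pls = PLSRegression(n_components=2).fit(Xtr, ytr)
    rmsep = np.sqrt(np.mean((pls.predict(Xte).ravel() - yte) ** 2))  # cf. 0.403 in the paper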

  7. Predictive Inference Using Latent Variables with Covariates*

    PubMed Central

    Schofield, Lynne Steuerle; Junker, Brian; Taylor, Lowell J.; Black, Dan A.

    2014-01-01

    Plausible Values (PVs) are a standard multiple imputation tool for analysis of large education survey data that measures latent proficiency variables. When latent proficiency is the dependent variable, we reconsider the standard institutionally-generated PV methodology and find it applies with greater generality than shown previously. When latent proficiency is an independent variable, we show that the standard institutional PV methodology produces biased inference because the institutional conditioning model places restrictions on the form of the secondary analysts’ model. We offer an alternative approach that avoids these biases based on the mixed effects structural equations (MESE) model of Schofield (2008). PMID:25231627

  8. Assessing performance and seasonal bias of pollen-based climate reconstructions in a perfect model world

    NASA Astrophysics Data System (ADS)

    Trachsel, M.; Rehfeld, K.; Telford, R.; Laepple, T.

    2017-12-01

    Reconstructions of summer, winter or annual mean temperatures based on the species composition of bio-indicators such as pollen are routinely used in climate model-proxy data comparison studies. Most reconstruction algorithms exploit the joint distribution of modern spatial climate and species distribution for the development of the reconstructions. They rely on the space-for-time substitution and the specific assumption that environmental variables other than those reconstructed are not important or that their relationship with the reconstructed variable(s) should be the same in the past as in the modern spatial calibration dataset. Here we test the implications of this "correlative uniformitarianism" assumption on climate reconstructions in an ideal model world, in which climate and vegetation are known at all times. The alternate reality is a climate simulation of the last 6000 years with dynamic vegetation. Transient changes of plant functional types are considered as surrogate pollen counts and allow us to establish, apply and evaluate transfer functions in the modeled world. We find that the transfer function cross validation r2 is of limited use to identify reconstructible climate variables, as it only relies on the modern spatial climate-vegetation relationship. However, ordination approaches that assess the amount of fossil vegetation variance explained by the reconstructions are promising. We show that correlations between climate variables in the modern climate-vegetation relationship are systematically extended into the reconstructions. Summer temperatures, the most prominent driving variable for modeled vegetation change in the Northern Hemisphere, are accurately reconstructed. However, the amplitude of the model winter and mean annual temperature cooling between the mid-Holocene and present day is overestimated and similar to the summer trend in magnitude. This effect occurs because temporal changes of a dominant climate variable are imprinted on a less important variable, leading to reconstructions biased towards the dominant variable's trends. Our results, although based on a model vegetation that is inevitably simpler than reality, indicate that reconstructions of multiple climate variables based on modern spatial bio-indicator datasets should be treated with caution.

  9. The use of process models to inform and improve statistical models of nitrate occurrence, Great Miami River Basin, southwestern Ohio

    USGS Publications Warehouse

    Walter, Donald A.; Starn, J. Jeffrey

    2013-01-01

    Statistical models of nitrate occurrence in the glacial aquifer system of the northern United States, developed by the U.S. Geological Survey, use observed relations between nitrate concentrations and sets of explanatory variables—representing well-construction, environmental, and source characteristics—to predict the probability that nitrate, as nitrogen, will exceed a threshold concentration. However, the models do not explicitly account for the processes that control the transport of nitrogen from surface sources to a pumped well and use area-weighted mean spatial variables computed from within a circular buffer around the well as a simplified source-area conceptualization. The use of models that explicitly represent physical-transport processes can inform and, potentially, improve these statistical models. Specifically, groundwater-flow models simulate advective transport—predominant in many surficial aquifers—and can contribute to the refinement of the statistical models by (1) providing for improved, physically based representations of a source area to a well, and (2) allowing for more detailed estimates of environmental variables. A source area to a well, known as a contributing recharge area, represents the area at the water table that contributes recharge to a pumped well; a well pumped at a volumetric rate equal to the amount of recharge through a circular buffer will result in a contributing recharge area that is the same size as the buffer but has a shape that is a function of the hydrologic setting. These volume-equivalent contributing recharge areas will approximate circular buffers in areas of relatively flat hydraulic gradients, such as near groundwater divides, but in areas with steep hydraulic gradients will be elongated in the upgradient direction and agree less with the corresponding circular buffers. The degree to which process-model-estimated contributing recharge areas, which simulate advective transport and therefore account for local hydrologic settings, would inform and improve the development of statistical models can be implicitly estimated by evaluating the differences between explanatory variables estimated from the contributing recharge areas and the circular buffers used to develop existing statistical models. The larger the difference in estimated variables, the more likely that statistical models would be changed, and presumably improved, if explanatory variables estimated from contributing recharge areas were used in model development. Comparing model predictions from the two sets of estimated variables would further quantify—albeit implicitly—how an improved, physically based estimate of explanatory variables would be reflected in model predictions. Differences between the two sets of estimated explanatory variables and resultant model predictions vary spatially; greater differences are associated with areas of steep hydraulic gradients. A direct comparison, however, would require the development of a separate set of statistical models using explanatory variables from contributing recharge areas. Area-weighted means of three environmental variables—silt content, alfisol content, and depth to water from the U.S. Department of Agriculture State Soil Geographic (STATSGO) data—and one nitrogen-source variable (fertilizer-application rate from county data mapped to Enhanced National Land Cover Data 1992 (NLCDe 92) agricultural land use) can vary substantially between circular buffers and volume-equivalent contributing recharge areas and among contributing recharge areas for different sets of well variables. The differences in estimated explanatory variables are a function of the same factors affecting the contributing recharge areas as well as the spatial resolution and local distribution of the underlying spatial data. As a result, differences in estimated variables between circular buffers and contributing recharge areas are complex and site specific, as evidenced by differences in estimated variables for circular buffers and contributing recharge areas of existing public-supply and network wells in the Great Miami River Basin. Large differences in area-weighted mean environmental variables are observed at the basin scale, determined by using the network of uniformly spaced hypothetical wells; the differences have a spatial pattern that generally is similar to spatial patterns in the underlying STATSGO data. Generally, the largest differences were observed for area-weighted nitrogen-application rate from county and national land-use data; the basin-scale differences ranged from -1,600 (indicating a larger value from within the volume-equivalent contributing recharge area) to 1,900 kilograms per year (kg/yr); the range in the underlying spatial data was from 0 to 2,200 kg/yr. Silt content, alfisol content, and nitrogen-application rate are defined by the underlying spatial data and are external to the groundwater system; however, depth to water is an environmental variable that can be estimated in more detail and, presumably, in a more physically based manner using a groundwater-flow model than using the spatial data. Model-calculated depths to water within circular buffers in the Great Miami River Basin differed substantially from values derived from the spatial data and had a much larger range. Differences in estimates of area-weighted spatial variables result in corresponding differences in predictions of nitrate occurrence in the aquifer. In addition to the factors affecting contributing recharge areas and estimated explanatory variables, differences in predictions also are a function of the specific set of explanatory variables used and the fitted slope coefficients in a given model. For models that predicted the probability of exceeding 1 and 4 milligrams per liter as nitrogen (mg/L as N), predicted probabilities using variables estimated from circular buffers and contributing recharge areas generally were correlated but differed significantly at the local and basin scale. The scale and distribution of prediction differences can be explained by the underlying differences in the estimated variables and the relative weight of the variables in the statistical models. Differences in predictions of exceeding 1 mg/L as N, which only includes environmental variables, generally correlated with the underlying differences in STATSGO data, whereas differences in exceeding 4 mg/L as N were more spatially extensive because that model included environmental and nitrogen-source variables. Using depths to water from within circular buffers derived from the spatial data and depths to water within the circular buffers calculated from the groundwater-flow model, restricted to the same range, resulted in large differences in predicted probabilities. The differences in estimated explanatory variables between contributing recharge areas and circular buffers indicate incorporation of physically based contributing recharge areas likely would result in a different set of explanatory variables and an improved set of statistical models. The use of a groundwater-flow model to improve representations of source areas or to provide more-detailed estimates of specific explanatory variables includes a number of limitations and technical considerations. An assumption in these analyses is that (1) there is a state of mass balance between recharge and pumping, and (2) transport to a pumped well is under a steady-state flow field. Comparison of volume-equivalent contributing recharge areas under steady-state and transient transport conditions at a location in the southeastern part of the basin shows the steady-state contributing recharge area is a reasonable approximation of the transient contributing recharge area after between 10 and 20 years of pumping. The first assumption is a more important consideration for this analysis. A gradient effect refers to a condition where simulated pumping from a well is less than recharge through the corresponding contributing recharge area. This generally takes place in areas with steep hydraulic gradients, such as near discharge locations, and can be mitigated using a finer model discretization. A boundary effect refers to a condition where recharge through the contributing recharge area is less than pumping. This indicates other sources of water to the simulated well and could reflect a real hydrologic process. In the Great Miami River Basin, large gradient and boundary effects—defined as the balance between pumping and recharge being less than half—occurred in 5 and 14 percent of the basin, respectively. The agreement between circular buffers and volume-equivalent contributing recharge areas, differences in estimated variables, and the effect on statistical-model predictions between the population of wells with a balance between pumping and recharge within 10 percent and the population of all wells were similar. This indicated process-model limitations did not affect the overall findings in the Great Miami River Basin; however, this would be model specific, and prudent use of a process model needs to entail a limitations analysis and, if necessary, alterations to the model.

  10. Sources of Sex Discrimination in Educational Systems: A Conceptual Model

    ERIC Educational Resources Information Center

    Kutner, Nancy G.; Brogan, Donna

    1976-01-01

    A conceptual model is presented relating numerous variables contributing to sexism in American education. Discrimination is viewed as intervening between two sets of interrelated independent variables and the dependent variable of sex inequalities in educational attainment. Sex-role orientation changes are the key to significant change in the…

  11. The Latent Variable Approach as Applied to Transitive Reasoning

    ERIC Educational Resources Information Center

    Bouwmeester, Samantha; Vermunt, Jeroen K.; Sijtsma, Klaas

    2012-01-01

    We discuss the limitations of hypothesis testing using (quasi-) experiments in the study of cognitive development and suggest latent variable modeling as a viable alternative to experimentation. Latent variable models allow testing a theory as a whole, incorporating individual differences with respect to developmental processes or abilities in the…

  12. 20180311 - Variability of LD50 Values from Rat Oral Acute Toxicity Studies: Implications for Alternative Model Development (SOT)

    EPA Science Inventory

    Alternative models developed for estimating acute systemic toxicity are generally evaluated using in vivo LD50 values. However, in vivo acute systemic toxicity studies can produce variable results, even when conducted according to accepted test guidelines. This variability can ma...

  13. Much Ado about Nothing--Or at Best, Very Little

    ERIC Educational Resources Information Center

    Widaman, Keith F.

    2014-01-01

    Latent variable structural equation modeling has become the analytic method of choice in many domains of research in psychology and allied social sciences. One important aspect of a latent variable model concerns the relations hypothesized to hold between latent variables and their indicators. The most common specification of structural equation…

  14. Five-Factor Model of Personality and Career Exploration

    ERIC Educational Resources Information Center

    Reed, Mary Beth; Bruch, Monroe A.; Haase, Richard F.

    2004-01-01

    This study investigates whether the dimensions of the five-factor model (FFM) of personality are related to specific career exploration variables. Based on the FFM, predictions were made about the relevance of particular traits to career exploration variables. Results from a canonical correlation analysis showed that variable loadings on three…

  16. Assessing Mediational Models: Testing and Interval Estimation for Indirect Effects

    ERIC Educational Resources Information Center

    Biesanz, Jeremy C.; Falk, Carl F.; Savalei, Victoria

    2010-01-01

    Theoretical models specifying indirect or mediated effects are common in the social sciences. An indirect effect exists when an independent variable's influence on the dependent variable is mediated through an intervening variable. Classic approaches to assessing such mediational hypotheses (Baron & Kenny, 1986; Sobel, 1982) have in recent years…

  17. Variable-Structure Control of a Model Glider Airplane

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.; Anderson, Mark R.

    2008-01-01

    A variable-structure control system designed to enable a fuselage-heavy airplane to recover from spin has been demonstrated in a hand-launched, instrumented model glider airplane. Variable-structure control is a high-speed switching feedback control technique that has been developed for control of nonlinear dynamic systems.
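
    The switching feedback itself is compact; a generic textbook sliding-mode sketch on a double integrator (not the glider's actual control law or gains):

    import numpy as np

    dt, K, lam = 0.001, 5.0, 2.0
    x, v = 1.0, 0.0                       # initial error and error rate
    for _ in range(5000):
        s = v + lam * x                   # sliding surface s = 0
        u = -K * np.sign(s)               # high-speed switching control
        v += u * dt                       # double-integrator dynamics
        x += v * dt
    print(f"x = {x:.4f}, v = {v:.4f}")    # state driven to a neighborhood of the origin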

  18. A Multivariate Model of Parent-Adolescent Relationship Variables in Early Adolescence

    ERIC Educational Resources Information Center

    McKinney, Cliff; Renk, Kimberly

    2011-01-01

    Given the importance of predicting outcomes for early adolescents, this study examines a multivariate model of parent-adolescent relationship variables, including parenting, family environment, and conflict. Participants, who completed measures assessing these variables, included 710 culturally diverse 11-14-year-olds who were attending a middle…

  19. Structural Equations and Path Analysis for Discrete Data.

    ERIC Educational Resources Information Center

    Winship, Christopher; Mare, Robert D.

    1983-01-01

    Presented is an approach to causal models in which some or all variables are discretely measured, showing that path analytic methods permit quantification of causal relationships among variables with the same flexibility and power of interpretation as is feasible in models including only continuous variables. Examples are provided. (Author/IS)

  20. Sex determination of the Acadian Flycatcher using discriminant analysis

    USGS Publications Warehouse

    Wilson, R.R.

    1999-01-01

    I used five morphometric variables from 114 individuals captured in Arkansas to develop a discriminant model to predict the sex of Acadian Flycatchers (Empidonax virescens). Stepwise discriminant function analyses selected wing chord and tail length as the most parsimonious subset of variables for discriminating sex. This two-variable model correctly classified 80% of females and 97% of males used to develop the model. Validation of the model using 19 individuals from Louisiana and Virginia resulted in 100% correct classification of males and females. This model provides criteria for sexing monomorphic Acadian Flycatchers during the breeding season and possibly during the winter.
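
    A two-variable discriminant model of this kind is a few lines with scikit-learn; the wing chord and tail length measurements below are simulated around plausible values, not the Arkansas data:

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)
    males = np.column_stack([rng.normal(70, 1.5, 60), rng.normal(58, 1.5, 60)])    # wing, tail (mm)
    females = np.column_stack([rng.normal(66, 1.5, 54), rng.normal(55, 1.5, 54)])
    X = np.vstack([males, females])
    sex = np.array(["M"] * 60 + ["F"] * 54)

    lda = LinearDiscriminantAnalysis().fit(X, sex)
    print(lda.predict([[69.0, 57.0]]))    # classify a new bird
    print(lda.score(X, sex))              # resubstitution accuracy, cf. 80-97% in the study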

  1. Censored Hurdle Negative Binomial Regression (Case Study: Neonatorum Tetanus Case in Indonesia)

    NASA Astrophysics Data System (ADS)

    Yuli Rusdiana, Riza; Zain, Ismaini; Wulan Purnami, Santi

    2017-06-01

    Hurdle negative binomial regression is a method for discrete dependent variables that exhibit excess zeros and under- or overdispersion. It takes a two-part approach: the first part, the zero hurdle model, estimates the zero elements of the dependent variable, while the second part, a truncated negative binomial model, estimates the nonzero (positive integer) elements. The discrete dependent variable in such cases is censored for some values; the type of censoring studied in this research is right censoring. This study aims to obtain parameter estimators for hurdle negative binomial regression with a right-censored dependent variable, together with the corresponding test statistics. Parameter estimation uses the Maximum Likelihood Estimator (MLE). The model is applied to the number of neonatorum tetanus cases in Indonesia, count data that contain zeros in some observations and a variety of other values. Based on the regression results, the factors that influence neonatorum tetanus cases in Indonesia are the percentage of baby health care coverage and neonatal visits.
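
    A minimal sketch of the two-part likelihood described above, assuming an NB2 parameterization and omitting the paper's right-censoring adjustment; the design matrix X, counts y and the parameter layout are illustrative:

```python
# Hurdle negative binomial negative log-likelihood (sketch, no censoring).
# The parameter vector stacks [beta_zero, beta_count, log(alpha)].
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize

def hurdle_nb_negloglik(params, X, y):
    k = X.shape[1]
    beta_zero = params[:k]                      # logistic part: P(y == 0)
    beta_count = params[k:2 * k]                # truncated-NB mean
    alpha = np.exp(params[-1])                  # NB dispersion, kept positive

    pi = 1.0 / (1.0 + np.exp(-X @ beta_zero))   # P(y == 0)
    mu = np.exp(X @ beta_count)
    zero = (y == 0)

    ll = np.sum(np.log(pi[zero] + 1e-12))       # hurdle (zero) part

    yp, mup = y[~zero], mu[~zero]
    r = 1.0 / alpha
    lognb = (gammaln(yp + r) - gammaln(r) - gammaln(yp + 1)
             + r * np.log(r / (r + mup)) + yp * np.log(mup / (r + mup)))
    logp0 = r * np.log(r / (r + mup))           # NB mass at zero
    ll += np.sum(np.log(1.0 - pi[~zero] + 1e-12)
                 + lognb - np.log1p(-np.exp(logp0)))  # zero-truncated NB part
    return -ll

# e.g.: res = minimize(hurdle_nb_negloglik, x0, args=(X, y), method="BFGS")
```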

  2. The spatial and temporal variability of groundwater recharge in a forested basin in northern Wisconsin

    USGS Publications Warehouse

    Dripps, W.R.; Bradbury, K.R.

    2010-01-01

    Recharge varies spatially and temporally as it depends on a wide variety of factors (e.g. vegetation, precipitation, climate, topography, geology, and soil type), making it one of the most difficult, complex, and uncertain hydrologic parameters to quantify. Despite its inherent variability, groundwater modellers, planners, and policy makers often ignore recharge variability and assume a single average recharge value for an entire watershed. Relatively few attempts have been made to quantify or incorporate spatial and temporal recharge variability into water resource planning or groundwater modelling efforts. In this study, a simple, daily soil-water balance model was developed and used to estimate the spatial and temporal distribution of groundwater recharge of the Trout Lake basin of northern Wisconsin for 1996-2000 as a means to quantify recharge variability. For the 5 years of study, annual recharge varied spatially by as much as 18 cm across the basin; vegetation was the predominant control on this variability. Recharge also varied temporally with a threefold annual difference over the 5-year period. Intra-annually, recharge was limited to a few isolated events each year and exhibited a distinct seasonal pattern. The results suggest that ignoring recharge variability may not only be inappropriate, but also, depending on the application, may invalidate model results and predictions for regional and local water budget calculations, water resource management, nutrient cycling, and contaminant transport studies. Recharge is spatially and temporally variable, and should be modelled as such. Copyright © 2009 John Wiley & Sons, Ltd.

  3. How ocean lateral mixing changes Southern Ocean variability in coupled climate models

    NASA Astrophysics Data System (ADS)

    Pradal, M. A. S.; Gnanadesikan, A.; Thomas, J. L.

    2016-02-01

    The lateral mixing of tracers represents a major uncertainty in the formulation of coupled climate models. The mixing of tracers along density surfaces in the interior and horizontally within the mixed layer is often parameterized using a mixing coefficient ARedi. The models used in the Coupled Model Intercomparison Project 5 exhibit more than an order of magnitude range in the values of this coefficient used within the Southern Ocean. The impacts of such uncertainty on Southern Ocean variability have remained unclear, even as recent work has shown that this variability differs between models. In this poster, we change the lateral mixing coefficient within GFDL ESM2Mc, a coarse-resolution Earth System model that nonetheless has a reasonable circulation within the Southern Ocean. As the coefficient varies from 400 to 2400 m2/s, the amplitude of the variability changes significantly. The low-mixing case shows strong decadal variability, with an annual mean RMS temperature variability exceeding 1 C in the Circumpolar Current. The highest-mixing case shows a very similar spatial pattern of variability, but with amplitudes only about 60% as large. The suppression of variability is larger in the Atlantic sector of the Southern Ocean relative to the Pacific sector. We examine the salinity budgets of convective regions, paying particular attention to the extent to which high mixing prevents the buildup of low-salinity waters that are capable of shutting off deep convection entirely.

  4. Data mining of tree-based models to analyze freeway accident frequency.

    PubMed

    Chang, Li-Yen; Chen, Wen-Chieh

    2005-01-01

    Statistical models, such as Poisson or negative binomial regression models, have been employed to analyze vehicle accident frequency for many years. However, these models carry their own assumptions and pre-defined underlying relationships between dependent and independent variables; if these assumptions are violated, the models can produce erroneous estimates of accident likelihood. Classification and Regression Tree (CART), one of the most widely applied data mining techniques, has been commonly employed in business administration, industry, and engineering. CART does not require any pre-defined underlying relationship between the target (dependent) variable and predictors (independent variables) and has been shown to be a powerful tool, particularly for dealing with prediction and classification problems. This study collected the 2001-2002 accident data of National Freeway 1 in Taiwan. A CART model and a negative binomial regression model were developed to establish the empirical relationship between traffic accidents and highway geometric variables, traffic characteristics, and environmental factors. The CART findings indicated that the average daily traffic volume and precipitation variables were the key determinants of freeway accident frequency. By comparing the prediction performance of the CART and negative binomial regression models, this study demonstrates that CART is a good alternative method for analyzing freeway accident frequencies.
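
    As an illustration of the technique (not the paper's code or data), a hedged scikit-learn sketch of a regression tree on synthetic accident counts; the predictor names and data-generating process are invented:

```python
# CART-style regression tree on synthetic accident-frequency data.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 500
aadt = rng.uniform(2e4, 1e5, n)               # average daily traffic volume
precip = rng.uniform(0, 50, n)                # precipitation (mm)
grade = rng.uniform(-3, 3, n)                 # roadway grade (%)
X = np.column_stack([aadt, precip, grade])
y = rng.poisson(1e-5 * aadt + 0.02 * precip)  # synthetic accident counts

tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=20).fit(X, y)
print(dict(zip(["aadt", "precip", "grade"], tree.feature_importances_)))
```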

  5. Stress-based animal models of depression: Do we actually know what we are doing?

    PubMed

    Yin, Xin; Guven, Nuri; Dietis, Nikolas

    2016-12-01

    Depression is one of the leading causes of disability and a significant health concern worldwide. Much of our current understanding of the pathogenesis of depression and the pharmacology of antidepressant drugs is based on pre-clinical models. Three of the most popular stress-based rodent models are the forced swimming test, the chronic mild stress paradigm and the learned helplessness model. Despite their recognizable advantages and limitations, they are associated with immense variability due to the high number of design parameters that define them. Only a few studies have reported how minor modifications of these parameters affect the model phenotype. Thus, the existing variability in how these models are used has been a strong barrier to drug development as well as to the benchmarking and evaluation of these pre-clinical models of depression. It has also been the source of confusing variability in experimental outcomes between research groups using the same models. In this review, we summarize the known variability in the experimental protocols, identify the main and relevant parameters for each model and describe the variable values using characteristic examples. Our view of depression and our efforts to discover novel and effective antidepressants are largely based on our detailed knowledge of these testing paradigms, and require a sound understanding of the importance of individual parameters to optimize and improve these pre-clinical models. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. The prediction of nonlinear dynamic loads on helicopters from flight variables using artificial neural networks

    NASA Technical Reports Server (NTRS)

    Cook, A. B.; Fuller, C. R.; O'Brien, W. F.; Cabell, R. H.

    1992-01-01

    A method of indirectly monitoring component loads through common flight variables is proposed which requires an accurate model of the underlying nonlinear relationships. An artificial neural network (ANN) model learns relationships through exposure to a database of flight variable records and corresponding load histories from an instrumented military helicopter undergoing standard maneuvers. The ANN model, utilizing eight standard flight variables as inputs, is trained to predict normalized time-varying mean and oscillatory loads on two critical components over a range of seven maneuvers. Both interpolative and extrapolative capabilities are demonstrated with agreement between predicted and measured loads on the order of 90 percent to 95 percent. This work justifies pursuing the ANN method of predicting loads from flight variables.
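
    A hedged sketch of the general idea: eight flight variables mapped to a load through a small feed-forward network. The data here are synthetic and scikit-learn stands in for the original 1992 network:

```python
# Feed-forward ANN mapping flight variables to a synthetic nonlinear "load".
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))                 # eight flight variables
y = np.tanh(X @ rng.normal(size=8))            # synthetic nonlinear load

ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ann.fit(X[:1500], y[:1500])
print(ann.score(X[1500:], y[1500:]))           # R^2 on held-out samples
```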

  7. A Short Guide to the Climatic Variables of the Last Glacial Maximum for Biogeographers.

    PubMed

    Varela, Sara; Lima-Ribeiro, Matheus S; Terribile, Levi Carina

    2015-01-01

    Ecological niche models are widely used for mapping the distribution of species during the last glacial maximum (LGM). Although the selection of the variables and General Circulation Models (GCMs) used for constructing those maps determine the model predictions, we still lack a discussion about which variables and which GCM should be included in the analysis and why. Here, we analyzed the climatic predictions for the LGM of 9 different GCMs in order to help biogeographers to select their GCMs and climatic layers for mapping the species ranges in the LGM. We 1) map the discrepancies between the climatic predictions of the nine GCMs available for the LGM, 2) analyze the similarities and differences between the GCMs and group them to help researchers choose the appropriate GCMs for calibrating and projecting their ecological niche models (ENM) during the LGM, and 3) quantify the agreement of the predictions for each bioclimatic variable to help researchers avoid the environmental variables with a poor consensus between models. Our results indicate that, in absolute values, GCMs have a strong disagreement in their temperature predictions for temperate areas, while the uncertainties for the precipitation variables are in the tropics. In spite of the discrepancies between model predictions, temperature variables (BIO1-BIO11) are highly correlated between models. Precipitation variables (BIO12-BIO19) show no correlation between models, and specifically, BIO14 (precipitation of the driest month) and BIO15 (Precipitation Seasonality (Coefficient of Variation)) show the highest level of discrepancy between GCMs. Following our results, we strongly recommend the use of different GCMs for constructing or projecting ENMs, particularly when predicting the distribution of species that inhabit the tropics and the temperate areas of the Northern and Southern Hemispheres, because climatic predictions for those areas vary greatly among GCMs. We also recommend the exclusion of BIO14 and BIO15 from ENMs because those variables show a high level of discrepancy between GCMs. Thus, by excluding them, we decrease the level of uncertainty of our predictions. All the climatic layers produced for this paper are freely available in http://ecoclimate.org/.

  8. A Short Guide to the Climatic Variables of the Last Glacial Maximum for Biogeographers

    PubMed Central

    Varela, Sara; Lima-Ribeiro, Matheus S.; Terribile, Levi Carina

    2015-01-01

    Ecological niche models are widely used for mapping the distribution of species during the last glacial maximum (LGM). Although the selection of the variables and General Circulation Models (GCMs) used for constructing those maps determine the model predictions, we still lack a discussion about which variables and which GCM should be included in the analysis and why. Here, we analyzed the climatic predictions for the LGM of 9 different GCMs in order to help biogeographers to select their GCMs and climatic layers for mapping the species ranges in the LGM. We 1) map the discrepancies between the climatic predictions of the nine GCMs available for the LGM, 2) analyze the similarities and differences between the GCMs and group them to help researchers choose the appropriate GCMs for calibrating and projecting their ecological niche models (ENM) during the LGM, and 3) quantify the agreement of the predictions for each bioclimatic variable to help researchers avoid the environmental variables with a poor consensus between models. Our results indicate that, in absolute values, GCMs have a strong disagreement in their temperature predictions for temperate areas, while the uncertainties for the precipitation variables are in the tropics. In spite of the discrepancies between model predictions, temperature variables (BIO1-BIO11) are highly correlated between models. Precipitation variables (BIO12-BIO19) show no correlation between models, and specifically, BIO14 (precipitation of the driest month) and BIO15 (Precipitation Seasonality (Coefficient of Variation)) show the highest level of discrepancy between GCMs. Following our results, we strongly recommend the use of different GCMs for constructing or projecting ENMs, particularly when predicting the distribution of species that inhabit the tropics and the temperate areas of the Northern and Southern Hemispheres, because climatic predictions for those areas vary greatly among GCMs. We also recommend the exclusion of BIO14 and BIO15 from ENMs because those variables show a high level of discrepancy between GCMs. Thus, by excluding them, we decrease the level of uncertainty of our predictions. All the climatic layers produced for this paper are freely available in http://ecoclimate.org/. PMID:26068930

  9. Soil Cd, Cr, Cu, Ni, Pb and Zn sorption and retention models using SVM: Variable selection and competitive model.

    PubMed

    González Costa, J J; Reigosa, M J; Matías, J M; Covelo, E F

    2017-09-01

    The aim of this study was to model the sorption and retention of Cd, Cu, Ni, Pb and Zn in soils. To that extent, the sorption and retention of these metals were studied and the soil characterization was performed separately. Multiple stepwise regression was used to produce multivariate models with linear techniques and with support vector machines, all of which included 15 explanatory variables characterizing soils. When the R-squared values are represented, two different groups are noticed. Cr, Cu and Pb sorption and retention show a higher R-squared, the most explanatory variables being humified organic matter, Al oxides and, in some cases, cation-exchange capacity (CEC). The other group of metals (Cd, Ni and Zn) shows a lower R-squared, and clays are the most explanatory variables, including the percentages of vermiculite and silt. In some cases, quartz, plagioclase or hematite percentages also show some explanatory capacity. Support Vector Machine (SVM) regression shows that the different models are not as regular as in multiple regression in terms of the number of variables, the regression for nickel adsorption being the one with the highest number of variables in its optimal model. On the other hand, there are cases where the most explanatory variables are the same for two metals, as happens with Cd and Cr adsorption; a similar adsorption mechanism is thus postulated. These patterns of the introduction of variables into the models allow us to create explainability sequences. Those most similar to the selectivity sequences obtained by Covelo (2005) are Mn oxides in multiple regression and cation-exchange capacity in SVM. Among all the variables, the only one that is explanatory for all the metals after applying the maximum parsimony principle is the percentage of sand in the retention process. In the competitive model arising from the aforementioned sequences, the most intense competition for the adsorption and retention of the different metals appears between Cr and Cd, Cu and Zn in multiple regression, and between Cr and Cd in SVM regression. Copyright © 2017 Elsevier B.V. All rights reserved.
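
    A sketch of the SVM regression step under stated assumptions: a sorption coefficient regressed on 15 soil descriptors with an RBF-kernel SVR; the data are synthetic placeholders, not the study's soils:

```python
# SVM regression of a synthetic sorption coefficient on soil descriptors.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 15))   # 15 soil descriptors (OM, oxides, CEC, ...)
y = 2 * X[:, 0] + X[:, 3] + rng.normal(0, 0.3, 60)   # synthetic Kd

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X, y)
print(model.score(X, y))        # in-sample R^2 of the fitted SVR
```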

  10. Multi-objective optimization for evaluation of simulation fidelity for precipitation, cloudiness and insolation in regional climate models

    NASA Astrophysics Data System (ADS)

    Lee, H.

    2016-12-01

    Precipitation is one of the most important climate variables that are taken into account in studying regional climate. Nevertheless, how precipitation will respond to a changing climate and even its mean state in the current climate are not well represented in regional climate models (RCMs). Hence, comprehensive and mathematically rigorous methodologies to evaluate precipitation and related variables in multiple RCMs are required. The main objective of the current study is to evaluate the joint variability of climate variables related to model performance in simulating precipitation and condense multiple evaluation metrics into a single summary score. We use multi-objective optimization, a mathematical process that provides a set of optimal tradeoff solutions based on a range of evaluation metrics, to characterize the joint representation of precipitation, cloudiness and insolation in RCMs participating in the North American Regional Climate Change Assessment Program (NARCCAP) and Coordinated Regional Climate Downscaling Experiment-North America (CORDEX-NA). We also leverage ground observations, NASA satellite data and the Regional Climate Model Evaluation System (RCMES). Overall, the quantitative comparison of joint probability density functions between the three variables indicates that performance of each model differs markedly between sub-regions and also shows strong seasonal dependence. Because of the large variability across the models, it is important to evaluate models systematically and make future projections using only models showing relatively good performance. Our results indicate that the optimized multi-model ensemble always shows better performance than the arithmetic ensemble mean and may guide reliable future projections.

  11. Forward Modeling of Oxygen Isotope Variability in Tropical Andean Ice Cores

    NASA Astrophysics Data System (ADS)

    Vuille, M. F.; Hurley, J. V.; Hardy, D. R.

    2016-12-01

    Ice core records from the tropical Andes serve as important archives of past tropical Pacific SST variability and changes in monsoon intensity upstream over the Amazon basin. Yet the interpretation of the oxygen isotopic signal in these ice cores remains controversial. Based on 10 years of continuous on-site glaciologic, meteorologic and isotopic measurements at the summit of the world's largest tropical ice cap, Quelccaya, in southern Peru, we developed a process-based physical forward model (proxy system model), capable of simulating intraseasonal, seasonal and interannual variability in delta-18O as observed in snow pits and short cores. Our results highlight the importance of taking into account post-depositional effects (sublimation and isotopic enrichment) to properly simulate the seasonal cycle. Intraseasonal variability is underestimated in our model unless the effects of cold air incursions, triggering significant monsoonal snowfall and more negative delta-18O values, are included. A number of sensitivity tests highlight the influence of changing boundary conditions on the final snow isotopic profile. Such tests also show that our model provides much more realistic data than direct model output of precipitation delta-18O from isotope-enabled climate models (SWING ensemble). The forward model was calibrated with and run under present-day conditions, but it can also be driven with past climate forcings to reconstruct paleo-monsoon variability and investigate the influence of changes in radiative forcings (solar, volcanic) on delta-18O variability in Andean snow. The model is transferable and may be used to render a paleoclimatic context at other ice core locations.

  12. Parameter estimation of variable-parameter nonlinear Muskingum model using excel solver

    NASA Astrophysics Data System (ADS)

    Kang, Ling; Zhou, Liwei

    2018-02-01

    The Muskingum model is an effective flood-routing technique in hydrology and water resources engineering. With the development of optimization technology, more and more variable-parameter Muskingum models have been presented in recent decades to improve the effectiveness of the Muskingum model. A variable-parameter nonlinear Muskingum model (NVPNLMM) is proposed in this paper. In two real and frequently used case studies, the NVPNLMM obtained better values of the evaluation criteria, which describe the quality of the estimated outflows and compare the accuracies of flood routing across models, and its optimal estimated outflows were closer to the observed outflows than those of other models.
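
    For orientation, a minimal sketch of nonlinear Muskingum routing with fixed parameters, using the classic storage relation S = K[xI + (1-x)O]^m and a simple explicit scheme; K, x, m and the scheme are illustrative assumptions, not the NVPNLMM's calibrated, time-varying parameters:

```python
# Nonlinear Muskingum routing sketch with fixed (assumed) parameters.
import numpy as np

def muskingum_route(inflow, K=0.8, x=0.2, m=1.3, dt=1.0):
    """Route a hydrograph through nonlinear storage S = K[xI + (1-x)O]^m."""
    inflow = np.asarray(inflow, dtype=float)
    outflow = np.empty_like(inflow)
    S = K * inflow[0] ** m                       # assume initial steady state
    for t in range(len(inflow)):
        # invert the storage relation for the current outflow
        outflow[t] = ((S / K) ** (1.0 / m) - x * inflow[t]) / (1.0 - x)
        if t < len(inflow) - 1:
            S += dt * (inflow[t] - outflow[t])   # continuity: dS/dt = I - O
    return outflow
```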

  13. Effects of input uncertainty on cross-scale crop modeling

    NASA Astrophysics Data System (ADS)

    Waha, Katharina; Huth, Neil; Carberry, Peter

    2014-05-01

    The quality of data on climate, soils and agricultural management in the tropics is in general low or data is scarce, leading to uncertainty in process-based modeling of cropping systems. Process-based crop models are common tools for simulating crop yields and crop production in climate change impact studies, studies on mitigation and adaptation options or food security studies. Crop modelers are concerned about input data accuracy as this, together with an adequate representation of plant physiology processes and choice of model parameters, are the key factors for a reliable simulation. For example, assuming an error in measurements of air temperature, radiation and precipitation of ± 0.2°C, ± 2 % and ± 3 % respectively, Fodor & Kovacs (2005) estimate that this translates into an uncertainty of 5-7 % in yield and biomass simulations. In our study we seek to answer the following questions: (1) are there important uncertainties in the spatial variability of simulated crop yields on the grid-cell level displayed on maps, (2) are there important uncertainties in the temporal variability of simulated crop yields on the aggregated, national level displayed in time-series, and (3) how does the accuracy of different soil, climate and management information influence the simulated crop yields in two crop models designed for use at different spatial scales? The study will help to determine whether more detailed information improves the simulations and to advise model users on the uncertainty related to input data. We analyse the performance of the point-scale crop model APSIM (Keating et al., 2003) and the global scale crop model LPJmL (Bondeau et al., 2007) with different climate information (monthly and daily) and soil conditions (global soil map and African soil map) under different agricultural management (uniform and variable sowing dates) for the low-input maize-growing areas in Burkina Faso/West Africa. We test the models' response to different levels of input data from very little to very detailed information, and compare the models' abilities to represent the spatial and temporal variability in crop yields. We display the uncertainty in crop yield simulations from different input data and crop models in Taylor diagrams, which are a graphical summary of the similarity between simulations and observations (Taylor, 2001). The observed spatial variability can be represented well by both models (R=0.6-0.8), but APSIM predicts higher spatial variability than LPJmL due to its sensitivity to soil parameters. Simulations with the same crop model, climate and sowing dates have similar statistics and therefore similar skill in reproducing the observed spatial variability. Soil data are less important for the skill of a crop model in reproducing the observed spatial variability. However, the uncertainty in simulated spatial variability from the two crop models is larger than from input data settings, and APSIM is more sensitive to input data than LPJmL. Even with a detailed, point-scale crop model and detailed input data, it is difficult to capture the complexity and diversity in maize cropping systems.
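
    The summary statistics a Taylor diagram displays can be computed directly; a small sketch using the standard definitions, with synthetic inputs assumed:

```python
# Taylor-diagram statistics: correlation, standard-deviation ratio and
# centered RMS difference between a simulated and an observed field.
import numpy as np

def taylor_stats(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    sd_ratio = sim.std() / obs.std()
    crmsd = np.sqrt(np.mean(((sim - sim.mean()) - (obs - obs.mean())) ** 2))
    return r, sd_ratio, crmsd
```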

  14. Physiologically Based Pharmacokinetic (PBPK) Modeling of ...

    EPA Pesticide Factsheets

    Background: Quantitative estimation of toxicokinetic variability in the human population is a persistent challenge in risk assessment of environmental chemicals. Traditionally, inter-individual differences in the population are accounted for by default assumptions or, in rare cases, are based on human toxicokinetic data. Objectives: To evaluate the utility of genetically diverse mouse strains for estimating toxicokinetic population variability for risk assessment, using trichloroethylene (TCE) metabolism as a case study. Methods: We used data on oxidative and glutathione conjugation metabolism of TCE in 16 inbred and one hybrid mouse strains to calibrate and extend existing physiologically-based pharmacokinetic (PBPK) models. We added one-compartment models for glutathione metabolites and a two-compartment model for dichloroacetic acid (DCA). A Bayesian population analysis of inter-strain variability was used to quantify variability in TCE metabolism. Results: Concentration-time profiles for TCE metabolism to oxidative and glutathione conjugation metabolites varied across strains. Median predictions for the metabolic flux through oxidation were less variable (5-fold range) than those through glutathione conjugation (10-fold range). For oxidative metabolites, median predictions of trichloroacetic acid production were less variable (2-fold range) than DCA production (5-fold range), although uncertainty bounds for DCA exceeded the predicted variability. Conclusions:

  15. Estimating the urban bias of surface shelter temperatures using upper-air and satellite data. Part 1: Development of models predicting surface shelter temperatures

    NASA Technical Reports Server (NTRS)

    Epperson, David L.; Davis, Jerry M.; Bloomfield, Peter; Karl, Thomas R.; Mcnab, Alan L.; Gallo, Kevin P.

    1995-01-01

    Multiple regression techniques were used to predict surface shelter temperatures based on the time period 1986-89 using upper-air data from the European Centre for Medium-Range Weather Forecasts (ECMWF) to represent the background climate and site-specific data to represent the local landscape. Global monthly mean temperature models were developed using data from over 5000 stations available in the Global Historical Climate Network (GHCN). Monthly maximum, mean, and minimum temperature models for the United States were also developed using data from over 1000 stations available in the U.S. Cooperative (COOP) Network and comparative monthly mean temperature models were developed using over 1150 U.S. stations in the GHCN. Three-, six-, and full-variable models were developed for comparative purposes. Inferences about the variables selected for the various models were easier for the GHCN models, which displayed month-to-month consistency in which variables were selected, than for the COOP models, which were assigned a different list of variables for nearly every month. These and other results suggest that global calibration is preferred because data from the global spectrum of physical processes that control surface temperatures are incorporated in a global model. All of the models that were developed in this study validated relatively well, especially the global models. Recalibration of the models with validation data resulted in only slightly poorer regression statistics, indicating that the calibration list of variables was valid. Predictions using data from the validation dataset in the calibrated equation were better for the GHCN models, and the globally calibrated GHCN models generally provided better U.S. predictions than the U.S.-calibrated COOP models. Overall, the GHCN and COOP models explained approximately 64%-95% of the total variance of surface shelter temperatures, depending on the month and the number of model variables. In addition, root-mean-square errors (rmse's) were over 3 C for GHCN models and over 2 C for COOP models for winter months, and near 2 C for GHCN models and near 1.5 C for COOP models for summer months.

  16. Improvement in latent variable indirect response modeling of multiple categorical clinical endpoints: application to modeling of guselkumab treatment effects in psoriatic patients.

    PubMed

    Hu, Chuanpu; Randazzo, Bruce; Sharma, Amarnath; Zhou, Honghui

    2017-10-01

    Exposure-response modeling plays an important role in optimizing dose and dosing regimens during clinical drug development. The modeling of multiple endpoints is made possible in part by recent progress in latent variable indirect response (IDR) modeling for ordered categorical endpoints. This manuscript aims to investigate the level of improvement achievable by jointly modeling two such endpoints in the latent variable IDR modeling framework through the sharing of model parameters. This is illustrated with an application to the exposure-response of guselkumab, a human IgG1 monoclonal antibody in clinical development that blocks IL-23. A Phase 2b study was conducted in 238 patients with psoriasis for which disease severity was assessed using Psoriasis Area and Severity Index (PASI) and Physician's Global Assessment (PGA) scores. A latent variable Type I IDR model was developed to evaluate the therapeutic effect of guselkumab dosing on 75, 90 and 100% improvement of PASI scores from baseline and PGA scores, with placebo effect empirically modeled. The results showed that the joint model is able to describe the observed data better with fewer parameters compared with the common approach of separately modeling the endpoints.

  17. Decadal variability of the Tropical Atlantic Ocean Surface Temperature in shipboard measurements and in a Global Ocean-Atmosphere model

    NASA Technical Reports Server (NTRS)

    Mehta, Vikram M.; Delworth, Thomas

    1995-01-01

    Sea surface temperature (SST) variability was investigated in a 200-yr integration of a global model of the coupled oceanic and atmospheric general circulations developed at the Geophysical Fluid Dynamics Laboratory (GFDL). The second 100 yr of SST in the coupled model's tropical Atlantic region were analyzed with a variety of techniques. Analyses of SST time series, averaged over approximately the same subregions as the Global Ocean Surface Temperature Atlas (GOSTA) time series, showed that the GFDL SST anomalies also undergo pronounced quasi-oscillatory decadal and multidecadal variability but at somewhat shorter timescales than the GOSTA SST anomalies. Further analyses of the horizontal structures of the decadal timescale variability in the GFDL coupled model showed the existence of two types of variability in general agreement with results of the GOSTA SST time series analyses. One type, characterized by timescales between 8 and 11 yr, has high spatial coherence within each hemisphere but not between the two hemispheres of the tropical Atlantic. A second type, characterized by timescales between 12 and 20 yr, has high spatial coherence between the two hemispheres. The second type of variability is considerably weaker than the first. As in the GOSTA time series, the multidecadal variability in the GFDL SST time series has approximately opposite phases between the tropical North and South Atlantic Oceans. Empirical orthogonal function analyses of the tropical Atlantic SST anomalies revealed a north-south bipolar pattern as the dominant pattern of decadal variability. It is suggested that the bipolar pattern can be interpreted as decadal variability of the interhemispheric gradient of SST anomalies. The decadal and multidecadal timescale variability of the tropical Atlantic SST, both in the actual and in the GFDL model, stands out significantly above the background 'red noise' and is coherent within each of the time series, suggesting that specific sets of processes may be responsible for the choice of the decadal and multidecadal timescales. Finally, it must be emphasized that the GFDL coupled ocean-atmosphere model generates the decadal and multidecadal timescale variability without any externally applied force, solar or lunar, at those timescales.

  18. Confidence Intervals for a Semiparametric Approach to Modeling Nonlinear Relations among Latent Variables

    ERIC Educational Resources Information Center

    Pek, Jolynn; Losardo, Diane; Bauer, Daniel J.

    2011-01-01

    Compared to parametric models, nonparametric and semiparametric approaches to modeling nonlinearity between latent variables have the advantage of recovering global relationships of unknown functional form. Bauer (2005) proposed an indirect application of finite mixtures of structural equation models where latent components are estimated in the…

  19. Specifying and Refining a Complex Measurement Model.

    ERIC Educational Resources Information Center

    Levy, Roy; Mislevy, Robert J.

    This paper aims to describe a Bayesian approach to modeling and estimating cognitive models both in terms of statistical machinery and actual instrument development. Such a method taps the knowledge of experts to provide initial estimates for the probabilistic relationships among the variables in a multivariate latent variable model and refines…

  20. Ecosystem functioning is enveloped by hydrometeorological variability.

    PubMed

    Pappas, Christoforos; Mahecha, Miguel D; Frank, David C; Babst, Flurin; Koutsoyiannis, Demetris

    2017-09-01

    Terrestrial ecosystem processes, and the associated vegetation carbon dynamics, respond differently to hydrometeorological variability across timescales, and so does our scientific understanding of the underlying mechanisms. Long-term variability of the terrestrial carbon cycle is not yet well constrained and the resulting climate-biosphere feedbacks are highly uncertain. Here we present a comprehensive overview of hydrometeorological and ecosystem variability from hourly to decadal timescales integrating multiple in situ and remote-sensing datasets characterizing extra-tropical forest sites. We find that ecosystem variability at all sites is confined within a hydrometeorological envelope across sites and timescales. Furthermore, ecosystem variability demonstrates long-term persistence, highlighting ecological memory and slow ecosystem recovery rates after disturbances. However, simulation results with state-of-the-art process-based models do not reflect this long-term persistent behaviour in ecosystem functioning. Accordingly, we develop a cross-time-scale stochastic framework that captures hydrometeorological and ecosystem variability. Our analysis offers a perspective for terrestrial ecosystem modelling and paves the way for new model-data integration opportunities in Earth system sciences.

  1. Overcoming multicollinearity in multiple regression using correlation coefficient

    NASA Astrophysics Data System (ADS)

    Zainodin, H. J.; Yap, S. J.

    2013-09-01

    Multicollinearity happens when there are high correlations among independent variables. In this case, it would be difficult to distinguish between the contributions of these independent variables to that of the dependent variable, as they may compete to explain much of the same variance. Besides, the problem of multicollinearity also violates an assumption of multiple regression: that there is no collinearity among the possible independent variables. Thus, an alternative approach to overcoming the multicollinearity problem is introduced, eventually achieving a well-represented model. This approach is accomplished by removing the multicollinearity source variables on the basis of the correlation coefficient values from the full correlation matrix. Using the full correlation matrix facilitates the implementation of Excel functions in removing the multicollinearity source variables. This procedure is found to be easier and time-saving, especially when dealing with a greater number of independent variables in a model and a large number of possible models. Hence, in this paper a detailed insight into the procedure is shown, compared and implemented.
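
    A sketch of the idea in Python rather than Excel: drop independent variables whose pairwise correlation exceeds a threshold, based on the full correlation matrix (the 0.95 threshold is an assumed choice, not the paper's):

```python
# Remove multicollinearity source variables via the full correlation matrix.
import pandas as pd

def drop_collinear(X: pd.DataFrame, threshold: float = 0.95) -> pd.DataFrame:
    corr = X.corr().abs()                      # full correlation matrix
    cols = corr.columns
    drop = set()
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            if corr.iloc[i, j] > threshold and cols[j] not in drop:
                drop.add(cols[j])              # keep the first of each pair
    return X.drop(columns=sorted(drop))
```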

  2. A Linear Regression Model Identifying the Primary Factors Contributing to Maintenance Man Hours for the C-17 Globemaster III in the Air National Guard

    DTIC Science & Technology

    2012-06-15

    Maintenance AFSCs; Variance Inflation Factors. ... It is an indication of how much of the variation in the data can be accounted for in the regression model. ... Variance Inflation Factors are computed for each independent variable (predictor) as regressed against all of the other independent variables in the model.
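
    A hedged illustration of the variance-inflation-factor check described above, via statsmodels (each predictor is regressed on the others, VIF = 1/(1 - R^2)); the predictor names and data are invented:

```python
# Variance inflation factors for a small synthetic predictor matrix.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
X = pd.DataFrame({"flying_hours": rng.normal(100, 10, 200)})
X["sorties"] = 0.9 * X["flying_hours"] + rng.normal(0, 2, 200)  # collinear
X["aircraft_age"] = rng.normal(15, 3, 200)

exog = sm.add_constant(X)
vif = {c: variance_inflation_factor(exog.values, i)
       for i, c in enumerate(exog.columns) if c != "const"}
print(vif)   # VIF > 10 is a common rule of thumb for problematic collinearity
```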

  3. AMOC decadal variability in Earth system models: Mechanisms and climate impacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fedorov, Alexey

    This is the final report for the project titled "AMOC decadal variability in Earth system models: Mechanisms and climate impacts". The central goal of this one-year research project was to understand the mechanisms of decadal and multi-decadal variability of the Atlantic Meridional Overturning Circulation (AMOC) within a hierarchy of climate models ranging from realistic ocean GCMs to Earth system models. The AMOC is a key element of ocean circulation responsible for oceanic transport of heat from low to high latitudes and controlling, to a large extent, climate variations in the North Atlantic. The questions of AMOC stability, variability and predictability, directly relevant to the questions of climate predictability, were at the center of the research work.

  4. Plasticity models of material variability based on uncertainty quantification techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Reese E.; Rizzi, Francesco; Boyce, Brad

    The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments, using traditional plasticity models of the mean response and recently developed uncertainty quantification (UQ) techniques. Lastly, we demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and show how these UQ techniques can be used in model selection and in assessing the quality of calibrated physical parameters.

  5. Investigating the impact of diurnal cycle of SST on the intraseasonal and climate variability

    NASA Astrophysics Data System (ADS)

    Tseng, W. L.; Hsu, H. H.; Chang, C. W. J.; Keenlyside, N. S.; Lan, Y. Y.; Tsuang, B. J.; Tu, C. Y.

    2016-12-01

    The diurnal cycle is a prominent feature of our climate system and the most familiar example of externally forced variability. Despite this, it remains poorly simulated in state-of-the-art climate models. A particular problem is the diurnal cycle in sea surface temperature (SST), which is a key variable in air-sea heat flux exchange. In most models the diurnal cycle in SST is not well resolved, due to insufficient vertical resolution in the upper-ocean mixed layer and insufficiently frequent ocean-atmosphere coupling. Here, we coupled a 1-dimensional ocean model (SIT) to two atmospheric general circulation models (ECHAM5 and CAM5). In particular, we focus on improving the representation of the diurnal cycle in SST in a climate model, and investigate the role of the diurnal cycle in climate and intraseasonal variability.

  6. Data-driven process decomposition and robust online distributed modelling for large-scale processes

    NASA Astrophysics Data System (ADS)

    Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou

    2018-02-01

    With the increasing attention on networked control, system decomposition and distributed models show significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned into several clusters by an affinity propagation clustering algorithm; each cluster can be regarded as a subsystem. The inputs of each subsystem are then selected by offline canonical correlation analysis between all process variables and the subsystem's controlled variables. Process decomposition is thus realised after the screening of input and output variables. Once the system decomposition is finished, online subsystem modelling can be carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
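
    A toy sketch of the partitioning step under stated assumptions: scikit-learn's AffinityPropagation with a precomputed, correlation-based similarity groups a set of controlled variables into candidate subsystems (the data are random placeholders, not the Tennessee Eastman process):

```python
# Partition controlled variables into subsystems via affinity propagation.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(1)
base = rng.normal(size=(500, 2))                 # two latent process modes
cols = ([base[:, 0] + 0.1 * rng.normal(size=500) for _ in range(4)]
        + [base[:, 1] + 0.1 * rng.normal(size=500) for _ in range(4)])
Y = np.column_stack(cols)                        # 8 controlled variables
S = np.corrcoef(Y.T)                             # similarity between variables

ap = AffinityPropagation(affinity="precomputed", random_state=0)
labels = ap.fit_predict(S)                       # subsystem index per variable
print(labels)                                    # e.g. [0 0 0 0 1 1 1 1]
```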

  7. Research on ionospheric tomography based on variable pixel height

    NASA Astrophysics Data System (ADS)

    Zheng, Dunyong; Li, Peiqing; He, Jie; Hu, Wusheng; Li, Chaokui

    2016-05-01

    A novel ionospheric tomography technique based on variable pixel height was developed for the tomographic reconstruction of the ionospheric electron density distribution. The method considers the height of each pixel as an unknown variable, which is retrieved during the inversion process together with the electron density values. In contrast to conventional computerized ionospheric tomography (CIT), which parameterizes the model with a fixed pixel height, the variable-pixel-height computerized ionospheric tomography (VHCIT) model applies a disturbance to the height of each pixel. In comparison with conventional CIT models, the VHCIT technique achieved superior results in a numerical simulation, and a careful validation of its reliability and superiority was performed. According to the statistical analysis of the average root mean square errors, the proposed model offers a 15% improvement over conventional CIT models.

  8. Rank-based estimation in the {ell}1-regularized partly linear model for censored outcomes with application to integrated analyses of clinical predictors and gene expression data.

    PubMed

    Johnson, Brent A

    2009-10-01

    We consider estimation and variable selection in the partial linear model for censored data. The partial linear model for censored data is a direct extension of the accelerated failure time model, the latter of which is a very important alternative model to the proportional hazards model. We extend rank-based lasso-type estimators to a model that may contain nonlinear effects. Variable selection in such a partial linear model has direct application to high-dimensional survival analyses that attempt to adjust for clinical predictors. In the microarray setting, previous methods can adjust for other clinical predictors by assuming that clinical and gene expression data enter the model linearly in the same fashion. Here, we select important variables after adjusting for prognostic clinical variables, but the clinical effects are assumed to be nonlinear. Our estimator is based on stratification and can be extended naturally to account for multiple nonlinear effects. We illustrate the utility of our method through simulation studies and application to the Wisconsin prognostic breast cancer data set.

  9. Evaluating the Bias of Alternative Cost Progress Models: Tests Using Aerospace Industry Acquisition Programs

    DTIC Science & Technology

    1992-12-01

    suspect that, to the extent prediction bias is positively correlated among the various models (the random walk, learning curve, fixed-variable and Bemis models)... Functions, Production Rate Adjustment Model, Learning Curve Model, Random Walk Model, Bemis Model, Evaluating Model Bias, Cost Prediction Bias. Cost... of four cost progress models: a random walk model, the traditional learning curve model, a production rate model (fixed-variable model), and a model

  10. An explanatory model of academic achievement based on aptitudes, goal orientations, self-concept and learning strategies.

    PubMed

    Miñano Pérez, Pablo; Castejón Costa, Juan-Luis; Gilar Corbí, Raquel

    2012-03-01

    As a result of studies examining factors involved in the learning process, various structural models have been developed to explain the direct and indirect effects that occur between the variables in these models. The objective was to evaluate a structural model of cognitive and motivational variables predicting academic achievement, including general intelligence, academic self-concept, goal orientations, effort and learning strategies. The sample comprised 341 Spanish students in the first year of compulsory secondary education. Different tests and questionnaires were used to evaluate each variable, and Structural Equation Modelling (SEM) was applied to contrast the relationships of the initial model. The proposed model had a satisfactory fit, and all the hypothesised relationships were significant. General intelligence was the variable best able to explain academic achievement. Also important was the direct influence of academic self-concept on achievement, goal orientations and effort, as well as the mediating role of effort and learning strategies between academic goals and final achievement.

  11. General phase spaces: from discrete variables to rotor and continuum limits

    NASA Astrophysics Data System (ADS)

    Albert, Victor V.; Pascazio, Saverio; Devoret, Michel H.

    2017-12-01

    We provide a basic introduction to discrete-variable, rotor, and continuous-variable quantum phase spaces, explaining how the latter two can be understood as limiting cases of the first. We extend the limit-taking procedures used to travel between phase spaces to a general class of Hamiltonians (including many local stabilizer codes) and provide six examples: the Harper equation, the Baxter parafermionic spin chain, the Rabi model, the Kitaev toric code, the Haah cubic code (which we generalize to qudits), and the Kitaev honeycomb model. We obtain continuous-variable generalizations of all models, some of which are novel. The Baxter model is mapped to a chain of coupled oscillators and the Rabi model to the optomechanical radiation pressure Hamiltonian. The procedures also yield rotor versions of all models, five of which are novel many-body extensions of the almost Mathieu equation. The toric and cubic codes are mapped to lattice models of rotors, with the toric code case related to U(1) lattice gauge theory.

  12. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 2. Application to Owens Valley, California

    USGS Publications Warehouse

    Guymon, Gary L.; Yen, Chung-Cheng

    1990-01-01

    The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis, reducing the total number of uncertain variables to three: hydraulic conductivity, storage coefficient or specific yield, and the source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. Simulated spatial coefficients of variation for water table temporal position are small in most of the basin, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.
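
    If the two-point probability model follows the usual point-estimate construction (an assumption on our part, not confirmed by the abstract), the idea can be sketched as evaluating the deterministic model at mu ± sigma for each of the three lumped uncertain variables; the model function and numbers below are invented:

```python
# Two-point (point-estimate) propagation of three lumped uncertain variables.
import itertools
import numpy as np

def water_table(K, S, Q):                   # stand-in deterministic model
    return 10.0 - 0.5 * np.log10(K) + 2.0 * S - 0.1 * Q

mu = np.array([1e-4, 0.10, 5.0])            # conductivity, storage, source-sink
sigma = np.array([5e-5, 0.02, 1.0])

heads = [water_table(*(mu + np.array(sgn) * sigma))
         for sgn in itertools.product((-1.0, 1.0), repeat=3)]
print(np.mean(heads), np.std(heads))        # two-point estimates of moments
```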

  13. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 2. Application to Owens Valley, California

    NASA Astrophysics Data System (ADS)

    Guymon, Gary L.; Yen, Chung-Cheng

    1990-07-01

    The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis, reducing the total number of uncertain variables to three: hydraulic conductivity, storage coefficient or specific yield, and the source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. Simulated spatial coefficients of variation for water table temporal position are small in most of the basin, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.

  14. Variables influencing food perception reviewed for consumer-oriented product development.

    PubMed

    Sijtsema, Siet; Linnemann, Anita; van Gaasbeek, Ton; Dagevos, Hans; Jongen, Wim

    2002-01-01

    Consumer wishes have to be translated into product characteristics to implement consumer-oriented product development. Before this step can be made, insight into the food-related behavior and perception of consumers is necessary to make the right, useful, and successful translation. Food choice behavior and consumers' perception are studied in many disciplines, so models of food behavior and preferences were studied from a multidisciplinary perspective. Nearly all models structure the determinants related to the person, the food, and the environment. Consequently, the overview of models was used as a basis to structure the variables influencing food perception into a model for consumer-oriented product development. To this new model, referred to as the food perception model, other variables, such as time and place as part of the consumption moment, were added. These are important variables influencing consumers' perception, and therefore of increasing importance to consumer-oriented product development. In further research, the presented food perception model is used as a tool to implement successful consumer-oriented product development.

  15. A variable resolution nonhydrostatic global atmospheric semi-implicit semi-Lagrangian model

    NASA Astrophysics Data System (ADS)

    Pouliot, George Antoine

    2000-10-01

    The objective of this project is to develop a variable-resolution finite difference adiabatic global nonhydrostatic semi-implicit semi-Lagrangian (SISL) model based on the fully compressible nonhydrostatic atmospheric equations. To achieve this goal, a three-dimensional variable resolution dynamical core was developed and tested. The main characteristics of the dynamical core can be summarized as follows: Spherical coordinates were used in a global domain. A hydrostatic/nonhydrostatic switch was incorporated into the dynamical equations to use the fully compressible atmospheric equations. A generalized horizontal variable resolution grid was developed and incorporated into the model. For a variable resolution grid, in contrast to a uniform resolution grid, the order of accuracy of finite difference approximations is formally lost but remains close to the order of accuracy associated with the uniform resolution grid provided the grid stretching is not too significant. The SISL numerical scheme was implemented for the fully compressible set of equations. In addition, the generalized minimum residual (GMRES) method with restart and preconditioner was used to solve the three-dimensional elliptic equation derived from the discretized system of equations. The three-dimensional momentum equation was integrated in vector-form to incorporate the metric terms in the calculations of the trajectories. Using global re-analysis data for a specific test case, the model was compared to similar SISL models previously developed. Reasonable agreement between the model and the other independently developed models was obtained. The Held-Suarez test for dynamical cores was used for a long integration and the model was successfully integrated for up to 1200 days. Idealized topography was used to test the variable resolution component of the model. Nonhydrostatic effects were simulated at grid spacings of 400 meters with idealized topography and uniform flow. Using a high-resolution topographic data set and the variable resolution grid, sets of experiments with increasing resolution were performed over specific regions of interest. Using realistic initial conditions derived from re-analysis fields, nonhydrostatic effects were significant for grid spacings on the order of 0.1 degrees with orographic forcing. If the model code was adapted for use in a message passing interface (MPI) on a parallel supercomputer today, it was estimated that a global grid spacing of 0.1 degrees would be achievable for a global model. In this case, nonhydrostatic effects would be significant for most areas. A variable resolution grid in a global model provides a unified and flexible approach to many climate and numerical weather prediction problems. The ability to configure the model from very fine to very coarse resolutions allows for the simulation of atmospheric phenomena at different scales using the same code. We have developed a dynamical core illustrating the feasibility of using a variable resolution in a global model.
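
    The elliptic solve mentioned above can be illustrated with SciPy's restarted, preconditioned GMRES; the operator below is a generic sparse stand-in, not the model's actual discretized equation:

```python
# Restarted GMRES with an incomplete-LU preconditioner on a stand-in operator.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, spilu, LinearOperator

n = 1000
A = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spilu(A)                                   # incomplete-LU factorization
M = LinearOperator((n, n), matvec=ilu.solve)     # preconditioner
x, info = gmres(A, b, M=M, restart=30)           # info == 0 means converged
```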

  16. Mathematical model for production of an industry focusing on worker status

    NASA Astrophysics Data System (ADS)

    Visalakshi, V.; kiran kumari, Sheshma

    2018-04-01

    Productivity improvement poses a great challenge for industry every day because of the difficulty of keeping track of, and prioritising, the variables that have a significant impact on productivity. The variation in production depends on linguistic variables such as worker commitment, worker motivation and worker skills. Since the variables are linguistic, we propose a model that gives an appropriate estimate of an industry's production. Fuzzy models capture the relationship between these factors and worker status. The model will support industry in focusing on the mentality of workers to increase production.
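
    To make the notion of a linguistic variable concrete, a toy sketch (term sets and values invented) grading worker commitment with triangular membership functions:

```python
# Triangular fuzzy membership for the linguistic variable "commitment".
def tri(x, a, b, c):
    """Triangular membership with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

commitment = 6.5                     # crisp score on a 0-10 scale
low = tri(commitment, 0, 0, 5)       # left-shoulder term
med = tri(commitment, 2, 5, 8)
high = tri(commitment, 5, 10, 10)    # right-shoulder term
print(low, med, high)                # degrees of membership in each term
```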

  17. Variable selection in discrete survival models including heterogeneity.

    PubMed

    Groll, Andreas; Tutz, Gerhard

    2017-04-01

    Several variable selection procedures are available for continuous time-to-event data. However, if time is measured in a discrete way, and therefore many ties occur, models for continuous time are inadequate. We propose penalized likelihood methods that perform efficient variable selection in discrete survival modeling with explicit modeling of the heterogeneity in the population. The method is based on a combination of ridge- and lasso-type penalties that are tailored to the case of discrete survival. The performance is studied in simulation studies and in an application to the birth of the first child.
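
    Not the authors' method, but a rough stand-in for the ridge-plus-lasso idea: discrete-time survival is often recast as person-period binary data and here is fit with an elastic-net-penalized logistic regression (the heterogeneity/frailty term is omitted and the data are synthetic):

```python
# Elastic-net logistic regression as a stand-in ridge+lasso selector for
# discrete-time (person-period) event indicators.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 400, 10
X = rng.normal(size=(n, p))
event = (X[:, 0] + rng.normal(size=n) > 0).astype(int)  # toy event indicator

fit = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5, C=0.5, max_iter=5000)
fit.fit(X, event)
print(np.flatnonzero(fit.coef_))     # indices of the selected variables
```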

  18. A black box optimization approach to parameter estimation in a model for long/short term variations dynamics of commodity prices

    NASA Astrophysics Data System (ADS)

    De Santis, Alberto; Dellepiane, Umberto; Lucidi, Stefano

    2012-11-01

    In this paper we investigate the estimation problem for a model of commodity prices. The model is a stochastic state space dynamical model whose unknowns are the state variables and the system parameters. Data are represented by commodity spot prices; time series of futures contracts are very seldom freely available. Both the joint likelihood function of the system (state variables and parameters) and its marginal likelihood function (with the state variables eliminated) are addressed.
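
    A minimal sketch of the marginal-likelihood route: the states are integrated out with a Kalman filter and the resulting likelihood is maximized by a derivative-free ("black box") optimizer. A scalar local-level model stands in for the paper's long/short-term price dynamics; all values are illustrative.

    # Sketch: black-box maximum-likelihood estimation of a state space model.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    T = 300
    true_q, true_r = 0.05, 0.2
    x = np.cumsum(rng.normal(0, np.sqrt(true_q), T))   # latent log-price level
    y = x + rng.normal(0, np.sqrt(true_r), T)          # observed spot log-price

    def neg_log_marginal_likelihood(params):
        q, r = np.exp(params)             # enforce positivity via log-params
        m, P, nll = 0.0, 1.0, 0.0
        for obs in y:                     # Kalman filter: states marginalized out
            P = P + q                     # predict
            S = P + r                     # innovation variance
            nll += 0.5 * (np.log(2 * np.pi * S) + (obs - m) ** 2 / S)
            K = P / S                     # update
            m = m + K * (obs - m)
            P = (1 - K) * P
        return nll

    res = minimize(neg_log_marginal_likelihood, x0=np.log([0.1, 0.1]),
                   method="Nelder-Mead")   # derivative-free, "black box"
    print("estimated q, r:", np.exp(res.x))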

  19. Integrating Ecosystem Carbon Dynamics into State-and-Transition Simulation Models of Land Use/Land Cover Change

    NASA Astrophysics Data System (ADS)

    Sleeter, B. M.; Daniel, C.; Frid, L.; Fortin, M. J.

    2016-12-01

    State-and-transition simulation models (STSMs) provide a general approach for incorporating uncertainty into forecasts of landscape change. Using a Monte Carlo approach, STSMs generate spatially-explicit projections of the state of a landscape based upon probabilistic transitions defined between states. While STSMs are based on the basic principles of Markov chains, they have additional properties that make them applicable to a wide range of questions and types of landscapes. A current limitation of STSMs is that they are only able to track the fate of discrete state variables, such as land use/land cover (LULC) classes. There are some landscape modelling questions, however, for which continuous state variables - for example carbon biomass - are also required. Here we present a new approach for integrating continuous state variables into spatially-explicit STSMs. Specifically we allow any number of continuous state variables to be defined for each spatial cell in our simulations; the value of each continuous variable is then simulated forward in discrete time as a stochastic process based upon defined rates of change between variables. These rates can be defined as a function of the realized states and transitions of each cell in the STSM, thus providing a connection between the continuous variables and the dynamics of the landscape. We demonstrate this new approach by (1) developing a simple IPCC Tier 3 compliant model of ecosystem carbon biomass, where the continuous state variables are defined as terrestrial carbon biomass pools and the rates of change as carbon fluxes between pools, and (2) integrating this carbon model with an existing LULC change model for the state of Hawaii, USA.
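
    The coupling described above can be sketched in a few lines: each cell carries a discrete LULC state updated by Markov transitions plus a continuous carbon pool updated by fluxes that depend on the realized states and transitions. States, probabilities, and flux rates below are invented for illustration; real STSM tools are far richer.

    # Sketch: one STSM with an attached continuous carbon state per cell.
    import numpy as np

    rng = np.random.default_rng(2)
    FOREST, GRASS = 0, 1
    # Annual transition probabilities between LULC states.
    P = np.array([[0.98, 0.02],    # forest -> forest / grass (disturbance)
                  [0.05, 0.95]])   # grass  -> forest / grass
    growth = {FOREST: 2.0, GRASS: 0.5}   # Mg C / ha / yr net uptake by state
    loss_on_transition = 0.6             # carbon fraction lost on a state flip

    n_cells, n_years = 1000, 50
    state = np.zeros(n_cells, dtype=int)           # start as forest
    carbon = np.full(n_cells, 100.0)               # Mg C / ha

    for year in range(n_years):
        new_state = np.array([rng.choice(2, p=P[s]) for s in state])
        flipped = new_state != state
        carbon[flipped] *= (1 - loss_on_transition)     # flux tied to transition
        carbon += np.vectorize(growth.get)(new_state)   # flux tied to state
        state = new_state

    print("mean carbon after 50 yr:", carbon.mean().round(1))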

  20. The Rasch Rating Model and the Disordered Threshold Controversy

    ERIC Educational Resources Information Center

    Adams, Raymond J.; Wu, Margaret L.; Wilson, Mark

    2012-01-01

    The Rasch rating (or partial credit) model is a widely applied item response model that is used to model ordinal observed variables that are assumed to collectively reflect a common latent variable. In the application of the model there is considerable controversy surrounding the assessment of fit. This controversy is most notable when the set of…
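
    For readers unfamiliar with the controversy, a short numerical sketch shows what a "disordered threshold" looks like under the partial credit model. The threshold values are invented for illustration.

    # Sketch: category probabilities under the Rasch partial credit model.
    import numpy as np

    def pcm_probs(theta, deltas):
        """P(X = k | theta) for k = 0..len(deltas) under the PCM."""
        cum = np.concatenate([[0.0], np.cumsum(theta - np.asarray(deltas))])
        e = np.exp(cum - cum.max())
        return e / e.sum()

    theta = 0.0
    print(pcm_probs(theta, [-1.0, 1.0]))  # ordered thresholds
    print(pcm_probs(theta, [1.0, -1.0]))  # disordered: middle category never modal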

  1. Detecting Mixtures from Structural Model Differences Using Latent Variable Mixture Modeling: A Comparison of Relative Model Fit Statistics

    ERIC Educational Resources Information Center

    Henson, James M.; Reise, Steven P.; Kim, Kevin H.

    2007-01-01

    The accuracy of structural model parameter estimates in latent variable mixture modeling was explored with a 3 (sample size) [times] 3 (exogenous latent mean difference) [times] 3 (endogenous latent mean difference) [times] 3 (correlation between factors) [times] 3 (mixture proportions) factorial design. In addition, the efficacy of several…

  2. Latent Transition Analysis with a Mixture Item Response Theory Measurement Model

    ERIC Educational Resources Information Center

    Cho, Sun-Joo; Cohen, Allan S.; Kim, Seock-Ho; Bottge, Brian

    2010-01-01

    A latent transition analysis (LTA) model was described with a mixture Rasch model (MRM) as the measurement model. Unlike the LTA, which was developed with a latent class measurement model, the LTA-MRM permits within-class variability on the latent variable, making it more useful for measuring treatment effects within latent classes. A simulation…

  3. Newtonian Nudging For A Richards Equation-based Distributed Hydrological Model

    NASA Astrophysics Data System (ADS)

    Paniconi, C.; Marrocu, M.; Putti, M.; Verbunt, M.

    In this study a relatively simple data assimilation method has been implemented in a relatively complex hydrological model. The data assimilation technique is Newtonian relaxation or nudging, in which model variables are driven towards observations by a forcing term added to the model equations. The forcing term is proportional to the difference between simulation and observation (relaxation component) and contains four-dimensional weighting functions that can incorporate prior knowledge about the spatial and temporal variability and characteristic scales of the state variable(s) being assimilated. The numerical model couples a three-dimensional finite element Richards equation solver for variably saturated porous media and a finite difference diffusion wave approximation based on digital elevation data for surface water dynamics. We describe the implementation of the data assimilation algorithm for the coupled model and report on the numerical and hydrological performance of the resulting assimilation scheme. Nudging is shown to be successful in improving the hydrological simulation results, and it introduces little computational cost, in terms of CPU and other numerical aspects of the model's behavior, in some cases even improving numerical performance compared to model runs without nudging. We also examine the sensitivity of the model to nudging term parameters including the spatio-temporal influence coefficients in the weighting functions. Overall the nudging algorithm is quite flexible, for instance in dealing with concurrent observation datasets, gridded or scattered data, and different state variables, and the implementation presented here can be readily extended to any features not already incorporated. Moreover the nudging code and tests can serve as a basis for implementation of more sophisticated data assimilation techniques in a Richards equation-based hydrological model.
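
    The nudging term itself is simple enough to sketch: the model tendency is augmented by G * W(t) * (obs - u). The toy ODE, gain, and triangular weight function below are our illustrative assumptions; the paper applies the same idea inside a 3-D Richards solver.

    # Sketch of Newtonian nudging for a single scalar state variable.
    import numpy as np

    def nudged_run(u0, obs_times, obs_vals, G=0.5, tau=1.0, dt=0.01, t_end=10.0):
        t, u, out = 0.0, u0, []
        while t < t_end:
            dudt = -0.3 * u + 1.0                       # stand-in model physics
            for t_o, y_o in zip(obs_times, obs_vals):   # relaxation toward obs
                w = max(0.0, 1.0 - abs(t - t_o) / tau)  # temporal weight W(t)
                dudt += G * w * (y_o - u)
            u += dt * dudt                              # forward Euler step
            out.append(u)
            t += dt
        return np.array(out)

    traj = nudged_run(u0=0.0, obs_times=[2.0, 5.0, 8.0], obs_vals=[4.0, 4.2, 3.9])
    print("final state:", traj[-1].round(2))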

  4. Selecting the process variables for filament winding

    NASA Technical Reports Server (NTRS)

    Calius, E.; Springer, G. S.

    1986-01-01

    A model is described which can be used to determine the appropriate values of the process variables for filament winding cylinders. The process variables which can be selected by the model include the winding speed, fiber tension, initial resin degree of cure, and the temperatures applied during winding, curing, and post-curing. The effects of these process variables on the properties of the cylinder during and after manufacture are illustrated by a numerical example.

  5. A proposed Kalman filter algorithm for estimation of unmeasured output variables for an F100 turbofan engine

    NASA Technical Reports Server (NTRS)

    Alag, Gurbux S.; Gilyard, Glenn B.

    1990-01-01

    To develop advanced control systems for optimizing aircraft engine performance, unmeasurable output variables must be estimated. The estimation has to be done in an uncertain environment and be adaptable to varying degrees of modeling errors and other variations in engine behavior over its operational life cycle. This paper presents an approach to estimate unmeasured output variables by explicitly modeling the effects of off-nominal engine behavior as biases on the measurable output variables. A state variable model accommodating off-nominal behavior is developed for the engine, and Kalman filter concepts are used to estimate the required variables. Results are presented from nonlinear engine simulation studies as well as the application of the estimation algorithm on actual flight data. The formulation presented has a wide range of application since it is not restricted or tailored to the particular application described.
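
    The bias-modeling idea can be sketched with a standard augmented-state Kalman filter: the unknown measurement bias is appended to the state vector and modeled as a random constant. The scalar system below is illustrative, not the F100's dynamics.

    # Sketch: estimating an off-nominal output bias with an augmented-state KF.
    import numpy as np

    rng = np.random.default_rng(3)
    # True system: scalar state, measurement corrupted by an unknown bias.
    a, c, q, r, true_bias = 0.95, 1.0, 0.01, 0.04, 0.8
    T = 200
    x = np.zeros(T); y = np.zeros(T)
    for k in range(1, T):
        x[k] = a * x[k-1] + rng.normal(0, np.sqrt(q))
        y[k] = c * x[k] + true_bias + rng.normal(0, np.sqrt(r))

    # Augmented state z = [x, bias]; bias modeled as a random constant.
    A = np.array([[a, 0.0], [0.0, 1.0]])
    C = np.array([[c, 1.0]])
    Q = np.diag([q, 1e-6]); R = np.array([[r]])

    z = np.zeros(2); P = np.eye(2)
    for k in range(1, T):
        z = A @ z; P = A @ P @ A.T + Q                  # predict
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)                  # Kalman gain
        z = z + (K @ (y[k] - C @ z)).ravel()            # update
        P = (np.eye(2) - K @ C) @ P

    print("estimated bias:", z[1].round(3), "(true 0.8)")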

  6. Southern Hemisphere extratropical circulation: Recent trends and natural variability

    NASA Astrophysics Data System (ADS)

    Thomas, Jordan L.; Waugh, Darryn W.; Gnanadesikan, Anand

    2015-07-01

    Changes in the Southern Annular Mode (SAM) and in the Southern Hemisphere (SH) westerly jet location and magnitude are linked with changes in ocean circulation along with ocean heat and carbon uptake. Recent trends have been observed in these fields, but not much is known about the natural variability. Here we aim to quantify the natural variability of the SH extratropical circulation by using Coupled Model Intercomparison Project Phase 5 (CMIP5) preindustrial control model runs and compare with the observed trends in SAM, jet magnitude, and jet location. We show that trends in SAM are due partly to external forcing but are not outside the natural variability as described by these models. Trends in jet location and magnitude, however, lie outside the unforced natural variability but can be explained by a combination of natural variability and the ensemble mean forced trend. These results indicate that trends in these three diagnostics cannot be used interchangeably.
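
    The core test here is whether an observed trend falls outside the spread of trends in overlapping segments of an unforced control run. A toy version, with synthetic red noise standing in for CMIP5 control output and a hypothetical observed trend value:

    # Sketch: observed trend vs. the unforced distribution of control-run trends.
    import numpy as np

    rng = np.random.default_rng(4)
    n_ctrl, seg = 500, 35          # years of control run; trend window length
    ctrl = np.zeros(n_ctrl)
    for t in range(1, n_ctrl):     # AR(1) "natural variability" stand-in
        ctrl[t] = 0.6 * ctrl[t-1] + rng.normal()

    yrs = np.arange(seg)
    trends = np.array([np.polyfit(yrs, ctrl[i:i+seg], 1)[0]
                       for i in range(n_ctrl - seg)])
    observed_trend = 0.15          # hypothetical observed trend per year

    # Two-sided check against the unforced distribution of 35-yr trends.
    frac_larger = np.mean(np.abs(trends) >= abs(observed_trend))
    print(f"control segments with |trend| >= observed: {frac_larger:.3f}")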

  7. Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant

    DOEpatents

    Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa

    2013-09-17

    System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
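
    In its simplest form, the preemptive-constraining step projects each state estimate onto known physical bounds before it re-enters the dynamic model (the patent also constrains the covariance matrix, which this sketch omits). States and bounds below are invented.

    # Sketch: projecting an EKF state estimate onto box constraints.
    import numpy as np

    def constrain(z, lower, upper):
        """Clip a state estimate to physical bounds (e.g., mass fractions
        in [0, 1], temperatures above a lower limit)."""
        return np.clip(z, lower, upper)

    z_est = np.array([1.07, -12.0, 850.0])   # raw state estimate
    lower = np.array([0.0, 0.0, 300.0])      # physical lower bounds
    upper = np.array([1.0, 100.0, 2000.0])   # physical upper bounds
    print(constrain(z_est, lower, upper))    # -> [1.0, 0.0, 850.0]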

  8. Advanced statistics: linear regression, part II: multiple linear regression.

    PubMed

    Marill, Keith A

    2004-01-01

    The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
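
    Two of the concepts named above, interaction effects and multicollinearity, lend themselves to a short sketch. The synthetic clinical variables and the VIF > 10 rule of thumb are illustrative assumptions, not content from the article.

    # Sketch: multiple regression with an interaction term, plus VIF checks.
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    rng = np.random.default_rng(5)
    n = 200
    age = rng.normal(50, 10, n)
    dose = rng.normal(5, 1, n) + 0.05 * age        # correlated with age
    outcome = (2 + 0.3 * age + 1.5 * dose
               + 0.02 * age * dose + rng.normal(0, 2, n))

    X = sm.add_constant(np.column_stack([age, dose, age * dose]))
    model = sm.OLS(outcome, X).fit()
    print(model.params.round(3))                   # const, age, dose, age:dose

    # VIF > 10 is a common (rough) flag for problematic multicollinearity.
    for i, name in enumerate(["const", "age", "dose", "age:dose"]):
        print(name, round(variance_inflation_factor(X, i), 1))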

  9. The origin of Total Solar Irradiance variability on timescales less than a day

    NASA Astrophysics Data System (ADS)

    Shapiro, Alexander; Krivova, Natalie; Schmutz, Werner; Solanki, Sami K.; Leng Yeo, Kok; Cameron, Robert; Beeck, Benjamin

    2016-07-01

    Total Solar Irradiance (TSI) varies on timescales from minutes to decades. It is generally accepted that variability on timescales of a day and longer is dominated by solar surface magnetic fields. For shorter timescales, several additional sources of variability have been proposed, including convection and oscillations. However, the available simplified and highly parameterised models could not accurately explain the observed variability in high-cadence TSI records. We employed high-cadence solar imagery from the Helioseismic and Magnetic Imager onboard the Solar Dynamics Observatory and the SATIRE (Spectral And Total Irradiance Reconstruction) model of solar irradiance variability to recreate the magnetic component of TSI variability. Recent 3D simulations of solar near-surface convection with the MURAM code have been used to calculate the TSI variability caused by convection. This allowed us to determine the threshold timescale between TSI variability caused by the magnetic field and by convection. Our model successfully replicates the TSI measurements by the PICARD/PREMOS radiometer, which span the period July 2010 to February 2014 at 2-minute cadence. Hence, we demonstrate that solar magnetism and convection can account for TSI variability on all timescales on which it has ever been measured (apart from the 5-minute component from p-modes).

  10. Two complementary approaches to quantify variability in heat resistance of spores of Bacillus subtilis.

    PubMed

    den Besten, Heidy M W; Berendsen, Erwin M; Wells-Bennik, Marjon H J; Straatsma, Han; Zwietering, Marcel H

    2017-07-17

    Realistic prediction of microbial inactivation in food requires quantitative information on variability introduced by the microorganisms. Bacillus subtilis forms heat-resistant spores, and in this study the impact of strain variability on spore heat resistance was quantified using 20 strains. In addition, experimental variability was quantified by using technical replicates per heat treatment experiment, and reproduction variability was quantified by using two biologically independent spore crops for each strain that were heat treated on different days. The fourth-decimal reduction times and z-values were estimated by a one-step and a two-step model fitting procedure. Grouping of the 20 B. subtilis strains into two statistically distinguishable groups could be confirmed based on their spore heat resistance. The reproduction variability was higher than the experimental variability, but both variabilities were much lower than strain variability. The model fitting approach did not significantly affect the quantification of variability. Remarkably, when strain variability in spore heat resistance was quantified using only the strains producing low-level heat-resistant spores, this strain variability was comparable with the previously reported strain variability in heat resistance of vegetative cells of Listeria monocytogenes, albeit in a quite different temperature range. Strains that produced spores with high-level heat resistance showed a similar temperature range for growth to strains that produced spores with low-level heat resistance. Strain variability affected heat resistance of spores most, and therefore integration of this variability factor in modelling of spore heat resistance will make predictions more realistic.
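
    A two-step fit of the kind mentioned can be sketched briefly: first a D-value (time for one log10 reduction) at each temperature, then a z-value (temperature change for a tenfold change in D). The survivor counts below are synthetic, not the study's data.

    # Sketch: two-step estimation of D- and z-values from survivor curves.
    import numpy as np

    # log10(N) at sampling times (min), per treatment temperature (deg C).
    times = np.array([0, 2, 4, 6, 8])
    log_counts = {
        95: np.array([6.0, 5.1, 4.0, 3.1, 2.0]),
        100: np.array([6.0, 3.9, 2.1, 0.2, -1.8]),
        105: np.array([6.0, 1.9, -2.0, -6.1, -10.0]),
    }

    temps, log_D = [], []
    for T, lc in log_counts.items():
        slope = np.polyfit(times, lc, 1)[0]   # log-linear inactivation
        D = -1.0 / slope                      # D-value in minutes
        temps.append(T); log_D.append(np.log10(D))

    # z-value: negative reciprocal slope of log10(D) versus temperature.
    z = -1.0 / np.polyfit(temps, log_D, 1)[0]
    print(f"z-value: {z:.1f} deg C")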

  11. Global Gridded Crop Model Evaluation: Benchmarking, Skills, Deficiencies and Implications.

    NASA Technical Reports Server (NTRS)

    Muller, Christoph; Elliott, Joshua; Chryssanthacopoulos, James; Arneth, Almut; Balkovic, Juraj; Ciais, Philippe; Deryng, Delphine; Folberth, Christian; Glotter, Michael; Hoek, Steven; et al.

    2017-01-01

    Crop models are increasingly used to simulate crop yields at the global scale, but so far there is no general framework on how to assess model performance. Here we evaluate the simulation results of 14 global gridded crop modeling groups that have contributed historic crop yield simulations for maize, wheat, rice and soybean to the Global Gridded Crop Model Intercomparison (GGCMI) of the Agricultural Model Intercomparison and Improvement Project (AgMIP). Simulation results are compared to reference data at global, national and grid cell scales and we evaluate model performance with respect to time series correlation, spatial correlation and mean bias. We find that global gridded crop models (GGCMs) show mixed skill in reproducing time series correlations or spatial patterns at the different spatial scales. Generally, maize, wheat and soybean simulations of many GGCMs are capable of reproducing larger parts of observed temporal variability (time series correlation coefficients (r) of up to 0.888 for maize, 0.673 for wheat and 0.643 for soybean at the global scale) but rice yield variability cannot be well reproduced by most models. Yield variability can be well reproduced for most major producing countries by many GGCMs and for all countries by at least some. A comparison with gridded yield data and a statistical analysis of the effects of weather variability on yield variability shows that the ensemble of GGCMs can explain more of the yield variability than an ensemble of regression models for maize and soybean, but not for wheat and rice. We identify future research needs in global gridded crop modeling and for all individual crop modeling groups. In the absence of a purely observation-based benchmark for model evaluation, we propose that the best performing crop model per crop and region establishes the benchmark for all others, and modelers are encouraged to investigate how crop model performance can be increased. We make our evaluation system accessible to all crop modelers so that other modeling groups can also test their model performance against the reference data and the GGCMI benchmark.
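
    The three evaluation measures named here are easy to sketch for one model against a reference dataset. The random yield arrays are placeholders, and the detrending step is our assumption of common practice when correlating yield time series.

    # Sketch: time series correlation, spatial correlation, and mean bias
    # for one simulated vs. reference yield array (years x grid cells).
    import numpy as np

    rng = np.random.default_rng(6)
    ref = rng.gamma(5.0, 1.0, size=(30, 100))          # reference yields
    sim = ref + rng.normal(0, 1.0, size=ref.shape)     # one model's yields

    def detrended_r(a, b):
        """Correlation of anomalies from a linear trend."""
        t = np.arange(len(a))
        a_d = a - np.polyval(np.polyfit(t, a, 1), t)
        b_d = b - np.polyval(np.polyfit(t, b, 1), t)
        return np.corrcoef(a_d, b_d)[0, 1]

    ts_r = detrended_r(sim.mean(axis=1), ref.mean(axis=1))  # global time series
    spatial_r = np.corrcoef(sim.mean(axis=0), ref.mean(axis=0))[0, 1]
    mean_bias = (sim - ref).mean()
    print(f"time series r={ts_r:.3f}, spatial r={spatial_r:.3f}, "
          f"bias={mean_bias:.3f}")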

  12. Confirmatory Factor Analysis of Ordinal Variables with Misspecified Models

    ERIC Educational Resources Information Center

    Yang-Wallentin, Fan; Joreskog, Karl G.; Luo, Hao

    2010-01-01

    Ordinal variables are common in many empirical investigations in the social and behavioral sciences. Researchers often apply the maximum likelihood method to fit structural equation models to ordinal data. This assumes that the observed measures have normal distributions, which is not the case when the variables are ordinal. A better approach is…

  13. Latent variable models are network models.

    PubMed

    Molenaar, Peter C M

    2010-06-01

    Cramer et al. present an original and interesting network perspective on comorbidity and contrast this perspective with a more traditional interpretation of comorbidity in terms of latent variable theory. My commentary focuses on the relationship between the two perspectives; that is, it aims to qualify the presumed contrast between interpretations in terms of networks and latent variables.

  14. Suppressor Variables: The Difference between "Is" versus "Acting As"

    ERIC Educational Resources Information Center

    Ludlow, Larry; Klein, Kelsey

    2014-01-01

    Correlated predictors in regression models are a fact of life in applied social science research. The extent to which they are correlated will influence the estimates and statistics associated with the other variables they are modeled along with. These effects, for example, may include enhanced regression coefficients for the other variables--a…

  15. Narrow gap laser welding

    DOEpatents

    Milewski, John O.; Sklar, Edward

    1998-01-01

    A laser welding process including: (a) using optical ray tracing to make a model of a laser beam and the geometry of a joint to be welded; (b) adjusting variables in the model to choose variables for use in making a laser weld; and (c) laser welding the joint to be welded using the chosen variables.

  16. Narrow gap laser welding

    DOEpatents

    Milewski, J.O.; Sklar, E.

    1998-06-02

    A laser welding process including: (a) using optical ray tracing to make a model of a laser beam and the geometry of a joint to be welded; (b) adjusting variables in the model to choose variables for use in making a laser weld; and (c) laser welding the joint to be welded using the chosen variables. 34 figs.

  17. Causal Models with Unmeasured Variables: An Introduction to LISREL.

    ERIC Educational Resources Information Center

    Wolfle, Lee M.

    Whenever one uses ordinary least squares regression, one is making an implicit assumption that all of the independent variables have been measured without error. Such an assumption is obviously unrealistic for most social data. One approach for estimating such regression models is to measure implied coefficients between latent variables for which…

  18. Use of real-time monitoring to predict concentrations of select constituents in the Menomonee River drainage basin, Southeast Wisconsin, 2008-9

    USGS Publications Warehouse

    Baldwin, Austin K.; Graczyk, David J.; Robertson, Dale M.; Saad, David A.; Magruder, Christopher

    2012-01-01

    The models to estimate chloride concentrations all used specific conductance as the explanatory variable, except for the model for the Little Menomonee River near Freistadt, which used both specific conductance and turbidity as explanatory variables. Adjusted R2 values for the chloride models ranged from 0.74 to 0.97. Models to estimate total suspended solids and total phosphorus used turbidity as the only explanatory variable. Adjusted R2 values ranged from 0.77 to 0.94 for the total suspended solids models and from 0.55 to 0.75 for the total phosphorus models. Models to estimate indicator bacteria used water temperature and turbidity as the explanatory variables, with adjusted R2 values from 0.54 to 0.69 for Escherichia coli bacteria models and from 0.54 to 0.74 for fecal coliform bacteria models. Dissolved oxygen was not used in any of the final models. These models may help managers measure the effects of land-use changes and improvement projects, establish total maximum daily loads, estimate important water-quality indicators such as bacteria concentrations, and enable informed decision making in the future.
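
    A surrogate regression of this kind is a one-variable OLS fit. The sketch below estimates chloride from specific conductance; the coefficients and data are synthetic, not the report's fitted values.

    # Sketch: real-time surrogate regression, chloride ~ specific conductance.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    spec_cond = rng.uniform(200, 2000, 80)                    # uS/cm
    chloride = 0.25 * spec_cond - 30 + rng.normal(0, 20, 80)  # mg/L

    X = sm.add_constant(spec_cond)
    fit = sm.OLS(chloride, X).fit()
    print(f"adjusted R^2: {fit.rsquared_adj:.2f}")

    # Predict from a new real-time sensor reading.
    new_sc = 850.0
    pred = fit.params[0] + fit.params[1] * new_sc
    print(f"estimated chloride at SC={new_sc}: {pred:.0f} mg/L")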

  19. Effects of rotation and tidal distortions on the shapes of radial velocity curves of polytropic models of pulsating variable stars

    NASA Astrophysics Data System (ADS)

    Kumar, Tarun; Lal, Arvind Kumar; Pathania, Ankush

    2018-06-01

    Anharmonic oscillations of rotating stars have been studied by various authors in the literature to explain the observed features of certain variable stars. However, no study available in the literature has discussed the combined effect of rotation and tidal distortions on the anharmonic oscillations of stars. In this paper, we have created a model to determine the effect of rotation and tidal distortions on the anharmonic radial oscillations associated with various polytropic models of pulsating variable stars. For this study we have used the theory of Rosseland to obtain the anharmonic pulsation equation for rotationally and tidally distorted polytropic models of pulsating variable stars. The main objective of this study is to investigate the effect of rotation and tidal distortions on the shapes of the radial velocity curves for rotationally and tidally distorted polytropic models of pulsating variable stars. The results of the present study show that the rotational effects cause more deviations in the shapes of radial velocity curves of pulsating variable stars than tidal effects do.

  20. Quantifying inter- and intra-population niche variability using hierarchical bayesian stable isotope mixing models.

    PubMed

    Semmens, Brice X; Ward, Eric J; Moore, Jonathan W; Darimont, Chris T

    2009-07-09

    Variability in resource use defines the width of a trophic niche occupied by a population. Intra-population variability in resource use may occur across hierarchical levels of population structure from individuals to subpopulations. Understanding how levels of population organization contribute to population niche width is critical to ecology and evolution. Here we describe a hierarchical stable isotope mixing model that can simultaneously estimate both the prey composition of a consumer diet and the diet variability among individuals and across levels of population organization. By explicitly estimating variance components for multiple scales, the model can deconstruct the niche width of a consumer population into relevant levels of population structure. We apply this new approach to stable isotope data from a population of gray wolves from coastal British Columbia, and show support for extensive intra-population niche variability among individuals, social groups, and geographically isolated subpopulations. The analytic method we describe improves mixing models by accounting for diet variability, and improves isotope niche width analysis by quantitatively assessing the contribution of levels of organization to the niche width of a population.
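
    The structure of such a hierarchical mixing model can be sketched compactly, assuming PyMC is available. Two prey sources, three social groups, and all isotope values below are invented; the published model handles multiple tracers, source variances, and more levels of structure.

    # Sketch: hierarchical Bayesian mixing model with group-level diet variation.
    import numpy as np
    import pymc as pm

    rng = np.random.default_rng(8)
    src_mu = np.array([-20.0, -12.0])     # delta-13C means of two prey sources
    groups = np.repeat([0, 1, 2], 10)     # 3 social groups, 10 wolves each
    true_p = np.array([0.2, 0.5, 0.8])[groups]
    obs = true_p * src_mu[1] + (1 - true_p) * src_mu[0] + rng.normal(0, 0.5, 30)

    with pm.Model() as model:
        mu_p = pm.Beta("mu_p", 2, 2)            # population-level diet proportion
        kappa = pm.HalfNormal("kappa", 10)      # among-group concentration
        p_g = pm.Beta("p_g", mu_p * kappa + 1, (1 - mu_p) * kappa + 1, shape=3)
        sigma = pm.HalfNormal("sigma", 2)
        mix = p_g[groups] * src_mu[1] + (1 - p_g[groups]) * src_mu[0]
        pm.Normal("y", mu=mix, sigma=sigma, observed=obs)
        idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

    print(idata.posterior["p_g"].mean(dim=("chain", "draw")).values.round(2))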
