Source Term Model for Vortex Generator Vanes in a Navier-Stokes Computer Code
NASA Technical Reports Server (NTRS)
Waithe, Kenrick A.
2004-01-01
A source term model for an array of vortex generators was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the side force created by a vortex generator vane. The model is obtained by introducing into the momentum and energy equations a side force that adjusts its strength automatically based on the local flow. The model was tested and calibrated by comparing data from numerical simulations and experiments of a single low-profile vortex generator vane on a flat plate. In addition, the model was compared to experimental data for an S-duct with 22 co-rotating, low-profile vortex generators. The source term model allowed a grid reduction of about seventy percent relative to the numerical simulations performed on a fully gridded vortex generator on a flat plate, without adversely affecting the development and capture of the vortex created. Compared with both numerical simulations and experimental data, the source term model predicted the shape and size of the stream-wise vorticity and velocity contours very well, as well as the peak vorticity and its location. The circulation predicted by the source term model matches the prediction of the numerical simulation. The source term model also predicted the engine fan face distortion and total pressure recovery of the S-duct with 22 co-rotating vortex generators very well. The source term model allows a researcher to quickly investigate different locations of individual vortex generators or a row of them, conducting a preliminary investigation with minimal grid generation and computational time.
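In schematic form (symbols assumed here, not drawn from the paper), the vane's effect enters the governing equations as a volumetric side-force source:

    \[ \frac{\partial(\rho\mathbf{u})}{\partial t} + \nabla\cdot(\rho\mathbf{u}\mathbf{u} + p\mathbf{I} - \boldsymbol{\tau}) = \mathbf{f}_{vg}, \qquad \frac{\partial(\rho E)}{\partial t} + \nabla\cdot\big((\rho E + p)\mathbf{u} - \boldsymbol{\tau}\cdot\mathbf{u}\big) = \mathbf{f}_{vg}\cdot\mathbf{u}, \]

where f_vg is the side force per unit volume applied in the cells occupied by the vane, scaled with the local flow so that its strength adjusts automatically.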
Poggi, L A; Malizia, A; Ciparisse, J F; Gaudio, P
2016-10-01
An open issue still under investigation by several international entities working in the safety and security field for the foreseen nuclear fusion reactors is the estimation of source terms that are a hazard for operators and the public, and for the machine itself in terms of efficiency and integrity in case of severe accident scenarios. Source term estimation is a crucial safety issue to be addressed in future reactor safety assessments, and the estimates available at present are not sufficiently satisfactory. The lack of neutronic data, along with the insufficiently accurate methodologies used until now, calls for an integrated methodology for source term estimation that can provide predictions with adequate accuracy. This work proposes a complete methodology to estimate dust source terms, starting from a broad information gathering. The large number of parameters that can influence dust source term production is reduced with statistical tools using a combination of screening, sensitivity analysis, and uncertainty analysis. Finally, a preliminary and simplified methodology for predicting dust source term production in future devices is presented.
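A toy sketch of the screening step (standardized regression coefficients on random parameter samples; the parameter names and response function are invented for illustration, not taken from the paper):

    import numpy as np

    rng = np.random.default_rng(7)
    names = ["plasma_energy", "wall_temp", "vent_area", "dust_inventory", "humidity"]
    X = rng.uniform(0.0, 1.0, (500, len(names)))     # scaled parameter samples

    # Invented response: dust source term dominated by two parameters
    y = 3.0 * X[:, 0] + 1.5 * X[:, 3] + 0.1 * X[:, 1] + rng.normal(0, 0.1, 500)

    # Standardized regression coefficients as a cheap screening measure
    Xs = (X - X.mean(0)) / X.std(0)
    ys = (y - y.mean()) / y.std()
    src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)

    for name, s in sorted(zip(names, src), key=lambda t: -abs(t[1])):
        print(f"{name:15s} SRC = {s:+.2f}")         # keep only the influential few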
Uncertainty, variability, and earthquake physics in ground‐motion prediction equations
Baltay, Annemarie S.; Hanks, Thomas C.; Abrahamson, Norm A.
2017-01-01
Residuals between ground-motion data and ground-motion prediction equations (GMPEs) can be decomposed into terms representing earthquake source, path, and site effects. These terms can be separated into repeatable (epistemic) residuals and random (aleatory) components. Identifying the repeatable residuals leads to a GMPE with reduced uncertainty for a specific source, site, or path location, which in turn can yield a lower hazard level at small probabilities of exceedance. We illustrate a schematic framework for this residual partitioning with a dataset from the ANZA network, which straddles the central San Jacinto fault in southern California. The dataset consists of more than 3200 earthquakes (1.15 ≤ M ≤ 3) and their peak ground accelerations (PGAs), recorded at close distances (R ≤ 20 km). We construct a small-magnitude GMPE for these PGA data, incorporating VS30 site conditions and geometrical spreading. Identification and removal of the repeatable source, path, and site terms yield an overall reduction in the standard deviation from 0.97 (in ln units) to 0.44 under a nonergodic assumption, that is, for a single source location, single site, and single path. We give examples of relationships between independent seismological observables and the repeatable terms. We find a correlation between location-based source terms and stress drops in the San Jacinto fault zone region; an explanation of the site term as a function of kappa, the near-site attenuation parameter; and a suggestion that the path component can be related directly to elastic structure. These correlations allow the repeatable source location, site, and path terms to be determined a priori using independent geophysical relationships. Those terms could be incorporated into location-specific GMPEs for more accurate and precise ground-motion prediction.
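In hedged schematic form (symbols assumed here), the partitioning behind this approach is

    \[ \ln Y_{es} = \mu(M, R, V_{S30}) + \delta B_e + \delta S2S_s + \delta P2P_{es} + \varepsilon_{es}, \]

where μ is the median GMPE, δB_e, δS2S_s, and δP2P_es are the repeatable event, site, and path terms, and ε_es is the remaining aleatory residual; removing the repeatable terms is what reduces σ from 0.97 to 0.44 ln units in the nonergodic case.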
PHENOstruct: Prediction of human phenotype ontology terms using heterogeneous data sources.
Kahanda, Indika; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa
2015-01-01
The human phenotype ontology (HPO) was recently developed as a standardized vocabulary for describing the phenotype abnormalities associated with human diseases. At present, only a small fraction of human protein-coding genes have HPO annotations, but researchers believe that a large portion of currently unannotated genes are related to disease phenotypes. Therefore, it is important to predict gene-HPO term associations using accurate computational methods. In this work we demonstrate the performance advantage of the structured SVM approach, which was shown to be highly effective for Gene Ontology term prediction, in comparison to several baseline methods. Furthermore, we highlight a collection of informative data sources suitable for the problem of predicting gene-HPO associations, including large-scale literature mining data.
Prediction of discretization error using the error transport equation
NASA Astrophysics Data System (ADS)
Celik, Ismail B.; Parsons, Don Roscoe
2017-06-01
This study focuses on an approach to quantify the discretization error associated with numerical solutions of partial differential equations by solving an error transport equation (ETE). The goal is to develop a method that can be used to adequately predict the discretization error using the numerical solution on only one grid/mesh. The primary problem associated with solving the ETE is the formulation of the error source term, which is required for accurately predicting the transport of the error. In this study, a novel approach is considered which involves fitting the numerical solution with a series of locally smooth curves and then blending them together with a weighted spline approach. The result is a continuously differentiable analytic expression that can be used to determine the error source term. Once the source term has been developed, the ETE can easily be solved using the same solver that is used to obtain the original numerical solution. The new methodology is applied to the two-dimensional Navier-Stokes equations in the laminar flow regime. A simple unsteady flow case is also considered. The discretization error predictions based on the methodology presented in this study are in good agreement with the 'true error'. While in most cases the error predictions are not quite as accurate as those from Richardson extrapolation, the results are reasonable and require only one numerical grid. The current results indicate that there is much promise going forward with the newly developed error source term evaluation technique and the ETE.
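A schematic of the ETE (notation assumed, following the usual transport form): for an error field ε carried by the flow,

    \[ \frac{\partial \epsilon}{\partial t} + \nabla\cdot(\mathbf{u}\,\epsilon) - \nabla\cdot(\nu\,\nabla\epsilon) = S_\epsilon, \]

where the source S_ε is evaluated by substituting the weighted-spline fit of the numerical solution into the governing equations and taking the residual; the ETE is then solved with the same discretization as the base solution.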
Long-Term Temporal Trends of Polychlorinated Biphenyls and Their Controlling Sources in China.
Zhao, Shizhen; Breivik, Knut; Liu, Guorui; Zheng, Minghui; Jones, Kevin C; Sweetman, Andrew J
2017-03-07
Polychlorinated biphenyls (PCBs) are industrial organic contaminants identified as persistent, bioaccumulative, and toxic (PBT), and subject to long-range transport (LRT) with global-scale significance. This study focuses on a reconstruction and prediction for China of long-term emission trends of intentionally produced (IP) and unintentionally produced (UP) Σ7PCBs (UP-PCBs arising from the manufacture of steel, cement, and sinter iron) and their re-emissions from secondary sources (e.g., soils and vegetation) using a dynamic fate model (BETR-Global). Contemporary emission estimates combined with predictions from the multimedia fate model suggest that primary sources still dominate, although unintentional sources are predicted to become a main contributor from 2035 for PCB-28. Imported e-waste is predicted to play an increasing role until 2020-2030 on a national scale due to the decline of IP emissions. Hypothetical emission scenarios suggest that China could become a potential source to neighboring regions with a net output of ∼0.4 t year⁻¹ by around 2050. However, future emission scenarios, and hence model results, will be dictated by the efficiency of control measures.
Hybrid BEM/empirical approach for scattering of correlated sources in rocket noise prediction
NASA Astrophysics Data System (ADS)
Barbarino, Mattia; Adamo, Francesco P.; Bianco, Davide; Bartoccini, Daniele
2017-09-01
Empirical models such as the Eldred standard model are commonly used for rocket noise prediction. Such models directly provide a definition of the Sound Pressure Level through the quadratic pressure term, assuming uncorrelated sources. In this paper, an improvement of the Eldred standard model has been formulated. This new formulation contains an explicit expression for the acoustic pressure of each noise source, in terms of amplitude and phase, in order to investigate source correlation effects and to propagate them through a wave equation. In particular, the correlation effects between adjacent and non-adjacent sources have been modeled and analyzed. The noise prediction obtained with the revised Eldred-based model has then been used for formulating an empirical/BEM (Boundary Element Method) hybrid approach that allows an evaluation of the scattering effects. In the framework of the European Space Agency funded program VECEP (VEga Consolidation and Evolution Programme), these models have been applied to the prediction of the aeroacoustic loads of the VEGA (Vettore Europeo di Generazione Avanzata - Advanced Generation European Carrier Rocket) launch vehicle at lift-off, and the results have been compared with experimental data.
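The role of the correlation effects can be seen in a two-source sketch (a hedged illustration, not the paper's exact formulation): for p_i = A_i cos(ωt + φ_i), the mean-square pressure is

    \[ \overline{p^2} = \tfrac{1}{2}A_1^2 + \tfrac{1}{2}A_2^2 + A_1 A_2 \cos(\varphi_1 - \varphi_2). \]

An uncorrelated-source model keeps only the first two terms; the revised formulation retains the phase-dependent cross term, which is what gets propagated through the wave equation.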
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Kula, K.R.
1994-03-01
The Nuclear Installations Inspectorate (NII) of the United Kingdom (UK) suggested the use of an accident progression logic model method developed by Westinghouse Savannah River Company (WSRC) and Science Applications International Corporation (SAIC) for K Reactor to predict the magnitude and timing of radioactivity releases (the source term) based on an advanced logic model methodology. Predicted releases are output from the personal computer-based model in a level-of-confidence format. Additional technical discussions eventually led to a request from the NII to develop a proposal for assembling a similar technology to predict source terms for the UK's advanced gas-cooled reactor (AGR) type. To respond to this request, WSRC is submitting a proposal to provide contractual assistance as specified in the Scope of Work. The work will produce, document, and transfer technology associated with a Decision-Oriented Source Term Estimator for Emergency Preparedness (DOSE-EP) for the NII to apply to AGRs in the United Kingdom. This document, Appendix A, is a part of this proposal.
The Effect of Data Quality on Short-term Growth Model Projections
David Gartner
2005-01-01
This study was designed to determine the effect of FIA's data quality on short-term growth model projections. The data from Georgia's 1996 statewide survey were used for the Southern variant of the Forest Vegetation Simulator to predict Georgia's first annual panel. The effect of several data error sources on growth modeling prediction errors...
NASA Astrophysics Data System (ADS)
Smith, R. A.; Moore, R. B.; Shanley, J. B.; Miller, E. K.; Kamman, N. C.; Nacci, D.
2009-12-01
Mercury (Hg) concentrations in fish and aquatic wildlife are complex functions of atmospheric Hg deposition rate, terrestrial and aquatic watershed characteristics that influence Hg methylation and export, and food chain characteristics determining Hg bioaccumulation. Because of the complexity and incomplete understanding of these processes, regional-scale models of fish tissue Hg concentration are necessarily empirical in nature, typically constructed through regression analysis of fish tissue Hg concentration data from many sampling locations on a set of potential explanatory variables. Unless the data sets are unusually long and show clear time trends, the empirical basis for model building must rest solely on spatial correlation. Predictive regional-scale models are highly useful for improving understanding of the relevant biogeochemical processes, as well as for practical fish and wildlife management and human health protection. Mechanistically, the logical arrangement of explanatory variables is to multiply each of the individual Hg source terms (e.g., dry, wet, and gaseous deposition rates, and residual watershed Hg) for a given fish sampling location by source-specific terms pertaining to methylation, watershed transport, and biological uptake for that location (e.g., SO4 availability, hill slope, lake size). This mathematical form has the desirable property that the predicted tissue concentration approaches zero as all individual source terms approach zero. One complication with this form, however, is that it is inconsistent with the standard linear multiple regression equation, in which all terms (including those for sources and physical conditions) are additive. An important practical disadvantage of a model in which the Hg source terms are additive (rather than multiplicative) with their modifying factors is that predicted concentration is not zero when all sources are zero, making it unreliable for predicting the effects of large future reductions in Hg deposition. In this paper we compare the results of using several different linear and non-linear models in an analysis of watershed and fish Hg data for 450 New England lakes. The differences in model results pertain both to their utility in interpreting methylation and export processes and to fisheries management.
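A schematic contrast of the two forms (symbols assumed here):

    \[ C = \sum_k S_k\, g_k(\mathbf{x}) \quad \text{(multiplicative)}, \qquad C = \beta_0 + \sum_k \beta_k S_k + \sum_j \gamma_j x_j \quad \text{(additive)}, \]

where C is fish tissue Hg concentration, the S_k are the individual Hg source terms, and g_k(x) collects the location-specific methylation, transport, and uptake factors. Only the multiplicative form forces C → 0 as all S_k → 0, which is why it behaves sensibly under large deposition reductions.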
Bayesian source term estimation of atmospheric releases in urban areas using LES approach.
Xue, Fei; Kikumoto, Hideki; Li, Xiaofeng; Ooka, Ryozo
2018-05-05
The estimation of source information from limited measurements of a sensor network is a challenging inverse problem, which can be viewed as an assimilation process between observed concentration data and predicted concentration data. When dealing with releases in built-up areas, the predicted data are generally obtained by the Reynolds-averaged Navier-Stokes (RANS) equations, which yield building-resolving results; however, RANS-based models are outperformed by large-eddy simulation (LES) in the prediction of both airflow and dispersion. Therefore, it is important to explore the possibility of improving the estimation of the source parameters by using the LES approach. In this paper, a novel source term estimation method is proposed based on the LES approach using Bayesian inference. The source-receptor relationship is obtained by solving the adjoint equations constructed using the time-averaged flow field simulated by the LES approach based on the gradient diffusion hypothesis. A wind tunnel experiment with a constant point source downwind of a single building model is used to evaluate the performance of the proposed method, which is compared with that of an existing method using a RANS model. The results show that the proposed method reduces the errors of source location and release strength by 77% and 28%, respectively.
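A minimal sketch of Bayesian source term estimation of this general kind (the source-receptor footprint, priors, and noise model below are assumptions for illustration, not the paper's LES-adjoint setup):

    import numpy as np

    # Candidate source locations (grid) and release strengths
    xs = np.linspace(0.0, 100.0, 51)          # candidate x-locations [m]
    qs = np.linspace(0.1, 10.0, 50)           # candidate strengths [g/s]

    # Hypothetical source-receptor relationship: predicted sensor
    # concentrations for a unit-strength source at location x.
    sensors = np.array([20.0, 40.0, 60.0, 80.0])
    def srr(x):
        return np.exp(-0.5 * ((sensors - x) / 15.0) ** 2)

    # Synthetic observations from a "true" source (x=55, q=3) plus noise
    rng = np.random.default_rng(0)
    obs = 3.0 * srr(55.0) + rng.normal(0.0, 0.05, sensors.size)

    sigma = 0.05                               # assumed observation noise std
    log_post = np.empty((xs.size, qs.size))
    for i, x in enumerate(xs):
        f = srr(x)
        for j, q in enumerate(qs):
            r = obs - q * f                    # data-model residual
            log_post[i, j] = -0.5 * np.sum((r / sigma) ** 2)  # flat priors

    ii, jj = np.unravel_index(np.argmax(log_post), log_post.shape)
    print(f"MAP estimate: location x = {xs[ii]:.1f} m, strength q = {qs[jj]:.2f} g/s")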
Stochastic Short-term High-resolution Prediction of Solar Irradiance and Photovoltaic Power Output
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melin, Alexander M.; Olama, Mohammed M.; Dong, Jin
The increased penetration of solar photovoltaic (PV) energy sources into electric grids has increased the need for accurate modeling and prediction of solar irradiance and power production. Existing modeling and prediction techniques focus on long-term low-resolution prediction over minutes to years. This paper examines the stochastic modeling and short-term high-resolution prediction of solar irradiance and PV power output. We propose a stochastic state-space model to characterize the behaviors of solar irradiance and PV power output. This prediction model is suitable for the development of optimal power controllers for PV sources. A filter-based expectation-maximization and Kalman filtering mechanism is employed to estimate the parameters and states in the state-space model. The mechanism results in a finite dimensional filter which only uses the first and second order statistics. The structure of the scheme contributes to a direct prediction of the solar irradiance and PV power output without any linearization process or simplifying assumptions of the signal's model. This enables the system to accurately predict small as well as large fluctuations of the solar signals. The mechanism is recursive, allowing the solar irradiance and PV power to be predicted online from measurements. The mechanism is tested using solar irradiance and PV power measurement data collected locally in our lab.
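A minimal sketch of the recursive predict/update cycle such a filter-based scheme relies on (a generic scalar state-space model with assumed parameters, not the paper's EM-estimated one):

    import numpy as np

    # Assumed scalar state-space model: x_k = a*x_{k-1} + w,  y_k = x_k + v
    a, q_var, r_var = 0.95, 0.02, 0.25          # transition, process/measurement noise
    rng = np.random.default_rng(1)

    # Simulate a slowly varying "irradiance" signal and noisy measurements
    x_true = np.empty(200); x_true[0] = 1.0
    for k in range(1, 200):
        x_true[k] = a * x_true[k-1] + rng.normal(0, np.sqrt(q_var))
    y = x_true + rng.normal(0, np.sqrt(r_var), 200)

    x_hat, p = 0.0, 1.0                          # initial state estimate and variance
    preds = []
    for yk in y:
        x_pred, p_pred = a * x_hat, a * a * p + q_var   # one-step-ahead prediction
        preds.append(x_pred)
        k_gain = p_pred / (p_pred + r_var)              # Kalman gain
        x_hat = x_pred + k_gain * (yk - x_pred)         # measurement update
        p = (1.0 - k_gain) * p_pred

    rmse = np.sqrt(np.mean((np.array(preds) - x_true) ** 2))
    print(f"one-step-ahead RMSE: {rmse:.3f}")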
Huang, Weidong; Li, Kun; Wang, Gan; Wang, Yingzhe
2013-11-01
In this article, we present a newly designed inverse umbrella surface aerator and test its performance in driving the flow of an oxidation ditch. Results show that it performs better in driving the oxidation ditch than the original design, with higher average velocity and a more uniform flow field. We also present a computational fluid dynamics model for predicting the flow field in an oxidation ditch driven by a surface aerator. An improved momentum source term approach to simulate the flow field of the oxidation ditch driven by an inverse umbrella surface aerator was developed and validated through experiments. Four turbulence models were investigated with the approach, including the standard k-ε model, RNG k-ε model, realizable k-ε model, and Reynolds stress model, and the predicted data were compared with those calculated with the multiple rotating reference frame approach (MRF) and sliding mesh approach (SM). Results of the momentum source term approach are in good agreement with the experimental data, and its prediction accuracy is better than MRF and close to SM. The momentum source term approach also has lower computational expense, is simpler to preprocess, and is easier to use.
A theoretical prediction of the acoustic pressure generated by turbulence-flame front interactions
NASA Technical Reports Server (NTRS)
Huff, R. G.
1984-01-01
The equations of momentum and continuity are combined and linearized, yielding the one-dimensional nonhomogeneous acoustic wave equation. Three terms in the nonhomogeneous equation act as acoustic sources and are taken to be forcing functions acting on the homogeneous wave equation. The three source terms are: fluctuating entropy, turbulence gradients, and turbulence-flame interactions. Each source term is discussed. The turbulence-flame interaction source is used as the basis for computing the source acoustic pressure from the Fourier-transformed wave equation. Pressure fluctuations created in turbopump gas generators and turbines may act as a forcing function for turbine and propellant tube vibrations in Earth-to-orbit space propulsion systems and could reduce their life expectancy. A preliminary assessment of the acoustic pressure fluctuations in such systems is presented.
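Schematically (notation assumed), the resulting equation has the form

    \[ \frac{\partial^2 p'}{\partial t^2} - c^2\,\frac{\partial^2 p'}{\partial x^2} = S_{entropy} + S_{turbulence} + S_{flame}, \]

with the turbulence-flame interaction term S_flame retained as the forcing function whose Fourier transform yields the source acoustic pressure.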
Reducing mortality risk by targeting specific air pollution sources: Suva, Fiji.
Isley, C F; Nelson, P F; Taylor, M P; Stelcer, E; Atanacio, A J; Cohen, D D; Mani, F S; Maata, M
2018-01-15
Health implications of air pollution vary depending upon pollutant sources. This work determines the value, in terms of reduced mortality, of reducing ambient particulate matter (PM2.5: effective aerodynamic diameter 2.5 μm or less) concentrations due to different emission sources. Suva, a Pacific Island city with substantial input from combustion sources, is used as a case study. Elemental concentration was determined, by ion beam analysis, for PM2.5 samples from Suva, spanning one year. Sources of PM2.5 have been quantified by positive matrix factorisation. A review of recent literature has been carried out to delineate the mortality risk associated with these sources. Risk factors have then been applied for Suva to calculate the possible mortality reduction that may be achieved through reduction in pollutant levels. Higher risk ratios for black carbon and sulphur resulted in mortality predictions for PM2.5 from fossil fuel combustion, road vehicle emissions, and waste burning that surpass predictions for these sources based on the health risk of PM2.5 mass alone. Predicted mortality for Suva from fossil fuel smoke exceeds the national toll from road accidents in Fiji. The greatest benefit for Suva, in terms of reduced mortality, is likely to be accomplished by reducing emissions from fossil fuel combustion (diesel), vehicles, and waste burning.
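The attribution step follows standard risk-ratio logic; in hedged schematic form (not the paper's exact coefficients), the mortality attributable to a source contributing concentration ΔC is

    \[ \Delta M \approx M_0\,\frac{RR(\Delta C) - 1}{RR(\Delta C)}, \qquad RR(\Delta C) = \exp(\beta\,\Delta C), \]

where M_0 is the baseline mortality and β the source-specific concentration-response coefficient, so sources rich in black carbon or sulphur (higher β) dominate the predicted benefit of control.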
ERIC Educational Resources Information Center
Arslan, Ali
2012-01-01
The purpose of this study is to reveal the extent to which the sources of 6th-8th grade students' self-efficacy beliefs predict their self-efficacy beliefs for learning and performance. The study is correlational and was conducted on a total of 1049 students during the fall term of the 2010-2011 educational year. The data of the study were…
Zhang, X L; Su, G F; Yuan, H Y; Chen, J G; Huang, Q Y
2014-09-15
Atmospheric dispersion models play an important role in nuclear power plant accident management. A reliable estimation of the radioactive material distribution at short range (about 50 km) is urgently needed for population sheltering and evacuation planning. However, the meteorological data and the source term, which greatly influence the accuracy of atmospheric dispersion models, are usually poorly known in the early phase of an emergency. In this study, a modified ensemble Kalman filter data assimilation method in conjunction with a Lagrangian puff model is proposed to simultaneously improve the model prediction and reconstruct the source terms for short-range atmospheric dispersion using off-site environmental monitoring data. Four main uncertainty parameters are considered: source release rate, plume rise height, wind speed, and wind direction. Twin experiments show that the method effectively improves the predicted concentration distribution, and that the temporal profiles of source release rate and plume rise height are also successfully reconstructed. Moreover, the time lag in the response of the ensemble Kalman filter is shortened. The method proposed here can be a useful tool not only in nuclear power plant accident emergency management but also in other situations where hazardous material is released into the atmosphere.
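A minimal ensemble Kalman filter analysis step of the kind described (the state vector follows the paper's four uncertainty parameters, but the observation operator, sizes, and values are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(2)
    n_ens, n_state, n_obs = 100, 4, 6            # ensemble size, state dim, obs dim

    # State: [release rate, plume rise, wind speed, wind direction] (scaled units)
    ens = rng.normal([1.0, 0.5, 5.0, 90.0], [0.3, 0.2, 1.0, 20.0], (n_ens, n_state))

    H = rng.uniform(0.0, 1.0, (n_obs, n_state))  # assumed linearized obs operator
    r_var = 0.1                                  # observation error variance
    y = H @ np.array([1.5, 0.8, 6.0, 100.0]) + rng.normal(0, np.sqrt(r_var), n_obs)

    # EnKF update: Kalman gain from ensemble covariances, perturbed observations
    x_mean = ens.mean(axis=0)
    X = ens - x_mean                             # state anomalies
    Y = X @ H.T                                  # predicted-observation anomalies
    P_xy = X.T @ Y / (n_ens - 1)
    P_yy = Y.T @ Y / (n_ens - 1) + r_var * np.eye(n_obs)
    K = P_xy @ np.linalg.inv(P_yy)

    y_pert = y + rng.normal(0, np.sqrt(r_var), (n_ens, n_obs))
    ens = ens + (y_pert - ens @ H.T) @ K.T       # analysis ensemble

    print("posterior mean state:", ens.mean(axis=0).round(2))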
NASA Astrophysics Data System (ADS)
Park, Junghyun; Hayward, Chris; Stump, Brian W.
2018-06-01
Ground truth sources in Utah during 2003-2013 are used to assess the contribution of temporal atmospheric conditions to infrasound detection and the predictive capabilities of atmospheric models. The ground truth sources consist of 28 long-duration static rocket motor burn tests and 28 impulsive rocket body demolitions. Automated infrasound detections from a hybrid network of regional seismometers and infrasound arrays use a combination of short-term time average/long-term time average ratios and spectral analyses. These detections are grouped into station triads using a Delaunay triangulation network and then associated to estimate phase velocity and azimuth, to filter signals associated with a particular source location. The resulting range and azimuth distribution from sources to detecting stations varies seasonally and is consistent with predictions based on seasonal atmospheric models. Impulsive signals from rocket body detonations are observed at greater distances (>700 km) than the extended-duration signals generated by the rocket burn tests (up to 600 km). Infrasound energy attenuation associated with the two source types is quantified as a function of range and azimuth from infrasound amplitude measurements. Ray-tracing results using Ground-to-Space atmospheric specifications are compared to these observations and illustrate the degree to which time variations in the characteristics of the observations can be predicted over a multiple-year time period.
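A sketch of the station-triad grouping step (the station coordinates are made up; scipy's Delaunay triangulation returns the triangle vertex indices that would serve as triads):

    import numpy as np
    from scipy.spatial import Delaunay

    # Hypothetical station coordinates (km, local projection)
    stations = np.array([
        [0.0, 0.0], [50.0, 10.0], [20.0, 60.0],
        [80.0, 40.0], [60.0, 90.0], [110.0, 70.0],
    ])

    tri = Delaunay(stations)
    for triad in tri.simplices:                  # each row = indices of one station triad
        pts = stations[triad]
        print("triad", triad, "centroid", pts.mean(axis=0).round(1))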
Bendixen, Alexandra; Scharinger, Mathias; Strauß, Antje; Obleser, Jonas
2014-04-01
Speech signals are often compromised by disruptions originating from external (e.g., masking noise) or internal (e.g., inaccurate articulation) sources. Speech comprehension thus entails detecting and replacing missing information based on predictive and restorative neural mechanisms. The present study targets predictive mechanisms by investigating the influence of a speech segment's predictability on early, modality-specific electrophysiological responses to this segment's omission. Predictability was manipulated in simple physical terms in a single-word framework (Experiment 1) or in more complex semantic terms in a sentence framework (Experiment 2). In both experiments, final consonants of the German words Lachs ([laks], salmon) or Latz ([lats], bib) were occasionally omitted, resulting in the syllable La ([la], no semantic meaning), while brain responses were measured with multi-channel electroencephalography (EEG). In both experiments, the occasional presentation of the fragment La elicited a larger omission response when the final speech segment had been predictable. The omission response occurred ∼125-165 msec after the expected onset of the final segment and showed characteristics of the omission mismatch negativity (MMN), with generators in auditory cortical areas. Suggestive of a general auditory predictive mechanism at work, this main observation was robust against varying source of predictive information or attentional allocation, differing between the two experiments. Source localization further suggested the omission response enhancement by predictability to emerge from left superior temporal gyrus and left angular gyrus in both experiments, with additional experiment-specific contributions. These results are consistent with the existence of predictive coding mechanisms in the central auditory system, and suggestive of the general predictive properties of the auditory system to support spoken word recognition.
Supersonic jet noise - Its generation, prediction and effects on people and structures
NASA Technical Reports Server (NTRS)
Preisser, J. S.; Golub, R. A.; Seiner, J. M.; Powell, C. A.
1990-01-01
This paper presents the results of a study aimed at quantifying the effects of jet source noise reduction, increases in aircraft lift, and reduced aircraft thrust on the take-off noise associated with supersonic civil transports. Supersonic jet noise sources are first described, and their frequency and directivity dependence are defined. The study utilizes NASA's Aircraft Noise Prediction Program in a parametric study to weigh the relative benefits of several approaches to low noise. The baseline aircraft concept used in these predictions is the AST-205-1 powered by GE21/J11-B14A scaled engines. Noise assessment is presented in terms of effective perceived noise levels at the FAA's centerline and sideline measuring locations for current subsonic aircraft, and in terms of audiologically perceived sound for people and other indirect effects. The results show that a significant noise benefit can be achieved through proper understanding and utilization of all available approaches.
A hybrid approach for nonlinear computational aeroacoustics predictions
NASA Astrophysics Data System (ADS)
Sassanis, Vasileios; Sescu, Adrian; Collins, Eric M.; Harris, Robert E.; Luke, Edward A.
2017-01-01
In many aeroacoustics applications involving nonlinear waves and obstructions in the far-field, approaches based on the classical acoustic analogy theory or the linearised Euler equations are unable to fully characterise the acoustic field. Therefore, computational aeroacoustics hybrid methods that incorporate nonlinear wave propagation have to be constructed. In this study, a hybrid approach coupling Navier-Stokes equations in the acoustic source region with nonlinear Euler equations in the acoustic propagation region is introduced and tested. The full Navier-Stokes equations are solved in the source region to identify the acoustic sources. The flow variables of interest are then transferred from the source region to the acoustic propagation region, where the full nonlinear Euler equations with source terms are solved. The transition between the two regions is made through a buffer zone where the flow variables are penalised via a source term added to the Euler equations. Tests were conducted on simple acoustic and vorticity disturbances, two-dimensional jets (Mach 0.9 and 2), and a three-dimensional jet (Mach 1.5), impinging on a wall. The method is proven to be effective and accurate in predicting sound pressure levels associated with the propagation of linear and nonlinear waves in the near- and far-field regions.
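The buffer-zone coupling can be sketched (notation assumed) as a penalty source appended to the Euler equations,

    \[ \frac{\partial \mathbf{Q}}{\partial t} + \nabla\cdot\mathbf{F}(\mathbf{Q}) = -\sigma(\mathbf{x})\,\big(\mathbf{Q} - \mathbf{Q}_{NS}\big), \]

where Q_NS is the Navier-Stokes solution transferred from the source region and σ ramps smoothly from zero in the Euler interior to a large value inside the buffer, relaxing the Euler state toward the supplied data.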
MODELING MINERAL NITROGEN EXPORT FROM A FOREST TERRESTRIAL ECOSYSTEM TO STREAMS
Terrestrial ecosystems are major sources of N pollution to aquatic ecosystems. Predicting N export to streams is a critical goal of non-point source modeling. This study was conducted to assess the effect of terrestrial N cycling on stream N export using long-term monitoring da...
Source Term Model for Steady Micro Jets in a Navier-Stokes Computer Code
NASA Technical Reports Server (NTRS)
Waithe, Kenrick A.
2005-01-01
A source term model for steady micro jets was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the mass flow and momentum created by a steady blowing micro jet. The model is obtained by adding the momentum and mass flow created by the jet to the Navier-Stokes equations. The model was tested by comparison with data from numerical simulations of a single steady micro jet on a flat plate in two and three dimensions, in which the jet was represented by a steady mass flow boundary condition. The source term model predicted the velocity distribution well compared to the two-dimensional plate simulation. The model was also compared to two three-dimensional flat plate cases: one with a grid generated to capture the circular shape of the jet, and one without a grid generated for the micro jet. The case without the jet grid mimics the application of the source term. The source term model compared well with both of the three-dimensional cases. Comparisons of velocity distribution were made before and after the jet, and Mach and vorticity contours were examined. The source term model allows a researcher to quickly investigate different locations of individual or several steady micro jets, conducting a preliminary investigation with minimal grid generation and computational time.
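A schematic of how such a source term enters a finite-volume update, with made-up variable names and a uniform grid (not OVERFLOW's actual data structures):

    import numpy as np

    # Conserved variables per cell: [rho, rho*u, rho*v, rho*E] on a small 2-D grid
    nx, ny = 64, 32
    U = np.tile(np.array([1.0, 0.0, 0.0, 2.5]), (nx, ny, 1))
    cell_vol = 1e-4                                   # cell volume [m^3]

    # Jet source: mass flow mdot [kg/s] and velocity (uj, vj) injected in one cell
    mdot, uj, vj = 1e-3, 0.0, 50.0
    i_jet, j_jet = 32, 0

    def add_jet_source(rhs):
        """Add the micro-jet mass and momentum source to the residual array."""
        rhs[i_jet, j_jet, 0] += mdot / cell_vol               # mass source
        rhs[i_jet, j_jet, 1] += mdot * uj / cell_vol          # x-momentum source
        rhs[i_jet, j_jet, 2] += mdot * vj / cell_vol          # y-momentum source
        rhs[i_jet, j_jet, 3] += 0.5 * mdot * (uj**2 + vj**2) / cell_vol  # jet kinetic energy only
        return rhs

    rhs = np.zeros_like(U)
    rhs = add_jet_source(rhs)          # flux residuals would be accumulated here too
    U += 1e-6 * rhs                    # explicit pseudo-time step (illustrative)
    print(U[i_jet, j_jet])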
NASA Astrophysics Data System (ADS)
Moruzzi, G.; Murphy, R. J.; Lees, R. M.; Predoi-Cross, A.; Billinghurst, B. E.
2010-09-01
The Fourier transform spectrum of the ? isotopologue of methanol has been recorded in the 120-350 cm⁻¹ far-infrared region at a resolution of 0.00096 cm⁻¹ using synchrotron source radiation at the Canadian Light Source. The study, motivated by astrophysical applications, is aimed at generating a sufficiently accurate set of energy level term values for the ground vibrational state to allow prediction of the centres of the quadrupole hyperfine multiplets for astronomically observable sub-millimetre transitions to within an uncertainty of a few MHz. To expedite transition identification, a new function was added to the Ritz program in which predicted spectral line positions were generated by an adjustable interpolation between the known assignments for the ? and ? isotopologues. By displaying the predictions along with the experimental spectrum on the computer monitor and adjusting the predictions to match observed features, rapid assignment of numerous ? sub-bands was possible. The least-squares function of the Ritz program was then used to generate term values for the identified levels. For each torsion-K-rotation substate, the term values were fitted to a Taylor-series expansion in powers of J(J + 1) to determine the substate origin energy and effective B-value. In this first phase of the study we did not attempt a full global fit to the assigned transitions, but instead fitted the sub-band J-independent origins to a restricted Hamiltonian containing the principal torsional and K-dependent terms. These included structural and torsional potential parameters plus quartic distortional and torsion-rotation interaction terms.
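The substate fits take the standard form of a power series in J(J+1); schematically,

    \[ E_{vt,K}(J) = E^{0}_{vt,K} + B_{vt,K}\,J(J+1) - D_{vt,K}\,[J(J+1)]^2 + \cdots, \]

where E⁰ is the substate origin energy and B the effective rotational constant reported for each torsion-K-rotation substate.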
NASA Astrophysics Data System (ADS)
Bonhoff, H. A.; Petersson, B. A. T.
2010-08-01
For the characterization of structure-borne sound sources with multi-point or continuous interfaces, substantial simplifications and physical insight can be obtained by incorporating the concept of interface mobilities. The applicability of interface mobilities, however, relies upon the admissibility of neglecting the so-called cross-order terms. Hence, the objective of the present paper is to clarify the importance and significance of cross-order terms for the characterization of vibrational sources. From previous studies, four conditions have been identified for which the cross-order terms can become more influential. Such are non-circular interface geometries, structures with distinctively differing transfer paths as well as a suppression of the zero-order motion and cases where the contact forces are either in phase or out of phase. In a theoretical study, the former four conditions are investigated regarding the frequency range and magnitude of a possible strengthening of the cross-order terms. For an experimental analysis, two source-receiver installations are selected, suitably designed to obtain strong cross-order terms. The transmitted power and the source descriptors are predicted by the approximations of the interface mobility approach and compared with the complete calculations. Neglecting the cross-order terms can result in large misinterpretations at certain frequencies. On average, however, the cross-order terms are found to be insignificant and can be neglected with good approximation. The general applicability of interface mobilities for structure-borne sound source characterization and the description of the transmission process thereby is confirmed.
The central purpose of our study was to examine the performance of the United States Environmental Protection Agency's (EPA) nonreactive Gaussian air quality dispersion model, the Industrial Source Complex Short Term Model (ISCST3) Version 98226, in predicting polychlorinated dib...
Assessment of macroseismic intensity in the Nile basin, Egypt
NASA Astrophysics Data System (ADS)
Fergany, Elsayed
2018-01-01
This work assesses deterministic seismic hazard and risk in terms of the maximum expected intensity map of the Egyptian Nile basin sector. A seismic source zone model of Egypt was delineated based on an updated, compatible earthquake catalog (2015), focal mechanisms, and the common tectonic elements. Four effective seismic source zones were identified along the Nile basin. The observed macroseismic intensity data along the basin were used to develop an intensity prediction equation defined in terms of moment magnitude. A maximum expected intensity map was then derived from the developed intensity prediction equation, the identified effective seismic source zones, and the maximum expected magnitude for each zone along the basin. The earthquake hazard and risk were discussed and analyzed in view of the maximum expected moment magnitude and the maximum expected intensity values for each effective source zone. Moderate expected magnitudes are likely to pose high risk at the Cairo and Aswan regions. The results of this study could serve as a recommendation for the planners in charge of mitigating seismic risk at these strategic zones of Egypt.
Two Machine Learning Approaches for Short-Term Wind Speed Time-Series Prediction.
Ak, Ronay; Fink, Olga; Zio, Enrico
2016-08-01
The increasing liberalization of European electricity markets, the growing proportion of intermittent renewable energy being fed into the energy grids, and also new challenges in the patterns of energy consumption (such as electric mobility) require flexible and intelligent power grids capable of providing efficient, reliable, economical, and sustainable energy production and distribution. On the supplier side, particularly, the integration of renewable energy sources (e.g., wind and solar) into the grid imposes an engineering and economic challenge because of the limited ability to control and dispatch these energy sources due to their intermittent characteristics. Time-series prediction of wind speed for wind power production is a particularly important and challenging task, wherein prediction intervals (PIs), rather than point estimates, are the preferred output because they convey the confidence in the prediction. In this paper, two different machine learning approaches to assess PIs of time-series predictions are considered and compared: 1) multilayer perceptron neural networks trained with a multiobjective genetic algorithm and 2) extreme learning machines combined with the nearest neighbors approach. The proposed approaches are applied for short-term wind speed prediction from a real data set of hourly wind speed measurements for the region of Regina in Saskatchewan, Canada. Both approaches demonstrate good prediction precision and provide complementary advantages with respect to different evaluation criteria.
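A small sketch of how PIs from any such model can be scored, using the usual coverage and width criteria (the arrays below are placeholders, not the Regina data):

    import numpy as np

    def pi_scores(y_true, lower, upper):
        """Coverage probability and mean width of prediction intervals."""
        covered = (y_true >= lower) & (y_true <= upper)
        picp = covered.mean()                    # PI coverage probability
        mpiw = (upper - lower).mean()            # mean PI width
        return picp, mpiw

    # Placeholder hourly wind speeds and a naive +/- 1.5 m/s interval
    rng = np.random.default_rng(3)
    y = 8.0 + 2.0 * np.sin(np.arange(48) / 6.0) + rng.normal(0, 0.8, 48)
    y_hat = 8.0 + 2.0 * np.sin(np.arange(48) / 6.0)
    picp, mpiw = pi_scores(y, y_hat - 1.5, y_hat + 1.5)
    print(f"coverage = {picp:.2f}, mean width = {mpiw:.2f} m/s")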
Directional stability of crack propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Streit, R.D.; Finnie, I.
Despite many alternative models, the original Erdogan and Sih (1963) hypothesis that a crack will grow in the direction perpendicular to the maximum circumferential stress σ_θ is seen to be adequate for predicting the angle of crack growth under mixed mode loading. Their predictions, which were based on the singularity terms in the series expansion for the Mode I and Mode II stress fields, can be improved if the second term in the series is also included. Although conceptually simple, their predictions of the crack growth direction fit very closely to the data obtained from many sources.
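In schematic form, the maximum circumferential stress criterion selects the crack growth angle θ₀ from the mixed-mode stress intensity factors through the condition ∂σ_θ/∂θ = 0 applied to the singular term, which gives

    \[ K_I \sin\theta_0 + K_{II}\,(3\cos\theta_0 - 1) = 0; \]

including the second (non-singular) term of the series expansion shifts θ₀ and is what improves the predictions noted above.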
The Application of Function Points to Predict Source Lines of Code for Software Development
1992-09-01
there are some disadvantages. Software estimating tools are expensive. A single tool may cost more than $15,000 due to the high market value of the...term and Lang variables simultaneously only added marginal improvements over models with these terms included singularly. Using all the available
Characterizing SRAM Single Event Upset in Terms of Single and Double Node Charge Collection
NASA Technical Reports Server (NTRS)
Black, J. D.; Ball, D. R., II; Robinson, W. H.; Fleetwood, D. M.; Schrimpf, R. D.; Reed, R. A.; Black, D. A.; Warren, K. M.; Tipton, A. D.; Dodd, P. E.;
2008-01-01
A well-collapse source-injection mode for SRAM SEU is demonstrated through TCAD modeling. The recovery of the SRAM's state is shown to be based upon the resistive path from the p+ sources in the SRAM to the well. Multiple-cell upset patterns for the direct charge collection and well-collapse source-injection mechanisms are then predicted and compared to recent SRAM test data.
NASA Astrophysics Data System (ADS)
Faes, Luca; Marinazzo, Daniele; Stramaglia, Sebastiano; Jurysta, Fabrice; Porta, Alberto; Nollo, Giandomenico
2016-05-01
This work introduces a framework to study the network formed by the autonomic component of heart rate variability (cardiac process η) and the amplitude of the different electroencephalographic waves (brain processes δ, θ, α, σ, β) during sleep. The framework exploits multivariate linear models to decompose the predictability of any given target process into measures of self-, causal and interaction predictability reflecting respectively the information retained in the process and related to its physiological complexity, the information transferred from the other source processes, and the information modified during the transfer according to redundant or synergistic interaction between the sources. The framework is here applied to the η, δ, θ, α, σ, β time series measured from the sleep recordings of eight severe sleep apnoea-hypopnoea syndrome (SAHS) patients studied before and after long-term treatment with continuous positive airway pressure (CPAP) therapy, and 14 healthy controls. Results show that the full and self-predictability of η, δ and θ decreased significantly in SAHS compared with controls, and were restored with CPAP for δ and θ but not for η. The causal predictability of η and δ occurred through significantly redundant source interaction during healthy sleep, which was lost in SAHS and recovered after CPAP. These results indicate that predictability analysis is a viable tool to assess the modifications of complexity and causality of the cerebral and cardiac processes induced by sleep disorders, and to monitor the restoration of the neuroautonomic control of these processes during long-term treatment.
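Schematically (symbols and sign convention assumed here, not taken verbatim from the paper), the decomposition reads

    \[ P_Y = P_Y^{self} + \sum_i T_{X_i \to Y} + I_{X;Y}, \]

with a self-predictability term from the target's own past, causal transfer terms from each source process, and an interaction term whose sign distinguishes redundant from synergistic source interaction.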
Chen, Tianle; Zeng, Donglin
2015-01-01
Predicting disease risk and progression is one of the main goals in many clinical research studies. Cohort studies on the natural history and etiology of chronic diseases span years and data are collected at multiple visits. Although kernel-based statistical learning methods are proven to be powerful for a wide range of disease prediction problems, these methods are only well studied for independent data but not for longitudinal data. It is thus important to develop time-sensitive prediction rules that make use of the longitudinal nature of the data. In this paper, we develop a novel statistical learning method for longitudinal data by introducing subject-specific short-term and long-term latent effects through a designed kernel to account for within-subject correlation of longitudinal measurements. Since the presence of multiple sources of data is increasingly common, we embed our method in a multiple kernel learning framework and propose a regularized multiple kernel statistical learning with random effects to construct effective nonparametric prediction rules. Our method allows easy integration of various heterogeneous data sources and takes advantage of correlation among longitudinal measures to increase prediction power. We use different kernels for each data source taking advantage of the distinctive feature of each data modality, and then optimally combine data across modalities. We apply the developed methods to two large epidemiological studies, one on Huntington's disease and the other on Alzheimer's disease (Alzheimer's Disease Neuroimaging Initiative, ADNI), where we explore a unique opportunity to combine imaging and genetic data to study prediction of mild cognitive impairment, and show a substantial gain in performance while accounting for the longitudinal aspect of the data.
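A toy sketch of the multiple-kernel idea (fixed kernel weights and scikit-learn's precomputed-kernel SVM; the paper's regularized learning with random effects is more elaborate, and the data here are synthetic placeholders):

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics.pairwise import rbf_kernel, linear_kernel

    rng = np.random.default_rng(4)
    X_imaging = rng.normal(size=(120, 10))       # placeholder imaging features
    X_genetic = rng.normal(size=(120, 50))       # placeholder genetic features
    y = (X_imaging[:, 0] + 0.5 * X_genetic[:, 0] > 0).astype(int)

    # One kernel per data source, combined with fixed weights
    K = 0.6 * rbf_kernel(X_imaging) + 0.4 * linear_kernel(X_genetic)

    train, test = np.arange(80), np.arange(80, 120)
    clf = SVC(kernel="precomputed").fit(K[np.ix_(train, train)], y[train])
    acc = clf.score(K[np.ix_(test, train)], y[test])
    print(f"held-out accuracy: {acc:.2f}")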
Lee, Jaebeom; Lee, Kyoung-Chan; Lee, Young-Joo
2018-05-09
Management of the vertical long-term deflection of a high-speed railway bridge is a crucial factor to guarantee traffic safety and passenger comfort. Therefore, there have been efforts to predict the vertical deflection of a railway bridge based on physics-based models representing various influential factors to vertical deflection such as concrete creep and shrinkage. However, it is not an easy task because the vertical deflection of a railway bridge generally involves several sources of uncertainty. This paper proposes a probabilistic method that employs a Gaussian process to construct a model to predict the vertical deflection of a railway bridge based on actual vision-based measurement and temperature. To deal with the sources of uncertainty which may cause prediction errors, a Gaussian process is modeled with multiple kernels and hyperparameters. Once the hyperparameters are identified through the Gaussian process regression using training data, the proposed method provides a 95% prediction interval as well as a predictive mean about the vertical deflection of the bridge. The proposed method is applied to an arch bridge under operation for high-speed trains in South Korea. The analysis results obtained from the proposed method show good agreement with the actual measurement data on the vertical deflection of the example bridge, and the prediction results can be utilized for decision-making on railway bridge maintenance.
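A compact sketch of a multi-kernel GP of this flavor using scikit-learn (the kernel choice, inputs, and data are illustrative assumptions, not the paper's measured deflections):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ExpSineSquared

    # Placeholder daily inputs: [time index, temperature] -> vertical deflection [mm]
    rng = np.random.default_rng(5)
    t = np.arange(365.0)
    temp = 15 + 10 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 1, t.size)
    y = -0.5 * np.sin(2 * np.pi * t / 365) - 0.002 * t + rng.normal(0, 0.05, t.size)
    X = np.column_stack([t, temp])

    # Multiple kernels: smooth trend + annual periodicity + measurement noise
    kernel = RBF(length_scale=[100.0, 5.0]) \
        + ExpSineSquared(length_scale=1.0, periodicity=365.0) \
        + WhiteKernel(noise_level=0.01)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X[:300], y[:300])

    mean, std = gp.predict(X[300:], return_std=True)
    lower, upper = mean - 1.96 * std, mean + 1.96 * std   # 95% prediction interval
    print(f"mean width of 95% interval: {np.mean(upper - lower):.3f} mm")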
A Systematic Review of Techniques and Sources of Big Data in the Healthcare Sector.
Alonso, Susel Góngora; de la Torre Díez, Isabel; Rodrigues, Joel J P C; Hamrioui, Sofiane; López-Coronado, Miguel
2017-10-14
The main objective of this paper is to present a review of existing research in the literature referring to Big Data sources and techniques in the health sector, and to identify which of these techniques are the most used in the prediction of chronic diseases. Academic databases and systems such as IEEE Xplore, Scopus, PubMed, and Science Direct were searched, considering publication dates from 2006 to the present. Several search criteria were established, such as 'techniques' OR 'sources' AND 'Big Data' AND 'medicine' OR 'health', 'techniques' AND 'Big Data' AND 'chronic diseases', etc., selecting papers considered of interest for their description of the techniques and sources of Big Data in healthcare. The search found a total of 110 articles on techniques and sources of Big Data on health, of which only 32 were identified as relevant work. Many of the articles describe Big Data platforms, sources, and databases used, and identify the techniques most used in the prediction of chronic diseases. From the review of the analyzed research articles, it can be seen that the sources and techniques of Big Data used in the health sector represent a relevant factor in terms of effectiveness, since they allow the application of predictive analysis techniques in tasks such as identifying patients at risk of readmission or preventing hospital or chronic disease infections, and obtaining predictive models of quality.
NASA Technical Reports Server (NTRS)
Swift, G.; Mungur, P.
1979-01-01
General procedures for the prediction of component noise levels incident upon airframe surfaces during cruise are developed. Contributing noise sources are those associated with the propulsion system, the airframe and the laminar flow control (LFC) system. Transformation procedures from the best prediction base of each noise source to the transonic cruise condition are established. Two approaches to LFC/acoustic criteria are developed. The first is a semi-empirical extension of the X-21 LFC/acoustic criteria to include sensitivity to the spectrum and directionality of the sound field. In the second, the more fundamental problem of how sound excites boundary layer disturbances is analyzed by deriving and solving an inhomogeneous Orr-Sommerfeld equation in which the source terms are proportional to the production and dissipation of sound induced fluctuating vorticity. Numerical solutions are obtained and compared with corresponding measurements. Recommendations are made to improve and validate both the cruise noise prediction methods and the LFC/acoustic criteria.
Supple, Megan Ann; Bragg, Jason G; Broadhurst, Linda M; Nicotra, Adrienne B; Byrne, Margaret; Andrew, Rose L; Widdup, Abigail; Aitken, Nicola C; Borevitz, Justin O
2018-04-24
As species face rapid environmental change, we can build resilient populations through restoration projects that incorporate predicted future climates into seed sourcing decisions. Eucalyptus melliodora is a foundation species of a critically endangered community in Australia that is a target for restoration. We examined genomic and phenotypic variation to make empirical based recommendations for seed sourcing. We examined isolation by distance and isolation by environment, determining high levels of gene flow extending for 500 km and correlations with climate and soil variables. Growth experiments revealed extensive phenotypic variation both within and among sampling sites, but no site-specific differentiation in phenotypic plasticity. Model predictions suggest that seed can be sourced broadly across the landscape, providing ample diversity for adaptation to environmental change. Application of our landscape genomic model to E. melliodora restoration projects can identify genomic variation suitable for predicted future climates, thereby increasing the long term probability of successful restoration.
NASA Astrophysics Data System (ADS)
Klimasewski, A.; Sahakian, V. J.; Baltay, A.; Boatwright, J.; Fletcher, J. B.; Baker, L. M.
2017-12-01
A large source of epistemic uncertainty in Ground Motion Prediction Equations (GMPEs) derives from the path term, currently represented as simple geometric spreading and intrinsic attenuation terms. Including additional physical relationships between path properties and predicted ground motions would produce more accurate and precise, region-specific GMPEs by reclassifying some of the random, aleatory uncertainty as epistemic. This study focuses on regions of Southern California, using data from the Anza network and the Southern California Seismic Network to create a catalog of events of magnitude 2.5 and larger from 1998 to 2016. The catalog encompasses regions of varying geology and therefore varying path and site attenuation. Within this catalog of events, we investigate several collections of event-region-to-station pairs, each of which shares similar origin locations and stations so that all events have similar paths. Compared with a simple regional GMPE, these paths consistently have high or low residuals. By working with events that have the same path, we can isolate source and site effects and focus on the remaining residual as path effects. We decompose the recordings into source and site spectra for each unique event and site in our greater Southern California regional database using the inversion method of Andrews (1986). This model represents each natural-log record spectrum as the sum of its natural-log event and site spectra, while constraining each record to a reference site or Brune source spectrum. We estimate a regional, path-specific anelastic attenuation (Q) and site attenuation (t*) from the inversion site spectra, and corner frequency from the inversion event spectra. We then compute the residuals between the observed record data and the inversion model prediction (event × site spectra). This residual is representative of path effects, likely anelastic attenuation along the path that varies from the regional median attenuation. We examine the residuals for our different sets independently to see how path terms differ between event-to-station collections. The path-specific information gained can inform the development of terms for regional GMPEs through understanding of these seismological phenomena.
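The Andrews (1986) step can be written schematically (notation assumed) as

    \[ \ln R_{ij}(f) = \ln E_i(f) + \ln S_j(f), \]

a least-squares inversion over all records R_ij with a reference-site or Brune-source constraint removing the additive degree of freedom; the residual left after subtracting the recovered event and site spectra is what isolates the path effect.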
AMOEBA 2.0: A physics-first approach to biomolecular simulations
NASA Astrophysics Data System (ADS)
Rackers, Joshua; Ponder, Jay
The goal of the AMOEBA force field project is to use classical physics to understand and predict the nature of interactions between biological molecules. While making significant advances over the past decade, the ultimate goal of predicting binding energies with "chemical accuracy" remains elusive. The primary source of this inaccuracy comes from the physics of how molecules interact at short range. For example, despite AMOEBA's advanced treatment of electrostatics, the force field dramatically overpredicts the electrostatic energy of DNA stacking interactions. AMOEBA 2.0 works to correct these errors by including simple, first-principles physics-based terms to account for the quantum mechanical nature of these short-range molecular interactions. We have added a charge penetration term that considerably improves the description of electrostatic interactions at short range. We are reformulating the polarization term of AMOEBA in terms of basic physics assertions. And we are reevaluating the van der Waals term to match ab initio energy decompositions. These additions and changes promise to make AMOEBA more predictive. By including more physical detail of the important short-range interactions of biological molecules, we hope to move closer to the ultimate goal of true predictive power.
Predicting vertically-nonsequential wetting patterns with a source-responsive model
Nimmo, John R.; Mitchell, Lara
2013-01-01
Water infiltrating into soil of natural structure often causes wetting patterns that do not develop in an orderly sequence. Because traditional unsaturated flow models represent a water advance that proceeds sequentially, they fail to predict irregular development of water distribution. In the source-responsive model, a diffuse domain (D) represents flow within soil matrix material following traditional formulations, and a source-responsive domain (S), characterized in terms of the capacity for preferential flow and its degree of activation, represents preferential flow as it responds to changing water-source conditions. In this paper we assume water undergoing rapid source-responsive transport at any particular time is of negligibly small volume; it becomes sensible at the time and depth where domain transfer occurs. A first-order transfer term represents abstraction from the S to the D domain which renders the water sensible. In tests with lab and field data, for some cases the model shows good quantitative agreement, and in all cases it captures the characteristic patterns of wetting that proceed nonsequentially in the vertical direction. In these tests we determined the values of the essential characterizing functions by inverse modeling. These functions relate directly to observable soil characteristics, rendering them amenable to evaluation and improvement through hydropedologic development.
NASA Astrophysics Data System (ADS)
Ni, X. Y.; Huang, H.; Du, W. P.
2017-02-01
The PM2.5 problem is proving to be a major public crisis of great concern that requires an urgent response. Information about, and prediction of, PM2.5 from the perspective of atmospheric dynamic theory is still limited owing to the complexity of PM2.5 formation and development. In this paper, we attempt correlation analysis and short-term prediction of PM2.5 concentrations in Beijing, China, using multi-source data mining. A correlation analysis model relating PM2.5 to physical data (meteorological data, including regional average rainfall, daily mean temperature, average relative humidity, average wind speed, and maximum wind speed; and other pollutant concentrations, including CO, NO2, SO2, and PM10) and social media data (microblog data) is proposed, based on multivariate statistical analysis. The study found that, among these factors, average wind speed; the concentrations of CO, NO2, and PM10; and the daily number of microblog entries with the key words 'Beijing; Air pollution' show high correlation with PM2.5 concentrations. The correlation analysis was then revisited with a machine learning model, the Back Propagation Neural Network (BPNN), which was found to perform better in correlation mining. Finally, an Autoregressive Integrated Moving Average (ARIMA) time series model was applied to explore short-term prediction of PM2.5. The predicted results were in good agreement with the observed data. This study is useful for realizing real-time monitoring, analysis, and pre-warning of PM2.5, and it helps to broaden the application of big data and multi-source data mining methods.
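For the time-series step, a minimal sketch of an ARIMA short-term forecast is shown below; the file name, column name, and (p, d, q) order are hypothetical, not the settings used in the paper.

import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# pm25: a daily series of PM2.5 concentrations indexed by date
pm25 = pd.read_csv("beijing_pm25.csv", index_col="date", parse_dates=True)["pm25"]

model = ARIMA(pm25, order=(1, 1, 1)).fit()   # placeholder (p, d, q)
forecast = model.forecast(steps=7)            # next 7 days
print(forecast)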
Predicting Near-Term Water Quality from Satellite Observations of Watershed Conditions
NASA Astrophysics Data System (ADS)
Weiss, W. J.; Wang, L.; Hoffman, K.; West, D.; Mehta, A. V.; Lee, C.
2017-12-01
Despite the strong influence of watershed conditions on source water quality, most water utilities and water resource agencies do not currently have the capability to monitor watershed sources of contamination in great temporal or spatial detail. Typically, knowledge of source water quality is limited to periodic grab sampling; automated monitoring of a limited number of parameters at a few select locations; and/or monitoring relevant constituents at a treatment plant intake. While important, such observations are not sufficient to inform proactive watershed or source water management at a monthly or seasonal scale. Satellite remote sensing data, on the other hand, can provide a snapshot of an entire watershed at regular, sub-monthly intervals, helping analysts characterize watershed conditions and identify trends that could signal changes in source water quality. Accordingly, the authors are investigating correlations between satellite remote sensing observations of watersheds and source water quality at a variety of spatial and temporal scales and lags. While correlations between remote sensing observations and direct in situ measurements of water quality have been well described in the literature, few studies link remote sensing observations across a watershed with near-term predictions of water quality. In this presentation, the authors describe results of statistical analyses and discuss how these results are being used to inform development of a desktop decision support tool for predictive application of remote sensing data. Predictor variables under evaluation include parameters that describe vegetative conditions; parameters that describe climate/weather conditions; and non-remote sensing, in situ measurements. Water quality parameters under investigation include nitrogen, phosphorus, organic carbon, chlorophyll-a, and turbidity.
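A minimal sketch of the kind of lagged-correlation screening described, assuming a weekly joined table of remote-sensing predictors and in situ water quality; the file and column names are hypothetical.

import pandas as pd

# df: one row per week for a watershed, with a vegetation index (e.g.
# NDVI-based) and an in situ water quality parameter such as turbidity
df = pd.read_csv("watershed_weekly.csv", parse_dates=["week"]).set_index("week")

for lag in range(0, 9):                      # 0..8 week lags
    r = df["ndvi"].shift(lag).corr(df["turbidity"])
    print(f"lag {lag} wk: r = {r:+.2f}")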
Augmented classical least squares multivariate spectral analysis
Haaland, David M.; Melgaard, David K.
2004-02-03
A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
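The following is a minimal numpy sketch of the augmented-CLS idea as described, appending leading residual shapes from the CLS step as extra components; it illustrates the concept, not the patented algorithm's exact procedure.

import numpy as np

def acls_calibrate(C, A, n_aug=2):
    """Sketch of an augmented-CLS calibration.

    C : (n_samples, n_components) known concentrations
    A : (n_samples, n_channels)   measured calibration spectra
    CLS model A ~ C @ K; spectral residuals are decomposed by SVD and the
    leading residual shapes become extra 'components' whose scores augment C.
    """
    K = np.linalg.pinv(C) @ A                 # pure-component spectra estimate
    R = A - C @ K                             # unmodeled spectral variation
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    scores = U[:, :n_aug] * s[:n_aug]         # residual scores per sample
    C_aug = np.hstack([C, scores])            # augmented concentration matrix
    K_aug = np.linalg.pinv(C_aug) @ A         # recalibrated model
    return K_aug

def cls_predict(K_aug, A_new):
    """Predict (augmented) concentrations for new spectra A_new."""
    return A_new @ np.linalg.pinv(K_aug)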
Upper and lower bounds of ground-motion variabilities: implication for source properties
NASA Astrophysics Data System (ADS)
Cotton, Fabrice; Reddy-Kotha, Sreeram; Bora, Sanjay; Bindi, Dino
2017-04-01
One of the key challenges of seismology is to analyse the physical factors that control earthquake and ground-motion variabilities. Such analysis is particularly important for calibrating physics-based simulations and seismic hazard estimations at high frequencies. Within the framework of ground-motion prediction equation (GMPE) development, ground-motion residuals (differences between recorded ground motions and the values predicted by a GMPE) are computed. The exponential growth of seismological near-source records and modern GMPE analysis techniques make it possible to partition these residuals into between-event and within-event components. In particular, the between-event term quantifies all those repeatable source effects (e.g. related to stress-drop or kappa-source variability) which have not been accounted for by the magnitude-dependent term of the model. In this presentation, we first discuss the between-event variabilities computed both in the Fourier and response-spectra domains, using recent high-quality global accelerometric datasets (e.g. NGA-West2, RESORCE, KiK-net). These analyses lead to the assessment of upper bounds for the ground-motion variability. Then, we compare these upper bounds with lower bounds estimated by analysing seismic sequences which occurred on specific fault systems (e.g., located in central Italy or in Japan). We show that the lower bounds of between-event variabilities are surprisingly large, which indicates a large variability of earthquake dynamic properties even within the same fault system. Finally, these upper and lower bounds of ground-shaking variability are discussed in terms of the variability of earthquake physical properties (e.g., stress-drop and kappa-source).
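In practice the partition is done with a mixed-effects regression; the following is a simple moment-based approximation that conveys the idea, with hypothetical file and column names.

import pandas as pd

# res: one row per record with columns 'event', 'station', 'resid'
# (total residual in ln units relative to a base GMPE)
res = pd.read_csv("residuals.csv")

between_event = res.groupby("event")["resid"].transform("mean")
within = res["resid"] - between_event
site_term = within.groupby(res["station"]).transform("mean")
remainder = within - site_term          # path-like, single-station residual

print("sigma total    :", res["resid"].std())
print("sigma remainder:", remainder.std())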
NASA Astrophysics Data System (ADS)
Vilain, J.
Approaches to major hazard assessment and prediction are reviewed. Topics discussed include: the source term (phenomenology and modeling of release; influence on the early stages of dispersion); dispersion (atmospheric advection, diffusion, and deposition, with emphasis on dense/cold gases); combustion (flammable clouds and mists, covering flash fires, deflagration, and transition to detonation, mostly in unconfined or partly confined situations); blast formation, propagation, and interaction with structures; catastrophic fires (pool fires, torches, and fireballs; highly reactive substances); runaway reactions; features of more general interest; toxic substances, excluding toxicology; and dust explosions (phenomenology and protective measures).
NASA Astrophysics Data System (ADS)
Sridhara, Basavapatna Sitaramaiah
In an internal combustion engine, the engine is the noise source and the exhaust pipe is the main transmitter of noise. Mufflers are often used to reduce engine noise level in the exhaust pipe. To optimize a muffler design, a series of experiments could be conducted using various mufflers installed in the exhaust pipe. For each configuration, the radiated sound pressure could be measured. However, this is not a very efficient method. A second approach would be to develop a scheme involving only a few measurements which can predict the radiated sound pressure at a specified distance from the open end of the exhaust pipe. In this work, the engine exhaust system was modelled as a lumped source-muffler-termination system. An expression for the predicted sound pressure level was derived in terms of the source and termination impedances, and the muffler geometry. The pressure source and monopole radiation models were used for the source and the open end of the exhaust pipe. The four pole parameters were used to relate the acoustic properties at two different cross sections of the muffler and the pipe. The developed formulation was verified through a series of experiments. Two loudspeakers and a reciprocating type vacuum pump were used as sound sources during the tests. The source impedance was measured using the direct, two-load and four-load methods. A simple expansion chamber and a side-branch resonator were used as mufflers. Sound pressure level measurements for the prediction scheme were made for several source-muffler and source-straight pipe combinations. The predicted and measured sound pressure levels were compared for all cases considered. In all cases, correlation of the experimental results and those predicted by the developed expressions was good. Predicted and measured values of the insertion loss of the mufflers were compared. The agreement between the two was good. Also, an error analysis of the four-load method was done.
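As a concrete example of the plane-wave machinery involved, the classical transmission-loss result for a simple expansion chamber, consistent with a four-pole analysis of the chamber (though not the full source-muffler-termination prediction developed in this work), can be sketched as follows.

import math

def expansion_chamber_tl(m, k, L):
    """Transmission loss (dB) of a simple expansion chamber of length L,
    area ratio m = S_chamber / S_pipe, and wavenumber k (classical
    plane-wave result)."""
    return 10.0 * math.log10(1.0 + 0.25 * (m - 1.0 / m) ** 2
                             * math.sin(k * L) ** 2)

# Example: area ratio 9, 0.3 m chamber, 500 Hz tone at c = 343 m/s
k = 2 * math.pi * 500 / 343
print(f"TL = {expansion_chamber_tl(9.0, k, 0.3):.1f} dB")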
NuSTAR observations of M31: globular cluster candidates found to be Z sources
NASA Astrophysics Data System (ADS)
Maccarone, Thomas J.; Yukita, Mihoko; Hornschemeier, Ann E.; Lehmer, Bret; Antoniou, Vallia; Ptak, Andrew; Wik, Daniel R.; Zezas, Andreas; Boyd, Patricia T.; Kennea, Jamie A.; Page, Kim; Eracleous, Michael; Williams, Benjamin F.; NuSTAR mission Team
2016-01-01
We present the results of Swift + NuSTAR observations of four bright globular cluster sources in M31. Three of these had previously been suggested to be black holes on the basis of their spectra. We show that all are well fit by models indicating that the sources are Z sources. We also discuss why the long-term light curves of these objects indicate that they are more likely to be neutron stars, and discuss the discrepancy between the empirical understanding of persistent sources and theoretical predictions.
Prediction of Down-Gradient Impacts of DNAPL Source Depletion Using Tracer Techniques
NASA Astrophysics Data System (ADS)
Basu, N. B.; Fure, A. D.; Jawitz, J. W.
2006-12-01
Four simplified DNAPL source depletion models that have been discussed in the literature recently are evaluated for the prediction of long-term effects of source depletion under natural gradient flow. These models are simple in form (a power function equation is an example) but are shown here to serve as mathematical analogs to complex multiphase flow and transport simulators. One of the source depletion models, the equilibrium streamtube model, is shown to be relatively easily parameterized using non-reactive and reactive tracers. Non-reactive tracers are used to characterize the aquifer heterogeneity while reactive tracers are used to describe the mean DNAPL mass and its distribution. This information is then used in a Lagrangian framework to predict source remediation performance. In a Lagrangian approach the source zone is conceptualized as a collection of non-interacting streamtubes with hydrodynamic and DNAPL heterogeneity represented by the variation of the travel time and DNAPL saturation among the streamtubes. The travel time statistics are estimated from the non-reactive tracer data while the DNAPL distribution statistics are estimated from the reactive tracer data. The combined statistics are used to define an analytical solution for contaminant dissolution under natural gradient flow. The tracer prediction technique compared favorably with results from a multiphase flow and transport simulator UTCHEM in domains with different hydrodynamic heterogeneity (variance of the log conductivity field = 0.2, 1 and 3).
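A minimal sketch of the power-function analog mentioned above, integrating source mass depletion under steady flow; the parameter values are illustrative assumptions, not calibrated values from the study.

import numpy as np

def source_depletion(M0, C0, Q, gamma, t_end, dt=1.0):
    """Integrate the power-function source model
        C(t) = C0 * (M(t)/M0)**gamma,   dM/dt = -Q * C(t),
    where Q is the water flux through the source zone and gamma reflects
    the joint DNAPL/travel-time heterogeneity (all values illustrative;
    units are arbitrary but consistent)."""
    n = int(t_end / dt)
    M, t, out = M0, 0.0, []
    for _ in range(n):
        C = C0 * (max(M, 0.0) / M0) ** gamma
        out.append((t, M, C))
        M -= Q * C * dt                # mass removed by dissolution
        t += dt
    return np.array(out)

hist = source_depletion(M0=100.0, C0=0.05, Q=10.0, gamma=1.5, t_end=500.0)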
NASA Astrophysics Data System (ADS)
Perdigão, R. A. P.
2017-12-01
Predictability assessments are traditionally made on a case-by-case basis, often by running the particular model of interest with randomly perturbed initial/boundary conditions and parameters, producing computationally expensive ensembles. These approaches provide a lumped statistical view of uncertainty evolution, without eliciting the fundamental processes and interactions at play in the uncertainty dynamics. In order to address these limitations, we introduce a systematic dynamical framework for predictability assessment and forecasting, analytically deriving governing equations of predictability in terms of the fundamental architecture of dynamical systems, independent of any particular problem under consideration. The framework further relates multiple uncertainty sources along with their coevolutionary interplay, enabling a comprehensive and explicit treatment of uncertainty dynamics over time without requiring the actual model to be run. In doing so, computational resources are freed and a quick and effective a priori systematic dynamic evaluation is made of predictability evolution and its challenges, including aspects of the model architecture and intervening variables that may require optimization ahead of initiating any model runs. It further brings out universal dynamic features in the error dynamics that are elusive to any case-specific treatment, ultimately shedding fundamental light on the challenging issue of predictability. The formulated approach, framed with broad mathematical-physics generality in mind, is then implemented in dynamic models of nonlinear geophysical systems with various degrees of complexity, in order to evaluate their limitations and provide informed assistance on how to optimize their design and improve their predictability in fundamental dynamical terms.
Emergent Constraints for Cloud Feedbacks and Climate Sensitivity
Klein, Stephen A.; Hall, Alex
2015-10-26
Emergent constraints are physically explainable empirical relationships between characteristics of the current climate and long-term climate prediction that emerge in collections of climate model simulations. With the prospect of constraining long-term climate prediction, scientists have recently uncovered several emergent constraints related to long-term cloud feedbacks. We review these proposed emergent constraints, many of which involve the behavior of low-level clouds, and discuss criteria to assess their credibility. With further research, some of the cases we review may eventually become confirmed emergent constraints, provided they are accompanied by credible physical explanations. Because confirmed emergent constraints identify a source of model error that projects onto climate predictions, they deserve extra attention from those developing climate models and climate observations. While a systematic bias cannot be ruled out, it is noteworthy that the promising emergent constraints suggest larger cloud feedback and hence climate sensitivity.
Development of surrogate models for the prediction of the flow around an aircraft propeller
NASA Astrophysics Data System (ADS)
Salpigidou, Christina; Misirlis, Dimitris; Vlahostergios, Zinon; Yakinthos, Kyros
2018-05-01
In the present work, the derivation of two surrogate models (SMs) for the flow around a propeller for small aircraft is presented. Both methodologies use derived functions based on computations with the detailed propeller geometry. The computations were performed using the k-ω shear stress transport model for turbulence. In the SMs, the propeller was modelled in a computational domain of disk-like geometry, where source terms were introduced in the momentum equations. In the first SM, the source terms were polynomial functions of swirl and thrust, mainly related to the propeller radius. In the second SM, regression analysis was used to correlate the source terms with the velocity distribution through the propeller. The proposed SMs achieved faster convergence than the detailed model while also providing results closer to the available operational data. The regression-based model was the most accurate and required less computational time for convergence.
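A minimal sketch of the actuator-disk idea underlying such surrogates, spreading a known thrust and torque over the cells of the disk region as momentum sources; the uniform-loading shape functions are placeholders, not the polynomial or regression fits derived in the paper.

import numpy as np

def disk_source_terms(r, cell_vol, thrust, torque, r_hub, r_tip, thickness):
    """Axial and tangential momentum sources (per unit volume) for the
    cells of a disk-like region standing in for the propeller.
    r, cell_vol : per-cell radius and volume arrays; uniform disk loading
    is assumed for illustration."""
    area = np.pi * (r_tip**2 - r_hub**2)
    s_axial = np.full_like(r, thrust / (area * thickness))   # N/m^3
    s_swirl = torque / (area * thickness * r)                # torque via r lever arm
    # np.sum(s_axial * cell_vol) recovers the total thrust when the
    # cells exactly tile the disk volume
    return s_axial, s_swirl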
A controlled variation scheme for convection treatment in pressure-based algorithm
NASA Technical Reports Server (NTRS)
Shyy, Wei; Thakur, Siddharth; Tucker, Kevin
1993-01-01
Convection effects and source terms are two primary sources of difficulty in computing the turbulent reacting flows typically encountered in propulsion devices. The present work intends to elucidate the individual as well as the collective roles of convection and source terms in the fluid flow equations, and to devise appropriate treatments and implementations to improve our current capability of predicting such flows. A controlled variation scheme (CVS) has been under development in the context of a pressure-based algorithm; it adaptively regulates the amount of numerical diffusivity, relative to a central difference scheme, according to the variation in the local flow field. Both the basic concepts and a pragmatic assessment will be presented to highlight the status of this work.
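The CVS details are developed in the report itself; the sketch below shows only the generic idea of blending central and first-order upwind face fluxes with a locally computed weight, using a placeholder variation sensor rather than the actual CVS formulation.

import numpy as np

def hybrid_flux(u, a, dx):
    """Face fluxes for u_t + a u_x = 0 blending central and upwind
    differencing; phi in [0, 1] adds numerical diffusion only where the
    local solution variation is large (placeholder sensor)."""
    f = a * u
    f_central = 0.5 * (f[:-1] + f[1:])
    f_upwind = f[:-1] if a > 0 else f[1:]
    denom = np.abs(u[:-1]) + np.abs(u[1:]) + 1e-12
    phi = np.minimum(1.0, np.abs(u[1:] - u[:-1]) / denom)   # variation sensor
    return (1.0 - phi) * f_central + phi * f_upwind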
Post audit of a numerical prediction of wellfield drawdown in a semiconfined aquifer system
Stewart, M.; Langevin, C.
1999-01-01
A numerical ground water flow model was created in 1978 and revised in 1981 to predict the drawdown effects of a proposed municipal wellfield permitted to withdraw 30 million gallons per day (mgd; 1.1 x 10^5 m^3/day) of water from the semiconfined Floridan Aquifer system. The predictions are based on the assumption that water levels in the semiconfined Floridan Aquifer reach a long-term, steady-state condition within a few days of initiation of pumping. Using this assumption, a 75 day simulation without water table recharge, pumping at the maximum permitted rates, was considered to represent a worst-case condition and the greatest drawdowns that could be experienced during wellfield operation. This method of predicting wellfield effects was accepted by the permitting agency. For this post audit, observed drawdowns were derived by taking the difference between pre-pumping and post-pumping potentiometric surface levels. Comparison of predicted and observed drawdowns suggests that actual drawdown over a 12 year period exceeds predicted drawdown by a factor of two or more. Analysis of the source of error in the 1981 predictions suggests that the values used for transmissivity, storativity, specific yield, and leakance are reasonable at the wellfield scale. Simulation using actual 1980-1992 pumping rates improves the agreement between predicted and observed drawdowns. The principal source of error is the assumption that water levels in a semiconfined aquifer achieve a steady-state condition after a few days or weeks of pumping. Simulations using a version of the 1981 model modified to include recharge and evapotranspiration suggest that it can take hundreds of days or several years for water levels in the linked Surficial and Floridan Aquifers to reach an apparent steady-state condition, and that slow declines in levels continue for years after the initiation of pumping. While the 1981 'impact' model can be used for reasonably predicting short-term, wellfield-scale effects of pumping, using a 75 day long simulation without recharge to predict the long-term behavior of the wellfield was an inappropriate application, resulting in significant underprediction of wellfield effects.
Observed ground-motion variabilities and implication for source properties
NASA Astrophysics Data System (ADS)
Cotton, F.; Bora, S. S.; Bindi, D.; Specht, S.; Drouet, S.; Derras, B.; Pina-Valdes, J.
2016-12-01
One of the key challenges of seismology is to calibrate and analyse the physical factors that control earthquake and ground-motion variabilities. Within the framework of empirical ground-motion prediction equation (GMPE) development, ground-motion residuals (differences between recorded ground motions and the values predicted by a GMPE) are computed. The exponential growth of seismological near-field records and modern regression algorithms make it possible to decompose these residuals into between-event and within-event components. The between-event term quantifies all the residual effects of the source (e.g. stress-drops) which are not accounted for by magnitude, the only source parameter of the model. Between-event residuals provide a new and rather robust way to analyse the physical factors that control earthquake source properties and associated variabilities. We will first show the correlation between classical stress-drops and between-event residuals. We will also explain why between-event residuals may be a more robust way (compared to classical stress-drop analysis) to analyse earthquake source properties. We will then calibrate between-event variabilities using recent high-quality global accelerometric datasets (NGA-West2, RESORCE) and datasets from recent earthquake sequences (L'Aquila, Iquique, Kumamoto). The obtained between-event variabilities will be used to evaluate the variability of earthquake stress-drops, but also the variability of source properties which cannot be explained by classical Brune stress-drop variations. We will finally use the between-event residual analysis to discuss regional variations of source properties, differences between aftershocks and mainshocks, and potential magnitude dependencies of source characteristics.
Frequent long-distance plant colonization in the changing Arctic.
Alsos, Inger Greve; Eidesen, Pernille Bronken; Ehrich, Dorothee; Skrede, Inger; Westergaard, Kristine; Jacobsen, Gro Hilde; Landvik, Jon Y; Taberlet, Pierre; Brochmann, Christian
2007-06-15
The ability of species to track their ecological niche after climate change is a major source of uncertainty in predicting their future distribution. By analyzing DNA fingerprinting (amplified fragment-length polymorphism) of nine plant species, we show that long-distance colonization of a remote arctic archipelago, Svalbard, has occurred repeatedly and from several source regions. Propagules are likely carried by wind and drifting sea ice. The genetic effect of restricted colonization was strongly correlated with the temperature requirements of the species, indicating that establishment limits distribution more than dispersal. Thus, it may be appropriate to assume unlimited dispersal when predicting long-term range shifts in the Arctic.
NASA Astrophysics Data System (ADS)
Labahn, Jeffrey William; Devaud, Cecile
2017-05-01
A Reynolds-Averaged Navier-Stokes (RANS) simulation of the semi-industrial International Flame Research Foundation (IFRF) furnace is performed using a non-adiabatic Conditional Source-term Estimation (CSE) formulation. This represents the first time that a CSE formulation, which accounts for the effect of radiation on the conditional reaction rates, has been applied to a large scale semi-industrial furnace. The objective of the current study is to assess the capabilities of CSE to accurately reproduce the velocity field, temperature, species concentration and nitrogen oxides (NOx) emission for the IFRF furnace. The flow field is solved using the standard k-ε turbulence model and detailed chemistry is included. NOx emissions are calculated using two different methods. Predicted velocity profiles are in good agreement with the experimental data. The predicted peak temperature occurs closer to the centreline, as compared to the experimental observations, suggesting that the mixing between the fuel jet and vitiated air jet may be overestimated. Good agreement between the species concentrations, including NOx, and the experimental data is observed near the burner exit. Farther downstream, the centreline oxygen concentration is found to be underpredicted. Predicted NOx concentrations are in good agreement with experimental data when calculated using the method of Peters and Weber. The current study indicates that RANS-CSE can accurately predict the main characteristics seen in a semi-industrial IFRF furnace.
NASA Astrophysics Data System (ADS)
Saheer, Sahana; Pathak, Amey; Mathew, Roxy; Ghosh, Subimal
2016-04-01
Simulation of the Indian Summer Monsoon (ISM), with its seasonal and subseasonal characteristics, is highly crucial for predictions and projections towards sustainable agricultural planning and water resources management. The Climate Forecast System version 2 (CFSv2), the state-of-the-art coupled climate model developed by the National Centers for Environmental Prediction (NCEP), is evaluated here for its simulations of the ISM. Even though CFSv2 is a fully coupled ocean-atmosphere-land model with advanced physics, increased resolution, and refined initialization, its ISM simulations, predictions, and projections, in terms of seasonal mean and variability, are not satisfactory. Much work has been done to verify the CFSv2 forecasts in terms of the seasonal mean and its variability, active and break spells, and El Nino Southern Oscillation (ENSO)-monsoon interactions. Underestimation of JJAS precipitation over the Indian land mass is one of the major drawbacks of CFSv2. The ISM draws the moisture required to maintain the precipitation from different oceanic and land sources. In this work, we find the fraction of moisture supplied by different sources in the CFSv2 simulations and compare the findings with observed fractions. We also investigate the possible variations in the moisture contributions from these different sources. We suspect that the deviation in the relative moisture contribution from different sources to various sinks over the monsoon region has resulted in the observed dry bias. We also find that over the Arabian Sea region, which is the key moisture source of the ISM, there is a premature build-up of specific humidity during the month of May and a decline during the later months of JJAS. This is another reason for the underestimation of JJAS mean precipitation.
Modeling of Turbulence Generated Noise in Jets
NASA Technical Reports Server (NTRS)
Khavaran, Abbas; Bridges, James
2004-01-01
A numerically calculated Green's function is used to predict the jet noise spectrum and its far-field directivity. A linearized form of Lilley's equation governs the non-causal Green's function of interest, with the non-linear terms on the right hand side identified as the source. In this paper, contributions from the so-called self- and shear-noise source terms will be discussed. A Reynolds-averaged Navier-Stokes solution yields the required mean flow as well as time- and length scales of a noise-generating turbulent eddy. A non-compact source, with exponential temporal and spatial functions, is used to describe the turbulence velocity correlation tensors. It is shown that while an exact non-causal Green's function accurately predicts the observed shift in the location of the spectrum peak with angle as well as the angularity of sound at moderate Mach numbers, at high subsonic and supersonic acoustic Mach numbers the polar directivity of radiated sound is not entirely captured by this Green's function. Results presented for Mach 0.5 and 0.9 isothermal jets, as well as a Mach 0.8 hot jet, conclude that near the peak radiation angle a different source/Green's function convolution integral may be required in order to capture the peak observed directivity of jet noise.
The NASA Seasonal-to-Interannual Prediction Project (NSIPP). [Annual Report for 2000
NASA Technical Reports Server (NTRS)
Rienecker, Michele; Suarez, Max; Adamec, David; Koster, Randal; Schubert, Siegfried; Hansen, James; Koblinsky, Chester (Technical Monitor)
2001-01-01
The goal of the project is to develop an assimilation and forecast system based on a coupled atmosphere-ocean-land-surface-sea-ice model capable of using a combination of satellite and in situ data sources to improve the prediction of ENSO and other major S-I signals and their global teleconnections. The objectives of this annual report are to: (1) demonstrate the utility of satellite data, especially surface height, surface winds, air-sea fluxes, and soil moisture, in a coupled model prediction system; and (2) aid in the design of the observing system for short-term climate prediction by conducting OSSEs and predictability studies.
On the numerical treatment of nonlinear source terms in reaction-convection equations
NASA Technical Reports Server (NTRS)
Lafon, A.; Yee, H. C.
1992-01-01
The objectives of this paper are to investigate how various numerical treatments of the nonlinear source term in a model reaction-convection equation can affect the stability of steady-state numerical solutions, and to show under what conditions the conventional linearized analysis breaks down. The underlying goal is to provide part of the basic building blocks toward the ultimate goal of constructing suitable numerical schemes for hypersonic reacting flows, combustion, and certain turbulence models in compressible Navier-Stokes computations. It can be shown that nonlinear analysis uncovers many of the nonlinear phenomena which linearized analysis is not capable of predicting in a model reaction-convection equation.
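As a concrete illustration of the kind of treatment being compared, here is a minimal sketch of one upwind step for a model equation u_t + a u_x = S(u), contrasting an explicit with a point-implicit (linearized) source update; the model source and coefficients are assumptions, not those of the paper.

import numpy as np

def step(u, a, dt, dx, k, implicit_source=True):
    """One step of u_t + a u_x = S(u), S(u) = -k u (u - 1), a > 0,
    first-order upwind convection on a periodic domain. The source is
    handled either explicitly or point-implicitly via the linearization
    du = dt * S / (1 - dt * dS/du)."""
    un = u - a * dt / dx * (u - np.roll(u, 1))   # upwind convection
    S = -k * un * (un - 1.0)
    if implicit_source:
        dS = -k * (2.0 * un - 1.0)               # dS/du at the new state
        return un + dt * S / (1.0 - dt * dS)     # point-implicit update
    return un + dt * S                           # explicit update

For stiff rate constants k, the explicit update constrains the time step or drifts to spurious steady states, which is precisely the behavior such an analysis is designed to expose.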
An extension of the Lighthill theory of jet noise to encompass refraction and shielding
NASA Technical Reports Server (NTRS)
Ribner, Herbert S.
1995-01-01
A formalism for jet noise prediction is derived that includes the refractive 'cone of silence' and other effects; outside the cone it approximates the simple Lighthill format. A key step is deferral of the simplifying assumption of uniform density in the dominant 'source' term. The result is conversion to a convected wave equation retaining the basic Lighthill source term. The main effect is to amend the Lighthill solution to allow for refraction by mean flow gradients, achieved via a frequency-dependent directional factor. A general formula for the power spectral density emitted from unit volume is developed as the Lighthill-based value multiplied by a squared 'normalized' Green's function (the directional factor), referred to a stationary point source. The convective motion of the sources, with its powerful, also directional, amplifying effect, is already accounted for in the Lighthill format: wave convection and source convection are decoupled. The normalized Green's function appears to be near unity outside the refraction-dominated 'cone of silence'; this validates our long-term practice of using Lighthill-based approaches outside the cone, with extension inside via the Green's function. The function is obtained either experimentally (injected 'point' source) or numerically (computational aeroacoustics). Approximation by unity seems adequate except near the cone and except when there are shrouding jets: in that case the difference from unity quantifies the shielding effect. Further extension yields dipole and monopole source terms (cf. Morfey, Mani, and others) when the mean flow possesses density gradients (e.g., hot jets).
On Theoretical Broadband Shock-Associated Noise Near-Field Cross-Spectra
NASA Technical Reports Server (NTRS)
Miller, Steven A. E.
2015-01-01
The cross-spectral acoustic analogy is used to predict auto-spectra and cross-spectra of broadband shock-associated noise in the near-field and far-field from a range of heated and unheated supersonic off-design jets. A single equivalent source model is proposed for the near-field, mid-field, and far-field terms, that contains flow-field statistics of the shock wave shear layer interactions. Flow-field statistics are modeled based upon experimental observation and computational fluid dynamics solutions. An axisymmetric assumption is used to reduce the model to a closed-form equation involving a double summation over the equivalent source at each shock wave shear layer interaction. Predictions are compared with a wide variety of measurements at numerous jet Mach numbers and temperature ratios from multiple facilities. Auto-spectral predictions of broadband shock-associated noise in the near-field and far-field capture trends observed in measurement and other prediction theories. Predictions of spatial coherence of broadband shock-associated noise accurately capture the peak coherent intensity, frequency, and spectral width.
Analysis and Synthesis of Tonal Aircraft Noise Sources
NASA Technical Reports Server (NTRS)
Allen, Matthew P.; Rizzi, Stephen A.; Burdisso, Ricardo; Okcu, Selen
2012-01-01
Fixed and rotary wing aircraft operations can have a significant impact on communities in proximity to airports. Simulation of predicted aircraft flyover noise, paired with listening tests, is useful to noise reduction efforts since it allows direct annoyance evaluation of aircraft or operations currently in the design phase. This paper describes efforts to improve the realism of synthesized source noise by including short term fluctuations, specifically for inlet-radiated tones resulting from the fan stage of turbomachinery. It details analysis performed on an existing set of recorded turbofan data to isolate inlet-radiated tonal fan noise, then extract and model short term tonal fluctuations using the analytic signal. Methodologies for synthesizing time-variant tonal and broadband turbofan noise sources using measured fluctuations are also described. Finally, subjective listening test results are discussed which indicate that time-variant synthesized source noise is perceived to be very similar to recordings.
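The analytic-signal analysis mentioned above can be sketched with scipy's Hilbert transform; the function below extracts short-term amplitude and frequency fluctuations of an isolated tone, assuming the record has already been band-pass filtered around the tone (names are hypothetical).

import numpy as np
from scipy.signal import hilbert

def tone_fluctuations(x, fs, f_tone):
    """Short-term amplitude and frequency fluctuations of a tone via the
    analytic signal. x: band-limited time record, fs: sample rate (Hz),
    f_tone: nominal tone frequency (Hz)."""
    z = hilbert(x)                            # analytic signal
    env = np.abs(z)                           # instantaneous amplitude
    phase = np.unwrap(np.angle(z))
    inst_freq = np.diff(phase) * fs / (2 * np.pi)
    dfreq = inst_freq - f_tone                # deviation from nominal tone
    return env, dfreq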
Wasserkampf, A; Silva, M N; Santos, I C; Carraça, E V; Meis, J J M; Kremers, S P J; Teixeira, P J
2014-12-01
This study analyzed psychosocial predictors of the Theory of Planned Behavior (TPB) and Self-Determination Theory (SDT) and evaluated their associations with short- and long-term moderate plus vigorous physical activity (MVPA) and lifestyle physical activity (PA) outcomes in women who underwent a weight-management program. 221 participants (age 37.6 ± 7.02 years) completed a 12-month SDT-based lifestyle intervention and were followed-up for 24 months. Multiple linear regression analyses tested associations between psychosocial variables and self-reported short- and long-term PA outcomes. Regression analyses showed that control constructs of both theories were significant determinants of short- and long-term MVPA, whereas affective and self-determination variables were strong predictors of short- and long-term lifestyle PA. Regarding short-term prediction models, TPB constructs were stronger in predicting MVPA, whereas SDT was more effective in predicting lifestyle PA. For long-term models, both forms of PA were better predicted by SDT in comparison to TPB. These results highlight the importance of comparing health behavior theories to identify the mechanisms involved in the behavior change process. Control and competence constructs are crucial during early adoption of structured PA behaviors, whereas affective and intrinsic sources of motivation are more involved in incidental types of PA, particularly in relation to behavioral maintenance.
ESPC Coupled Global Prediction System
2014-09-30
The effort incorporates ... active and cloud-nucleating aerosols into NAVGEM for use in long-term simulations and forecasts, and for use in the full coupled system for ESPC applications. For sea salt, the approach follows NAAPS and uses a source that depends on ocean surface winds and relative humidity.
Bauer, Timothy J
2013-06-15
The Jack Rabbit Test Program was sponsored in April and May 2010 by the Department of Homeland Security Transportation Security Administration to generate source data for large releases of chlorine and ammonia from transport tanks. In addition to a variety of data types measured at the release location, concentration versus time data was measured using sensors at distances up to 500 m from the tank. Release data were used to create accurate representations of the vapor flux versus time for the ten releases. This study was conducted to determine the importance of source terms and meteorological conditions in predicting downwind concentrations and the accuracy that can be obtained in those predictions. Each source representation was entered into an atmospheric transport and dispersion model using simplifying assumptions regarding the source characterization and meteorological conditions, and statistics for cloud duration and concentration at the sensor locations were calculated. A detailed characterization for one of the chlorine releases predicted 37% of concentration values within a factor of two, but cannot be considered representative of all the trials. Predictions of toxic effects at 200 m are relevant to incidents involving 1-ton chlorine tanks commonly used in parts of the United States and internationally.
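The "within a factor of two" statistic quoted above is commonly computed as FAC2; a minimal sketch for paired prediction/observation arrays is shown below.

import numpy as np

def fac2(predicted, observed):
    """Fraction of predictions within a factor of two of observations
    (paired, positive values only)."""
    p, o = np.asarray(predicted, float), np.asarray(observed, float)
    ok = (p > 0) & (o > 0)
    ratio = p[ok] / o[ok]
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))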
On the gravitational potential and field anomalies due to thin mass layers
NASA Technical Reports Server (NTRS)
Ockendon, J. R.; Turcotte, D. L.
1977-01-01
The gravitational potential and field anomalies for thin mass layers are derived using the technique of matched asymptotic expansions. An inner solution is obtained using an expansion in powers of the thickness and it is shown that the outer solution is given by a surface distribution of mass sources and dipoles. Coefficients are evaluated by matching the inner expansion of the outer solution with the outer expansion of the inner solution. The leading term in the inner expansion for the normal gravitational field gives the Bouguer formula. The leading term in the expansion for the gravitational potential gives an expression for the perturbation to the geoid. The predictions given by this term are compared with measurements by satellite altimetry. The second-order terms in the expansion for the gravitational field are required to predict the gravity anomaly at a continental margin. The results are compared with observations.
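The leading-term result referred to is the classical infinite-slab (Bouguer) attraction: for a thin layer of thickness h and density \rho, with \sigma = \rho h the surface mass density and G the gravitational constant,

\Delta g \;=\; 2\pi G \rho h \;=\; 2\pi G \sigma .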
Coarse Grid Modeling of Turbine Film Cooling Flows Using Volumetric Source Terms
NASA Technical Reports Server (NTRS)
Heidmann, James D.; Hunter, Scott D.
2001-01-01
The recent trend in numerical modeling of turbine film cooling flows has been toward higher fidelity grids and more complex geometries. This trend has been enabled by the rapid increase in computing power available to researchers. However, the turbine design community requires fast turnaround time in its design computations, rendering these comprehensive simulations ineffective in the design cycle. The present study describes a methodology for implementing a volumetric source term distribution in a coarse grid calculation that can model the small-scale and three-dimensional effects present in turbine film cooling flows. This model could be implemented in turbine design codes or in multistage turbomachinery codes such as APNASA, where the computational grid size may be larger than the film hole size. Detailed computations of a single row of 35 deg round holes on a flat plate have been obtained for blowing ratios of 0.5, 0.8, and 1.0, and density ratios of 1.0 and 2.0 using a multiblock grid system to resolve the flows on both sides of the plate as well as inside the hole itself. These detailed flow fields were spatially averaged to generate a field of volumetric source terms for each conservative flow variable. Solutions were also obtained using three coarse grids having streamwise and spanwise grid spacings of 3d, 1d, and d/3. These coarse grid solutions used the integrated hole exit mass, momentum, energy, and turbulence quantities from the detailed solutions as volumetric source terms. It is shown that a uniform source term addition over a distance from the wall on the order of the hole diameter is able to predict adiabatic film effectiveness better than a near-wall source term model, while strictly enforcing correct values of integrated boundary layer quantities.
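A minimal sketch of the volumetric source-term idea described, spreading the integrated hole-exit fluxes uniformly over a column of near-wall cells whose height is on the order of the hole diameter; variable names and the uniform distribution are illustrative, not the paper's exact model.

import numpy as np

def coolant_sources(mdot, u_exit, T_exit, cp, cell_volumes):
    """Volumetric sources for the conservative variables from one film
    cooling hole, distributed over a column of coarse-grid cells.
    mdot: hole mass flow (kg/s), u_exit: exit velocity (m/s),
    T_exit: exit temperature (K), cp: specific heat (J/kg K)."""
    V = np.sum(cell_volumes)
    s_mass = mdot / V                     # kg/(m^3 s)
    s_mom = mdot * u_exit / V             # N/m^3 (streamwise component)
    s_energy = mdot * cp * T_exit / V     # W/m^3 (enthalpy flux, schematic)
    return s_mass, s_mom, s_energy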
Rating curve estimation of nutrient loads in Iowa rivers
Stenback, G.A.; Crumpton, W.G.; Schilling, K.E.; Helmers, M.J.
2011-01-01
Accurate estimation of nutrient loads in rivers and streams is critical for many applications, including determination of the sources of nutrient loads in watersheds, evaluation of long-term trends in loads, and estimation of loading to downstream waterbodies. Since in many cases nutrient concentrations are measured on a weekly or monthly frequency, there is a need to estimate concentration and loads during periods when no data are available. The objectives of this study were to: (i) document the performance of a multiple regression model to predict loads of nitrate and total phosphorus (TP) in Iowa rivers and streams; (ii) determine whether there is any systematic bias in the load prediction estimates for nitrate and TP; and (iii) evaluate streamflow and concentration factors that could affect the load prediction efficiency. A commonly cited rating curve regression is utilized to estimate riverine nitrate and TP loads for rivers in Iowa with watershed areas ranging from 17.4 to over 34,600 km2. Forty-nine nitrate and 44 TP datasets, each comprising 5-22 years of approximately weekly to monthly concentrations, were examined. Three nitrate datasets had sample collection frequencies averaging about three samples per week. The accuracy and precision of annual and long-term riverine load prediction was assessed by direct comparison of rating curve load predictions with observed daily loads. Significant positive bias of annual and long-term nitrate loads was detected. Long-term rating curve nitrate load predictions exceeded observed loads by 25% or more at 33% of the 49 measurement sites. No bias was found for TP load prediction, although 15% of the 44 cases either underestimated or overestimated observed long-term loads by more than 25%. The rating curve was found to poorly characterize nitrate and phosphorus variation in some rivers.
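A minimal sketch of the rating-curve approach discussed (log-log regression of concentration on discharge, with a smearing correction for retransformation bias); the seasonal and time terms of the full cited model are omitted here, and names are hypothetical.

import numpy as np

def rating_curve_load(q_daily, q_sampled, c_sampled):
    """Fit ln(C) = b0 + b1 ln(Q) to sampled pairs, apply Duan's (1983)
    smearing factor for the bias introduced by retransforming from log
    space, then sum daily loads. Units assumed consistent; the constant
    converting C*Q to a load is omitted."""
    X = np.column_stack([np.ones_like(q_sampled), np.log(q_sampled)])
    b, *_ = np.linalg.lstsq(X, np.log(c_sampled), rcond=None)
    resid = np.log(c_sampled) - X @ b
    smear = np.mean(np.exp(resid))                 # smearing correction
    c_hat = np.exp(b[0] + b[1] * np.log(q_daily)) * smear
    return np.sum(c_hat * q_daily)

Without the smearing factor, exponentiating the fitted log-concentrations systematically underestimates the mean; biases of the kind reported above motivate checking the fit against observed daily loads wherever high-frequency data exist.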
2013-09-30
... Hadley Circulation (HC) in terms of the meridional streamfunction. The interannual variability of the Atlantic HC in boreal summer was examined using the EOF ... large-scale circulations in the NAVGEM model and the source of predictability for the seasonal variation of the Atlantic TCs. We have been working ... [Figure: EOF analysis of the meridional circulation (JAS); (a) the leading mode (M1); (b) variance explained by the first 10 modes.]
2002-03-01
... source term. Several publications provided a thorough accounting of the accident, including "Chernobyl Record" [Mould] and the NRC technical report "Report on the Accident at the Chernobyl Nuclear Power Station" [NUREG-1250]. The most comprehensive study of transport models to predict the ... from the Chernobyl Accident: The ATMES Report" [Klug, et al.]. The Atmospheric Transport Model Evaluation Study (ATMES) report used data
Progress Toward Improving Jet Noise Predictions in Hot Jets
NASA Technical Reports Server (NTRS)
Khavaran, Abbas; Kenzakowski, Donald C.
2007-01-01
An acoustic analogy methodology for improving noise predictions in hot round jets is presented. Past approaches have often neglected the impact of temperature fluctuations on the predicted sound spectral density, which could be significant for heated jets, and this has yielded noticeable acoustic under-predictions in such cases. The governing acoustic equations adopted here are a set of linearized, inhomogeneous Euler equations. These equations are combined into a single third order linear wave operator when the base flow is considered as a locally parallel mean flow. The remaining second-order fluctuations are regarded as the equivalent sources of sound and are modeled. It is shown that the hot jet effect may be introduced primarily through a fluctuating velocity/enthalpy term. Modeling this additional source requires specialized inputs from a RANS-based flowfield simulation. The information is supplied using an extension to a baseline two equation turbulence model that predicts total enthalpy variance in addition to the standard parameters. Preliminary application of this model to a series of unheated and heated subsonic jets shows significant improvement in the acoustic predictions at the 90 degree observer angle.
NASA Technical Reports Server (NTRS)
Herrero, F. A.; Mayr, H. G.; Harris, I.; Varosi, F.; Meriwether, J. W., Jr.
1984-01-01
Theoretical predictions of thermospheric gravity wave oscillations are compared with observed neutral temperatures and velocities. The data were taken in February 1983 using a Fabry-Perot interferometer located on Greenland, close to impulse heat sources in the auroral oval. The phenomenon was modeled in terms of linearized equations of motion of the atmosphere on a slowly rotating sphere. Legendre polynomials were used as eigenfunctions and the transfer function amplitude surface was characterized by maxima in the wavenumber-frequency plane. Good agreement between predicted and observed velocities and temperatures was attained in the 250-300 km altitude range. The amplitude of the vertical velocity, however, was not accurately predicted, nor was the temperature variability. The vertical velocity did exhibit maxima and minima in response to corresponding temperature changes.
Johnson, W B; Lall, R; Bongar, B; Nordlund, M D
1999-01-01
Objective personality assessment instruments offer a comparatively underutilized source of clinical data in attempts to evaluate and predict risk for suicide. In contrast to focal suicide risk measures, global personality inventories may be useful in identification of long-standing styles that predispose persons to eventual suicidal behavior. This article reviews the empirical literature regarding the efficacy of established personality inventories in predicting suicidality. The authors offer several recommendations for future research with these measures and conclude that such objective personality instruments offer only marginal utility as sources of clinical information in comprehensive suicide risk evaluations. Personality inventories may offer greatest utility in long-term assessment of suicide risk.
Recent Progress of Solar Weather Forecasting at Naoc
NASA Astrophysics Data System (ADS)
He, Han; Wang, Huaning; Du, Zhanle; Zhang, Liyun; Huang, Xin; Yan, Yan; Fan, Yuliang; Zhu, Xiaoshuai; Guo, Xiaobo; Dai, Xinghua
The history of solar weather forecasting services at the National Astronomical Observatories, Chinese Academy of Sciences (NAOC) can be traced back to the 1960s. Nowadays, NAOC is the headquarters of the Regional Warning Center of China (RWC-China), one of the members of the International Space Environment Service (ISES). NAOC is responsible for exchanging data, information, and space weather forecasts of RWC-China with other RWCs. The solar weather forecasting services at NAOC cover short-term prediction (within two or three days), medium-term prediction (within several weeks), and long-term prediction (on the time scale of the solar cycle) of solar activities. Most of the short-term prediction research effort is concentrated on solar eruptive phenomena, such as flares, coronal mass ejections (CMEs), and solar proton events, which are the key driving sources of strong space weather disturbances. Based on the high-quality observational data of the latest space-based and ground-based solar telescopes, and with the help of artificial intelligence techniques, new numerical models with quantitative analyses and physical consideration are being developed for the prediction of solar eruptive events. 3-D computer simulation technology is being introduced into the operational solar weather service platform to visualize the monitoring of solar activities, the running of the prediction models, and the presentation of the forecasting results. A new generation of operational solar weather monitoring and forecasting system is expected to be constructed in the near future at NAOC.
Identification of Spurious Signals from Permeable Ffowcs Williams and Hawkings Surfaces
NASA Technical Reports Server (NTRS)
Lopes, Leonard V.; Boyd, David D., Jr.; Nark, Douglas M.; Wiedemann, Karl E.
2017-01-01
Integral forms of the permeable surface formulation of the Ffowcs Williams and Hawkings (FW-H) equation often require an input in the form of a near field Computational Fluid Dynamics (CFD) solution to predict noise in the near or far field from various types of geometries. The FW-H equation involves three source terms; two surface terms (monopole and dipole) and a volume term (quadrupole). Many solutions to the FW-H equation, such as several of Farassat's formulations, neglect the quadrupole term. Neglecting the quadrupole term in permeable surface formulations leads to inaccuracies called spurious signals. This paper explores the concept of spurious signals, explains how they are generated by specifying the acoustic and hydrodynamic surface properties individually, and provides methods to determine their presence, regardless of whether a correction algorithm is employed. A potential approach based on the equivalent sources method (ESM) and the sensitivity of Formulation 1A (Formulation S1A) is also discussed for the removal of spurious signals.
NASA Astrophysics Data System (ADS)
Sarmah, Ratan; Tiwari, Shubham
2018-03-01
An analytical solution is developed for predicting two-dimensional transient seepage into a ditch drainage network receiving water from a non-uniform, steady ponding field at the soil surface, under the influence of a source/sink in the flow domain. The flow domain is assumed to be saturated, homogeneous and anisotropic, with finite extents in the horizontal and vertical directions. The drains are assumed to be vertical and to penetrate to the impervious layer. The water levels in the drains are unequal and invariant with time. The flow field is further assumed to be under the continuous influence of a time- and space-dependent arbitrary source/sink term. The correctness of the proposed model is checked against a numerical code developed for the purpose and against an existing analytical solution for a simplified case. The study highlights the significance of source/sink influence on subsurface flow. When source and sink terms are imposed in the flow domain, the pathlines and travel times of water particles deviate from their original positions; the side and top discharges to the drains are also strongly influenced by the source/sink terms. The travel time and pathlines of water particles additionally depend on the height of water in the ditches and on the location of the source/sink activation area.
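As a rough illustration of the kind of numerical check described above, the sketch below solves a steady, saturated, anisotropic 2-D seepage problem with an imposed sink patch by finite differences. It is not the paper's analytical solution or its verification code; the grid, conductivities, boundary heads and sink strength are all hypothetical.

```python
import numpy as np

# Minimal sketch: steady, saturated, anisotropic 2-D seepage with a source/sink
# term, Kx*d2h/dx2 + Kz*d2h/dz2 + S = 0, solved by Gauss-Seidel iteration.
# Grid, conductivities, boundary heads and the sink patch are illustrative.
nx, nz, dx, dz = 41, 21, 0.5, 0.5            # grid and spacing (m)
Kx, Kz = 2.0, 0.5                            # anisotropic conductivities (m/day)
h = np.zeros((nz, nx))
h[:, 0], h[:, -1] = 3.0, 1.5                 # unequal water levels in the two drains
h[0, :] = np.linspace(4.0, 3.0, nx)          # non-uniform ponding field at the surface
S = np.zeros((nz, nx))
S[10:13, 15:20] = -0.05                      # hypothetical sink patch, 1/day

ax, az = Kx / dx**2, Kz / dz**2
for _ in range(5000):                        # iterate toward steady state
    h_old = h.copy()
    for k in range(1, nz - 1):
        for i in range(1, nx - 1):
            h[k, i] = (ax * (h[k, i - 1] + h[k, i + 1]) +
                       az * (h[k - 1, i] + h[k + 1, i]) + S[k, i]) / (2 * (ax + az))
    h[-1, 1:-1] = h[-2, 1:-1]                # no-flow condition at the impervious base
    if np.max(np.abs(h - h_old)) < 1e-6:
        break
```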
Final safety analysis report for the Galileo Mission: Volume 2: Book 1, Accident model document
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The Accident Model Document (AMD) is the second volume of the three volume Final Safety Analysis Report (FSAR) for the Galileo outer planetary space science mission. This mission employs Radioisotope Thermoelectric Generators (RTGs) as the prime electrical power sources for the spacecraft. Galileo will be launched into Earth orbit using the Space Shuttle and will use the Inertial Upper Stage (IUS) booster to place the spacecraft into an Earth escape trajectory. The RTGs employ silicon-germanium thermoelectric couples to produce electricity from the heat energy that results from the decay of the radioisotope fuel, Plutonium-238, used in the RTG heat source. The heat source configuration used in the RTGs is termed General Purpose Heat Source (GPHS), and the RTGs are designated GPHS-RTGs. The use of radioactive material in these missions necessitates evaluations of the radiological risks that may be encountered by launch complex personnel as well as by the Earth's general population resulting from postulated malfunctions or failures occurring in the mission operations. The FSAR presents the results of a rigorous safety assessment, including substantial analyses and testing, of the launch and deployment of the RTGs for the Galileo mission. This AMD is a summary of the potential accident and failure sequences which might result in fuel release, the analysis and testing methods employed, and the predicted source terms. Each source term consists of a quantity of fuel released, the location of release and the physical characteristics of the fuel released. Each source term has an associated probability of occurrence. 27 figs., 11 tabs.
Faulkner, William B; Shaw, Bryan W; Grosch, Tom
2008-10-01
As of December 2006, the American Meteorological Society/U.S. Environmental Protection Agency (EPA) Regulatory Model with Plume Rise Model Enhancements (AERMOD-PRIME; hereafter AERMOD) replaced the Industrial Source Complex Short Term Version 3 (ISCST3) as the EPA-preferred regulatory model. The change from ISCST3 to AERMOD will affect Prevention of Significant Deterioration (PSD) increment consumption as well as permit compliance in states where regulatory agencies limit property line concentrations using modeling analysis. Because of differences in model formulation and the treatment of terrain features, one cannot predict a priori whether ISCST3 or AERMOD will predict higher or lower pollutant concentrations downwind of a source. The objectives of this paper were to determine the sensitivity of AERMOD to various inputs and compare the highest downwind concentrations from a ground-level area source (GLAS) predicted by AERMOD to those predicted by ISCST3. Concentrations predicted using ISCST3 were sensitive to changes in wind speed, temperature, solar radiation (as it affects stability class), and mixing heights below 160 m. Surface roughness also affected downwind concentrations predicted by ISCST3. AERMOD was sensitive to changes in albedo, surface roughness, wind speed, temperature, and cloud cover. Bowen ratio did not affect the results from AERMOD. These results demonstrate AERMOD's sensitivity to small changes in wind speed and surface roughness. When AERMOD is used to determine property line concentrations, small changes in these variables may affect the distance within which concentration limits are exceeded by several hundred meters.
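To illustrate the kind of input-sensitivity screening reported above, the sketch below perturbs each input of a toy ground-level Gaussian plume surrogate one at a time and records the relative change in the predicted downwind concentration. The surrogate function and its coefficients are illustrative stand-ins; this is not AERMOD, ISCST3, or their meteorological preprocessing.

```python
import numpy as np

# One-at-a-time (OAT) sensitivity screening of a toy ground-level Gaussian
# plume surrogate. Q, u, z0 and the dispersion-curve coefficients are
# hypothetical placeholders, not regulatory-model values.
def conc(Q, u, z0, x=200.0):
    """Ground-level centreline concentration at downwind distance x (m)."""
    sigma_y = 0.22 * x * (1 + 0.0001 * x) ** -0.5   # rural-type lateral spread
    sigma_z = 0.20 * x * (z0 / 0.1) ** 0.2          # crude surface-roughness effect
    return Q / (np.pi * u * sigma_y * sigma_z)

base = {"Q": 5.0, "u": 3.0, "z0": 0.1}              # g/s, m/s, m
c0 = conc(**base)
for name in base:
    for frac in (-0.1, 0.1):                        # perturb each input by +/-10%
        p = dict(base)
        p[name] *= 1 + frac
        print(f"{name} {frac:+.0%}: concentration change {conc(**p) / c0 - 1:+.1%}")
```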
Optimization Of Engine Heat Transfer Mechanisms For Ground Combat Vehicle Signature Models
NASA Astrophysics Data System (ADS)
Gonda, T.; Rogers, P.; Gerhart, G.; Reynolds, W. R.
1988-08-01
A thermodynamic model for predicting the behavior of selected internal thermal sources of an M2 Bradley Infantry Fighting Vehicle is described. The modeling methodology is expressed in terms of first-principles heat transfer equations, along with a brief history of TACOM's experience with thermal signature modeling techniques. The dynamic operation of the internal thermal sources is presented along with limited test data and an examination of their effect on the vehicle signature.
NASA Astrophysics Data System (ADS)
Kim, R.-S.; Cho, K.-S.; Moon, Y.-J.; Dryer, M.; Lee, J.; Yi, Y.; Kim, K.-H.; Wang, H.; Park, Y.-D.; Kim, Yong Ha
2010-12-01
In this study, we discuss the general behavior of geomagnetic storm strength associated with observed parameters of coronal mass ejections (CMEs), namely the speed (V) and earthward direction (D) of the CME as well as the longitude (L) and magnetic field orientation (M) of the overlying potential fields of the CME source region, and we develop an empirical model to predict geomagnetic storm occurrence and strength (gauged by the Dst index) in terms of these CME parameters. For this we select 66 halo or partial halo CMEs associated with M-class and X-class solar flares, with clearly identifiable source regions, from 1997 to 2003. After examining how each of these CME parameters correlates with the geoeffectiveness of the CMEs, we find the following: (1) parameter D correlates best with storm strength Dst; (2) the majority of geoeffective CMEs originated from around solar longitude 15°W, and CMEs originating away from this longitude tend to produce weaker storms; (3) correlations between Dst and the CME parameters improve if CMEs are separated into two groups depending on whether their magnetic fields are oriented southward or northward in their source regions. Based on these observations, we present two empirical expressions for Dst in terms of L, V, and D, one for each group of CMEs. This is a new attempt to predict not only the occurrence of geomagnetic storms, but also the storm strength (Dst), solely from CME parameters.
NASA Technical Reports Server (NTRS)
Schubert, Siegfried; Dole, Randall; vandenDool, Huug; Suarez, Max; Waliser, Duane
2002-01-01
This workshop, held in April 2002, brought together various Earth Sciences experts to focus on the subseasonal prediction problem. While substantial advances have occurred over the last few decades in both weather and seasonal prediction, progress in improving predictions on these intermediate time scales (time scales ranging from about two weeks to two months) has been slow. The goals of the workshop were to get an assessment of the "state of the art" in predictive skill on these time scales, to determine the potential sources of "untapped" predictive skill, and to make recommendations for a course of action that will accelerate progress in this area. One of the key conclusions of the workshop was that there is compelling evidence for predictability at forecast lead times substantially longer than two weeks. Tropical diabatic heating and soil wetness were singled out as particularly important processes affecting predictability on these time scales. Predictability was also linked to various low-frequency atmospheric "phenomena" such as the annular modes in high latitudes (including their connections to the stratosphere), the Pacific/North American (PNA) pattern, and the Madden Julian Oscillation (MJO). The latter, in particular, was highlighted as a key source of untapped predictability in the tropics and subtropics, including the Asian and Australian monsoon regions.
Introduction to Agricultural Marketing.
ERIC Educational Resources Information Center
Futrell, Gene; And Others
This marketing unit focuses on the importance of forecasting in order for a farm family to develop marketing plans. It describes sources of information and includes a glossary of marketing terms and exercises using both fundamental and technical methods to predict prices in order to improve forecasting ability. The unit is organized in the…
DEVELOPMENT AND VALIDATION OF AN AIR-TO-BEEF FOOD CHAIN MODEL FOR DIOXIN-LIKE COMPOUNDS
A model for predicting concentrations of dioxin-like compounds in beef is developed and tested. The key premise of the model is that concentrations of these compounds in air are the source term, or starting point, for estimating beef concentrations. Vapor-phase concentrations t...
NASA Astrophysics Data System (ADS)
Wang, Chaoen; Chang, Lung-Hai; Chang, Mei-Hsia; Chen, Ling-Jhen; Chung, Fu-Tsai; Lin, Ming-Chyuan; Liu, Zong-Kai; Lo, Chih-Hung; Tsai, Chi-Lin; Yeh, Meng-Shu; Yu, Tsung-Chi
2017-11-01
Excitation of multipacting, enhanced by gas condensation on cold surfaces of the high-power input coupler in an SRF module, poses the greatest challenge for reliable SRF operation under high average RF power. This could prevent a light source SRF module from being operated at the desired high beam current. Off-line long-term reliability tests have been conducted on the newly constructed 500-MHz KEKB-type SRF modules at an accelerating RF voltage of 1.6 MV to enable prediction of their operational reliability in the 3-GeV Taiwan Photon Source (TPS), since prediction from production performance alone, via the conventional horizontal test, is presently unreliable. As expected, operational difficulties resulting from multipacting, enhanced by gas condensation, were identified in the course of the long-term reliability test. Our present hypothesis is that gas condensation can be slowed by keeping the vacuum pressure at the power coupler close to that reached just after its cool-down to liquid helium temperatures. This is achievable by reducing the out-gassing rate of the power coupler through comprehensive warm aging. The feasibility and effectiveness of this approach were experimentally verified in a second long-term reliability test. Our success opens the possibility of operating the SRF module free of multipacting problems and points to a new direction for improving the operational performance of next-generation SRF modules in light sources with high beam currents.
NASA Technical Reports Server (NTRS)
MCKissick, Burnell T. (Technical Monitor); Plassman, Gerald E.; Mall, Gerald H.; Quagliano, John R.
2005-01-01
Linear multivariable regression models for predicting day and night Eddy Dissipation Rate (EDR) from available meteorological data sources are defined and validated. Model definition is based on a combination of 1997-2000 Dallas/Fort Worth (DFW) data sources: EDR from Aircraft Vortex Spacing System (AVOSS) deployment data, and regression variables primarily from corresponding Automated Surface Observation System (ASOS) data. Model validation is accomplished through EDR predictions on a similar combination of 1994-1995 Memphis (MEM) AVOSS and ASOS data. Model forms include an intercept plus a single term of fixed optimal power for each of the regression variables: 30-minute forward-averaged mean and variance of near-surface wind speed and temperature, variance of wind direction, and a discrete cloud cover metric. Distinct day and night models, regressing on EDR and the natural log of EDR respectively, yield the best performance and avoid model discontinuity across day/night data boundaries.
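A minimal sketch of the stated model form, an intercept plus one fixed-power term per predictor with separate day (EDR) and night (ln EDR) regressions, is given below. The powers, coefficients and synthetic data are illustrative; the paper's optimal values are not reproduced.

```python
import numpy as np

# Sketch: intercept plus one fixed-power term per ASOS-derived predictor,
# fit separately for day (EDR) and night (ln EDR). Powers and data are
# hypothetical stand-ins.
rng = np.random.default_rng(0)
n = 500
X = rng.uniform(0.5, 10.0, size=(n, 3))   # e.g., mean wind speed, wind-speed
                                          # variance, temperature variance
powers = np.array([0.5, 1.0, 2.0])        # hypothetical fixed optimal powers
edr = 0.02 + 0.01 * (X ** powers) @ np.array([1.0, 0.5, 0.2]) \
      + rng.normal(0, 0.002, n)           # synthetic EDR observations

A = np.column_stack([np.ones(n), X ** powers])
coef_day, *_ = np.linalg.lstsq(A, edr, rcond=None)            # day model: EDR
coef_night, *_ = np.linalg.lstsq(A, np.log(edr), rcond=None)  # night model: ln(EDR)
```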
Long Term Mean Local Time of the Ascending Node Prediction
NASA Technical Reports Server (NTRS)
McKinley, David P.
2007-01-01
Significant error has been observed in the long-term prediction of the Mean Local Time of the Ascending Node (MLTAN) of the Aqua spacecraft. This error of approximately 90 seconds over a two-year prediction complicates the planning and timing of maneuvers for all members of the Earth Observing System Afternoon Constellation, which use Aqua's MLTAN as the reference for their inclination maneuvers. The source of the prediction error was determined to be the lack of a solid Earth tide model in the operational force models. The Love model of the solid Earth tide potential was used to derive analytic corrections to the inclination and right ascension of the ascending node of Aqua's Sun-synchronous orbit. In addition, the resonance between the Sun and the orbit plane of the Sun-synchronous orbit was found to be the primary driver of this error. The analytic corrections have been added to the operational force models for the Aqua spacecraft, reducing the two-year 90-second error to less than 7 seconds.
Technical note: A linear model for predicting δ13Cprotein.
Pestle, William J; Hubbe, Mark; Smith, Erin K; Stevenson, Joseph M
2015-08-01
Development of a model for the prediction of δ13Cprotein from δ13Ccollagen and Δ13Cap-co. Model-generated values could, in turn, serve as "consumer" inputs for multisource mixture modeling of paleodiet. Linear regression analysis of previously published controlled diet data facilitated the development of a mathematical model for predicting δ13Cprotein (and an experimentally generated error term) from isotopic data routinely generated during the analysis of osseous remains (δ13Cco and Δ13Cap-co). Regression analysis resulted in a two-term linear model, δ13Cprotein (‰) = (0.78 × δ13Cco) − (0.58 × Δ13Cap-co) − 4.7, possessing a high correlation coefficient of r = 0.93 (r² = 0.86, P < 0.01) and an experimentally generated error term of ±1.9‰ for any predicted individual value of δ13Cprotein. This model was tested using isotopic data from Formative Period individuals from northern Chile's Atacama Desert. The model presented here appears to hold significant potential for the prediction of the carbon isotope signature of dietary protein using only such data as are routinely generated in the course of stable isotope analysis of human osseous remains. These predicted values are ideal for use in multisource mixture modeling of dietary protein source contribution. © 2015 Wiley Periodicals, Inc.
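The reported two-term model is simple enough to implement directly; a small helper based on the published equation and its ±1.9‰ error term might look as follows (the example inputs are hypothetical):

```python
def predict_d13c_protein(d13c_collagen, delta13c_ap_co):
    """Predicted d13C of dietary protein (per mil) from bone collagen d13C and
    the apatite-collagen spacing, using the two-term model reported above:
    d13Cprotein = 0.78 * d13Cco - 0.58 * D13Cap-co - 4.7  (+/- 1.9 per mil)."""
    estimate = 0.78 * d13c_collagen - 0.58 * delta13c_ap_co - 4.7
    return estimate, (estimate - 1.9, estimate + 1.9)

# Example with hypothetical inputs (per mil):
est, (lo, hi) = predict_d13c_protein(-19.0, 5.0)
print(f"d13Cprotein = {est:.1f} ({lo:.1f} to {hi:.1f})")
```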
A Cross-Lingual Similarity Measure for Detecting Biomedical Term Translations
Bollegala, Danushka; Kontonatsios, Georgios; Ananiadou, Sophia
2015-01-01
Bilingual dictionaries for technical terms such as biomedical terms are an important resource for machine translation systems as well as for humans who would like to understand a concept described in a foreign language. Often a biomedical term is first proposed in English and is later manually translated into other languages. Despite the fact that there are large monolingual lexicons of biomedical terms, only a fraction of those term lexicons are translated into other languages. Manually compiling large-scale bilingual dictionaries for technical domains is a challenging task because it is difficult to find a sufficiently large number of bilingual experts. We propose a cross-lingual similarity measure for detecting the most similar translation candidates for a biomedical term specified in one language (source) from another language (target). Specifically, a biomedical term in a language is represented using two types of features: (a) intrinsic features that consist of character n-grams extracted from the term under consideration, and (b) extrinsic features that consist of unigrams and bigrams extracted from the contextual windows surrounding the term under consideration. We propose a cross-lingual similarity measure using each of those feature types. First, to reduce the dimensionality of the feature space in each language, we propose prototype vector projection (PVP), a non-negative lower-dimensional vector projection method. Second, we propose a method to learn a mapping between the feature spaces in the source and target languages using partial least squares regression (PLSR). The proposed method requires only a small number of training instances to learn a cross-lingual similarity measure. The proposed PVP method outperforms popular dimensionality reduction methods such as the singular value decomposition (SVD) and non-negative matrix factorization (NMF) in a nearest neighbor prediction task. Moreover, our experimental results covering several language pairs such as English–French, English–Spanish, English–Greek, and English–Japanese show that the proposed method outperforms several other feature projection methods in biomedical term translation prediction tasks. PMID:26030738
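A simplified sketch of the PLSR mapping step is shown below, using scikit-learn's PLSRegression: a seed dictionary of paired source/target term vectors trains the mapping, and candidate translations are then ranked by cosine similarity in the target space. The PVP stage is omitted, and all feature matrices are random stand-ins for real n-gram and context features.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Learn a cross-lingual mapping from a small seed dictionary, then rank
# candidate target-language terms for a new source-language term.
rng = np.random.default_rng(1)
Xs = rng.random((200, 50))          # seed terms: source-language feature vectors
Xt = rng.random((200, 40))          # their translations: target-language vectors

pls = PLSRegression(n_components=10).fit(Xs, Xt)

query = rng.random((1, 50))         # a new source-language biomedical term
mapped = pls.predict(query)         # its projection into the target feature space
cands = rng.random((500, 40))       # candidate target-language terms
sims = (cands @ mapped.T).ravel() / (
    np.linalg.norm(cands, axis=1) * np.linalg.norm(mapped) + 1e-12)
best = np.argsort(-sims)[:5]        # top-5 translation candidates by cosine similarity
```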
Generation of GHS Scores from TEST and online sources ...
Alternatives assessment frameworks such as DfE (Design for the Environment) evaluate chemical alternatives in terms of human health effects, ecotoxicity, and fate. T.E.S.T. (Toxicity Estimation Software Tool) can be utilized to evaluate human health in terms of acute oral rat toxicity, developmental toxicity, endocrine activity, and mutagenicity. It can be used to evaluate ecotoxicity (in terms of acute fathead minnow toxicity) and fate (in terms of bioconcentration factor). It can also be used to estimate a variety of key physicochemical properties such as melting point, boiling point, vapor pressure, and water solubility. A web-based version of T.E.S.T. is currently being developed to allow predictions to be made from other web tools. Online data sources such as NCCT's Chemistry Dashboard, REACH dossiers, or ChemHat.org can also be utilized to obtain GHS (Globally Harmonized System) scores for comparing alternatives. The purpose of this talk is to show how GHS data can be obtained from literature sources and from T.E.S.T. These data will be used to compare chemical alternatives in the alternatives assessment dashboard (a 2018 CSS product).
Chaos in the sunspot cycle - Analysis and prediction
NASA Technical Reports Server (NTRS)
Mundt, Michael D.; Maguire, W. Bruce, II; Chase, Robert R. P.
1991-01-01
The variability of solar activity over long time scales, given semiquantitatively by measurements of sunspot numbers, is examined as a nonlinear dynamical system. First, the data set used and the techniques employed to reduce noise and capture the long-term dynamics inherent in the data are discussed. Subsequently, an attractor is reconstructed from the data set using the method of time delays. The reconstructed attractor is then used to determine both the dimension of the underlying system and the largest Lyapunov exponent, which together indicate that the sunspot cycle is chaotic and low dimensional. In addition, recent techniques that exploit chaotic dynamics to provide accurate short-term predictions are applied in order to improve upon current forecasting methods and to place theoretical limits on the extent of predictability. The results are compared to chaotic solar-dynamo models as a possible physically motivated source of this chaotic behavior.
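A minimal sketch of the method of time delays used for such attractor reconstruction is given below; the delay, embedding dimension, and the stand-in signal are illustrative (in practice they would be chosen from the data, e.g., via autocorrelation and dimension-saturation tests).

```python
import numpy as np

# Method of time delays: embed a scalar series x(t) as vectors
# [x(t), x(t+tau), ..., x(t+(m-1)tau)] to reconstruct the attractor.
def delay_embed(x, m=4, tau=6):
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

t = np.arange(3000)
x = np.sin(0.11 * t) + 0.5 * np.sin(0.31 * t)   # stand-in for smoothed sunspot data
Y = delay_embed(x)                               # points on the reconstructed attractor
```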
NASA Astrophysics Data System (ADS)
Taha, M. P. M.; Drew, G. H.; Tamer, A.; Hewings, G.; Jordinson, G. M.; Longhurst, P. J.; Pollard, S. J. T.
We present bioaerosol source term concentrations from passive and active composting sources and compare emissions from green waste compost aged 1, 2, 4, 6, 8, 12 and 16 weeks. Results reveal that the age of the compost has little effect on the bioaerosol concentrations emitted from passive windrow sources. However, emissions from turning compost during the early stages may be higher than during the later stages of the composting process. The bioaerosol emissions from passive sources were in the range of 10³-10⁴ cfu m⁻³, with releases from active sources typically 1-log higher. We propose improvements to current risk assessment methodologies by examining emission rates and the differences between two air dispersion models for the prediction of downwind bioaerosol concentrations at off-site points of exposure. The SCREEN3 model provides a more precautionary estimate of the source depletion curves of bioaerosol emissions than ADMS 3.3. The results from both models predict that bioaerosol concentrations decrease to below typical background concentrations before 250 m, the distance at which the regulator in England and Wales may require a risk assessment to be completed.
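For a rough sense of the decay-with-distance argument, the sketch below evaluates a generic ground-level Gaussian plume (not SCREEN3 or ADMS 3.3) and finds the downwind distance at which the predicted concentration first falls below an assumed background; the source strength, wind speed, dispersion coefficients and background level are all hypothetical.

```python
import numpy as np

# Generic ground-level Gaussian plume, used to locate the distance at which
# the centreline concentration drops below an assumed background value.
def downwind_conc(x, Q=2e6, u=3.0):              # Q in cfu/s, u in m/s (hypothetical)
    sy, sz = 0.08 * x, 0.06 * x                  # neutral-stability-type spread
    return Q / (np.pi * u * sy * sz)             # cfu per cubic metre on the plume axis

background = 1e3                                  # assumed background, cfu/m3
x = np.arange(10.0, 500.0, 10.0)
below = x[downwind_conc(x) < background]
print(f"drops below background at ~{below[0]:.0f} m" if below.size else "not reached")
```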
A Multigroup Method for the Calculation of Neutron Fluence with a Source Term
NASA Technical Reports Server (NTRS)
Heinbockel, J. H.; Clowdsley, M. S.
1998-01-01
Current research under this grant involves the development of a multigroup method for the calculation of low-energy evaporation neutron fluences associated with the Boltzmann equation. This research will enable one to predict radiation exposure under a variety of circumstances. Knowledge of radiation exposure in a free-space environment is a necessity for space travel, high-altitude space planes and satellite design, because certain radiation environments can damage biological and electronic systems, with both short-term and long-term effects. With a priori knowledge of the environment, one can use prediction techniques to estimate radiation damage to such systems. Appropriate shielding can then be designed to protect both humans and electronic systems exposed to a known radiation environment. This is the goal of the current research efforts involving the multigroup method and the Green's function approach.
Yo, Chia-Hung; Lee, Si-Huei; Chang, Shy-Shin; Lee, Matthew Chien-Hung; Lee, Chien-Chang
2014-01-01
Objectives: We performed a systematic review and meta-analysis of studies on high-sensitivity C-reactive protein (hs-CRP) assays to see whether these tests are predictive of atrial fibrillation (AF) recurrence after cardioversion. Design: Systematic review and meta-analysis. Data sources: PubMed, EMBASE and Cochrane databases, as well as a hand search of the reference lists in the retrieved articles, from inception to December 2013. Study eligibility criteria: This review selected observational studies in which measurements of serum CRP were used to predict AF recurrence. An hs-CRP assay was defined as any CRP test capable of measuring serum CRP to below 0.6 mg/dL. Primary and secondary outcome measures: We summarised test performance characteristics with the use of forest plots, hierarchical summary receiver operating characteristic curves and bivariate random-effects models. Meta-regression analysis was performed to explore the source of heterogeneity. Results: We included nine qualifying studies comprising a total of 347 patients with AF recurrence and 335 controls. A CRP level higher than the optimal cut-off point was an independent predictor of AF recurrence after cardioversion (summary adjusted OR: 3.33; 95% CI 2.10 to 5.28). The estimated pooled sensitivity and specificity for hs-CRP were 71.0% (95% CI 63% to 78%) and 72.0% (61% to 81%), respectively. Most studies used a CRP cut-off point of 1.9 mg/L to predict long-term AF recurrence (77% sensitivity, 65% specificity), and 3 mg/L to predict short-term AF recurrence (73% sensitivity, 71% specificity). Conclusions: hs-CRP assays are moderately accurate in predicting AF recurrence after successful cardioversion. PMID:24556243
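As a simplified stand-in for the bivariate random-effects model used in the paper, the sketch below pools hypothetical study odds ratios with a univariate DerSimonian-Laird random-effects calculation; the study values are invented for illustration only.

```python
import numpy as np

# Univariate DerSimonian-Laird random-effects pooling of study log-odds-ratios.
# Study ORs and 95% confidence limits below are hypothetical.
or_, lo, hi = (np.array(v) for v in (
    [3.1, 2.4, 5.0, 2.9], [1.5, 1.1, 2.0, 1.2], [6.4, 5.2, 12.5, 7.0]))
y = np.log(or_)
se = (np.log(hi) - np.log(lo)) / (2 * 1.96)        # SE recovered from CI width
w = 1 / se**2
q = np.sum(w * (y - np.sum(w * y) / w.sum())**2)   # Cochran's Q statistic
tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))
w_re = 1 / (se**2 + tau2)                          # random-effects weights
pooled = np.sum(w_re * y) / w_re.sum()
ci = pooled + np.array([-1.96, 1.96]) / np.sqrt(w_re.sum())
print(f"pooled OR {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(ci[0]):.2f} to {np.exp(ci[1]):.2f})")
```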
A Comparison of Combustor-Noise Models
NASA Technical Reports Server (NTRS)
Hultgren, Lennart S.
2012-01-01
The present status of combustor-noise prediction in the NASA Aircraft Noise Prediction Program (ANOPP) [1] for current-generation (N) turbofan engines is summarized. Several semi-empirical models for turbofan combustor noise are discussed, including best methods for near-term updates to ANOPP. An alternate turbine-transmission factor [2] will appear as a user-selectable option in the combustor-noise module GECOR in the next release. The three-spectrum model proposed by Stone et al. [3] for GE turbofan-engine combustor noise is discussed and compared with ANOPP predictions for several relevant cases. Based on the results presented herein and in their report [3], it is recommended that the application of this fully empirical combustor-noise prediction method be limited to situations involving only General Electric turbofan engines. Long-term needs and challenges for the N+1 through N+3 time frame are discussed. Because the impact of other propulsion-noise sources continues to be reduced due to turbofan design trends, advances in noise-mitigation techniques, and expected aircraft configuration changes, the relative importance of core noise is expected to greatly increase in the future. The noise-source structure in the combustor, including the indirect one, and the effects of the propagation path through the engine and exhaust nozzle need to be better understood. In particular, the acoustic consequences of the expected trends toward smaller, highly efficient gas-generator cores and low-emission fuel-flexible combustors need to be fully investigated, since future designs are quite likely to fall outside the parameter space of existing (semi-empirical) prediction tools.
Doos, Lucy; Packer, Claire; Ward, Derek; Simpson, Sue; Stevens, Andrew
2016-01-01
Objectives: Forecasting can support rational decision-making around the introduction and use of emerging health technologies and prevent investment in technologies that have limited long-term potential. However, forecasting methods need to be credible. We performed a systematic search to identify the methods used in forecasting studies to predict future health technologies within a 3-20-year timeframe. Identification and retrospective assessment of such methods potentially offer a route to more reliable prediction. Design: Systematic search of the literature to identify studies reporting on methods of forecasting in healthcare. Participants: No participants were needed for this study. Data sources: The authors searched MEDLINE, EMBASE, PsychINFO and grey literature sources, and included articles published in English that reported their methods and a list of identified technologies. Main outcome measure: Studies reporting methods used to predict future health technologies within a 3-20-year timeframe with an identified list of individual healthcare technologies. Commercially sponsored reviews, long-term futurology studies (with over 20-year timeframes) and speculative editorials were excluded. Results: 15 studies met our inclusion criteria. The majority of studies (13/15) consulted experts, either alone or in combination with other methods such as literature searching. Only 2 studies used more complex forecasting tools such as scenario building. Conclusions: The methodological fundamentals of formal 3-20-year prediction are consistent but vary in detail. Further research needs to be conducted to ascertain whether the predictions made were accurate and whether accuracy varies by the methods used or by the types of technologies identified. PMID:26966060
Preliminary investigation of processes that affect source term identification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wickliff, D.S.; Solomon, D.K.; Farrow, N.D.
Solid Waste Storage Area (SWSA) 5 is known to be a significant source of contaminants, especially tritium (³H), to the White Oak Creek (WOC) watershed. For example, Solomon et al. (1991) estimated the total ³H discharge in Melton Branch (most of which originates in SWSA 5) for the 1988 water year to be 1210 Ci. A critical issue for making decisions concerning remedial actions at SWSA 5 is knowing whether the annual contaminant discharge is increasing or decreasing. Because (1) the magnitude of the annual contaminant discharge is highly correlated with the amount of annual precipitation (Solomon et al., 1991) and (2) a significant lag may exist between the time of peak contaminant release from primary sources (i.e., waste trenches) and the time of peak discharge into streams, short-term stream monitoring by itself is not sufficient for predicting future contaminant discharges. In this study we use ³H to examine the link between contaminant release from primary waste sources and contaminant discharge into streams. By understanding and quantifying subsurface transport processes, realistic predictions of future contaminant discharge, along with an evaluation of the effectiveness of remedial action alternatives, will be possible. The objectives of this study are (1) to characterize the subsurface movement of contaminants (primarily ³H) with an emphasis on the effects of matrix diffusion; (2) to determine the relative strength of primary vs secondary sources; and (3) to establish a methodology capable of determining whether the ³H discharge from SWSA 5 to streams is increasing or decreasing.
Antineutrino analysis for continuous monitoring of nuclear reactors: Sensitivity study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, Christopher; Erickson, Anna
This paper explores the various contributors to uncertainty in predictions of the antineutrino source term, which is used for reactor antineutrino experiments and is proposed as a safeguard mechanism for future reactor installations. The errors introduced during simulation of the reactor burnup cycle by variation in nuclear reaction cross sections, operating power, and other factors are combined with those from experimental and predicted antineutrino yields resulting from fissions, evaluated, and compared. The most significant contributor to uncertainty in the reactor antineutrino source term, when the reactor was modeled in 3D fidelity with assembly-level heterogeneity, was found to be the uncertainty on the antineutrino yields. Using the reactor simulation uncertainty data, the dedicated observation of a rigorously modeled small, fast reactor by a few-ton near-field detector was estimated to offer reduction of the uncertainty on antineutrino yields in the 3.0-6.5 MeV range to a few percent for the primary power-producing fuel isotopes, even with zero prior knowledge of the yields.
Starns, Jeffrey J.; Pazzaglia, Angela M.; Rotello, Caren M.; Hautus, Michael J.; Macmillan, Neil A.
2014-01-01
Source memory zROC slopes change from below 1 to above 1 depending on which source receives the stronger learning. This effect has been attributed to memory processes, either in terms of a threshold source recollection process or changes in the variability of continuous source evidence. We propose two decision mechanisms that can produce the slope effect, and we test them in three experiments. The evidence mixing account assumes that people change how they weight item versus source evidence based on which source is stronger, and the converging criteria account assumes that participants become more willing to make high-confidence source responses for test probes that have higher item strength. Results failed to support the evidence mixing account, in that the slope effect emerged even when item evidence was not informative for the source judgment (that is, in tests that included strong and weak items from both sources). In contrast, results showed strong support for the converging criteria account. This account not only accommodated the unequal-strength slope effect, but also made a prediction for unstudied (new) items that was empirically confirmed: participants made more high-confidence source responses for new items when they were more confident that the item was studied. The converging criteria account has an advantage over accounts based on source recollection or evidence variability, as the latter accounts do not predict the relationship between recognition and source confidence for new items. PMID:23565789
Improved Prediction of Harmful Algal Blooms in Four Major South Korea's Rivers Using Deep Learning Models.
Lee, Sangmok; Lee, Donghyun
2018-06-24
Harmful algal blooms are an annual phenomenon that causes environmental damage, economic losses, and disease outbreaks. A fundamental solution to this problem is still lacking; thus, the best option for counteracting the effects of algal blooms is to improve advance warnings (predictions). However, existing physical prediction models have difficulty setting clear coefficients for the relationships among the factors involved in predicting algal blooms, and many variable data sources are required for the analysis. These limitations carry high time and economic costs. Meanwhile, artificial intelligence and deep learning methods have become increasingly common in scientific research; attempts to apply the long short-term memory (LSTM) model to environmental research problems are increasing because the LSTM model performs well for time-series data prediction. However, few studies have applied deep learning models or LSTM to algal bloom prediction, especially in South Korea, where algal blooms occur annually. Therefore, we employed the LSTM model for algal bloom prediction in four major rivers of South Korea. We conducted short-term (one-week) predictions by applying regression analysis and deep learning techniques to a newly constructed water quality and quantity dataset drawn from 16 dammed pools on the rivers. Three deep learning models (multilayer perceptron, MLP; recurrent neural network, RNN; and long short-term memory, LSTM) were used to predict chlorophyll-a, a recognized proxy for algal activity. The results were compared with those from ordinary least squares (OLS) regression analysis and with actual data, based on the root mean square error (RMSE). The LSTM model showed the highest prediction rate for harmful algal blooms, and all deep learning models outperformed the OLS regression analysis. Our results reveal the potential for predicting algal blooms using LSTM and deep learning.
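A minimal sketch of the LSTM setup described above, written with the Keras API, is shown below; the window length, layer sizes, and the synthetic data are illustrative stand-ins for the 16-pool water quality and quantity dataset.

```python
import numpy as np
import tensorflow as tf

# One-step-ahead (one-week) chlorophyll-a prediction from a short history of
# water-quality/quantity features. All shapes and data are illustrative.
rng = np.random.default_rng(0)
n, window, n_feat = 1000, 8, 6               # samples, weeks of history, features
X = rng.random((n, window, n_feat)).astype("float32")
y = (X[:, -1, 0] * 0.7 + rng.normal(0, 0.05, n)).astype("float32")  # proxy target

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, n_feat)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),                # predicted chlorophyll-a
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
rmse = float(np.sqrt(model.evaluate(X, y, verbose=0)))  # RMSE, as used above
```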
Comparison of Predicted and Measured Attenuation of Turbine Noise from a Static Engine Test
NASA Technical Reports Server (NTRS)
Chien, Eugene W.; Ruiz, Marta; Yu, Jia; Morin, Bruce L.; Cicon, Dennis; Schwieger, Paul S.; Nark, Douglas M.
2007-01-01
Aircraft noise has become an increasing concern for commercial airlines. Worldwide demand for quieter aircraft is increasing, making the prediction of engine noise suppression one of the most important fields of research. The Low-Pressure Turbine (LPT) can be an important noise source during the approach condition for commercial aircraft. The National Aeronautics and Space Administration (NASA), Pratt & Whitney (P&W), and Goodrich Aerostructures (Goodrich) conducted a joint program to validate a method for predicting turbine noise attenuation. The method includes noise-source estimation, acoustic treatment impedance prediction, and in-duct noise propagation analysis. Two noise propagation prediction codes, Eversman Finite Element Method (FEM) code [1] and the CDUCT-LaRC [2] code, were used in this study to compare the predicted and the measured turbine noise attenuation from a static engine test. In this paper, the test setup, test configurations and test results are detailed in Section II. A description of the input parameters, including estimated noise modal content (in terms of acoustic potential), and acoustic treatment impedance values are provided in Section III. The prediction-to-test correlation study results are illustrated and discussed in Section IV and V for the FEM and the CDUCT-LaRC codes, respectively, and a summary of the results is presented in Section VI.
Schiestl-Aalto, Pauliina; Kulmala, Liisa; Mäkinen, Harri; Nikinmaa, Eero; Mäkelä, Annikki
2015-04-01
The control of tree growth by carbon sources or sinks under varying environmental conditions remains unresolved, although it is widely studied. This study investigates the growth of tree components and carbon sink-source dynamics at different temporal scales. We constructed a dynamic growth model, 'carbon allocation sink source interaction' (CASSIA), that calculates the tree-level carbon balance from photosynthesis, respiration, phenology- and temperature-driven potential structural growth of tree organs, and the dynamics of stored nonstructural carbon (NSC), together with their modifying influence on growth. With the model, we tested the hypotheses that sink demand explains the intra-annual growth dynamics of the meristems, and that source supply is further needed to explain year-to-year growth variation. The predicted intra-annual dimensional growth of shoots and needles and the number of cells in the xylogenesis phases corresponded with measurements, whereas NSC hardly limited growth, supporting the first hypothesis. A delayed GPP influence on potential growth was necessary for simulating the yearly growth variation, also indicating at least an indirect source limitation. CASSIA combines seasonal growth and carbon balance dynamics with long-term source dynamics affecting growth, and thus provides a first step toward understanding the complex processes regulating intra- and interannual growth and sink-source dynamics. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
Opportunities of probabilistic flood loss models
NASA Astrophysics Data System (ADS)
Schröter, Kai; Kreibich, Heidi; Lüdtke, Stefan; Vogel, Kristin; Merz, Bruno
2016-04-01
Traditional univariate damage models, such as depth-damage curves, often fail to reproduce the variability of observed flood damage. However, reliable flood damage models are a prerequisite for the practical usefulness of model results. Innovative multivariate probabilistic modelling approaches are promising means to capture and quantify the uncertainty involved and thus to improve the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely Bagging Decision Trees and Bayesian Networks, with traditional stage-damage functions. For model evaluation we use empirical damage data available from computer-aided telephone interviews compiled after the floods of 2002, 2005, 2006 and 2013 in the Elbe and Danube catchments in Germany. We carry out a split-sample test by sub-setting the damage records: one sub-set is used to derive the models, and the remaining records are used to evaluate predictive performance. Further, we stratify the sample by catchment, which allows us to study model performance in a spatial transfer context. Flood damage estimation is carried out at the scale of individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviations (mean bias), precision (mean absolute error), and the sharpness and reliability of the predictions, the latter represented by the proportion of observations that fall within the 5- to 95-quantile predictive interval. The comparison of the univariate stage-damage function with the multivariate model approaches emphasises the importance of quantifying predictive uncertainty. With each explanatory variable, the multivariate models reveal an additional source of uncertainty. However, their predictive performance in terms of bias, precision and reliability is clearly improved in comparison with the univariate stage-damage function. Overall, probabilistic models provide quantitative information about prediction uncertainty, which is crucial for assessing the reliability of model predictions and improves the usefulness of model results.
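A sketch of the bagging-decision-trees approach and the evaluation measures named above (mean bias, mean absolute error, and the hit rate of the 5- to 95-quantile predictive interval) is given below, using scikit-learn on synthetic stand-ins for building-level flood damage records.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

# Bagged regression trees for relative building damage; per-tree predictions
# give an empirical predictive distribution. Data are synthetic stand-ins.
rng = np.random.default_rng(2)
X = rng.random((800, 5))                      # e.g., depth, duration, value, ...
rloss = np.clip(0.5 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 0.1, 800), 0, 1)
train, test = slice(0, 600), slice(600, None)

model = BaggingRegressor(DecisionTreeRegressor(), n_estimators=200,
                         random_state=0).fit(X[train], rloss[train])
per_tree = np.stack([t.predict(X[test]) for t in model.estimators_])
pred = per_tree.mean(axis=0)
q05, q95 = np.quantile(per_tree, [0.05, 0.95], axis=0)

mbe = np.mean(pred - rloss[test])                                 # mean bias
mae = np.mean(np.abs(pred - rloss[test]))                         # precision
hit_rate = np.mean((rloss[test] >= q05) & (rloss[test] <= q95))   # reliability
```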
Minella, Marco; Rogora, Michela; Vione, Davide; Maurino, Valter; Minero, Claudio
2011-08-15
A model-based approach is developed and applied to predict the long-term trends of indirect photochemical processes in the surface layer (5 m water depth) of Lake Maggiore, NW Italy. For this lake, time series of the main parameters of photochemical importance covering almost two decades are available. To assess the relevant photochemical reactions, the modelled steady-state concentrations of important photogenerated transients (•OH, ³CDOM* and CO₃•⁻) were taken into account. A multivariate analysis approach was adopted to obtain an overview of the system, to emphasise relationships among chemical, photochemical and seasonal variables, and to highlight annual and long-term trends. Over the considered time period, because of the decrease in the dissolved organic carbon (DOC) content of the water and the increase in alkalinity, a significant increase is predicted for the steady-state concentrations of the radicals •OH and CO₃•⁻. The photochemical degradation processes that involve the two radical species would therefore be enhanced. Another issue of potential photochemical importance is the winter maxima of nitrate (a photochemical •OH source) and the summer maxima of DOC (an •OH sink and ³CDOM* source) in the lake water under consideration. From the combination of sunlight irradiance and chemical composition data, one predicts that the processes involving •OH and CO₃•⁻ would be most important in spring, while the reactions involving ³CDOM* would be most important in summer. Copyright © 2011 Elsevier B.V. All rights reserved.
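The steady-state reasoning behind such predictions can be illustrated with a one-line balance: the transient concentration is the ratio of its total formation rate to its total scavenging rate. The sketch below applies this to •OH with placeholder rates and constants; none of the values are the calibrated ones used for Lake Maggiore.

```python
# Steady-state balance: [OH]ss = (total OH formation rate) / (total scavenging rate).
# Every rate and constant below is an illustrative placeholder.
r_oh_nitrate = 2.0e-12    # OH photoproduction from nitrate, M/s (hypothetical)
r_oh_cdom = 1.0e-12       # OH photoproduction from CDOM, M/s (hypothetical)
k_doc = 2.5e4             # OH scavenging by DOC, L mgC^-1 s^-1 (hypothetical)
doc = 1.2                 # dissolved organic carbon, mgC/L
k_other = 1.0e4           # lumped carbonate/bicarbonate scavenging, 1/s (hypothetical)

oh_ss = (r_oh_nitrate + r_oh_cdom) / (k_doc * doc + k_other)
print(f"[OH]ss ~ {oh_ss:.2e} M")   # lower DOC -> less scavenging -> higher [OH]ss
```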
Nonparametric Stochastic Model for Uncertainty Quantification of Short-term Wind Speed Forecasts
NASA Astrophysics Data System (ADS)
AL-Shehhi, A. M.; Chaouch, M.; Ouarda, T.
2014-12-01
Wind energy is increasing in importance as a renewable energy source due to its potential role in reducing carbon emissions. It is a safe, clean, and inexhaustible source of energy. The amount of wind energy generated by wind turbines is closely related to the wind speed. Wind speed forecasting plays a vital role in the wind energy sector, in terms of optimal wind turbine operation, wind energy dispatch and scheduling, efficient energy harvesting, etc. It is also considered during the planning, design, and assessment of any proposed wind project. Accurate prediction of wind speed therefore carries particular importance and plays a significant role in the wind industry. Many methods have been proposed in the literature for short-term wind speed forecasting. These methods are usually based on modeling historical fixed time intervals of the wind speed data and using them for future prediction. They mainly include statistical models such as the ARMA and ARIMA models, physical models such as numerical weather prediction, and artificial intelligence techniques such as support vector machines and neural networks. In this paper, we are interested in estimating hourly wind speed measures in the United Arab Emirates (UAE). More precisely, we predict hourly wind speed using nonparametric kernel estimates of the regression and volatility functions of a nonlinear autoregressive model with an ARCH error structure, of a type already discussed in the literature. The unknown nonlinear regression function describes the dependence between the wind speed at time t and its historical values at times t−1, t−2, …, t−d; this function is the key to predicting the hourly wind speed process. The volatility function, i.e., the conditional variance given the past, measures the risk associated with this prediction. Since the regression and volatility functions are unknown, they are estimated using nonparametric kernel methods. In addition to the pointwise hourly wind speed forecasts, a confidence interval is provided to quantify the uncertainty around the forecasts.
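A compact sketch of the nonparametric building block, Nadaraya-Watson kernel estimates of the one-lag regression function and the conditional variance (volatility), is given below with simulated data; the bandwidth, lag order, and data-generating process are illustrative.

```python
import numpy as np

# Nadaraya-Watson kernel estimates of m(x) = E[X_t | X_{t-1} = x] and the
# conditional variance v(x) = E[(X_t - m(x))^2 | X_{t-1} = x], with one lag
# (d = 1) and a Gaussian kernel of fixed bandwidth for simplicity.
def nw(x0, x, y, h=0.5):
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

rng = np.random.default_rng(3)
x = rng.uniform(0, 12, 2000)                       # wind speed at time t-1 (m/s)
y = 0.8 * x + 1.0 + rng.normal(0, 0.3 + 0.05 * x)  # wind speed at time t

grid = np.linspace(0.5, 11.5, 50)
m_hat = nw(grid, x, y)                             # regression (point forecast)
resid2 = (y - nw(x, x, y)) ** 2
v_hat = nw(grid, x, resid2)                        # volatility (forecast risk)
band = 1.96 * np.sqrt(v_hat)                       # rough predictive interval
```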
Vukovic, Jovana; Jones, Benedict C; Feinberg, David R; Debruine, Lisa M; Smith, Finlay G; Welling, Lisa L M; Little, Anthony C
2011-02-01
Several studies have found that women tend to demonstrate stronger preferences for masculine men as short-term partners than as long-term partners, though there is considerable variation among women in the magnitude of this effect. One possible source of this variation is individual differences in the extent to which women perceive masculine men to possess antisocial traits that are less costly in short-term relationships than in long-term relationships. Consistent with this proposal, here we show that the extent to which women report stronger preferences for men with low (i.e., masculine) voice pitch as short-term partners than as long-term partners is associated with the extent to which they attribute physical dominance and low trustworthiness to these masculine voices. Thus, our findings suggest that variation in the extent to which women attribute negative personality characteristics to masculine men predicts individual differences in the magnitude of the effect of relationship context on women's masculinity preferences, highlighting the importance of perceived personality attributions for individual differences in women's judgments of men's vocal attractiveness and, potentially, their mate preferences. ©2010 The British Psychological Society.
NASA Astrophysics Data System (ADS)
Saleh, D.; Domagalski, J. L.; Smith, R. A.
2016-12-01
The SPARROW (SPAtially-Referenced Regression On Watershed Attributes) model, developed by the U.S. Geological Survey, has been used to identify and quantify the sources of nitrogen and phosphorus in watersheds and to predict their fluxes and concentrations at specified locations downstream. Existing SPARROW models use a hybrid statistical approach to describe an annual average ("steady-state") relationship between sources and stream conditions based on long-term water quality monitoring data and spatially referenced explanatory information. Although these annual models are useful for some management purposes, many water quality issues stem from intra- and inter-annual changes in constituent sources, hydrologic forcing, or other environmental conditions, which cause a lag between watershed inputs and stream water quality. We are developing a seasonal, dynamic SPARROW model of the sources, fluxes, and yields of phosphorus for the watershed (approximately 9,700 square kilometers) draining to Upper Klamath Lake, Oregon. The lake is hypereutrophic and various options are being considered for water quality improvement. The model was calibrated with 11 years of water quality data (2000 to 2010) and simulates seasonal loads and yields for a total of 44 seasons. Phosphorus sources to the watershed include animal manure, farm fertilizer, discharges of treated wastewater, and natural sources (soil and streambed sediment). The model predicts that phosphorus delivery to the lake is strongly affected by intra- and inter-annual changes in precipitation and by temporary seasonal storage of phosphorus in the watershed. The model can be used to predict how different management actions for mitigating phosphorus sources might affect phosphorus loading to the lake, as well as the time required for any changes in loading to occur following implementation of the action.
Genomic Selection Improves Heat Tolerance in Dairy Cattle
Garner, J. B.; Douglas, M. L.; Williams, S. R. O; Wales, W. J.; Marett, L. C.; Nguyen, T. T. T.; Reich, C. M.; Hayes, B. J.
2016-01-01
Dairy products are a key source of valuable proteins and fats for many millions of people worldwide. Dairy cattle are highly susceptible to heat-stress-induced declines in milk production, and as the frequency and duration of heat-stress events increase, the long-term security of nutrition from dairy products is threatened. Identification of dairy cattle more tolerant of heat-stress conditions would be an important step towards breeding dairy herds better adapted to future climates. Breeding for heat tolerance could be accelerated with genomic selection, using genome-wide DNA markers that predict tolerance to heat stress. Here we demonstrate the value of genomic predictions for heat tolerance in cohorts of Holstein cows predicted to be heat tolerant and heat susceptible, using controlled-climate chambers simulating a moderate heatwave event. Not only was the heat-challenge-induced decline in milk production smaller in cows genomically predicted to be heat tolerant, but physiological indicators such as rectal and intravaginal temperatures also rose less over the 4-day heat challenge. This demonstrates that genomic selection for heat tolerance in dairy cattle is a step towards securing a valuable source of nutrition and improving animal welfare in a future with predicted increases in heat-stress events. PMID:27682591
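Genomic prediction of this kind is commonly sketched as ridge regression of a phenotype on a SNP marker matrix (an rrBLUP/GBLUP-style stand-in; the paper's actual prediction equations are not reproduced here). A minimal simulated example:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Ridge regression of a heat-tolerance phenotype on genome-wide SNP markers
# coded 0/1/2. All genotypes, effects and the penalty are simulated/hypothetical.
rng = np.random.default_rng(4)
n_animals, n_snps = 400, 5000
M = rng.integers(0, 3, size=(n_animals, n_snps)).astype(float)
true_effects = rng.normal(0, 0.05, n_snps) * (rng.random(n_snps) < 0.02)
tolerance = M @ true_effects + rng.normal(0, 1.0, n_animals)  # phenotype

train = slice(0, 300)
gblup = Ridge(alpha=n_snps * 0.5).fit(M[train], tolerance[train])
gebv = gblup.predict(M[300:])                    # genomic estimated breeding values
acc = np.corrcoef(gebv, tolerance[300:])[0, 1]   # predictive accuracy in validation
```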
Is social projection based on simulation or theory? Why new methods are needed for differentiating
Bazinger, Claudia; Kühberger, Anton
2012-01-01
The literature on social cognition reports many instances of a phenomenon titled ‘social projection’ or ‘egocentric bias’. These terms indicate egocentric predictions, i.e., an over-reliance on the self when predicting the cognition, emotion, or behavior of other people. The classic method to diagnose egocentric prediction is to establish high correlations between our own and other people's cognition, emotion, or behavior. We argue that this method is incorrect because there is a different way to come to a correlation between own and predicted states, namely, through the use of theoretical knowledge. Thus, the use of correlational measures is not sufficient to identify the source of social predictions. Based on the distinction between simulation theory and theory theory, we propose the following alternative methods for inferring prediction strategies: independent vs. juxtaposed predictions, the use of ‘hot’ mental processes, and the use of participants’ self-reports. PMID:23209342
Improved method for predicting protein fold patterns with ensemble classifiers.
Chen, W; Liu, X; Huang, Y; Jiang, Y; Zou, Q; Lin, C
2012-01-27
Protein folding is recognized as a critical problem in the field of biophysics in the 21st century. Predicting protein-folding patterns is challenging due to the complex structure of proteins. In an attempt to solve this problem, we employed ensemble classifiers to improve prediction accuracy. In our experiments, 188-dimensional features were extracted based on the composition and physicochemical properties of proteins, and 20-dimensional features were selected using a coupled position-specific scoring matrix. Compared with traditional prediction methods, these methods were superior in terms of prediction accuracy. The 188-dimensional feature-based method achieved 71.2% accuracy in five cross-validations. The accuracy rose to 77% when we used a 20-dimensional feature vector. These methods were used on recent data, with 54.2% accuracy. Source codes and dataset, together with web server and software tools for prediction, are available at: http://datamining.xmu.edu.cn/main/~cwc/ProteinPredict.html.
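A minimal sketch of an ensemble-classifier pipeline for fold-pattern prediction with 188-dimensional feature vectors is shown below using scikit-learn; the base learners, class count, and synthetic data are illustrative rather than the authors' exact configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Soft-voting ensemble over three base learners; data are random stand-ins
# for 188-D composition/physicochemical feature vectors and fold labels.
rng = np.random.default_rng(5)
X = rng.random((600, 188))                     # 188-D feature vectors
y = rng.integers(0, 27, 600)                   # fold classes (count hypothetical)

ens = VotingClassifier([
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("nb", GaussianNB()),
    ("knn", KNeighborsClassifier(5)),
], voting="soft")
acc = cross_val_score(ens, X, y, cv=5).mean()  # five cross-validations, as above
```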
Predictive onboard flow control for packet switching satellites
NASA Technical Reports Server (NTRS)
Bobinsky, Eric A.
1992-01-01
We outline two alternate approaches to predicting the onset of congestion in a packet switching satellite, and argue that predictive, rather than reactive, flow control is necessary for the efficient operation of such a system. The first method discussed is based on standard statistical techniques, which are used to periodically calculate a probability of near-term congestion from arrival rate statistics. If this probability exceeds a preset threshold, the satellite would transmit a rate-reduction signal to all active ground stations. The second method discussed would utilize a neural network to periodically predict the occurrence of buffer overflow based on input data that would include, in addition to arrival rates, the distributions of packet lengths, source addresses, and destination addresses.
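A sketch of the first (statistical) approach might look like the following: estimate the probability of near-term buffer overflow from recent arrival-rate statistics under a Gaussian approximation and compare it against a preset threshold. The rates, window, headroom, and threshold are all hypothetical.

```python
import numpy as np
from math import erf, sqrt

# Estimate P(net arrivals over `window` slots exceed remaining buffer space),
# assuming (for illustration only) approximately Gaussian aggregate arrivals.
def congestion_probability(rates, service_rate, headroom, window=10):
    mu, sigma = np.mean(rates), np.std(rates, ddof=1)
    mean_excess = window * (mu - service_rate)   # expected queue growth
    std_excess = sigma * sqrt(window)
    z = (headroom - mean_excess) / std_excess
    return 0.5 * (1 - erf(z / sqrt(2)))          # Gaussian tail probability

recent_rates = [950, 1010, 990, 1070, 1030]      # measured packets per slot
p = congestion_probability(recent_rates, service_rate=1000, headroom=500)
if p > 0.05:                                     # preset threshold
    print("send rate-reduction signal to all active ground stations")
```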
Groundwater Pollution Source Identification using Linked ANN-Optimization Model
NASA Astrophysics Data System (ADS)
Ayaz, Md; Srivastava, Rajesh; Jain, Ashu
2014-05-01
Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities, including industrial and agricultural activities, are generally responsible for this contamination. Identification of the groundwater pollution source is a major step in groundwater pollution remediation. Complete knowledge of the pollution source, in terms of its source characteristics, is essential to adopt an effective remediation strategy. A groundwater pollution source is said to be identified completely when its characteristics, namely location, strength and release period, are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult under real field conditions, when the lag time between the first reading at the observation well and the time at which the source becomes active is not known. We developed a linked ANN-optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts: an optimization model and an ANN model. The decision variables of the linked ANN-optimization model are the source location and the release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and then minimized to identify the pollution source parameters. The formulation of the objective function requires the lag time, which is not known. An ANN model with one hidden layer is trained using the Levenberg-Marquardt algorithm to find the lag time: different combinations of source locations and release periods are used as inputs, and the lag time is obtained as the output. The performance of the proposed model is evaluated for two- and three-dimensional cases with error-free and erroneous data. Erroneous data were generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentration values. The main advantage of the proposed model is that it requires only the upper half of the breakthrough curve and is capable of predicting source parameters when the lag time is not known. Linking the ANN model with the proposed optimization model reduces the dimensionality of the decision variables of the optimization model by one, and hence the complexity of the optimization model. The results show that the proposed linked ANN-optimization model is able to predict the source parameters accurately for error-free data. The proposed model was run several times to obtain the mean, standard deviation and interval estimate of the predicted parameters for observations with random measurement errors. The mean values predicted by the model were quite close to the exact values, while the standard deviation of the predicted values increased with increasing measurement error. The model appears to be robust and may be efficiently utilized to solve the inverse pollution source identification problem.
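A compact sketch of the optimization half of such a linked model is given below: a global optimizer searches over source location, strength, and release time to minimize the misfit between observed and simulated concentrations. The forward model is a crude 1-D advection-dispersion stand-in (not the study's groundwater simulator), and the ANN lag-time step is omitted; all numbers are hypothetical.

```python
import numpy as np
from scipy.optimize import differential_evolution

# 1-D instantaneous point-source advection-dispersion solution, used here as
# a stand-in forward model: C = M / sqrt(4*pi*D*t) * exp(-(x - x0 - v t)^2 / (4 D t)).
def simulate(x_src, strength, t_release, x_obs, t_obs, v=1.0, D=0.5):
    t = np.maximum(t_obs - t_release, 1e-6)          # time since release
    arg = -((x_obs - x_src - v * t) ** 2) / (4 * D * t)
    return strength / np.sqrt(4 * np.pi * D * t) * np.exp(arg)

x_obs, t_obs = 50.0, np.arange(5.0, 30.0, 1.0)       # observation well data
observed = simulate(10.0, 100.0, 2.0, x_obs, t_obs)  # synthetic "truth"

def objective(p):                                    # least-squares misfit
    return np.sum((observed - simulate(p[0], p[1], p[2], x_obs, t_obs)) ** 2)

result = differential_evolution(objective,
                                bounds=[(0, 40), (10, 500), (0, 10)], seed=0)
x_hat, q_hat, t_hat = result.x                       # recovered source parameters
```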
Nonlinear forecasting as a way of distinguishing chaos from measurement error in time series
NASA Astrophysics Data System (ADS)
Sugihara, George; May, Robert M.
1990-04-01
An approach is presented for making short-term predictions about the trajectories of chaotic dynamical systems. The method is applied to data on measles, chickenpox, and marine phytoplankton populations, to show how apparent noise associated with deterministic chaos can be distinguished from sampling error and other sources of externally induced environmental noise.
A review of methods for predicting air pollution dispersion
NASA Technical Reports Server (NTRS)
Mathis, J. J., Jr.; Grose, W. L.
1973-01-01
Air pollution modeling and problem areas in air pollution dispersion modeling are surveyed. Emission source inventories, meteorological data, and turbulent diffusion are discussed in terms of developing a dispersion model. Existing mathematical models of urban air pollution, as well as highway and airport models, are discussed along with their limitations. Recommendations for improving modeling capabilities are included.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cabrera-Palmer, Belkis
Predicting the performance of radiation detection systems at field sites based on measured performance acquired under controlled conditions at test locations, e.g., the Nevada National Security Site (NNSS), remains an unsolved, outstanding issue within DNDO's testing methodology. Detector performance can be defined in terms of the system's ability to detect and/or identify a given source or set of sources, and depends on the signal generated by the detector for the given measurement configuration (i.e., source strength, distance, time, surrounding materials, etc.) and on the quality of the detection algorithm. Detector performance is usually evaluated in the performance and operational testing phases, where the measurement configurations are selected to represent radiation source and background configurations of interest to security applications.
A Simulated Environment Experiment on Annoyance Due to Combined Road Traffic and Industrial Noises.
Marquis-Favre, Catherine; Morel, Julien
2015-07-21
Total annoyance due to combined noises is still difficult to predict adequately. This scientific gap is an obstacle for noise action planning, especially in urban areas where inhabitants are usually exposed to high noise levels from multiple sources. In this context, this work aims to highlight the potential to enhance the prediction of total annoyance. The work is based on a simulated-environment experiment in which participants performed activities in a living room while exposed to combined road traffic and industrial noises. The first objective of the experiment presented in this paper was to gain further understanding of the effects on annoyance of some acoustical factors, non-acoustical factors, and potential interactions between the combined noise sources. The second was to assess total annoyance models constructed from the data collected during the experiment and tested using data gathered in situ. The results obtained in this work highlighted the superiority of perceptual models. In particular, perceptual models with an interaction term seemed to be the best predictors for the two combined noise sources under study, even with large differences in sound pressure level. Thus, these results reinforce the need to focus on perceptual models and to improve the prediction of partial annoyances.
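For illustration, a perceptual total-annoyance model with an interaction term, of the general form the study favours, can be fitted by ordinary least squares; the variable names and ratings below are invented, not the study's data:

    import numpy as np

    # Partial annoyance ratings for road traffic (a_r) and industrial (a_i)
    # noise, plus reported total annoyance (a_t), one row per participant.
    a_r = np.array([3.0, 5.0, 6.5, 2.0, 7.0, 4.5])
    a_i = np.array([4.0, 2.5, 6.0, 3.5, 5.5, 4.0])
    a_t = np.array([4.2, 4.6, 7.1, 3.2, 7.4, 4.8])

    # Design matrix: intercept, both partial annoyances, and their interaction.
    X = np.column_stack([np.ones_like(a_r), a_r, a_i, a_r * a_i])
    coef, *_ = np.linalg.lstsq(X, a_t, rcond=None)
    predicted_total = X @ coef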
Safaie, Ammar; Wendzel, Aaron; Ge, Zhongfu; Nevers, Meredith; Whitman, Richard L.; Corsi, Steven R.; Phanikumar, Mantha S.
2016-01-01
Statistical and mechanistic models are popular tools for predicting the levels of indicator bacteria at recreational beaches. Researchers tend to use one class of model or the other, and it is difficult to generalize statements about their relative performance due to differences in how the models are developed, tested, and used. We describe a cooperative modeling approach for freshwater beaches impacted by point sources in which insights derived from mechanistic modeling were used to further improve the statistical models and vice versa. The statistical models provided a basis for assessing the mechanistic models, which were further improved using probability distributions to generate high-resolution time series data at the source, long-term "tracer" transport modeling based on observed electrical conductivity, better assimilation of meteorological data, and the use of unstructured grids to better resolve nearshore features. This approach resulted in improved models of comparable performance for both classes, including a parsimonious statistical model suitable for real-time predictions based on an easily measurable environmental variable (turbidity). The modeling approach outlined here can be used at other sites impacted by point sources and has the potential to improve water quality predictions, resulting in more accurate estimates of beach closures.
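A sketch of the kind of parsimonious turbidity-based statistical model described above, assuming a log-log linear relationship; the data values are invented for illustration:

    import numpy as np

    turbidity = np.array([2.0, 5.0, 9.0, 15.0, 30.0, 60.0])   # NTU
    ecoli = np.array([40., 90., 180., 300., 700., 1600.])      # CFU/100 mL

    # Fit log(E. coli) = b0 + b1 * log(turbidity) by least squares.
    X = np.column_stack([np.ones_like(turbidity), np.log(turbidity)])
    beta, *_ = np.linalg.lstsq(X, np.log(ecoli), rcond=None)

    def predict_ecoli(ntu):
        """Real-time prediction from a single turbidity reading."""
        return np.exp(beta[0] + beta[1] * np.log(ntu))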
The Scaling of Broadband Shock-Associated Noise with Increasing Temperature
NASA Technical Reports Server (NTRS)
Miller, Steven A.
2012-01-01
A physical explanation for the saturation of broadband shock-associated noise (BBSAN) intensity with increasing jet stagnation temperature has eluded investigators. An explanation is proposed for this phenomenon with the use of an acoustic analogy. For this purpose the acoustic analogy of Morris and Miller is examined. To isolate the relevant physics, the scaling of BBSAN at the peak intensity level at the sideline (psi = 90 degrees) observer location is examined. Scaling terms are isolated from the acoustic analogy and the result is compared using a convergent nozzle with the experiments of Bridges and Brown and using a convergent-divergent nozzle with the experiments of Kuo, McLaughlin, and Morris at four nozzle pressure ratios in increments of total temperature ratios from one to four. The equivalent source within the framework of the acoustic analogy for BBSAN is based on local field quantities at shock wave shear layer interactions. The equivalent source, combined with accurate calculations of the propagation of sound through the jet shear layer using an adjoint vector Green's function solver of the linearized Euler equations, allows for predictions that retain the scaling with respect to stagnation pressure and captures the saturation of BBSAN with increasing stagnation temperature. This is a minor change to the source model relative to the previously developed models. The full development of the scaling term is shown. The sources and vector Green's function solver are informed by steady Reynolds-Averaged Navier-Stokes solutions. These solutions are examined as a function of stagnation temperature at the first shock wave shear layer interaction. It is discovered that saturation of BBSAN with increasing jet stagnation temperature occurs due to a balance between the amplification of the sound propagation through the shear layer and the source term scaling.
Modelling of a spread of hazardous substances in a Floreon+ system
NASA Astrophysics Data System (ADS)
Ronovsky, Ales; Brzobohaty, Tomas; Kuchar, Stepan; Vojtek, David
2017-07-01
This paper is focused on a module for automated numerical modelling of the spread of hazardous substances, developed for the Floreon+ system at the request of the Fire Brigade of the Moravian-Silesian Region. The main purpose of the module is to provide more accurate predictions for smog situations, which are a frequent problem in the region. It can be operated by a non-scientific user through the Floreon+ client and can be used as a short-term prediction model of the evolution of concentrations of dangerous substances (SO2, PMx) from stationary sources, such as heavy industry factories, local furnaces, or highways, or as a fast prediction of the spread of hazardous substances in the case of a crash of a mobile contamination source (transport of dangerous substances) or a leakage at a local chemical factory. The process of automatic gathering of atmospheric data, the connection of the Floreon+ system with the HPC infrastructure necessary for running such a model, and the model itself are described below.
Advanced turbo-prop airplane interior noise reduction-source definition
NASA Technical Reports Server (NTRS)
Magliozzi, B.; Brooks, B. M.
1979-01-01
Acoustic pressure amplitudes and phases were measured in model scale on the surface of a rigid semicylinder mounted in an acoustically treated wind tunnel near a prop-fan (an advanced turboprop with many swept blades) model. Operating conditions during the test simulated those of a prop-fan at 0.8 Mach number cruise. Acoustic pressure amplitude and phase contours were defined on the semicylinder surface. Measurements obtained without the semicylinder in place were used to establish the magnitude of pressure doubling for an aircraft fuselage located near a prop-fan. Pressure doubling effects were found to be 6 dB at 90 deg incidence, decreasing to no effect at grazing incidence. Comparisons of measurements with predictions made using a recently developed prop-fan noise prediction theory, which includes linear and non-linear source terms, showed good agreement in phase and in peak noise amplitude. Predictions of noise amplitude and phase contours, including pressure doubling effects derived from the test, are included for a full-scale prop-fan installation.
Refraction and Shielding of Noise in Non-Axisymmetric Jets
NASA Technical Reports Server (NTRS)
Khavaran, Abbas
1996-01-01
This paper examines the shielding effect of the mean flow and the refraction of sound in non-axisymmetric jets. A general three-dimensional ray-acoustic approach is applied. The methodology is independent of the exit geometry and may account for jet spreading and transverse as well as streamwise flow gradients. We assume that noise is dominated by small-scale turbulence. The source correlation terms, as described by the acoustic analogy approach, are simplified, and a model is proposed that relates the source strength to the 7/2 power of the turbulence kinetic energy. Local characteristics of the source, such as its strength, time or length scale, convection velocity, and characteristic frequency, are inferred from mean flow considerations. The compressible Navier-Stokes equations are solved with a k-epsilon turbulence model. Numerical predictions are presented for a Mach 1.5, aspect ratio 2:1 elliptic jet. The predicted sound pressure level directivity demonstrates favorable agreement with reported data, indicating a relatively quiet zone on the side of the major axis of the elliptic jet.
Reply by the Authors to C. K. W. Tam
NASA Technical Reports Server (NTRS)
Morris, Philip J.; Farassat, F.
2002-01-01
The prediction of noise generation and radiation by turbulence has been the subject of continuous research for over fifty years. The essential problem is how to model the noise sources when one's knowledge of the detailed space-time properties of the turbulence is limited. We attempted to provide a comparison of models based on acoustic analogies and recent alternative models. Our goal was to demonstrate that the predictive capabilities of any model are based on the choice of the turbulence property that is modeled as a source of noise. Our general definition of an acoustic analogy is a rearrangement of the equations of motion into the form L(u) = Q, where L is a linear operator that reduces to an acoustic propagation operator outside a region upsilon; u is a variable that reduces to acoustic pressure (or a related linear acoustic variable) outside upsilon; and Q is a source term that can be meaningfully estimated without knowing u and tends to zero outside upsilon.
Analysis of Orbital Lifetime Prediction Parameters in Preparation for Post-Mission Disposal
NASA Astrophysics Data System (ADS)
Choi, Ha-Yeon; Kim, Hae-Dong; Seong, Jae-Dong
2015-12-01
Atmospheric drag force is an important source of perturbation for satellites in Low Earth Orbit (LEO), and solar activity is a major factor in changes in atmospheric density. In particular, the orbital lifetime of a satellite varies with changes in solar activity, so care must be taken in predicting the remaining orbital lifetime during preparation for post-mission disposal. In this paper, the Systems Tool Kit (STK®) Long-term Orbit Propagator is used to analyze the changes in orbital lifetime predictions with respect to solar activity. In addition, the STK® Lifetime tool is used to analyze the change in orbital lifetime with respect to the generation of the solar flux data needed for the orbital lifetime calculation, and with respect to control of the drag coefficient. Analysis showed that applying the most recent solar flux file within the Lifetime tool gives a predicted trend closest to the actual orbit. We also examine the effect of the drag coefficient by performing a comparative analysis between varying and constant coefficients in terms of solar activity intensities.
A method for obtaining a statistically stationary turbulent free shear flow
NASA Technical Reports Server (NTRS)
Timson, Stephen F.; Lele, S. K.; Moser, R. D.
1994-01-01
The long-term goal of the current research is the study of Large-Eddy Simulation (LES) as a tool for aeroacoustics. New algorithms and developments in computer hardware are making possible a new generation of tools for aeroacoustic predictions, which rely on the physics of the flow rather than empirical knowledge. LES, in conjunction with an acoustic analogy, holds the promise of predicting the statistics of noise radiated to the far field of a turbulent flow. LES's predictive ability will be tested through extensive comparison of acoustic predictions based on a Direct Numerical Simulation (DNS) and LES of the same flow, as well as a priori testing of DNS results. The method presented here is aimed at allowing simulation of a turbulent flow field that is both simple and amenable to acoustic predictions. A free shear flow that is homogeneous in both the streamwise and spanwise directions and statistically stationary will be simulated using equations based on the Navier-Stokes equations with a small number of added terms. Studying a free shear flow eliminates the need to consider flow-surface interactions as an acoustic source. The homogeneous directions and the flow's statistically stationary nature greatly simplify the application of an acoustic analogy.
Schwartz, Charles C.; Gude, Patricia H.; Landenburger, Lisa; Haroldson, Mark A.; Podruzny, Shannon
2012-01-01
Exurban development is consuming wildlife habitat within the Greater Yellowstone Ecosystem, with potential consequences for the long-term conservation of grizzly bears Ursus arctos. We assessed the impacts of alternative future land-use scenarios by linking an existing regression-based simulation model predicting rural development with a spatially explicit model that predicted bear survival. Using demographic criteria that predict population trajectory, we partitioned habitats into either source or sink, and projected the loss of source habitat associated with four different build-out (new home construction) scenarios through 2020. Under boom growth, we predicted that 12 km2 of source habitat were converted to sink habitat within the Grizzly Bear Recovery Zone (RZ), 189 km2 were converted within the current distribution of grizzly bears outside of the RZ, and 289 km2 were converted in the area outside the RZ identified as suitable grizzly bear habitat. Our findings showed that extremely low densities of residential development created sink habitats. We suggest that tools such as those outlined in this article, in addition to zoning and subdivision regulation, may prove more practical, and the most effective means of retaining large areas of undeveloped land and conserving grizzly bear source habitat will likely require a landscape-scale approach. We recommend a focus on land conservation efforts that retain open space (easements, purchases and trades) coupled with the implementation of 'bear community programmes' on an ecosystem-wide basis in an effort to minimize human-bear conflicts, minimize management-related bear mortalities associated with preventable conflicts and to safeguard human communities. Our approach has application to other species and areas, and it illustrates how spatially explicit demographic models can be combined with models predicting land-use change to help focus conservation priorities.
Baisden, W Troy; Keller, Elizabeth D; Van Hale, Robert; Frew, Russell D; Wassenaar, Leonard I
2016-01-01
Predictive understanding of precipitation δ(2)H and δ(18)O in New Zealand faces unique challenges, including high spatial variability in precipitation amounts, alternation between subtropical and sub-Antarctic precipitation sources, and a compressed latitudinal range of 34 to 47 °S. To map the precipitation isotope ratios across New Zealand, three years of integrated monthly precipitation samples were acquired from >50 stations. Conventional mean-annual precipitation δ(2)H and δ(18)O maps were produced by regressions using geographic and annual climate variables. Incomplete data and short-term variation in climate and precipitation sources limited the utility of this approach. We overcame these difficulties by calculating precipitation-weighted monthly climate parameters using national 5-km-gridded daily climate data. These data, plus geographic variables, were regressed to predict δ(2)H, δ(18)O, and d-excess at all sites. The procedure yields statistically valid predictions of the isotope composition of precipitation (long-term average root mean square error (RMSE) for δ(18)O = 0.6 ‰ and δ(2)H = 5.5 ‰; monthly RMSE for δ(18)O = 1.9 ‰ and δ(2)H = 16 ‰). This approach has substantial benefits for studies that require the isotope composition of precipitation during specific time intervals, and may be further improved by comparison to daily and event-based precipitation samples as well as the use of back-trajectory calculations.
A hybrid probabilistic/spectral model of scalar mixing
NASA Astrophysics Data System (ADS)
Vaithianathan, T.; Collins, Lance
2002-11-01
In the probability density function (PDF) description of a turbulent reacting flow, the local temperature and species concentration are replaced by a high-dimensional joint probability that describes the distribution of states in the fluid. The PDF has the great advantage of rendering the chemical reaction source terms closed, independent of their complexity. However, molecular mixing, which involves two-point information, must be modeled. Indeed, the qualitative shape of the PDF is sensitive to this modeling, hence the reliability of the model to predict even the closed chemical source terms rests heavily on the mixing model. We will present a new closure for the mixing based on a spectral representation of the scalar field. The model is implemented as an ensemble of stochastic particles, each carrying scalar concentrations at different wavenumbers. Scalar exchanges within a given particle represent "transfer," while scalar exchanges between particles represent "mixing." The equations governing the scalar concentrations at each wavenumber are derived from the eddy-damped quasi-normal Markovian (EDQNM) theory. The model correctly predicts the evolution of an initial double-delta-function PDF into a Gaussian, as seen in the numerical study by Eswaran & Pope (1988). Furthermore, the model predicts that the scalar gradient distribution (which is available in this representation) approaches log-normal at long times. Comparisons of the model with data derived from direct numerical simulations will be shown.
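The wavenumber-resolved EDQNM closure itself is not reproduced here, but the ensemble-of-stochastic-particles idea can be illustrated with a generic particle mixing baseline (Curl's coalescence-dispersion model), which shows the same qualitative double-delta-to-bell-shape relaxation:

    import numpy as np

    rng = np.random.default_rng(0)
    phi = rng.choice([0.0, 1.0], size=10000)  # double-delta initial scalar PDF
    omega = 2.0                               # assumed mixing frequency [1/s]
    dt, t_end = 0.01, 2.0

    for _ in range(int(t_end / dt)):
        # Randomly select particle pairs to mix during this time step.
        n_pairs = rng.binomial(phi.size // 2, omega * dt)
        idx = rng.permutation(phi.size)[: 2 * n_pairs].reshape(-1, 2)
        mean = 0.5 * (phi[idx[:, 0]] + phi[idx[:, 1]])
        phi[idx[:, 0]] = mean  # selected pairs mix toward their common mean
        phi[idx[:, 1]] = mean

    # The scalar PDF relaxes from two delta functions toward a bell shape,
    # the qualitative behaviour the spectral model reproduces quantitatively.
    print(phi.mean(), phi.std())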
NASA Astrophysics Data System (ADS)
Negraru, Petru; Golden, Paul
2017-04-01
Long-term ground-truth observations were collected at two infrasound arrays in Nevada to investigate how seasonal atmospheric variations affect the detection, traveltime, and signal characteristics (azimuth, trace velocity, frequency content, and amplitude) of infrasonic arrivals at regional distances. The arrays were located in different azimuthal directions from a munition disposal facility in Nevada. FNIAR, located 154 km north of the source, has a high detection rate throughout the year. Over 90 per cent of the detonations have traveltimes indicative of stratospheric arrivals, while tropospheric waveguides are observed for only 27 per cent of the detonations. The second array, DNIAR, located 293 km southeast of the source, exhibits strong seasonal variations, with high stratospheric detection rates in winter and the virtual absence of stratospheric arrivals in summer. Tropospheric waveguides and thermospheric arrivals are also observed at DNIAR. Modeling with Naval Research Laboratory Ground-to-Space atmospheric sound speeds leads to mixed results: FNIAR arrivals are usually not predicted to be present at all (either stratospheric or tropospheric), while DNIAR arrivals are usually correctly predicted, although summer arrivals show a consistent traveltime bias. Finally, we show the possible improvement in location using empirically calibrated traveltime and azimuth observations. Using Bayesian Infrasound Source Localization, we show that we can decrease the area enclosed by the 90 per cent credibility contours by a factor of 2.5.
Fuselage boundary-layer refraction of fan tones radiated from an installed turbofan aero-engine.
Gaffney, James; McAlpine, Alan; Kingan, Michael J
2017-03-01
A distributed source model to predict fan tone noise levels of an installed turbofan aero-engine is extended to include the refraction effects caused by the fuselage boundary layer. The model is a simple representation of an installed turbofan, where fan tones are represented in terms of spinning modes radiated from a semi-infinite circular duct, and the aircraft's fuselage is represented by an infinitely long, rigid cylinder. The distributed source is a disk, formed by integrating infinitesimal volume sources located on the intake duct termination. The cylinder is located adjacent to the disk. There is uniform axial flow, aligned with the axis of the cylinder, everywhere except close to the cylinder where there is a constant thickness boundary layer. The aim is to predict the near-field acoustic pressure, and in particular, to predict the pressure on the cylindrical fuselage which is relevant to assess cabin noise. Thus no far-field approximations are included in the modelling. The effect of the boundary layer is quantified by calculating the area-averaged mean square pressure over the cylinder's surface with and without the boundary layer included in the prediction model. The sound propagation through the boundary layer is calculated by solving the Pridmore-Brown equation. Results from the theoretical method show that the boundary layer has a significant effect on the predicted sound pressure levels on the cylindrical fuselage, owing to sound radiation of fan tones from an installed turbofan aero-engine.
The importance of quadrupole sources in prediction of transonic tip speed propeller noise
NASA Technical Reports Server (NTRS)
Hanson, D. B.; Fink, M. R.
1978-01-01
A theoretical analysis is presented for the harmonic noise of high-speed, open rotors. Far-field acoustic radiation equations based on the Ffowcs Williams/Hawkings theory are derived for a static rotor with thin blades and zero lift. Near the plane of rotation, the dominant sources are the volume displacement and the ρu² quadrupole, where u is the disturbance velocity component in the direction of blade motion. These sources are compared in both the time domain and the frequency domain using two-dimensional airfoil theories valid in the subsonic, transonic, and supersonic speed ranges. For nonlifting parabolic-arc blades, the two sources are equally important at speeds between the section critical Mach number and a Mach number of one. However, for moderately subsonic or fully supersonic flow over thin blade sections, the quadrupole term is negligible. It is concluded for thin blades that significant quadrupole noise radiation is strictly a transonic phenomenon and that it can be suppressed with blade sweep. Noise calculations are presented for two rotors, one simulating a helicopter main rotor and the other a model propeller. For the latter, agreement with test data was substantially improved by including the quadrupole source term.
Quantitative Earthquake Prediction on Global and Regional Scales
NASA Astrophysics Data System (ADS)
Kossobokov, Vladimir G.
2006-03-01
The Earth is a hierarchy of volumes of different size. Driven by planetary convection, these volumes are involved in joint and relative movement. The movement is controlled by a wide variety of processes on and around the fractal mesh of boundary zones, and it produces earthquakes. This hierarchy of movable volumes composes a large non-linear dynamical system. Prediction of such a system, in the sense of extrapolation of a trajectory into the future, is futile. However, upon coarse-graining, integral empirical regularities emerge, opening possibilities of prediction in the sense of the commonly accepted consensus definition worked out in 1976 by the US National Research Council. The implications of understanding the hierarchical nature of the lithosphere and its dynamics, based on systematic monitoring and evidence of its unified space-energy similarity at different scales, help avoid basic errors in earthquake prediction claims. They suggest rules and recipes for adequate earthquake prediction classification, comparison and optimization. The approach has already led to the design of a reproducible intermediate-term, middle-range earthquake prediction technique. Its real-time testing, aimed at prediction of the largest earthquakes worldwide, has proved beyond any reasonable doubt the effectiveness of practical earthquake forecasting. In the first approximation, the accuracy is about 1-5 years and 5-10 times the anticipated source dimension. Further analysis allows reducing the spatial uncertainty down to 1-3 source dimensions, although at a cost of additional failures-to-predict. Despite the limited accuracy, considerable damage could be prevented by timely, knowledgeable use of the existing predictions and earthquake prediction strategies. The December 26, 2004 Indian Ocean disaster seems to be the first indication that the methodology, designed for prediction of M8.0+ earthquakes, can be rescaled for prediction of both smaller-magnitude earthquakes (e.g., down to M5.5+ in Italy) and mega-earthquakes of M9.0+. Monitoring at regional scales may require application of a recently proposed scheme for the spatial stabilization of intermediate-term middle-range predictions. The scheme guarantees a more objective and reliable diagnosis of times of increased probability and is less restrictive on input seismic data. It makes feasible the re-establishment of seismic monitoring aimed at prediction of large-magnitude earthquakes in the Caucasus and Central Asia, which, to our regret, was discontinued in 1991. The first results of the monitoring (1986-1990) were encouraging, at least for M6.5+.
NASA Astrophysics Data System (ADS)
Hu, Minpeng; Liu, Yanmei; Wang, Jiahui; Dahlgren, Randy A.; Chen, Dingjiang
2018-06-01
Source apportionment is critical for guiding the development of efficient watershed nitrogen (N) pollution control measures. The ReNuMa (Regional Nutrient Management) model, a semi-empirical, semi-process-oriented model with modest data requirements, has been widely used for riverine N source apportionment. However, the ReNuMa model has limitations for addressing long-term N dynamics because it ignores temporal changes in atmospheric N deposition rates and N-leaching lag effects. This work modified the ReNuMa model by revising the source code to allow yearly changes in atmospheric N deposition and to incorporate N-leaching lag effects into N transport processes. The appropriate N-leaching lag time was determined from cross-correlation analysis between annual watershed individual N source inputs and riverine N export. The accuracy of the modified ReNuMa model was demonstrated through analysis of a 31-year water quality record (1980-2010) from the Yongan watershed in eastern China. The revisions considerably improved the accuracy (Nash-Sutcliffe coefficient increased by ∼0.2) of the modified ReNuMa model for predicting riverine N loads. The modified model explicitly identified annual and seasonal changes in the contributions of various N sources (i.e., point vs. nonpoint source, surface runoff vs. groundwater) to riverine N loads as well as the fate of watershed anthropogenic N inputs. Model results were consistent with previously modeled or observed lag time lengths as well as changes in riverine chloride and nitrate concentrations during the low-flow regime and available N levels in agricultural soils of this watershed. The modified ReNuMa model is applicable for addressing long-term changes in riverine N sources, providing decision-makers with critical information for guiding watershed N pollution control strategies.
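A minimal sketch of the lag-time selection step: cross-correlate annual watershed N inputs with riverine N export and take the lag with the maximum correlation. The series values below are placeholders, not the Yongan watershed data:

    import numpy as np

    n_input = np.array([8.1, 8.4, 9.0, 9.6, 9.9, 10.5, 11.0, 11.2, 11.8, 12.0])
    n_export = np.array([1.0, 1.1, 1.1, 1.3, 1.4, 1.5, 1.7, 1.8, 1.9, 2.1])

    def best_lag(x, y, max_lag=5):
        """Lag (years) at which corr(x[t], y[t + lag]) is maximized."""
        corrs = [np.corrcoef(x[: len(x) - k], y[k:])[0, 1]
                 for k in range(max_lag + 1)]
        return int(np.argmax(corrs)), corrs

    lag, corrs = best_lag(n_input, n_export)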
VAN method of short-term earthquake prediction shows promise
NASA Astrophysics Data System (ADS)
Uyeda, Seiya
Although optimism prevailed in the 1970s, the present consensus on earthquake prediction appears to be quite pessimistic. However, short-term prediction based on geoelectric potential monitoring has stood the test of time in Greece for more than a decade [Varotsos and Kulhanek, 1993; Lighthill, 1996]. The method used is called the VAN method. The geoelectric potential changes constantly due to causes such as magnetotelluric effects, lightning, rainfall, leakage from manmade sources, and electrochemical instabilities of electrodes. All of this noise must be eliminated before preseismic signals are identified, if they exist at all. The VAN group apparently accomplished this task for the first time. They installed multiple short (100-200 m) dipoles with different lengths in both north-south and east-west directions and long (1-10 km) dipoles in appropriate orientations at their stations (one of their mega-stations, Ioannina, for example, now has 137 dipoles in operation) and found that practically all of the noise could be eliminated by applying a set of criteria to the data.
Mandal, Arundhati; Raju, Sheena; Viswanathan, Chandra
2016-02-01
Human embryonic stem cells (hESCs) are predicted to be an unlimited source of hepatocytes which can pave the way for applications such as cell replacement therapies or as a model of human development or even to predict the hepatotoxicity of drug compounds. We have optimized a 23-d differentiation protocol to generate hepatocyte-like cells (HLCs) from hESCs, obtaining a relatively pure population which expresses the major hepatic markers and is functional and mature. The stability of the HLCs in terms of hepato-specific marker expression and functionality was found to be intact even after an extended period of in vitro culture and cryopreservation. The hESC-derived HLCs have shown the capability to display sensitivity and an alteration in the level of CYP enzyme upon drug induction. This illustrates the potential of such assays in predicting the hepatotoxicity of a drug compound leading to advancement of pharmacology.
Dynamic power balance analysis in JET
NASA Astrophysics Data System (ADS)
Matthews, G. F.; Silburn, S. A.; Challis, C. D.; Eich, T.; Iglesias, D.; King, D.; Sieglin, B.; Contributors, JET
2017-12-01
The full-scale realisation of nuclear fusion as an energy source requires a detailed understanding of power and energy balance in current experimental devices. In this work we explore whether a global power balance model, in which some of the calibration factors applied to the source or sink terms are fitted to the data, can provide insight into possible causes of any discrepancies in power and energy balance seen in the JET tokamak. We show that the dynamics of the power balance can only be properly reproduced by including the changes in the thermal stored energy, which therefore provides an additional opportunity to cross-calibrate other terms in the power balance equation. Although the results are inconclusive with respect to the original goal of identifying the source of the discrepancies in the energy balance, we do find that, with optimised parameters, an extremely good prediction of the total power measured at the outer divertor target can be obtained over a wide range of pulses with time resolution up to ∼25 ms.
Kirol, Christopher P; Beck, Jeffrey L; Huzurbazar, Snehalata V; Holloran, Matthew J; Miller, Scott N
2015-06-01
Conserving a declining species that is facing many threats, including overlap of its habitats with energy extraction activities, depends upon identifying and prioritizing the value of the habitats that remain. In addition, habitat quality is often compromised when source habitats are lost or fragmented due to anthropogenic development. Our objective was to build an ecological model to classify and map habitat quality in terms of source or sink dynamics for Greater Sage-Grouse (Centrocercus urophasianus) in the Atlantic Rim Project Area (ARPA), a developing coalbed natural gas field in south-central Wyoming, USA. We used occurrence and survival modeling to evaluate relationships between environmental and anthropogenic variables at multiple spatial scales and for all female summer life stages, including nesting, brood-rearing, and non-brooding females. For each life stage, we created resource selection functions (RSFs). We weighted the RSFs and combined them to form a female summer occurrence map. We also modeled survival as a function of spatial variables for nest, brood, and adult female summer survival. Our survival models were individually mapped as survival probability functions and then combined with fixed vital rates in a fitness-metric model that, when mapped, predicted habitat productivity (productivity map). Our results demonstrate a suite of environmental and anthropogenic variables at multiple scales that were predictive of occurrence and survival. We created a source-sink map by overlaying our female summer occurrence map and productivity map to predict habitats contributing to population surpluses (source habitats) or deficits (sink habitats), and low-occurrence habitats on the landscape. The source-sink map predicted that, of the Sage-Grouse habitat within the ARPA, 30% was primary source, 29% was secondary source, 4% was primary sink, 6% was secondary sink, and 31% was low occurrence. Our results provide evidence that energy development and avoidance of energy infrastructure were probably reducing the amount of source habitat within the ARPA landscape. Our source-sink map provides managers with a means of prioritizing habitats for conservation planning based on source and sink dynamics. The spatial identification of high-value (i.e., primary source) as well as suboptimal (i.e., primary sink) habitats allows for informed energy development to minimize effects on local wildlife populations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kent Simmons, J.A.; Knap, A.H.
1991-04-01
The computer model Industrial Source Complex Short Term (ISCST) was used to study the stack emissions from a refuse incinerator proposed for the island of Bermuda. The model predicts that the highest ground-level pollutant concentrations will occur near Prospect, 800 m to 1,000 m due south of the stack. The authors installed a portable laboratory and instruments at Prospect to begin making air quality baseline measurements. By comparing the model's estimates of the incinerator contribution to the background levels measured at the site, they predicted that stack emissions would not cause an increase in TSP or SO2. The incinerator will, however, be a significant source of HCl to Bermuda's air, with ambient levels approaching air quality guidelines.
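For reference, the ground-level concentration calculation at the core of ISCST-type Gaussian plume models has the following shape; the power-law dispersion coefficients below are crude stand-ins for the stability-class-dependent curves ISCST actually uses:

    import math

    def plume_glc(q, u, h, x, y):
        """Ground-level concentration (g/m^3) at downwind distance x (m) and
        crosswind offset y (m) for emission rate q (g/s), wind speed u (m/s),
        and effective stack height h (m)."""
        sigma_y = 0.08 * x ** 0.9   # assumed dispersion parameterisation
        sigma_z = 0.06 * x ** 0.85
        lateral = math.exp(-y * y / (2 * sigma_y ** 2))
        vertical = 2 * math.exp(-h * h / (2 * sigma_z ** 2))  # ground reflection
        return q * lateral * vertical / (2 * math.pi * u * sigma_y * sigma_z)

    # e.g., concentration 900 m downwind of the stack on the plume centreline:
    c = plume_glc(q=5.0, u=4.0, h=40.0, x=900.0, y=0.0)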
Dilatation-dissipation corrections for advanced turbulence models
NASA Technical Reports Server (NTRS)
Wilcox, David C.
1992-01-01
This paper analyzes dilatation-dissipation-based compressibility corrections for advanced turbulence models. Numerical computations verify that the dilatation-dissipation corrections devised by Sarkar and Zeman greatly improve the effect of Mach number on spreading rate predicted by both the k-omega and k-epsilon models. However, computations with the k-omega model also show that the Sarkar/Zeman terms cause an undesired reduction in skin friction for the compressible flat-plate boundary layer. A perturbation solution for the compressible wall layer shows that the Sarkar and Zeman terms reduce the effective von Karman constant in the law of the wall. This is the source of the inaccurate k-omega model skin-friction predictions for the flat-plate boundary layer. The perturbation solution also shows that the k-epsilon model has an inherent flaw for compressible boundary layers that is not compensated for by the dilatation-dissipation corrections. A compressibility modification for k-omega and k-epsilon models is proposed that is similar to those of Sarkar and Zeman. The new compressibility term permits accurate predictions for the compressible mixing layer, flat-plate boundary layer, and a shock-separated flow with the same values for all closure coefficients.
Leveraging LSTM for rapid intensifications prediction of tropical cyclones
NASA Astrophysics Data System (ADS)
Li, Y.; Yang, R.; Yang, C.; Yu, M.; Hu, F.; Jiang, Y.
2017-10-01
Tropical cyclones (TCs) usually cause severe damage and destruction. TC intensity forecasting helps people prepare for extreme weather and could save lives and property. Rapid intensifications (RIs) of TCs are the major error source in TC intensity forecasting. A large number of factors, such as sea surface temperature and wind shear, affect the RI processes of TCs. A great deal of work has been done to identify the combination of conditions most favorable to RI. In this study, a deep learning method is utilized to combine conditions for RI prediction of TCs. Experiments show that the long short-term memory (LSTM) network provides the ability to leverage past conditions to predict TC rapid intensification.
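A minimal sketch of such an LSTM classifier for RI prediction from sequences of environmental predictors; the feature count, window length, and architecture are assumptions, not the study's configuration:

    import torch
    import torch.nn as nn

    class RIPredictor(nn.Module):
        def __init__(self, n_features=8, hidden=32):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):            # x: (batch, time steps, features)
            _, (h, _) = self.lstm(x)
            return self.head(h[-1])      # logit of P(RI over the next period)

    model = RIPredictor()
    x = torch.randn(16, 12, 8)           # 16 storms, 12 six-hourly steps
    targets = torch.rand(16).round()     # placeholder RI / no-RI labels
    loss = nn.BCEWithLogitsLoss()(model(x).squeeze(1), targets)
    loss.backward()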
IR Image upconversion using band-limited ASE illumination fiber sources.
Maestre, H; Torregrosa, A J; Capmany, J
2016-04-18
We study the field of view (FOV) of an upconversion imaging system that employs an Amplified Spontaneous Emission (ASE) fiber source to illuminate a transmission target. As an intermediate case between narrowband laser and thermal illumination, an ASE fiber source allows for higher spectral intensity than thermal illumination while still keeping a broad wavelength spectrum, taking advantage of an increased non-collinear phase-matching angle acceptance that enlarges the FOV of the upconversion system when compared to narrowband laser illumination. A model is presented to predict the angular acceptance of the upconverter in terms of focusing and ASE spectral width and allocation. The model is experimentally verified for the case of 1550-630 nm upconversion.
Ancient Chinese Observations and Modern Cometary Models
NASA Technical Reports Server (NTRS)
Yeomans, D. K.
1995-01-01
Ancient astronomical observations, primarily by Chinese astronomers, represent the only data source for discerning the long-term behavior of comets. These sky watchers produced astrological forecasts for their emperors. The comets Halley, Swift-Tuttle, and Tempel-Tuttle have been observed for 2000 years. Records of the Leonid meteor showers, starting from A.D. 902, are used to guide predictions for the 1998-1999 recurrence.
Progress in the development of PDF turbulence models for combustion
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.
1991-01-01
A combined Monte Carlo-computational fluid dynamics (CFD) algorithm was developed recently at Lewis Research Center (LeRC) for turbulent reacting flows. In this algorithm, conventional CFD schemes are employed to obtain the velocity field and other velocity-related turbulent quantities, and a Monte Carlo scheme is used to solve the evolution equation for the probability density function (pdf) of species mass fraction and temperature. In combustion computations, the predictions of chemical reaction rates (the source terms in the species conservation equation) are poor if conventional turbulence models are used. The main difficulty lies in the fact that the reaction rate is highly nonlinear, and the use of averaged temperature produces excessively large errors. Moment closure models for the source terms have attained only limited success. The probability density function (pdf) method seems to be the only alternative at the present time that uses local instantaneous values of the temperature, density, etc., in predicting chemical reaction rates, and thus may be the only viable approach for more accurate turbulent combustion calculations. Assumed pdf's are useful in simple problems; however, for more general combustion problems, the solution of an evolution equation for the pdf is necessary.
Neill, Aaron James; Tetzlaff, Doerthe; Strachan, Norval James Colin; Hough, Rupert Lloyd; Avery, Lisa Marie; Watson, Helen; Soulsby, Chris
2018-01-15
An 11-year dataset of concentrations of E. coli at 10 spatially distributed sites in a mixed land-use catchment in NE Scotland (52 km2) revealed that concentrations were not clearly associated with flow or season. The lack of a clear flow-concentration relationship may have been due to greater water fluxes from less-contaminated headwaters during high flows diluting downstream concentrations, the importance of persistent point sources of E. coli, both anthropogenic and agricultural, and possibly the temporal resolution of the dataset. Point sources and year-round grazing of livestock probably obscured clear seasonality in concentrations. Multiple linear regression models identified potential for contamination by anthropogenic point sources as a significant predictor of long-term spatial patterns of low, average and high concentrations of E. coli. Neither arable nor pasture land was significant, even when accounting for hydrological connectivity with a topographic-index method. However, this may have reflected coarse-scale land-cover data inadequately representing "point sources" of agricultural contamination (e.g. direct defecation of livestock into the stream) and temporal changes in the availability of E. coli from diffuse sources. Spatial-stream-network models (SSNMs) were applied in a novel context, and had value in making more robust catchment-scale predictions of concentrations of E. coli with estimates of uncertainty, and in enabling identification of potential "hot spots" of faecal contamination. Successfully managing faecal contamination of surface waters is vital for safeguarding public health. Our finding that concentrations of E. coli could not clearly be associated with flow or season may suggest that management strategies should not necessarily target only high-flow events or summer, when faecal contamination risk is often assumed to be greatest. Furthermore, we identified SSNMs as valuable tools for identifying possible "hot spots" of contamination which could be targeted for management, and for highlighting areas where additional monitoring could help better constrain predictions relating to faecal contamination.
PredictProtein—an open resource for online prediction of protein structural and functional features
Yachdav, Guy; Kloppmann, Edda; Kajan, Laszlo; Hecht, Maximilian; Goldberg, Tatyana; Hamp, Tobias; Hönigschmid, Peter; Schafferhans, Andrea; Roos, Manfred; Bernhofer, Michael; Richter, Lothar; Ashkenazy, Haim; Punta, Marco; Schlessinger, Avner; Bromberg, Yana; Schneider, Reinhard; Vriend, Gerrit; Sander, Chris; Ben-Tal, Nir; Rost, Burkhard
2014-01-01
PredictProtein is a meta-service for sequence analysis that has been predicting structural and functional features of proteins since 1992. Queried with a protein sequence it returns: multiple sequence alignments, predicted aspects of structure (secondary structure, solvent accessibility, transmembrane helices (TMSEG) and strands, coiled-coil regions, disulfide bonds and disordered regions) and function. The service incorporates analysis methods for the identification of functional regions (ConSurf), homology-based inference of Gene Ontology terms (metastudent), comprehensive subcellular localization prediction (LocTree3), protein–protein binding sites (ISIS2), protein–polynucleotide binding sites (SomeNA) and predictions of the effect of point mutations (non-synonymous SNPs) on protein function (SNAP2). Our goal has always been to develop a system optimized to meet the demands of experimentalists not highly experienced in bioinformatics. To this end, the PredictProtein results are presented as both text and a series of intuitive, interactive and visually appealing figures. The web server and sources are available at http://ppopen.rostlab.org. PMID:24799431
A critical assessment of Mus musculus gene function prediction using integrated genomic evidence
Peña-Castillo, Lourdes; Tasan, Murat; Myers, Chad L; Lee, Hyunju; Joshi, Trupti; Zhang, Chao; Guan, Yuanfang; Leone, Michele; Pagnani, Andrea; Kim, Wan Kyu; Krumpelman, Chase; Tian, Weidong; Obozinski, Guillaume; Qi, Yanjun; Mostafavi, Sara; Lin, Guan Ning; Berriz, Gabriel F; Gibbons, Francis D; Lanckriet, Gert; Qiu, Jian; Grant, Charles; Barutcuoglu, Zafer; Hill, David P; Warde-Farley, David; Grouios, Chris; Ray, Debajyoti; Blake, Judith A; Deng, Minghua; Jordan, Michael I; Noble, William S; Morris, Quaid; Klein-Seetharaman, Judith; Bar-Joseph, Ziv; Chen, Ting; Sun, Fengzhu; Troyanskaya, Olga G; Marcotte, Edward M; Xu, Dong; Hughes, Timothy R; Roth, Frederick P
2008-01-01
Background: Several years after sequencing the human genome and the mouse genome, much remains to be discovered about the functions of most human and mouse genes. Computational prediction of gene function promises to help focus limited experimental resources on the most likely hypotheses. Several algorithms using diverse genomic data have been applied to this task in model organisms; however, the performance of such approaches in mammals has not yet been evaluated. Results: In this study, a standardized collection of mouse functional genomic data was assembled; nine bioinformatics teams used this data set to independently train classifiers and generate predictions of function, as defined by Gene Ontology (GO) terms, for 21,603 mouse genes; and the best performing submissions were combined in a single set of predictions. We identified strengths and weaknesses of current functional genomic data sets and compared the performance of function prediction algorithms. This analysis inferred functions for 76% of mouse genes, including 5,000 currently uncharacterized genes. At a recall rate of 20%, a unified set of predictions averaged 41% precision, with 26% of GO terms achieving a precision better than 90%. Conclusion: We performed a systematic evaluation of diverse, independently developed computational approaches for predicting gene function from heterogeneous data sources in mammals. The results show that currently available data for mammals allows predictions with both breadth and accuracy. Importantly, many highly novel predictions emerge for the 38% of mouse genes that remain uncharacterized. PMID:18613946
Classic flea-borne transmission does not drive plague epizootics in prairie dogs.
Webb, Colleen T; Brooks, Christopher P; Gage, Kenneth L; Antolin, Michael F
2006-04-18
We lack a clear understanding of the enzootic maintenance of the bacterium (Yersinia pestis) that causes plague and the sporadic epizootics that occur in its natural rodent hosts. A key to elucidating these epidemiological dynamics is determining the dominant transmission routes of plague. Plague can be acquired from the bites of infectious fleas (which is generally considered to occur via a blocked flea vector), inhalation of infectious respiratory droplets, or contact with a short-term infectious reservoir. We present results from a plague modeling approach that includes transmission from all three sources of infection simultaneously and uses sensitivity analysis to determine their relative importance. Our model is completely parameterized by using data from the literature and our own field studies of plague in the black-tailed prairie dog (Cynomys ludovicianus). Results of the model are qualitatively and quantitatively consistent with independent data from our field sites. Although infectious fleas might be an important source of infection and transmission via blocked fleas is a dominant paradigm in the literature, our model clearly predicts that this form of transmission cannot drive epizootics in prairie dogs. Rather, a short-term reservoir is required for epizootic dynamics. Several short-term reservoirs have the potential to affect the prairie dog system. Our model predictions of the residence time of the short-term reservoir suggest that other small mammals, infectious prairie dog carcasses, fleas that transmit plague without blockage of the digestive tract, or some combination of these three are the most likely of the candidate infectious reservoirs.
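A schematic sketch of a host-infection model in which the three named transmission routes enter a single force-of-infection term; the compartments and rate values are invented for illustration and are far simpler than the paper's full model:

    import numpy as np
    from scipy.integrate import odeint

    def plague(y, t, beta_flea, beta_air, beta_res, gamma, delta):
        s, i, r_short = y  # susceptibles, infectious hosts, short-term reservoir
        # Force of infection combines flea-borne, airborne, and reservoir routes.
        foi = beta_flea * i + beta_air * i + beta_res * r_short
        ds = -foi * s
        di = foi * s - gamma * i
        dr = gamma * i - delta * r_short  # reservoir (e.g. carcasses) decays
        return [ds, di, dr]

    t = np.linspace(0.0, 120.0, 500)  # days
    sol = odeint(plague, [0.99, 0.01, 0.0], t,
                 args=(0.05, 0.02, 0.4, 0.3, 0.1))
    # Sensitivity analysis would vary each beta and delta to rank the routes.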
Modeling, Simulation, and Forecasting of Subseasonal Variability
NASA Technical Reports Server (NTRS)
Waliser, Duane; Schubert, Siegfried; Kumar, Arun; Weickmann, Klaus; Dole, Randall
2003-01-01
A planning workshop on "Modeling, Simulation and Forecasting of Subseasonal Variability" was held in June 2003. This workshop was the first of a number of meetings planned to follow the NASA-sponsored workshop entitled "Prospects For Improved Forecasts Of Weather And Short-Term Climate Variability On Sub-Seasonal Time Scales," held in April 2002. The 2002 workshop highlighted a number of key sources of unrealized predictability on subseasonal time scales, including tropical heating, soil wetness, the Madden-Julian Oscillation (MJO) [a.k.a. the Intraseasonal Oscillation (ISO)], the Arctic Oscillation (AO) and the Pacific/North American (PNA) pattern. The overarching objective of the 2003 follow-up workshop was to proceed with a number of recommendations made at the 2002 workshop, as well as to set an agenda and collate efforts in the areas of modeling, simulation and forecasting of intraseasonal and short-term climate variability. More specifically, the aims of the 2003 workshop were to: 1) develop a baseline of the "state of the art" in subseasonal prediction capabilities, 2) implement a program to carry out experimental subseasonal forecasts, and 3) develop strategies for tapping the above sources of predictability by focusing research, model development, and the development/acquisition of new observations on the subseasonal problem. The workshop was held over two days and was attended by over 80 scientists, modelers, forecasters and agency personnel. The agenda of the workshop focused on issues related to the MJO and tropical-extratropical interactions as they relate to the subseasonal simulation and prediction problem. This included the development of plans for a coordinated set of GCM hindcast experiments to assess current model subseasonal prediction capabilities and shortcomings, an emphasis on developing a strategy to rectify shortcomings associated with tropical intraseasonal variability, namely diabatic processes, and continuing the implementation of an experimental forecast and model development program that focuses on one of the key sources of untapped predictability, namely the MJO. The tangible outcomes of the meeting included: 1) the development of a recommended framework for a set of multi-year ensembles of 45-day hindcasts to be carried out by a number of GCMs so that they can be analyzed with regard to their representations of subseasonal variability, predictability and forecast skill, 2) an assessment of the present status of GCM representations of the MJO and recommendations for future steps to remedy the remaining shortcomings in these representations, and 3) a final implementation plan for a multi-institute/multi-nation Experimental MJO Prediction Program.
NASA Astrophysics Data System (ADS)
Chang, Ni-Bin; Weng, Yu-Chi
2013-03-01
Short-term predictions of the potential impacts of accidental releases of various radionuclides at nuclear power plants are acutely needed, especially after the Fukushima accident in Japan. An integrated modeling system that provides expert services to assess the consequences of accidental or intentional releases of radioactive materials to the atmosphere has received wide attention. These scenarios can be initiated either by accident, due to human, software, or mechanical failures, or by intentional acts such as sabotage and radiological dispersal devices. Stringent action might be required just minutes after the occurrence of an accidental or intentional release. Previous studies on the basic functions of emergency preparedness and response systems seldom consider the suitability of air pollutant dispersion models or the connectivity between source term, dispersion, and exposure assessment models in a holistic context for decision support. As a result, the Gaussian plume and puff models, which are only suitable for describing neutral air pollutants in flat terrain under limited meteorological situations, are frequently used to predict the impact of accidental releases from industrial sources. In situations with complex terrain or special meteorological conditions, the proposed emergency response actions might be questionable and even intractable to decision-makers responsible for maintaining public health and environmental quality. This study is a preliminary effort to integrate source term, dispersion, and exposure assessment models into a Spatial Decision Support System (SDSS) to tackle the complex issues of short-term emergency response planning and risk assessment at nuclear power plants. Through a series of model screening procedures, we found that the diagnostic (objective) wind field model, with the aid of sufficient on-site meteorological monitoring data, was the most applicable model to promptly address the trend of local wind field patterns. However, most of the hazardous materials released into the environment from nuclear power plants are not neutral pollutants, so the particle and multi-segment puff models can be regarded as the most suitable models to couple with the output of the diagnostic wind field model in a modern emergency preparedness and response system. The proposed SDSS illustrates a state-of-the-art system design based on the complex terrain of South Taiwan. The system design, with 3-dimensional animation capability using a tailored source term model in connection with ArcView® Geographical Information System map layers and remote sensing images, is useful for meeting the design goal for nuclear power plants located in complex terrain.
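As a reference point for the multi-segment puff approach favoured above, a single Gaussian puff from an instantaneous release can be evaluated as follows; the dispersion growth rates are crude placeholder assumptions:

    import math

    def puff_concentration(q_total, u, t, x, y, z, h):
        """Concentration at (x, y, z) a time t after release of q_total (g)
        from height h, with the puff centre advected downwind at speed u."""
        sx = sy = max(0.1 * u * t, 1e-3)   # assumed horizontal puff growth
        sz = max(0.06 * u * t, 1e-3)       # assumed vertical puff growth
        norm = q_total / ((2 * math.pi) ** 1.5 * sx * sy * sz)
        along = math.exp(-((x - u * t) ** 2) / (2 * sx ** 2))
        cross = math.exp(-(y ** 2) / (2 * sy ** 2))
        vert = (math.exp(-((z - h) ** 2) / (2 * sz ** 2))
                + math.exp(-((z + h) ** 2) / (2 * sz ** 2)))  # ground reflection
        return norm * along * cross * vert

    # e.g., ground-level concentration 1.8 km downwind, 10 min after release:
    c = puff_concentration(1000.0, 3.0, 600.0, 1800.0, 50.0, 0.0, 20.0)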
ARIMA model and exponential smoothing method: A comparison
NASA Astrophysics Data System (ADS)
Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri
2013-04-01
This study presents a comparison between the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making predictions. The comparison focuses on the ability of both methods to make forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, data from the Price of Crude Palm Oil (RM/tonne), the Exchange Rate of Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and the Price of SMR 20 Rubber Type (cents/kg), comprising three different time series, are used in the comparison process. The forecasting accuracy of each model is then measured by examining the prediction error produced, using the Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce a better prediction for long-term forecasting with limited data sources, but cannot produce a better prediction for a time series with a narrow range from one point to another, as in the Exchange Rate series. In contrast, the Exponential Smoothing Method produces better forecasts for the Exchange Rate, whose time series has a narrow range from one point to another, while it cannot produce a better prediction for a longer forecasting period.
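To make the comparison concrete, here is a minimal Python sketch of this kind of head-to-head evaluation using statsmodels; the synthetic series, ARIMA order, trend choice, and horizon are illustrative assumptions, not the study's actual settings.

```python
# Minimal sketch of an ARIMA vs. exponential smoothing comparison.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def compare(series, horizon):
    train, test = series[:-horizon], series[-horizon:]

    arima_fc = ARIMA(train, order=(1, 1, 1)).fit().forecast(horizon)
    ets_fc = ExponentialSmoothing(train, trend="add").fit().forecast(horizon)

    def errors(fc):
        e = test - fc
        return {"MSE": np.mean(e**2),
                "MAPE": np.mean(np.abs(e / test)) * 100,
                "MAD": np.mean(np.abs(e))}

    return {"ARIMA": errors(arima_fc), "ExpSmoothing": errors(ets_fc)}

# Example with a synthetic price-like random walk standing in for, e.g.,
# crude palm oil prices (RM/tonne).
rng = np.random.default_rng(0)
prices = 2000 + np.cumsum(rng.normal(0, 20, 300))
print(compare(prices, horizon=24))
```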
Characterizing Black Hole Mergers
NASA Technical Reports Server (NTRS)
Baker, John; Boggs, William Darian; Kelly, Bernard
2010-01-01
Binary black hole mergers are a promising source of gravitational waves for interferometric gravitational wave detectors. Recent advances in numerical relativity have revealed the predictions of General Relativity for the strong burst of radiation generated in the final moments of binary coalescence. We explore features in the merger radiation which characterize the final moments of merger and ringdown. Interpreting the waveforms in terms of a rotating implicit radiation source allows a unified phenomenological description of the system from inspiral through ringdown. Common features in the waveforms allow a quantitative description of the merger signal which may provide insights for observations of large-mass black hole binaries.
Modeling the refraction of microbaroms by the winds of a large maritime storm.
Blom, Philip; Waxler, Roger
2017-12-01
Continuous infrasonic signals produced by the ocean surface interacting with the atmosphere, termed microbaroms, are known to be generated by a number of phenomena including large maritime storms. Storm-generated microbaroms exhibit axial asymmetry when observed at locations far from the storm, owing to the source location being offset from the storm center. Because of this offset, a portion of the microbarom energy will radiate toward the storm center and interact with the winds in that region. Detailed here are predictions for the propagation of microbaroms through an axisymmetric, three-dimensional model storm. Geometric propagation methods have been utilized, and the predicted horizontal refraction is found to produce signals that appear to emanate from a virtual source near the storm center when observed far from the storm. This virtual source is expected to be observed only from a limited arc around the storm system, with increased extent associated with more intense wind fields. This result implies that identifying the extent of the arc observing signal from the virtual source could provide a means to estimate the wind structure using infrasonic observations far from the storm system.
Incorrect predictions reduce switch costs.
Kleinsorge, Thomas; Scheil, Juliane
2015-07-01
In three experiments, we combined two sources of conflict within a modified task-switching procedure. The first source of conflict was the one inherent in any task-switching situation, namely the conflict between a task set activated by the recent performance of another task and the task set needed to perform the actually relevant task. The second source of conflict was induced by requiring participants to guess aspects of the upcoming task (Exps. 1 & 2: task identity; Exp. 3: position of task precue). In case of an incorrect guess, a conflict arises between the representation of the guessed task and the actually relevant task. In Experiments 1 and 2, incorrect guesses led to an overall increase of reaction times and error rates, but they reduced task switch costs compared to conditions in which participants predicted the correct task. In Experiment 3, incorrect guesses resulted in faster overall performance and a selective decrease of reaction times in task switch trials when the cue-target interval was long. We interpret these findings in terms of an enhanced level of controlled processing induced by a combination of two sources of conflict converging upon the same target of cognitive control. Copyright © 2015 Elsevier B.V. All rights reserved.
A moving medium formulation for prediction of propeller noise at incidence
NASA Astrophysics Data System (ADS)
Ghorbaniasl, Ghader; Lacor, Chris
2012-01-01
This paper presents a time domain formulation for the sound field radiated by moving bodies in a uniform steady flow with arbitrary orientation. The aim is to provide a formulation for the prediction of noise from a body so that the effects of crossflow on a propeller can be modeled in the time domain. An established theory of noise generation by a moving source is combined with the moving medium Green's function to derive the formulation. A formula with a Doppler factor is developed because it is more easily interpreted and is more helpful in examining the physics of such systems. Based on the technique presented, the source of asymmetry of the sound field can be explained in terms of the physics of a moving source. It is shown that the derived formulation can be interpreted as an extension of formulations 1 and 1A of Farassat, based on the Ffowcs Williams and Hawkings (FW-H) equation, to moving medium problems. Computational results for a stationary monopole and dipole point source in a moving medium, a rotating point force in crossflow, a model helicopter blade at incidence and a propeller with subsonic tips at incidence verify the formulation.
Identifying pollution sources and predicting urban air quality using ensemble learning methods
NASA Astrophysics Data System (ADS)
Singh, Kunwar P.; Gupta, Shikha; Rai, Premanjali
2013-12-01
In this study, principal components analysis (PCA) was performed to identify air pollution sources, and tree-based ensemble learning models were constructed to predict the urban air quality of Lucknow (India) using air quality and meteorological databases covering a period of five years. PCA identified vehicular emissions and fuel combustion as major air pollution sources. The air quality indices revealed that the air quality was unhealthy during the summer and winter. Ensemble models were constructed to discriminate between the seasonal air qualities, to identify the factors responsible for the discrimination, and to predict the air quality indices. Accordingly, single decision tree (SDT), decision tree forest (DTF), and decision treeboost (DTB) models were constructed, and their generalization and predictive performance were evaluated in terms of several statistical parameters and compared with a conventional machine learning benchmark, support vector machines (SVM). The DT and SVM models discriminated the seasonal air quality with misclassification rates (MR) of 8.32% (SDT), 4.12% (DTF), 5.62% (DTB), and 6.18% (SVM) in the complete data. The AQI and CAQI regression models yielded correlations between measured and predicted values and root mean squared errors of 0.901, 6.67 and 0.825, 9.45 (SDT); 0.951, 4.85 and 0.922, 6.56 (DTF); 0.959, 4.38 and 0.929, 6.30 (DTB); and 0.890, 7.00 and 0.836, 9.16 (SVR) in the complete data. The DTF and DTB models outperformed the SVM in both classification and regression, which can be attributed to the incorporation of bagging and boosting algorithms in these models. The proposed ensemble models successfully predicted the urban ambient air quality and can be used as effective tools for its management.
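A hedged scikit-learn sketch of this style of comparison follows; RandomForest and GradientBoosting merely stand in for the paper's DTF and DTB implementations, and the feature matrix (pollutant plus meteorological variables), season labels, and AQI targets are assumed placeholders.

```python
# Sketch of a seasonal classification / AQI regression comparison.
from sklearn.ensemble import RandomForestClassifier, GradientBoostingRegressor
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC, SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, mean_squared_error

def compare_models(X, season_labels, aqi):
    Xtr, Xte, ytr, yte, atr, ate = train_test_split(
        X, season_labels, aqi, test_size=0.3, random_state=0)

    # Seasonal air-quality discrimination (misclassification rate = 1 - accuracy).
    for name, clf in [("SDT", DecisionTreeClassifier()),
                      ("DTF~RF", RandomForestClassifier()),
                      ("SVM", SVC())]:
        clf.fit(Xtr, ytr)
        mr = 1.0 - accuracy_score(yte, clf.predict(Xte))
        print(f"{name}: misclassification rate = {mr:.2%}")

    # AQI regression (boosted trees vs. support vector regression).
    for name, reg in [("DTB~GBR", GradientBoostingRegressor()),
                      ("SVR", SVR())]:
        reg.fit(Xtr, atr)
        rmse = mean_squared_error(ate, reg.predict(Xte)) ** 0.5
        print(f"{name}: RMSE = {rmse:.2f}")
```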
NASA Astrophysics Data System (ADS)
Barrett, Steven R. H.; Britter, Rex E.
Predicting long-term mean pollutant concentrations in the vicinity of airports, roads and other industrial sources is frequently of concern in regulatory and public health contexts. Many emissions are represented geometrically as ground-level line or area sources. Well-developed modelling tools such as AERMOD and ADMS are able to model dispersion from finite (i.e. non-point) sources with considerable accuracy, drawing upon an up-to-date understanding of boundary layer behaviour. Due to mathematical difficulties associated with line and area sources, computationally expensive numerical integration schemes have been developed. For example, some models decompose area sources into a large number of line sources orthogonal to the mean wind direction, for which an analytical (Gaussian) solution exists. Models also employ a time-series approach, which involves computing mean pollutant concentrations for every hour over one or more years of meteorological data. This can give rise to computer runtimes of several days for the assessment of a site. While this may be acceptable for the assessment of a single industrial complex, airport, etc., this level of computational cost precludes national or international policy assessments at the level of detail available with dispersion modelling. In this paper, we extend previous work [S.R.H. Barrett, R.E. Britter, 2008. Development of algorithms and approximations for rapid operational air quality modelling. Atmospheric Environment 42 (2008) 8105-8111] to line and area sources. We introduce approximations which allow for the development of new analytical solutions for long-term mean dispersion from line and area sources, based on hypergeometric functions. We describe how these solutions can be parameterized from a single point-source run of an existing advanced dispersion model, thereby accounting for all processes modelled in the more costly algorithms. The parameterization method combined with the analytical solutions for long-term mean dispersion is shown to produce results several orders of magnitude more efficiently, with a loss of accuracy that is small compared to the absolute accuracy of advanced dispersion models near sources. The method can be readily incorporated into existing dispersion models, and may allow additional computation time to be expended on modelling dispersion processes more accurately in future, rather than on accounting for source geometry.
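The line-source decomposition that the authors seek to short-circuit can be sketched numerically: the snippet below sums analytical finite crosswind line-source solutions over an area source. The dispersion-coefficient power laws are assumed placeholders, and the ground-level expression includes full ground reflection.

```python
# Illustrative area source decomposed into crosswind line sources, each
# with an analytical (Gaussian) solution; sigma_y/sigma_z are assumed.
import numpy as np
from scipy.special import erf

def sigma_y(x): return 0.08 * x**0.9   # assumed sigma_y(x) power law [m]
def sigma_z(x): return 0.06 * x**0.85  # assumed sigma_z(x) power law [m]

def line_source(q, x, y, y1, y2, u):
    """Ground-level concentration from a finite crosswind line source of
    strength q [g m^-1 s^-1] at downwind distance x, with reflection."""
    sy, sz = sigma_y(x), sigma_z(x)
    crosswind = (erf((y2 - y) / (np.sqrt(2) * sy))
                 - erf((y1 - y) / (np.sqrt(2) * sy)))
    return q / (np.sqrt(2 * np.pi) * u * sz) * crosswind

def area_source(Q_area, x0, x1, y0, y1, xr, yr, u, n=200):
    """Decompose an area source (emission Q_area [g m^-2 s^-1]) into n
    crosswind strips orthogonal to the mean wind and sum their
    analytical line-source contributions at receptor (xr, yr)."""
    xs = np.linspace(x0, x1, n)
    dx = xs[1] - xs[0]
    c = 0.0
    for xline in xs:
        if xr > xline:  # only strips upwind of the receptor contribute
            c += line_source(Q_area * dx, xr - xline, yr, y0, y1, u)
    return c

print(area_source(1e-4, 0, 100, -50, 50, xr=500, yr=0, u=4.0))
```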
A Simulated Environment Experiment on Annoyance Due to Combined Road Traffic and Industrial Noises
Marquis-Favre, Catherine; Morel, Julien
2015-01-01
Total annoyance due to combined noises is still difficult to predict adequately. This scientific gap is an obstacle for noise action planning, especially in urban areas where inhabitants are usually exposed to high noise levels from multiple sources. In this context, this work aims to highlight potential to enhance the prediction of total annoyance. The work is based on a simulated environment experiment where participants performed activities in a living room while exposed to combined road traffic and industrial noises. The first objective of the experiment presented in this paper was to gain further understanding of the effects on annoyance of some acoustical factors, non-acoustical factors and potential interactions between the combined noise sources. The second one was to assess total annoyance models constructed from the data collected during the experiment and tested using data gathered in situ. The results obtained in this work highlighted the superiority of perceptual models. In particular, perceptual models with an interaction term seemed to be the best predictors for the two combined noise sources under study, even with high differences in sound pressure level. Thus, these results reinforced the need to focus on perceptual models and to improve the prediction of partial annoyances. PMID:26197326
NASA Astrophysics Data System (ADS)
Jones, J. L.; Sterbentz, J. W.; Yoon, W. Y.; Norman, D. R.
2009-12-01
Energetic photon sources with energies greater than 6 MeV continue to be recognized as viable sources for various types of inspection applications, especially those related to nuclear and/or explosive material detection. These energetic photons can be produced as a continuum of energies (i.e., bremsstrahlung) or as a set of one or more discrete photon energies (i.e., monoenergetic). This paper provides a follow-on extension of the photon dose comparison presented at the 9th International Conference on Applications of Nuclear Techniques (June 2008). Our previous paper showed the comparative advantages and disadvantages of the photon doses provided by these two energetic interrogation sources and highlighted the higher-energy advantage of the bremsstrahlung source, especially at long standoff distances (i.e., distance from source to the inspected object). This paper pursues the higher-energy photon inspection advantage (up to 100 MeV) by providing dose and stimulated photonuclear interaction predictions in air for an infinitely dilute interrogated material (used for comparative interaction rate assessments since it excludes material self-shielding), with the interrogated object positioned forward on the inspection beam axis at increasing standoff distances. In addition to the direct energetic photon-induced stimulation, the predictions identify the importance of secondary downscattered/attenuated source-term effects arising from the photon transport in the intervening air environment.
Kwon, Kideok; Yang, Jihoon; Yoo, Younghwan
2015-04-24
A number of research works have studied packet scheduling policies in energy-scavenging wireless sensor networks, based on the predicted amount of harvested energy. Most of them aim to achieve energy neutrality, which means that an embedded system can operate perpetually while meeting application requirements. Unlike other renewable energy sources, solar energy has the feature of distinct periodicity in the amount of harvested energy over a day. Using this feature, this paper proposes a packet transmission control policy that can enhance network performance while keeping sensor nodes alive. Furthermore, this paper suggests a novel solar energy prediction method that exploits the relation between cloudiness and solar radiation. The experimental results and analyses show that the proposed packet transmission policy outperforms others in terms of deadline miss rate and data throughput. Furthermore, the proposed solar energy prediction method predicts more accurately than other methods by 6.92%.
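The abstract does not give the prediction equations, so the following is only a hedged illustration of the underlying idea: scaling a periodic clear-sky harvest profile by forecast cloudiness. The attenuation coefficient alpha and the profile shape are assumptions, not the paper's parameters.

```python
# Hedged sketch of a cloudiness-aware solar-harvest predictor.
import numpy as np

def predict_harvest(clear_sky_profile, cloudiness_forecast, alpha=0.75):
    """Scale a per-slot clear-sky harvest profile (reflecting the daily
    periodicity of solar energy) by forecast cloudiness in [0, 1];
    alpha sets how strongly clouds attenuate the harvest."""
    cloud = np.clip(np.asarray(cloudiness_forecast), 0.0, 1.0)
    return np.asarray(clear_sky_profile) * (1.0 - alpha * cloud)

# Example: 24 hourly slots with a sinusoidal daytime profile.
hours = np.arange(24)
clear_sky = np.maximum(0.0, np.sin((hours - 6) / 12 * np.pi)) * 100.0
forecast = predict_harvest(clear_sky, cloudiness_forecast=np.full(24, 0.3))
```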
Using VAPEPS for noise control on Space Station Freedom
NASA Technical Reports Server (NTRS)
Badilla, Gloria; Bergen, Thomas; Scharton, Terry
1991-01-01
Control of the noise environment is an important design consideration for Space Station Freedom (SSF), for both crew safety and productivity. Acoustic noise requirements are established to eliminate fatigue and potential hearing loss in crew members from long-term exposure and to facilitate speech communication. VAPEPS (VibroAcoustic Payload Environment Prediction System) is currently being applied to SSF for prediction of the on-orbit noise and vibration environments induced in the 50 to 10,000 Hz frequency range. Various sources such as fans, pumps, centrifuges, exercise equipment, and other mechanical devices are included in the analysis. The predictions will be used in design tradeoff studies and to provide confidence that requirements will be met. Preliminary predictions show that the required levels will be exceeded unless substantial noise control measures are incorporated in the SSF design. Predicted levels for an SSF design without acoustic control treatments exceed requirements by 25 dB in some one-third octave frequency bands.
Lentz, Erika E.; Hapke, Cheryl J.; Stockdon, Hilary F.; Hehre, Rachel E.
2013-01-01
Observed morphodynamic changes over multiple decades were coupled with storm-driven run-up characteristics at Fire Island, New York, to explore the influence of wave processes relative to the impacts of other coastal change drivers on the near-term evolution of the barrier island. Historical topography was generated from digital stereo-photogrammetry and compared with more recent lidar surveys to quantify near-term (decadal) morphodynamic changes to the beach and primary dune system between the years 1969, 1999, and 2009. Notably increased profile volumes were observed along the entirety of the island in 1999, and likely provide the eolian source for the steady dune crest progradation observed over the relatively quiescent decade that followed. Persistent patterns of erosion and accretion over 10-, 30-, and 40-year intervals are attributable to variations in island morphology, human activity, and variations in offshore bathymetry and island orientation that influence the wave energy reaching the coast. Areas of documented long-term historical inlet formation and extensive bayside marsh development show substantial landward translation of the dune–beach profile over the near-term period of this study. Correlations among areas predicted to overwash, observed elevation changes of the dune crestline, and observed instances of overwash in undeveloped segments of the barrier island verify that overwash locations can be accurately predicted in undeveloped segments of coast. In fact, an assessment of 2012 aerial imagery collected after Hurricane Sandy confirms that overwash occurred at the majority of near-term locations persistently predicted to overwash. In addition to the storm wave climate, factors related to variations within the geologic framework which in turn influence island orientation, offshore slope, and sediment supply impact island behavior on near-term timescales.
Using a Magnetic Flux Transport Model to Predict the Solar Cycle
NASA Technical Reports Server (NTRS)
Lyatskaya, S.; Hathaway, D.; Winebarger, A.
2007-01-01
We present the results of an investigation into the use of a magnetic flux transport model to predict the amplitude of future solar cycles. Recently Dikpati, de Toma, & Gilman (2006) showed how their dynamo model could be used to accurately predict the amplitudes of the last eight solar cycles and offered a prediction for the next solar cycle - a large amplitude cycle. Cameron & Schussler (2007) found that they could reproduce this predictive skill with a simple 1-dimensional surface flux transport model - provided they used the same parameters and data as Dikpati, de Toma, & Gilman. However, when they tried incorporating the data in what they argued was a more realistic manner, they found that the predictive skill dropped dramatically. We have written our own code for examining this problem and have incorporated updated and corrected data for the source terms - the emergence of magnetic flux in active regions. We present both the model itself and our results from it - in particular our tests of its effectiveness at predicting solar cycles.
2014-09-01
generation, exotic storage technologies, smart power grid management, and better power sources for directed-energy weapons (DEW). Accessible partner nation... near term will help to mitigate risks and improve outcomes. Forecasting typically extrapolates predictions based... eventually, diminished national power. Within this context, this paper examines policy, legal, ethical, and strategy implications for DoD from the impact
Campbell, W.H.
1986-01-01
Electric currents in long pipelines can contribute to corrosion effects that limit the pipe's lifetime. One cause of such electric currents is the geomagnetic field variations that have sources in the Earth's upper atmosphere. Knowledge of the general behavior of the sources allows a prediction of the occurrence times, favorable locations for the pipeline effects, and long-term projections of corrosion contributions. The source spectral characteristics, the Earth's conductivity profile, and a corrosion-frequency dependence limit the period range of the natural field changes that affect the pipe. The corrosion contribution by induced currents from geomagnetic sources should be evaluated for pipelines that are located at high and at equatorial latitudes. At midlatitude locations, the times of these natural current maxima should be avoided for the necessary accurate monitoring of the pipe-to-soil potential. © 1986 D. Reidel Publishing Company.
NASA Technical Reports Server (NTRS)
Farassat, F.
1984-01-01
A simple mathematical model of a stationary source distribution for the supersonic-propeller noise-prediction formula of Farassat (1983) is developed to test the validity of the formula's solutions. The conventional thickness source term is used in place of the Isom thickness formula; the relative importance of the line and surface integrals in the solutions is evaluated; and the numerical results are compared in tables with those obtained with a conventional retarded-time solution. Good agreement is obtained over elevation angles from 10 to 90 deg, and the line-integral contribution is found to be significant at all elevation angles and of the same order of magnitude as the surface-integral contribution at angles less than 30 deg. The amplitude-normalized directivity patterns for the four cases computed (x = 1.5 or 10; k = 5.0 or 50) are presented graphically.
Numerical Prediction of Combustion-induced Noise using a hybrid LES/CAA approach
NASA Astrophysics Data System (ADS)
Ihme, Matthias; Pitsch, Heinz; Kaltenbacher, Manfred
2006-11-01
Noise generation in technical devices is an increasingly important problem. Jet engines in particular produce sound levels that not only are a nuisance but may also impair hearing. The noise emitted by such engines is generated by different sources such as jet exhaust, fans or turbines, and combustion. Whereas the former acoustic mechanisms are reasonably well understood, combustion-generated noise is not. A methodology for the prediction of combustion-generated noise is developed. In this hybrid approach, unsteady acoustic source terms are obtained from an LES, and the propagation of pressure perturbations is computed using acoustic analogies. Lighthill's acoustic analogy and a non-linear wave equation, accounting for variable speed of sound, have been employed. Both models are applied to an open diffusion flame. The effects on the far-field pressure and directivity due to the variation of the speed of sound are analyzed. Results for the sound pressure level are compared with experimental data.
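For reference, the first of the two analogies mentioned, Lighthill's, is commonly written as follows (standard textbook form; the non-linear variable-sound-speed wave equation used in the paper generalizes this):

```latex
\frac{\partial^{2} \rho'}{\partial t^{2}} - c_{0}^{2} \nabla^{2} \rho'
  = \frac{\partial^{2} T_{ij}}{\partial x_{i}\,\partial x_{j}},
\qquad
T_{ij} = \rho u_{i} u_{j} + \left(p' - c_{0}^{2}\rho'\right)\delta_{ij} - \tau_{ij},
```

where the Lighthill stress tensor T_ij, evaluated from the LES field, acts as the acoustic source term driving the wave operator of the quiescent medium.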
Lateralization as a symmetry breaking process in birdsong
NASA Astrophysics Data System (ADS)
Trevisan, M. A.; Cooper, B.; Goller, F.; Mindlin, G. B.
2007-03-01
Singing by songbirds is among the most convincing examples in the animal kingdom of functional lateralization of the brain, a feature usually associated with human language. Lateralization is expressed as one or both of the bird’s sound sources being active during the vocalization. Normal songs require high coordination between the vocal organ and respiratory activity, which is bilaterally symmetric. Moreover, the physical and neural substrates used to produce the song lack obvious asymmetries. In this work we show that complex spatiotemporal patterns of motor activity controlling airflow through the sound sources can be explained in terms of spontaneous symmetry-breaking bifurcations. This analysis also provides a framework from which to study the effects of imperfections in the system’s symmetries. A physical model of the avian vocal organ is used to generate synthetic sounds, which allows us to predict acoustical signatures of the song and compare the predictions of the model with experimental data.
Low Data Drug Discovery with One-Shot Learning.
Altae-Tran, Han; Ramsundar, Bharath; Pappu, Aneesh S; Pande, Vijay
2017-04-26
Recent advances in machine learning have made significant contributions to drug discovery. Deep neural networks in particular have been demonstrated to provide significant boosts in predictive power when inferring the properties and activities of small-molecule compounds (Ma, J. et al. J. Chem. Inf. 2015, 55, 263-274). However, the applicability of these techniques has been limited by the requirement for large amounts of training data. In this work, we demonstrate how one-shot learning can be used to significantly lower the amounts of data required to make meaningful predictions in drug discovery applications. We introduce a new architecture, the iterative refinement long short-term memory, that, when combined with graph convolutional neural networks, significantly improves learning of meaningful distance metrics over small-molecules. We open source all models introduced in this work as part of DeepChem, an open-source framework for deep-learning in drug discovery (Ramsundar, B. deepchem.io. https://github.com/deepchem/deepchem, 2016).
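A minimal numpy sketch of metric-based one-shot prediction in the spirit described (softmax attention over support-set similarities) is given below. It is illustrative only, not the DeepChem iterative-refinement LSTM, and the embeddings are assumed to come from some trained featurizer such as a graph convolutional network.

```python
# Minimal metric-based one-shot classifier over molecular embeddings.
import numpy as np

def one_shot_predict(query_emb, support_embs, support_labels):
    """Predict the probability that a query molecule is active from a
    tiny labelled support set, using cosine-similarity attention."""
    q = query_emb / np.linalg.norm(query_emb)
    S = support_embs / np.linalg.norm(support_embs, axis=1, keepdims=True)
    sims = S @ q                              # cosine similarities
    attn = np.exp(sims) / np.exp(sims).sum()  # softmax attention weights
    return float(attn @ support_labels)       # weighted label = P(active)

# Example: 5-molecule support set with 8-dimensional embeddings.
rng = np.random.default_rng(1)
support = rng.normal(size=(5, 8))
labels = np.array([1, 0, 1, 0, 0], dtype=float)
print(one_shot_predict(rng.normal(size=8), support, labels))
```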
NASA Technical Reports Server (NTRS)
Cunefare, K. A.; Koopmann, G. H.
1991-01-01
This paper presents the theoretical development of an approach to active noise control (ANC) applicable to three-dimensional radiators. The active noise control technique, termed ANC Optimization Analysis, is based on minimizing the total radiated power by adding secondary acoustic sources on the primary noise source. ANC Optimization Analysis determines the optimum magnitude and phase at which to drive the secondary control sources in order to achieve the best possible reduction in the total power radiated from the noise source/control source combination. For example, ANC Optimization Analysis predicts a 20 dB reduction in the total power radiated from a sphere of radius a at a dimensionless wavenumber ka of 0.125, for a single control source representing 2.5 percent of the total area of the sphere. ANC Optimization Analysis is based on a boundary element formulation of the Helmholtz Integral Equation; thus, the optimization analysis applies to a single frequency, while multiple frequencies can be treated through repeated analyses.
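The minimization underlying this kind of analysis is commonly cast as a Hermitian quadratic form in the complex secondary-source strengths (a standard active-control result, stated here under the assumption that the boundary element formulation supplies the positive-definite matrix A, the vector b, and the primary-source power c):

```latex
W(\mathbf{q}) = \mathbf{q}^{\mathsf{H}} A\,\mathbf{q}
              + \mathbf{q}^{\mathsf{H}} \mathbf{b}
              + \mathbf{b}^{\mathsf{H}} \mathbf{q} + c,
\qquad
\mathbf{q}_{\mathrm{opt}} = -A^{-1}\mathbf{b},
\qquad
W_{\min} = c - \mathbf{b}^{\mathsf{H}} A^{-1} \mathbf{b}.
```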
NASA Astrophysics Data System (ADS)
Gelmez Burakgazi, Sevinc; Yildirim, Ali; Weeth Feinstein, Noah
2016-04-01
Rooted in science education and science communication studies, this study examines 4th and 5th grade students' perceptions of science information sources (SIS) and their use in communicating science to students. It combines situated learning theory with uses and gratifications theory in a qualitative phenomenological analysis. Data were gathered through classroom observations and interviews in four Turkish elementary schools. Focus group interviews with 47 students and individual interviews with 17 teachers and 10 parents were conducted. Participants identified a wide range of SIS, including TV, magazines, newspapers, internet, peers, teachers, families, science centers/museums, science exhibitions, textbooks, science books, and science camps. Students reported using various SIS in school-based and non-school contexts to satisfy their cognitive, affective, personal, and social integrative needs. SIS were used for science courses, homework/project assignments, examination/test preparations, and individual science-related research. Students assessed SIS in terms of the perceived accessibility of the sources, the quality of the content, and the content presentation. In particular, some sources such as teachers, families, TV, science magazines, textbooks, and science centers/museums ("directive sources") predictably led students to other sources such as teachers, families, internet, and science books ("directed sources"). A small number of sources crossed context boundaries, being useful in both school and out. Results shed light on the connection between science education and science communication in terms of promoting science learning.
Nonlinear anomalous photocurrents in Weyl semimetals
NASA Astrophysics Data System (ADS)
Rostami, Habib; Polini, Marco
2018-05-01
We study the second-order nonlinear optical response of a Weyl semimetal (WSM), i.e., a three-dimensional metal with linear band touchings acting as pointlike sources of Berry curvature in momentum space, termed "Weyl-Berry monopoles." We first show that the anomalous second-order photocurrent of WSMs can be elegantly parametrized in terms of Weyl-Berry dipole and quadrupole moments. We then calculate the corresponding charge and node conductivities of WSMs with either broken time-reversal invariance or inversion symmetry. In particular, we predict a dissipationless second-order anomalous node conductivity for WSMs belonging to the TaAs family.
Sonsthagen, Sarah A.; McClaren, Erica L.; Doyle, Frank I.; Titus, K.; Sage, George K.; Wilson, Robert E.; Gust, Judy R.; Talbot, Sandra L.
2012-01-01
Northern Goshawks occupying the Alexander Archipelago, Alaska, and coastal British Columbia nest primarily in old-growth and mature forest, which results in spatial heterogeneity in the distribution of individuals across the landscape. We used microsatellite and mitochondrial data to infer genetic structure, gene flow, and fluctuations in population demography through evolutionary time. Patterns in the genetic signatures were used to assess predictions associated with the three population models: panmixia, metapopulation, and isolated populations. Population genetic structure was observed along with asymmetry in gene flow estimates that changed directionality at different temporal scales, consistent with metapopulation model predictions. Therefore, Northern Goshawk assemblages located in the Alexander Archipelago and coastal British Columbia interact through a metapopulation framework, though they may not fit the classic model of a metapopulation. Long-term population sources (coastal mainland British Columbia) and sinks (Revillagigedo and Vancouver islands) were identified. However, there was no trend through evolutionary time in the directionality of dispersal among the remaining assemblages, suggestive of a rescue-effect dynamic. Admiralty, Douglas, and Chichagof island complex appears to be an evolutionarily recent source population in the Alexander Archipelago. In addition, Kupreanof island complex and Kispiox Forest District populations have high dispersal rates to populations in close geographic proximity and potentially serve as local source populations. Metapopulation dynamics occurring in the Alexander Archipelago and coastal British Columbia by Northern Goshawks highlight the importance of both occupied and unoccupied habitats to long-term population persistence of goshawks in this region.
Lappe, Claudia; Bodeck, Sabine; Lappe, Markus; Pantev, Christo
2017-01-01
Predictive mechanisms in the human brain can be investigated using markers for prediction violations like the mismatch negativity (MMN). Short-term piano training increases the MMN for melodic and rhythmic deviations in the training material. This increase occurs only when the material is actually played, not when it is only perceived through listening, suggesting that learning predictions about upcoming musical events are derived from motor involvement. However, music is often performed in concert with others. In this case, predictions about upcoming actions from a partner are a crucial part of the performance. In the present experiment, we use magnetoencephalography (MEG) to measure MMNs to deviations in one's own and a partner's musical material after both engaged in musical duet training. Event-related field (ERF) results revealed that the MMN increased significantly for own and partner material suggesting a neural representation of the partner's part in a duet situation. Source analysis using beamforming revealed common activations in auditory, inferior frontal, and parietal areas, similar to previous results for single players, but also a pronounced contribution from the cerebellum. In addition, activation of the precuneus and the medial frontal cortex was observed, presumably related to the need to distinguish between own and partner material.
ISS Ambient Air Quality: Updated Inventory of Known Aerosol Sources
NASA Technical Reports Server (NTRS)
Meyer, Marit
2014-01-01
Spacecraft cabin air quality is of fundamental importance to crew health, with concerns encompassing both gaseous contaminants and particulate matter. Little opportunity exists for direct measurement of aerosol concentrations on the International Space Station (ISS); however, an aerosol source model was developed for the purpose of filtration and ventilation systems design. This model has been applied successfully; however, since the initial effort, an increase in the number of crewmembers from 3 to 6 and new processes on board the ISS have necessitated an updated aerosol inventory to accurately reflect current ambient aerosol conditions. Results from recent analyses of dust samples from the ISS, combined with a literature review, provide new predicted aerosol emission rates in terms of size-segregated mass and number concentration. Some new aerosol sources have been considered and added to the existing array of materials. The goal of this work is to provide updated filtration model inputs which can verify that the current ISS filtration system is adequate and that filter lifetime targets are met. This inventory of aerosol sources is applicable to other spacecraft, and becomes more important as NASA considers future long-term exploration missions, which will preclude the opportunity for resupply of filtration products.
Sensitivity of WRF-chem predictions to dust source function specification in West Asia
NASA Astrophysics Data System (ADS)
Nabavi, Seyed Omid; Haimberger, Leopold; Samimi, Cyrus
2017-02-01
Dust storms tend to form in sparsely populated areas covered by only a few observations. Dust source maps, known as source functions, are used in dust models to allocate a certain potential of dust release to each place. Recent research showed that the well-known Ginoux source function (GSF), currently used in the Weather Research and Forecasting Model coupled with Chemistry (WRF-chem), exhibits large errors over some regions in West Asia, particularly near the Iraq-Syria border. This study aims to improve the specification of this critical part of dust forecasts. A new source function based on a multi-year analysis of satellite observations, called the West Asia source function (WASF), is therefore proposed to raise the quality of WRF-chem predictions in the region. WASF has been implemented in three dust schemes of WRF-chem. Remotely sensed and ground-based observations have been used to verify the horizontal and vertical extent and location of the simulated dust clouds. Results indicate that WRF-chem performance is significantly improved in many areas after the implementation of WASF. The modified runs (long-term simulations over the summers 2008-2012, using nudging) yielded an average increase in the Spearman correlation between observed and forecast aerosol optical thickness of 12-16 percentage points compared to control runs with standard source functions. They even outperform MACC and DREAM dust simulations over many dust source regions. However, the quality of the forecasts decreased with distance from the sources, probably due to deficiencies in the transport and deposition characteristics of the forecast model in these areas.
Magnetostrophic balance in planetary dynamos - Predictions for Neptune's magnetosphere
NASA Technical Reports Server (NTRS)
Curtis, S. A.; Ness, N. F.
1986-01-01
With the purpose of estimating Neptune's magnetic field and its implications for nonthermal Neptune radio emissions, a new scaling law for planetary magnetic fields was developed in terms of externally observable parameters (the planet's mean density, radius, mass, rotation rate, and internal heat source luminosity). From a comparison of theory and observations by Voyager it was concluded that planetary dynamos are two-state systems with either zero intrinsic magnetic field (for planets with low internal heat source) or (for planets with the internal heat source sufficiently strong to drive convection) a magnetic field near the upper bound determined from magnetostrophic balance. It is noted that mass loading of the Neptune magnetosphere by Triton may play an important role in the generation of nonthermal radio emissions.
Measurement of erosion in helicon plasma thrusters using the VASIMR® VX-CR device
NASA Astrophysics Data System (ADS)
Del Valle Gamboa, Juan Ignacio; Castro-Nieto, Jose; Squire, Jared; Carter, Mark; Chang-Diaz, Franklin
2015-09-01
The helicon plasma source is one of the principal stages of the high-power VASIMR® electric propulsion system. The VASIMR® VX-CR experiment focuses solely on this stage, exploring the erosion and long-term operation effects of the VASIMR helicon source. We report on the design and operational parameters of the VX-CR experiment, and the development of modeling tools and characterization techniques allowing the study of erosion phenomena in helicon plasma sources in general, and stand-alone helicon plasma thrusters (HPTs) in particular. A thorough understanding of the erosion phenomena within HPTs will enable better predictions of their behavior as well as more accurate estimations of their expected lifetime. We present a simplified model of the plasma-wall interactions within HPTs based on current models of the plasma density distributions in helicon discharges. Results from this modeling tool are used to predict the erosion within the plasma-facing components of the VX-CR device. Experimental techniques to measure actual erosion, including the use of coordinate-measuring machines and microscopy, will be discussed.
Acceleration of auroral electrons in parallel electric fields
NASA Technical Reports Server (NTRS)
Kaufmann, R. L.; Walker, D. N.; Arnoldy, R. L.
1976-01-01
Rocket observations of auroral electrons are compared with the predictions of a number of theoretical acceleration mechanisms that involve an electric field parallel to the earth's magnetic field. The theoretical models are discussed in terms of required plasma sources, the location of the acceleration region, and properties of necessary wave-particle scattering mechanisms. We have been unable to find any steady state scatter-free electric field configuration that predicts electron flux distributions in agreement with the observations. The addition of a fluctuating electric field or wave-particle scattering several thousand kilometers above the rocket can modify the theoretical flux distributions so that they agree with measurements. The presence of very narrow energy peaks in the flux contours implies a characteristic temperature of several tens of electron volts or less for the source of field-aligned auroral electrons and a temperature of several hundred electron volts or less for the relatively isotropic 'monoenergetic' auroral electrons. The temperature of the field-aligned electrons is more representative of the magnetosheath or possibly the ionosphere as a source region than of the plasma sheet.
Near-field sound radiation of fan tones from an installed turbofan aero-engine.
McAlpine, Alan; Gaffney, James; Kingan, Michael J
2015-09-01
The development of a distributed source model to predict fan tone noise levels of an installed turbofan aero-engine is reported. The key objective is to examine a canonical problem: how to predict the pressure field due to a distributed source located near an infinite, rigid cylinder. This canonical problem is a simple representation of an installed turbofan, where the distributed source is based on the pressure pattern generated by a spinning duct mode, and the rigid cylinder represents an aircraft fuselage. The radiation of fan tones can be modelled in terms of spinning modes. In this analysis, based on duct modes, theoretical expressions for the near-field acoustic pressures on the cylinder, or at the same locations without the cylinder, have been formulated. Simulations of the near-field acoustic pressures are compared against measurements obtained from a fan rig test. Also, the installation effect is quantified by calculating the difference in the sound pressure levels with and without the adjacent cylindrical fuselage. Results are shown for the blade passing frequency fan tone radiated at a supersonic fan operating condition.
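For context, a standard textbook representation of such a spinning mode in a hard-walled circular duct of radius a is (notation assumed here, not necessarily the paper's):

```latex
p'(x, r, \theta, t) = \operatorname{Re}\!\left\{ A\, J_{m}\!\left(\kappa_{mn} r\right)
  \, e^{\,\mathrm{i}\left(\omega t - m\theta - k_{x} x\right)} \right\},
\qquad
J_{m}'\!\left(\kappa_{mn} a\right) = 0,
```

where J_m is the Bessel function of order m and the hard-wall condition fixes the radial eigenvalues. For rotor-stator interaction tones, the azimuthal orders follow the Tyler-Sofrin rule m = sB - kV for a fan with B blades and V vanes, s being the harmonic of the blade passing frequency.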
Permeable Surface Corrections for Ffowcs Williams and Hawkings Integrals
NASA Technical Reports Server (NTRS)
Lockard, David P.; Casper, Jay H.
2005-01-01
The acoustic prediction methodology discussed herein applies an acoustic analogy to calculate the sound generated by sources in an aerodynamic simulation. Sound is propagated from the computed flow field by integrating the Ffowcs Williams and Hawkings equation on a suitable control surface. Previous research suggests that, for some applications, the integration surface must be placed away from the solid surface to incorporate source contributions from within the flow volume. As such, the fluid mechanisms in the input flow field that contribute to the far-field noise are accounted for by their mathematical projection as a distribution of source terms on a permeable surface. The passage of nonacoustic disturbances through such an integration surface can result in significant error in an acoustic calculation. A correction for the error is derived in the frequency domain using a frozen gust assumption. The correction is found to work reasonably well in several test cases where the error is a small fraction of the actual radiated noise. However, satisfactory agreement has not been obtained between noise predictions using the solution from a three-dimensional, detached-eddy simulation of flow over a cylinder.
NASA Technical Reports Server (NTRS)
Khavaran, Abbas; Bridges, James; Georgiadis, Nicholas
2005-01-01
The model-based approach, used by the JeNo code to predict jet noise spectral directivity, is described. A linearized form of Lilley's equation governs the non-causal Green's function of interest, with the non-linear terms on the right-hand side identified as the source. A Reynolds-averaged Navier-Stokes (RANS) solution yields the mean flow required for the solution of the propagation Green's function in a locally parallel flow. The RANS solution also produces the time- and length-scales needed to model the non-compact source, the turbulent velocity correlation tensor, with exponential temporal and spatial functions. It is shown that while an exact non-causal Green's function accurately predicts the observed shift in the location of the spectrum peak with angle as well as the angularity of sound at low to moderate Mach numbers, the polar directivity of radiated sound is not entirely captured by this Green's function at high subsonic and supersonic acoustic Mach numbers. Results presented for unheated jets in the Mach number range of 0.51 to 1.8 suggest that near the peak radiation angle of high-speed jets, a different source/Green's function convolution integral may be required in order to capture the peak observed directivity of jet noise. A sample Mach 0.90 heated jet is also discussed that highlights the requirements for a comprehensive jet noise prediction model.
On climate prediction: how much can we expect from climate memory?
NASA Astrophysics Data System (ADS)
Yuan, Naiming; Huang, Yan; Duan, Jianping; Zhu, Congwen; Xoplaki, Elena; Luterbacher, Jürg
2018-03-01
Slow variability in the climate system is an important source of climate predictability. However, it is still challenging for current dynamical models to fully capture this variability as well as its impacts on future climate. In this study, instead of simulating the internal multi-scale oscillations in dynamical models, we discuss the effects of internal variability in terms of climate memory. By decomposing the climate state x(t) at a certain time point t into a memory part M(t) and a non-memory part ε(t), climate memory effects from the past 30 years on climate prediction are quantified. For variables with strong climate memory, a high fraction of the variance (over 20%) in x(t) is explained by the memory part M(t), and the effects of climate memory are non-negligible for most climate variables except precipitation. Regarding multi-step climate prediction, a power-law decay of the explained variance was found, indicating long-lasting climate memory effects. The variance explained by climate memory can remain higher than 10% for more than 10 time steps. Accordingly, past climate conditions can affect both short-term (monthly) and long-term (interannual, decadal, or even multidecadal) climate predictions. With the memory part M(t) precisely calculated from the Fractional Integral Statistical Model, one only needs to focus on the non-memory part ε(t), which is an important quantity that determines climate predictive skills.
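Since the abstract does not reproduce the Fractional Integral Statistical Model equations, the following Python sketch only illustrates the decomposition x(t) = M(t) + ε(t) using a truncated fractional-integration (ARFIMA-style) predictor; the memory exponent d and the 30-year (360-month) window are assumptions.

```python
# Illustrative memory/non-memory decomposition of a climate series.
import numpy as np

def frac_diff_weights(d, k_max):
    """pi_k coefficients of (1-B)^d: pi_0 = 1, pi_k = pi_{k-1}(k-1-d)/k."""
    w = np.empty(k_max + 1)
    w[0] = 1.0
    for k in range(1, k_max + 1):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def memory_part(x, d, k_max=360):
    """M(t) predicted from the past k_max values (e.g. 30 years, monthly)."""
    pi = frac_diff_weights(d, k_max)
    M = np.full_like(x, np.nan, dtype=float)
    for t in range(k_max, len(x)):
        M[t] = -np.dot(pi[1:], x[t - 1::-1][:k_max])  # past values, newest first
    return M

rng = np.random.default_rng(2)
x = rng.normal(size=1200)            # placeholder climate anomaly series
M = memory_part(x, d=0.3)
eps = x - M
valid = ~np.isnan(M)
explained = 1 - np.var(eps[valid]) / np.var(x[valid])  # memory-explained variance
```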
Widdowson, M.A.; Chapelle, F.H.; Brauner, J.S.; ,
2003-01-01
A method is developed for optimizing monitored natural attenuation (MNA) and the reduction in the aqueous source zone concentration (ΔC) required to meet a site-specific regulatory target concentration. The mathematical model consists of two one-dimensional equations of mass balance for the aqueous-phase contaminant, to coincide with up to two distinct zones of transformation, and appropriate boundary and intermediate conditions. The solution is written in terms of zone-dependent Peclet and Damköhler numbers. The model is illustrated at a chlorinated solvent site where MNA was implemented following source treatment using in-situ chemical oxidation. The results demonstrate that by not taking into account a variable natural attenuation capacity (NAC), a lower target ΔC is predicted, resulting in unnecessary source concentration reduction and cost with little benefit to achieving site-specific remediation goals.
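A hedged reconstruction of the type of single-zone governing equation described (the paper couples two such zones through intermediate conditions) is:

```latex
\frac{1}{\mathrm{Pe}}\frac{d^{2}C}{d\xi^{2}} - \frac{dC}{d\xi} - \mathrm{Da}\,C = 0,
\qquad
\mathrm{Pe} = \frac{vL}{D}, \quad \mathrm{Da} = \frac{\lambda L}{v}, \quad \xi = \frac{x}{L},
```

for which the decaying solution on a semi-infinite domain gives, at the compliance point ξ = 1,

```latex
\frac{C(L)}{C_{0}} = \exp\!\left[\frac{\mathrm{Pe}}{2}\left(1 - \sqrt{1 + \frac{4\,\mathrm{Da}}{\mathrm{Pe}}}\right)\right],
```

so that the required source reduction ΔC follows from inverting this expression for the regulatory target concentration.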
NASA Technical Reports Server (NTRS)
Wang, S.-Y. Simon; Barandiaran, Danny; Hilburn, Kyle; Houser, Paul; Oglesby, Bob; Pan, Ming; Pinker, Rachel; Santanello, Joe; Schubert, Siegfried; Wang, Hailan;
2015-01-01
This paper summarizes research related to the 2012 record drought in the central United States conducted by members of the NASA Energy and Water cycle Study (NEWS) Working Group. Past drought patterns were analyzed for signal coherency with the latest drought, including the contribution of long-term trends in the Great Plains low-level jet, an important regional circulation feature of the spring rainy season in the Great Plains. Long-term changes in the seasonal transition from rainy spring into dry summer were also examined. Potential external forcing from radiative processes, soil-air interactions, and ocean teleconnections was assessed as a contributor to the intensity of the drought. Atmospheric Rossby wave activity was found to be a potential source of predictability for the onset of the drought. A probabilistic model was introduced and evaluated for its performance in predicting drought recovery in the Great Plains.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, S.; Toll, J.; Cothern, K.
1995-12-31
The authors have performed robust sensitivity studies of the physico-chemical Hudson River PCB model PCHEPM to identify the parameters and process uncertainties contributing the most to uncertainty in predictions of water column and sediment PCB concentrations, over the time period 1977-1991 in one segment of the lower Hudson River. The term "robust sensitivity studies" refers to the use of several sensitivity analysis techniques to obtain a more accurate depiction of the relative importance of different sources of uncertainty. Local sensitivity analysis provided data on the sensitivity of PCB concentration estimates to small perturbations in nominal parameter values. Range sensitivity analysis provided information about the magnitude of prediction uncertainty associated with each input uncertainty. Rank correlation analysis indicated which parameters had the most dominant influence on model predictions. Factorial analysis identified important interactions among model parameters. Finally, term analysis looked at the aggregate influence of combinations of parameters representing physico-chemical processes. The authors scored the results of the local and range sensitivity and rank correlation analyses. The authors considered parameters that scored high on two of the three analyses to be important contributors to PCB concentration prediction uncertainty, and treated them probabilistically in simulations. They also treated probabilistically parameters identified in the factorial analysis as interacting with important parameters. The authors used the term analysis to better understand how uncertain parameters were influencing the PCB concentration predictions. The importance analysis allowed the authors to reduce the number of parameters to be modeled probabilistically from 16 to 5. This reduced the computational complexity of Monte Carlo simulations, and more importantly, provided a more lucid depiction of prediction uncertainty and its causes.
Using internet searches for influenza surveillance.
Polgreen, Philip M; Chen, Yiling; Pennock, David M; Nelson, Forrest D
2008-12-01
The Internet is an important source of health information. Thus, the frequency of Internet searches may provide information regarding infectious disease activity. As an example, we examined the relationship between searches for influenza and actual influenza occurrence. Using search queries from the Yahoo! search engine (http://search.yahoo.com) from March 2004 through May 2008, we counted daily unique queries originating in the United States that contained influenza-related search terms. Counts were divided by the total number of searches, and the resulting daily fraction of searches was averaged over the week. We estimated linear models, using searches with 1-10-week lead times as explanatory variables to predict the percentage of cultures positive for influenza and deaths attributable to pneumonia and influenza in the United States. With use of the frequency of searches, our models predicted an increase in cultures positive for influenza 1-3 weeks in advance of when they occurred (P < .001), and similar models predicted an increase in mortality attributable to pneumonia and influenza up to 5 weeks in advance (P < .001). Search-term surveillance may provide an additional tool for disease surveillance.
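A minimal sketch of such a lagged linear model follows; the data are synthetic placeholders, and the 2-week lead is only one example of the 1-10-week range described.

```python
# Sketch of a lagged linear model: weekly search fractions at a fixed
# lead time used to predict influenza activity.
import numpy as np

def fit_lagged_model(search_frac, flu_activity, lead_weeks):
    """OLS fit of flu activity on the search fraction `lead_weeks` earlier."""
    x = search_frac[:-lead_weeks]    # predictor, observed lead_weeks earlier
    y = flu_activity[lead_weeks:]    # target at the later week
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta                      # [intercept, slope]

# Example with synthetic weekly data: searches lead activity by 2 weeks.
rng = np.random.default_rng(3)
weeks = 220
flu = np.abs(np.sin(np.arange(weeks) * 2 * np.pi / 52)) * 30
searches = np.roll(flu, -2) / 100 + rng.normal(0, 0.02, weeks)
print(fit_lagged_model(searches, flu, lead_weeks=2))
```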
Cysewski, Piotr; Jeliński, Tomasz
2013-10-01
The electronic spectra of four different anthraquinones (1,2-dihydroxyanthraquinone, 1-aminoanthraquinone, 2-aminoanthraquinone and 1-amino-2-methylanthraquinone) in methanol solution were measured and used as reference data for theoretical color prediction. The visible part of the spectrum was modeled within the TD-DFT framework with a broad range of DFT functionals. The convoluted theoretical spectra were validated against experimental data by a direct color comparison in terms of the CIE XYZ and CIE Lab tristimulus color models. It was found that the 6-31G** basis set provides the most accurate color prediction and that there is no need to extend the basis set, since doing so does not improve the prediction of color. Although different functionals were found to give the most accurate color prediction for different anthraquinones, it is possible to apply the same DFT approach to the whole set of analyzed dyes. Three functionals in particular seem valuable, namely mPW1LYP, B1LYP and PBE0, due to their very similar spectra predictions. The major source of discrepancies between theoretical and experimental spectra comes from the L values, representing lightness, and the a parameter, depicting the position on the green→magenta axis. Fortunately, the agreement between the computed and observed blue→yellow axis (parameter b) is very precise in the case of the studied anthraquinone dyes in methanol solution. Despite the discussed shortcomings, color prediction from first-principles quantum chemistry computations can lead to quite satisfactory results, expressed in terms of color space parameters.
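The spectrum-to-color validation step described can be sketched as follows; the colour-matching-function input is an assumed placeholder (e.g. the CIE 1931 2° observer tabulated on the same wavelength grid), while the Lab formulas are the standard CIE definitions.

```python
# Sketch: convert a (convoluted) visible spectrum to CIE XYZ by
# integration against colour-matching functions, then to CIE Lab.
import numpy as np

def spectrum_to_lab(wl, spectrum, cmf_xyz, white_xyz):
    """wl: wavelengths [nm]; spectrum: transmitted/reflected intensity;
    cmf_xyz: (N, 3) colour-matching functions sampled on wl;
    white_xyz: XYZ of the reference white."""
    dwl = wl[1] - wl[0]                        # assume uniform sampling, e.g. 5 nm
    XYZ = (spectrum[:, None] * cmf_xyz).sum(axis=0) * dwl
    XYZ = XYZ / (cmf_xyz[:, 1].sum() * dwl)    # normalize by the Y integral

    def f(t):                                  # standard CIE Lab companding
        eps = (6 / 29) ** 3
        return np.where(t > eps, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)

    fx, fy, fz = f(XYZ / white_xyz)
    L = 116 * fy - 16                          # lightness
    a = 500 * (fx - fy)                        # green (-) to magenta/red (+)
    b = 200 * (fy - fz)                        # blue (-) to yellow (+)
    return np.array([L, a, b])
```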
Short-term dynamics of indoor and outdoor endotoxin exposure: Case of Santiago, Chile, 2012.
Barraza, Francisco; Jorquera, Héctor; Heyer, Johanna; Palma, Wilfredo; Edwards, Ana María; Muñoz, Marcelo; Valdivia, Gonzalo; Montoya, Lupita D
2016-01-01
Indoor and outdoor endotoxin in PM2.5 was measured for the first time in Santiago, Chile, in spring 2012. Average endotoxin concentrations were 0.099 and 0.094 EU/m³ for indoor (N=44) and outdoor (N=41) samples, respectively; the indoor-outdoor correlation (log-transformed concentrations) was low: R=-0.06, 95% CI: (-0.35 to 0.24), likely owing to outdoor spatial variability. A linear regression model explained 68% of the variability in outdoor endotoxin, using as predictors elemental carbon (a proxy for traffic emissions), chlorine (a tracer of marine air masses reaching the city) and relative humidity (a modulator of surface emissions of dust, vegetation and garbage debris). In this study, a potential source contribution function (PSCF) was applied to outdoor endotoxin measurements for the first time. Wind trajectory analysis identified upwind agricultural sources as contributors to short-term outdoor endotoxin variability. Our results confirm an association between combustion particles from traffic and outdoor endotoxin concentrations. For indoor endotoxin, a predictive model was developed, but it explained only 44% of the endotoxin variability; the significant predictors were tracers of indoor PM2.5 dust (Si, Ca), the number of external windows and the number of hours with internal doors open. Results suggest that short-term indoor endotoxin variability may be driven by household dust/garbage production and handling. This would explain the modest predictive performance of published models that use answers to household surveys as predictors. One feasible alternative is to increase the sampling period so that household features would arise as significant predictors of long-term airborne endotoxin levels. Copyright © 2016 Elsevier Ltd. All rights reserved.
Forecasting Future Sea Ice Conditions in the MIZ: A Lagrangian Approach
2013-09-30
www.mcgill.ca/meteo/people/tremblay LONG-TERM GOALS 1- Determine the source regions for sea ice in the seasonally ice-covered zones (SIZs)... distribution of sea ice cover and transport pathways. 2- Improve our understanding of the strengths and/or limitations of GCM predictions of future... ocean currents, RGPS sea ice deformation, reanalysis surface winds, surface radiative fluxes, etc. Processing the large datasets involved is a tedious
Numerical simulations of LNG vapor dispersion in Brayton Fire Training Field tests with ANSYS CFX.
Qi, Ruifeng; Ng, Dedy; Cormier, Benjamin R; Mannan, M Sam
2010-11-15
Federal safety regulations require the use of validated consequence models to determine the vapor cloud dispersion exclusion zones for accidental liquefied natural gas (LNG) releases. One tool that is being developed in industry for exclusion zone determination and LNG vapor dispersion modeling is computational fluid dynamics (CFD). This paper uses the ANSYS CFX CFD code to model LNG vapor dispersion in the atmosphere. Discussed are important parameters that are essential inputs to the ANSYS CFX simulations, including the atmospheric conditions, LNG evaporation rate and pool area, turbulence in the source term, ground surface temperature and roughness height, and effects of obstacles. A sensitivity analysis was conducted to illustrate uncertainties in the simulation results arising from the mesh size and source term turbulence intensity. In addition, a set of medium-scale LNG spill tests were performed at the Brayton Fire Training Field to collect data for validating the ANSYS CFX prediction results. A comparison of test data with simulation results demonstrated that CFX was able to describe the dense gas behavior of LNG vapor cloud, and its prediction results of downwind gas concentrations close to ground level were in approximate agreement with the test data. Copyright © 2010 Elsevier B.V. All rights reserved.
The prediction of the noise of supersonic propellers in time domain - New theoretical results
NASA Technical Reports Server (NTRS)
Farassat, F.
1983-01-01
In this paper, a new formula for the prediction of the noise of supersonic propellers is derived in the time domain which is superior to the previous formulations in several respects. The governing equation is based on the Ffowcs Williams-Hawkings (FW-H) equation with the thickness source term replaced by an equivalent loading source term derived by Isom (1975). Using some results of generalized function theory and simple four-dimensional space-time geometry, the formal solution of the governing equation is manipulated to a form requiring only the knowledge of blade surface pressure data and geometry. The final form of the main result of this paper consists of some surface and line integrals. The surface integrals depend on the surface pressure, time rate of change of surface pressure, and surface pressure gradient. These integrals also involve blade surface curvatures. The line integrals which depend on local surface pressure are along the trailing edge, the shock traces on the blade, and the perimeter of the airfoil section at the inner radius of the blade. The new formulation is for the full blade surface and does not involve any numerical observer time differentiation. The method of implementation on a computer for numerical work is also discussed.
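For reference, the governing FW-H equation takes the following standard generalized-function form, with f = 0 defining the blade surface; per the abstract, the first (thickness) source term is replaced by Isom's equivalent loading term:

```latex
\Box^{2} p' \equiv
\frac{1}{c^{2}}\frac{\partial^{2} p'}{\partial t^{2}} - \nabla^{2} p'
= \frac{\partial}{\partial t}\!\left[\rho_{0} v_{n}\,\delta(f)\right]
- \frac{\partial}{\partial x_{i}}\!\left[\ell_{i}\,\delta(f)\right]
+ \frac{\partial^{2}}{\partial x_{i}\,\partial x_{j}}\!\left[T_{ij}\,H(f)\right],
```

where v_n is the local normal velocity of the surface, ℓ_i the local force per unit area exerted on the fluid, T_ij the Lighthill stress tensor, and δ and H the Dirac and Heaviside functions.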
NASA Astrophysics Data System (ADS)
Difilippo, E. L.; Hammond, D. E.; Douglas, R.; Clark, J. F.; Avisar, D.; Dunker, R.
2004-12-01
The Abalone Cove landslide occupies 80 acres of an ancient landslide complex on the Palos Verdes peninsula, and was re-activated in 1979. The uphill portion of the ancient landslide complex has remained stable in historic times. Water infiltration into the slide is a short-term catalyst for mass movement in the area, so it is important to determine the sources of groundwater throughout the slide mass. Water may enter the slide mass through direct percolation of recent precipitation, inflow along the head scarp of the ancient landslide or by rising through the slide plane from a deeper aquifer. The objective of this contribution is to use geochemical tracers (tritium and CFC-12) in combination with numerical modeling to constrain the importance of each of these sources. Numerical models were constructed to predict geochemical tracer concentrations throughout the basin, assuming that the only source of water to the slide mass is percolation of recent precipitation. Predicted concentrations were then compared to measured tracer values. In the ancient landslide, predicted and measured tracer concentrations are in good agreement, indicating that most of the water in this area is recent precipitation falling within the basin. Groundwater recharged uphill of the ancient landslide contributes minor flow into the complex through the head scarp, with the majority of this water flowing beneath the ancient slide plane. However, predicted tracer concentrations in the toe of the Abalone Cove landslide are not consistent with measured values. Both CFC-12 and tritium concentrations indicate that water is older than predicted and communication between the slide mass and the aquifer beneath the slide plane must occur in this area. Infiltration of this deep circulating water may exert upward hydraulic pressure on the landslide slip surface, increasing the potential for movement. This hypothesis is consistent with the observation that current movement is only occurring in the area in which tracers indicate communication with the deeper aquifer.
Effects of physical aging on long-term behavior of composites
NASA Technical Reports Server (NTRS)
Brinson, L. Catherine
1993-01-01
The HSCT plane, envisioned to have a lifetime of over 60,000 flight hours and to travel at speeds in excess of Mach 2, is the subject of intensive study at NASA. In particular, polymer matrix composites are being strongly considered for use in primary and secondary structures due to their high strength-to-weight ratio and the options for property tailoring. However, an added difficulty in the use of polymer based materials is that their properties change significantly over time, especially at the elevated temperatures that will be experienced during flight, and prediction of properties under irregular thermal and mechanical loading is extremely difficult. This study focused on one aspect of long-term polymer composite behavior: physical aging. When a polymer is cooled to below its glass transition temperature, the material is not in thermodynamic equilibrium, and the free volume and enthalpy evolve over time to approach their equilibrium values. During this time, the mechanical properties change significantly, and this change is termed physical aging. This work begins with a review of the concepts of physical aging in a pure polymer system. The effective time theory, which can be used to predict long-term behavior based on short-term data, is mathematically formalized. The effects of aging to equilibrium are proven and discussed. The theory developed for polymers is then applied first to a unidirectional composite, then to a general laminate. Agreement with experimental data is excellent. It is shown that the effects of aging on the long-term properties of composites can be counter-intuitive, stressing the importance of the development and use of a predictive theory to analyze structures.
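The effective time theory mentioned here maps short-term "momentary" creep data to long-term response by replacing real time with a reduced effective time that accounts for the continual shift of retardation times as the material ages. A minimal numerical sketch under textbook assumptions (a constant aging shift rate mu and a stretched-exponential momentary compliance; every parameter value is invented) is:

    import numpy as np

    mu, te0 = 0.9, 1e3   # aging shift rate and age at loading [s] (invented)
    D0, D1, tau, beta = 1.0, 0.5, 1e4, 0.4  # momentary-curve parameters (invented)

    def momentary_compliance(t):
        # Short-term creep compliance measured at the reference aging time
        return D0 + D1 * (1.0 - np.exp(-(t / tau) ** beta))

    def effective_time(t):
        # lambda(t) = integral_0^t (te0 / (te0 + s))**mu ds (closed form, mu != 1)
        return te0 / (1.0 - mu) * ((1.0 + t / te0) ** (1.0 - mu) - 1.0)

    t = np.logspace(2, 9, 8)  # real times out to ~30 years [s]
    print(momentary_compliance(effective_time(t)))  # aging-retarded long-term creep

Because the effective time grows more slowly than real time, the predicted long-term compliance evolves more slowly than a naive extrapolation of the short-term curve, which is the qualitative point of the theory.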
NASA Technical Reports Server (NTRS)
Zavodsky, Bradley; McCarty, Will; Chou, Shih-Hung; Jedlovec, Gary
2009-01-01
The Atmospheric Infrared Sounder (AIRS) is acting as a heritage and risk reduction instrument for the Cross-track Infrared Sounder (CrIS) to be flown aboard the NPP and NPOESS satellites. The hyperspectral nature of AIRS and CrIS provides high-quality soundings that, along with their asynoptic observation time over North America, make them attractive sources to fill the spatial and temporal data voids in upper air temperature and moisture measurements for use in data assimilation and numerical weather prediction. Observations from AIRS can be assimilated either as direct radiances or retrieved thermodynamic profiles, and the Short-term Prediction Research and Transition (SPoRT) Center at NASA's Marshall Space Flight Center has used both data types to improve short-term (0-48 h), regional forecasts. The purpose of this paper is to share SPoRT's experiences using AIRS radiances and retrieved profiles in regional data assimilation activities by showing that proper handling of issues, including cloud contamination and land emissivity characterization, is necessary to produce optimal analyses and forecasts.
Asymptotic/numerical analysis of supersonic propeller noise
NASA Technical Reports Server (NTRS)
Myers, M. K.; Wydeven, R.
1989-01-01
An asymptotic analysis based on the Mach surface structure of the field of a supersonic helical source distribution is applied to predict thickness and loading noise radiated by high speed propeller blades. The theory utilizes an integral representation of the Ffowcs Williams-Hawkings equation in a fully linearized form. The asymptotic results are used for chordwise strips of the blade, while required spanwise integrations are performed numerically. The form of the analysis enables predicted waveforms to be interpreted in terms of Mach surface propagation. A computer code developed to implement the theory is described and found to yield results in close agreement with more exact computations.
Prediction of facial cooling while walking in cold wind.
Tikuisis, Peter; Ducharme, Michel B; Brajkovic, Dragan
2007-09-01
A dynamic model of cheek cooling has been modified to account for the increased skin blood circulation of individuals walking in cold wind. This was achieved by modelling the cold-induced vasodilation response as a varying blood perfusion term, which provides a source of convective heat to the skin tissues of the model. Physiologically valid blood perfusion was fitted to replicate the cheek skin temperature responses of 12 individuals experimentally exposed to air temperatures from -10 to 10 °C at wind speeds from 2 to 8 m/s. The resultant cheek skin temperatures met goodness-of-fit criteria, and implications for wind chill prediction are discussed.
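The perfusion term referred to here enters the tissue energy balance as a volumetric heat source proportional to (T_arterial - T), as in the Pennes bioheat equation. The explicit finite-difference sketch below is not the authors' model; the slab geometry, material properties, boundary conditions, and the constant perfusion value are placeholder assumptions (in the paper, perfusion varies to represent cold-induced vasodilation).

    import numpy as np

    L, nx = 0.01, 51                  # 1-cm tissue slab, grid (assumed)
    dx = L / (nx - 1)
    k, rho, c = 0.5, 1085.0, 3680.0   # tissue conductivity/density/specific heat
    w = 0.002                         # blood perfusion [1/s] (placeholder constant)
    rho_b, c_b, T_art = 1060.0, 3840.0, 37.0
    h, T_air = 60.0, -10.0            # wind convection coefficient, air temperature

    T = np.full(nx, 34.0)             # initial tissue temperature [degC]
    dt = 0.2 * rho * c * dx**2 / k    # stable explicit time step
    for _ in range(int(600 / dt)):    # simulate 10 minutes of exposure
        lap = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
        T[1:-1] += dt / (rho * c) * (k * lap
                    + w * rho_b * c_b * (T_art - T[1:-1]))  # perfusion heat source
        T[0] = T_art                                        # core side held warm
        T[-1] = (k / dx * T[-2] + h * T_air) / (k / dx + h) # convective skin node
    print("skin temperature after 10 min: %.1f degC" % T[-1])

Raising w in the sketch visibly slows the skin temperature drop, which is the mechanism the varying-perfusion term exploits to reproduce the observed vasodilation response.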
NASA Astrophysics Data System (ADS)
Sharifian, Mohammad Kazem; Kesserwani, Georges; Hassanzadeh, Yousef
2018-05-01
This work extends a robust second-order Runge-Kutta Discontinuous Galerkin (RKDG2) method to fully nonlinear and weakly dispersive flows, with the aim of simultaneously addressing accuracy, conservativeness, cost-efficiency and practical needs. The mathematical model governing such flows is a variant form of the Green-Naghdi (GN) equations, decomposed as a hyperbolic shallow water system with an elliptic source term. Practical features of relevance (i.e. conservative modeling over irregular terrain with wetting and drying, and local slope limiting) have been carried over from an RKDG2 solver for the Nonlinear Shallow Water (NSW) equations, alongside new considerations to integrate elliptic source terms (via a fourth-order local discretization of the topography) and to enable local capturing of breaking waves (via a detector for switching off the dispersive terms). Numerical results are presented, demonstrating the overall capability of the proposed approach in achieving realistic predictions of nearshore wave processes involving both nonlinearity and dispersion effects within a single model.
NASA Astrophysics Data System (ADS)
Yoshida, Satoshi
Applications of inductively coupled plasma mass spectrometry (ICP-MS) to the determination of long-lived radionuclides in environmental samples are summarized. In order to predict the long-term behavior of the radionuclides, related stable elements were also determined. Compared with radioactivity measurements, the ICP-MS method has advantages in terms of its simple analytical procedures, prompt measurement time, and capability of determining isotope ratios such as 240Pu/239Pu, which cannot be resolved by radiation measurements. Concentrations of U and Th in Japanese surface soils were determined in order to establish the background level of these natural radionuclides. The 235U/238U ratio was successfully used to detect the release of enriched U from reconversion facilities to the environment and to understand the source term. The 240Pu/239Pu ratios in environmental samples varied widely depending on the Pu sources. Applications of ICP-MS to the measurement of I and Tc isotopes are also described. The ratio between radiocesium and stable Cs is useful for judging the equilibrium of deposited radiocesium in a forest ecosystem.
Radiative heat transfer in strongly forward scattering media of circulating fluidized bed combustors
NASA Astrophysics Data System (ADS)
Ates, Cihan; Ozen, Guzide; Selçuk, Nevin; Kulah, Gorkem
2016-10-01
Investigation of the effect of particle scattering on radiative incident heat fluxes and source terms is carried out in the dilute zone of the lignite-fired 150 kWt Middle East Technical University Circulating Fluidized Bed Combustor (METU CFBC) test rig. The dilute zone is treated as an axisymmetric cylindrical enclosure containing grey/non-grey, absorbing, emitting gas with absorbing, emitting non/isotropically/anisotropically scattering particles surrounded by grey diffuse walls. A two-dimensional axisymmetric radiation model based on Method of Lines (MOL) solution of the Discrete Ordinates Method (DOM), coupled with a Grey Gas (GG)/Spectral Line-Based Weighted Sum of Grey Gases (SLW) model and Mie theory/geometric optics approximation (GOA), is extended to incorporate anisotropic scattering by using the normalized Henyey-Greenstein (HG)/transport approximation for the phase function. Input data for the radiation model are obtained from predictions of a comprehensive model previously developed and benchmarked against measurements on the same CFBC burning low calorific value indigenous lignite with high volatile matter/fixed carbon (VM/FC) ratio in its own ash. Predictive accuracy and computational efficiency of nonscattering, isotropic scattering and forward scattering with the transport approximation are tested by comparing their predictions with those of forward scattering with HG. GG and GOA based on reflectivity with angular dependency are found to be accurate and CPU efficient. Comparisons reveal that the isotropic assumption leads to under-prediction of both incident heat fluxes and source terms, with a much larger discrepancy for the source terms. On the other hand, predictions obtained by neglecting scattering were found to be in favorable agreement with those of forward scattering at significantly less CPU time. The transport approximation is as accurate and CPU efficient as HG. These findings indicate that neglecting scattering is a more practical choice in the solution of the radiative transfer equation (RTE) in conjunction with conservation equations for the system under consideration.
NASA Astrophysics Data System (ADS)
Munoz-Arriola, F.; Torres-Alavez, J.; Mohamad Abadi, A.; Walko, R. L.
2014-12-01
Our goal is to investigate possible sources of predictability of hydrometeorological extreme events in the Northern High Plains (NHP). Hydrometeorological extreme events are considered the most costly natural phenomena. Water deficits and surpluses highlight how water-climate interdependence becomes crucial in areas where a single activity, such as agriculture in the NHP, drives the economy. Although we recognize the water-climate interdependence and the regulatory role that human activities play, we still grapple with identifying what sources of predictability could be added to flood and drought forecasts. To identify the benefit of multi-scale climate modeling and the role of initial conditions in flood and drought predictability in the NHP, we use the Ocean Land Atmospheric Model (OLAM). OLAM's dynamic core uses a global geodesic grid with hexagonal (and variably refined) mesh cells, a finite-volume discretization of the fully compressible Navier-Stokes equations, and a cut-cell method for topography (which reduces errors in gradient computation and anomalous vertical dispersion). Our hypothesis is that wet initial conditions will drive OLAM's precipitation simulations toward wetter conditions, affecting both flood and drought forecasts. To test this hypothesis we simulate precipitation during identified historical flood events followed by drought events in the NHP (i.e. the 2011-2012 years). We initialized OLAM with CFS data 1-10 days prior to a flood event (as initial conditions) to explore (1) short-term, high-resolution and (2) long-term, coarse-resolution simulations of flood and drought events, respectively. While floods are assessed during refined-mesh simulations of at most 15 days, drought is evaluated during the following 15 months. Simulated precipitation will be compared with the Sub-continental Observation Dataset, a gridded 1/16th-degree resolution dataset obtained from climatological stations in Canada, the US, and Mexico. This in-progress research will ultimately contribute to integrating the OLAM and VIC models and improving the predictability of extreme hydrometeorological events.
On the sound field radiated by a tuning fork
NASA Astrophysics Data System (ADS)
Russell, Daniel A.
2000-12-01
When a sounding tuning fork is brought close to the ear, and rotated about its long axis, four distinct maxima and minima are heard. However, when the same tuning fork is rotated while being held at arm's length from the ear only two maxima and minima are heard. Misconceptions concerning this phenomenon are addressed and the fundamental mode of the fork is described in terms of a linear quadrupole source. Measured directivity patterns in the near field and far field of several forks agree very well with theoretical predictions for a linear quadrupole. Other modes of vibration are shown to radiate as dipole and lateral quadrupole sources.
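The linear quadrupole description can be checked directly by summing the fields of three in-line monopoles with source strengths +1, -2, +1 (the standard idealization; the spacing and frequency below are arbitrary illustrative values, not measurements of a particular fork):

    import numpy as np

    k = 2 * np.pi * 440.0 / 343.0   # wavenumber for a 440-Hz fork in air
    d = 0.01                        # source spacing [m] (illustrative)
    srcs = [(-d, 1.0), (0.0, -2.0), (d, 1.0)]   # linear quadrupole along x

    def pressure_mag(r, theta):
        # Exact sum of free-space monopole fields, valid at any distance
        x0, y0 = r * np.cos(theta), r * np.sin(theta)
        p = sum(q * np.exp(1j * k * np.hypot(x0 - xs, y0)) / np.hypot(x0 - xs, y0)
                for xs, q in srcs)
        return np.abs(p)

    th = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
    for name, r in (("near (5 cm)", 0.05), ("far (1 m)", 1.0)):
        pat = pressure_mag(r, th)
        is_peak = ((pat > np.roll(pat, 1)) & (pat > np.roll(pat, -1))
                   & (pat > 0.2 * pat.max()))   # ignore faint minor lobes
        print(name, "-> prominent angular maxima:", int(is_peak.sum()))

The sketch reproduces the phenomenon described above: four prominent angular maxima close to the source, collapsing to two in the far field, where the directivity approaches the cos^2(theta) pattern of a longitudinal quadrupole.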
Compact Assumption Applied to the Monopole Term of Farassat's Formulations
NASA Technical Reports Server (NTRS)
Lopes, Leonard V.
2015-01-01
Farassat's formulations provide an acoustic prediction at an observer location, given a source surface together with its motion and flow conditions. This paper presents compact forms for the monopole term of several of Farassat's formulations. When the physical surface is elongated, as in the case of a high-aspect-ratio rotorcraft blade, compact forms can be derived that reduce the computation from a surface integral to a line integral and are shown to be a function of the blade cross-sectional area. The compact forms of all formulations are applied to two example cases: a short span wing with constant airfoil cross section moving at three forward flight Mach numbers, and a rotor at two advance ratios. Acoustic pressure time histories and power spectral densities of monopole noise predicted from the compact forms of all the formulations at several observer positions are shown to compare very closely with the predictions from their non-compact counterparts. A study of the influence of rotorcraft blade shape on the high frequency portion of the power spectral density shows that there is a direct correlation between the aspect ratio of the airfoil and the error incurred by using the compact form. Finally, a prediction of pressure gradient from the non-compact and compact forms of the thickness term of Formulation G1A shows that using the compact forms results in a 99.6% improvement in computation time, which will be critical when noise is incorporated into a design environment.
Simulation and Prediction of Warm Season Drought in North America
NASA Technical Reports Server (NTRS)
Wang, Hailan; Chang, Yehui; Schubert, Siegfried D.; Koster, Randal D.
2018-01-01
This presentation describes our recent work on model simulation and prediction of warm season drought in North America. The emphasis is on the contribution from the leading modes of subseasonal atmospheric circulation variability, which are often present in the form of stationary Rossby waves. Here we take advantage of results from observations, reanalyses, and simulations and reforecasts performed using the NASA Goddard Earth Observing System (GEOS-5) atmospheric and coupled General Circulation Model (GCM). Our results show that stationary Rossby waves play a key role in Northern Hemisphere (NH) atmospheric circulation and surface meteorology variability on subseasonal timescales. In particular, such waves have been crucial to the development of recent short-term warm season heat waves and droughts over North America (e.g. the 1988, 1998, and 2012 summer droughts) and northern Eurasia (e.g. the 2003 summer heat wave over Europe and the 2010 summer drought and heat wave over Russia). Through an investigation of the physical processes by which these waves lead to the development of warm season drought in North America, it is further found that these waves can serve as a potential source of drought predictability. In order to properly represent their effect and exploit this source of predictability, a model needs to correctly simulate the NH mean jet streams and be able to predict the sources of these waves. Given the NASA GEOS-5 AGCM's deficiencies in simulating the NH jet streams and tropical convection during boreal summer, an approach has been developed to artificially remove much of the model's mean bias, which leads to considerable improvement in the model simulation and prediction of stationary Rossby waves and drought development in North America. Our study points to the need to identify key model biases that limit the simulation and prediction of regional climate extremes, and to diagnose the origin of these biases so as to inform model development.
NASA Astrophysics Data System (ADS)
Kang, Daiwen
In this research, the sources, distributions, transport, ozone formation potential, and biogenic emissions of VOCs are investigated at three Southeast United States National Parks: Shenandoah National Park at the Big Meadows site (SHEN), Great Smoky Mountains National Park at Cove Mountain (GRSM), and Mammoth Cave National Park (MACA). A detailed modeling analysis is conducted using the Multiscale Air Quality Simulation Platform (MAQSIP), focusing on nonmethane hydrocarbons and on episodes characterized by high O3 surface concentrations. In the observation-based analysis, source classification techniques based on correlation coefficients, chemical reactivity, and certain ratios were developed and applied to the data set. Anthropogenic VOCs from automobile exhaust dominate at Mammoth Cave National Park and at Cove Mountain, Great Smoky Mountains National Park, while at Big Meadows, Shenandoah National Park, the source composition is complex and changed from 1995 to 1996. The dependence of isoprene concentrations on ambient temperature is investigated, and similar regression relationships are obtained for all three monitoring locations. Propylene-equivalent concentrations are calculated to account for differences in OH reaction rates among individual hydrocarbons, and thereby to estimate their relative contributions to ozone formation. Isoprene fluxes were also estimated for these rural areas. Model predictions (base scenario) tend to give daily maximum O3 concentrations 10 to 30% lower than observations. Model-predicted concentrations of lumped paraffin compounds are of the same order of magnitude as the observed values, while observed concentrations of other species (isoprene, ethene, surrogate olefin, surrogate toluene, and surrogate xylene) are usually an order of magnitude higher than the predictions. Nine emissions perturbation scenarios, in addition to the base scenario, are designed and utilized in the model simulations; detailed sensitivity and process analyses in terms of ozone and VOC budgets, and of the relative importance of various VOC species, are provided, and model predictions are compared with observed values at the three locations for the same time period. (Abstract shortened by UMI.)
Stum, Marlene S
2008-01-01
This study proposes and tests a systemic family decision-making framework to understand group long-term care insurance (LTCI) enrollment decisions. A random sample of public employees who were offered group LTCI as a workplace benefit were examined. Findings reveal very good predictive efficacy for the overall conceptual framework, with a pseudo-R2 value of 0.687, and reinforce the contributions of factors within the family system. Enrollees were more likely to have discussed the decision with others, used information sources, and had prior experience when compared to non-enrollees. Perceived health status, financial knowledge, attitudes regarding the role of private insurance, risk taking, and coverage features were additional factors related to enrollment decisions. The findings help to inform policymakers about the potential of LTCI as one strategy for financing long-term care.
A utility/cost analysis of breast cancer risk prediction algorithms
NASA Astrophysics Data System (ADS)
Abbey, Craig K.; Wu, Yirong; Burnside, Elizabeth S.; Wunderlich, Adam; Samuelson, Frank W.; Boone, John M.
2016-03-01
Breast cancer risk prediction algorithms are used to identify subpopulations that are at increased risk for developing breast cancer. They can be based on many different sources of data such as demographics, relatives with cancer, gene expression, and various phenotypic features such as breast density. Women who are identified as high risk may undergo a more extensive (and expensive) screening process that includes MRI or ultrasound imaging in addition to the standard full-field digital mammography (FFDM) exam. Given that there are many ways that risk prediction may be accomplished, it is of interest to evaluate them in terms of expected cost, which includes the costs of diagnostic outcomes. In this work we perform an expected-cost analysis of risk prediction algorithms that is based on a published model that includes the costs associated with diagnostic outcomes (true-positive, false-positive, etc.). We assume the existence of a standard screening method and an enhanced screening method with higher scan cost, higher sensitivity, and lower specificity. We then assess the expected cost of using a risk prediction algorithm to determine who gets the enhanced screening method, under the strong assumption that risk and diagnostic performance are independent. We find that if risk prediction leads to a high enough positive predictive value, it will be cost-effective regardless of the size of the subpopulation. Furthermore, in terms of the hit rate and false-alarm rate of the risk prediction algorithm, iso-cost contours are lines with slope determined by properties of the available diagnostic systems for screening.
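The cost structure described can be made concrete with a small calculation. In the sketch below, a woman flagged by the risk algorithm receives the enhanced screen and is otherwise given the standard screen; every cost, prevalence, and operating point is an invented placeholder, and risk and diagnostic performance are taken as independent, as in the paper's simplifying assumption.

    # All numbers below are hypothetical, for illustration only
    prev = 0.005                        # cancer prevalence in the population
    hit, fa = 0.60, 0.10                # risk algorithm hit / false-alarm rates
    scan_std, scan_enh = 100.0, 600.0   # per-exam scan costs
    sens = {"std": 0.75, "enh": 0.90}   # screening sensitivities
    spec = {"std": 0.90, "enh": 0.85}   # screening specificities
    c_fn, c_fp = 50000.0, 1000.0        # costs of a missed cancer / false positive

    # Per-person expected cost, split by disease status and screening route
    cost = (prev * hit * (scan_enh + (1 - sens["enh"]) * c_fn)
            + prev * (1 - hit) * (scan_std + (1 - sens["std"]) * c_fn)
            + (1 - prev) * fa * (scan_enh + (1 - spec["enh"]) * c_fp)
            + (1 - prev) * (1 - fa) * (scan_std + (1 - spec["std"]) * c_fp))
    print("expected cost per woman screened: %.2f" % cost)

Because the expression is linear in the hit rate and the false-alarm rate, holding cost constant traces out a straight line in that plane, which is the iso-cost-contour property noted above.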
McEwan, Phil; Bennett Wilton, Hayley; Ong, Albert C M; Ørskov, Bjarne; Sandford, Richard; Scolari, Francesco; Cabrera, Maria-Cristina V; Walz, Gerd; O'Reilly, Karl; Robinson, Paul
2018-02-13
Autosomal dominant polycystic kidney disease (ADPKD) is the leading inheritable cause of end-stage renal disease (ESRD); however, the natural course of disease progression is heterogeneous between patients. This study aimed to develop a natural history model of ADPKD that predicted progression rates and long-term outcomes in patients with differing baseline characteristics. The ADPKD Outcomes Model (ADPKD-OM) was developed using available patient-level data from the placebo arm of the Tolvaptan Efficacy and Safety in Management of ADPKD and its Outcomes Study (TEMPO 3:4; ClinicalTrials.gov identifier NCT00428948). Multivariable regression equations estimating annual rates of ADPKD progression, in terms of total kidney volume (TKV) and estimated glomerular filtration rate, formed the basis of the lifetime patient-level simulation model. Outputs of the ADPKD-OM were compared against external data sources to validate model accuracy and generalisability to other ADPKD patient populations, then used to predict long-term outcomes in a cohort matched to the overall TEMPO 3:4 study population. A cohort with baseline patient characteristics consistent with TEMPO 3:4 was predicted to reach ESRD at a mean age of 52 years. Most patients (85%) were predicted to reach ESRD by the age of 65 years, with many progressing to ESRD earlier in life (18, 36 and 56% by the age of 45, 50 and 55 years, respectively). Consistent with previous research and clinical opinion, analyses supported the selection of baseline TKV as a prognostic factor for ADPKD progression, and demonstrated its value as a strong predictor of future ESRD risk. Validation exercises and illustrative analyses confirmed the ability of the ADPKD-OM to accurately predict disease progression towards ESRD across a range of clinically-relevant patient profiles. The ADPKD-OM represents a robust tool to predict natural disease progression and long-term outcomes in ADPKD patients, based on readily available and/or measurable clinical characteristics. In conjunction with clinical judgement, it has the potential to support decision-making in research and clinical practice.
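The simulation structure described, annual regression-based updates of TKV and kidney function until the ESRD threshold is crossed, can be sketched generically. Every coefficient below is an invented placeholder, not one of the ADPKD-OM regression equations.

    from math import log

    def simulate_patient(age, tkv_ml, egfr,
                         tkv_growth=0.05,   # invented fractional TKV growth per year
                         egfr_slope=-3.0,   # invented eGFR loss per unit log-TKV
                         esrd_egfr=15.0):   # common ESRD threshold
        # Annual patient-level update loop; returns predicted age at ESRD
        while egfr > esrd_egfr and age < 100:
            tkv_ml *= 1.0 + tkv_growth
            egfr += egfr_slope * log(tkv_ml / 1000.0)
            age += 1
        return age

    print("predicted age at ESRD:", simulate_patient(age=39, tkv_ml=1700, egfr=80))

Running many such simulated patients, with baseline characteristics drawn from a cohort distribution, yields the population-level statistics quoted above (e.g., the share reaching ESRD by age 65).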
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zachara, John M.; Chen, Xingyuan; Murray, Chris
2016-03-04
In this study, a well-field within a uranium (U) plume in the groundwater-surface water transition zone was monitored for a 3-year period for water table elevation and dissolved solutes. The plume discharges to the Columbia River, which displays a dramatic spring stage surge resulting from snowmelt. Groundwater exhibits a low hydrologic gradient and chemical differences with river water. River water intrudes the site in spring. Specific aims were to assess the impacts of river intrusion on dissolved uranium (U(aq)), specific conductance (SpC), and other solutes, and to discriminate between transport, geochemical, and source term heterogeneity effects. Time series trends for U(aq) and SpC were complex and displayed large temporal and well-to-well variability as a result of water table elevation fluctuations, river water intrusion, and changes in groundwater flow directions. The wells were clustered into subsets exhibiting common behaviors resulting from the intrusion dynamics of river water and the location of source terms. Hot-spots in U(aq) varied in location with increasing water table elevation through the combined effects of advection and source term location. Heuristic reactive transport modeling with PFLOTRAN demonstrated that mobilized U(aq) was transported between wells and source terms in complex trajectories, and was diluted as river water entered and exited the groundwater system. While U(aq) time-series concentration trends varied significantly from year to year as a result of climate-caused differences in the spring hydrograph, common and partly predictable response patterns were observed that were driven by water table elevation and by the extent and duration of river water intrusion.
Aggregating and Predicting Sequence Labels from Crowd Annotations
Nguyen, An T.; Wallace, Byron C.; Li, Junyi Jessy; Nenkova, Ani; Lease, Matthew
2017-01-01
Despite sequences being core to NLP, scant work has considered how to handle noisy sequence labels from multiple annotators for the same text. Given such annotations, we consider two complementary tasks: (1) aggregating sequential crowd labels to infer a best single set of consensus annotations; and (2) using crowd annotations as training data for a model that can predict sequences in unannotated text. For aggregation, we propose a novel Hidden Markov Model variant. To predict sequences in unannotated text, we propose a neural approach using Long Short-Term Memory. We evaluate a suite of methods across two different applications and text genres: Named-Entity Recognition in news articles and Information Extraction from biomedical abstracts. Results show improvement over strong baselines. Our source code and data are available online. PMID:29093611
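For task (1), the simplest point of reference is a per-token majority vote over annotators; the paper's Hidden Markov Model variant is more sophisticated, so the snippet below is only a baseline sketch with made-up BIO tags.

    from collections import Counter

    def majority_vote(annotations):
        # annotations: one label sequence per annotator, all the same length
        return [Counter(token_labels).most_common(1)[0][0]
                for token_labels in zip(*annotations)]

    crowd = [["B-PER", "I-PER", "O", "O"],
             ["B-PER", "O",     "O", "O"],
             ["B-PER", "I-PER", "O", "B-LOC"]]
    print(majority_vote(crowd))  # ['B-PER', 'I-PER', 'O', 'O']

A probabilistic aggregator such as the paper's HMM variant improves on this baseline by modeling annotator reliability and label-transition structure rather than treating tokens independently.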
NASA Astrophysics Data System (ADS)
Festa, G.; Picozzi, M.; Alessandro, C.; Colombelli, S.; Cattaneo, M.; Chiaraluce, L.; Elia, L.; Martino, C.; Marzorati, S.; Supino, M.; Zollo, A.
2017-12-01
Earthquake early warning systems (EEWS) now contribute to seismic risk mitigation, both in terms of losses and of societal resilience, by issuing an alert promptly after the earthquake origin and before the ground shaking impacts the targets to be protected. EEWS can be grouped in two main classes: network-based and stand-alone systems. Network-based EEWS make use of dense seismic networks surrounding the fault generating the event (e.g. a Near Fault Observatory, NFO). Rapid processing of the early portion of the P wave allows the event to be located and its magnitude estimated; these estimates are then used to predict the shaking through ground motion prediction equations. Stand-alone systems instead analyze the early P-wave signal to predict, at the recording site itself, the ground shaking carried by the late S or surface waves, through empirically calibrated scaling relationships. We compared the network-based (PRESTo, PRobabilistic and Evolutionary early warning SysTem, www.prestoews.org, Satriano et al., 2011) and the stand-alone (SAVE, on-Site-Alert-leVEl, Caruso et al., 2017) systems by analyzing their performance during the 2016-2017 Central Italy sequence. We analyzed 9 earthquakes with magnitude 5.0 < M < 6.5 at about 200 stations located within 200 km of the epicentral area, including stations of The Altotiberina NFO (TABOO). Performance is evaluated in terms of the rate of success of ground shaking intensity prediction and of the available lead-time, i.e. the time available for security actions. For PRESTo, the accuracy of location and magnitude was also evaluated. Both systems predict the ground shaking near the event source well, with a success rate around 90% within the potential damage zone. The lead-time is significantly larger for the network-based system, increasing to more than 10 s at 40 km from the event epicentre. The stand-alone system performs better in the near-source region, showing a positive albeit small lead-time (<3 s). Far away from the source, the performances degrade slightly, mostly owing to uncertain calibration of the attenuation relationships. This study opens the possibility of making EEWS operational in Italy, based on the available acceleration networks, by improving the capability of reducing the lead-time related to data telemetry.
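The lead-time figures quoted here follow from simple travel-time arithmetic for a network-based system: the alert can be issued once the P wave has reached the nearest stations and been processed, while damaging S waves travel on to the target more slowly. The sketch below uses assumed round numbers (wave speeds, station distance, processing delay), not the values of PRESTo or SAVE.

    VP, VS = 6.0, 3.5      # P and S wave speeds [km/s] (assumed)
    d_station = 20.0       # epicenter-to-triggering-stations distance [km] (assumed)
    t_proc = 4.0           # telemetry + processing delay [s] (assumed)

    def lead_time(d_target):
        # Time between alert issuance and S-wave arrival at the target
        t_alert = d_station / VP + t_proc
        return d_target / VS - t_alert

    for d in (20.0, 40.0, 80.0):
        print("target at %3.0f km: lead time %+5.1f s" % (d, lead_time(d)))

The negative value at short distance is the familiar blind zone, which is why the stand-alone on-site approach retains a small positive lead-time near the source.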
Modeling individual differences in working memory performance: a source activation account
Daily, Larry Z.; Lovett, Marsha C.; Reder, Lynne M.
2008-01-01
Working memory resources are needed for processing and maintenance of information during cognitive tasks. Many models have been developed to capture the effects of limited working memory resources on performance. However, most of these models do not account for the finding that different individuals show different sensitivities to working memory demands, and none of the models predicts individual subjects' patterns of performance. We propose a computational model that accounts for differences in working memory capacity in terms of a quantity called source activation, which is used to maintain goal-relevant information in an available state. We apply this model to capture the working memory effects of individual subjects at a fine level of detail across two experiments. This, we argue, strengthens the interpretation of source activation as working memory capacity. PMID:19079561
Characterising RNA secondary structure space using information entropy
2013-01-01
Comparative methods for RNA secondary structure prediction use evolutionary information from RNA alignments to increase prediction accuracy. The model is often described in terms of stochastic context-free grammars (SCFGs), which generate a probability distribution over secondary structures. It is, however, unclear how this probability distribution changes as a function of the input alignment. As prediction programs typically only return a single secondary structure, better characterisation of the underlying probability space of RNA secondary structures is of great interest. In this work, we show how to efficiently compute the information entropy of the probability distribution over RNA secondary structures produced for RNA alignments by a phylo-SCFG, and implement it for the PPfold model. We also discuss interpretations and applications of this quantity, including how it can clarify reasons for low prediction reliability scores. PPfold and its source code are available from http://birc.au.dk/software/ppfold/. PMID:23368905
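The quantity in question is the Shannon entropy of the structure distribution; for an explicitly enumerated toy distribution it reduces to the familiar sum below (the paper's contribution is computing this efficiently inside the phylo-SCFG without enumerating structures, which this sketch does not attempt).

    import math

    # Toy probability distribution over four candidate structures for one alignment
    p = {"((..))": 0.70, "(....)": 0.20, "......": 0.07, "(.().)": 0.03}
    H = -sum(q * math.log2(q) for q in p.values())
    print("entropy = %.3f bits" % H)

Low entropy means the probability mass is concentrated on one dominant structure, so the single structure returned by the prediction program is comparatively reliable; high entropy flags alignments where that single answer hides a diffuse structure space.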
Broadband Trailing Edge Noise Predictions in the Time Domain. Revised
NASA Technical Reports Server (NTRS)
Casper, Jay; Farassat, Fereidoun
2003-01-01
A recently developed analytic result in acoustics, "Formulation 1B," is used to compute broadband trailing edge noise from an unsteady surface pressure distribution on a thin airfoil in the time domain. This formulation is a new solution of the Ffowcs Williams-Hawkings equation with the loading source term, and has been shown in previous research to provide time domain predictions of broadband noise that are in excellent agreement with experimental results. Furthermore, this formulation lends itself readily to rotating reference frames and statistical analysis of broadband trailing edge noise. Formulation 1B is used to calculate the far field noise radiated from the trailing edge of a NACA 0012 airfoil in low Mach number flows, by using both analytical and experimental data on the airfoil surface. The acoustic predictions are compared with analytical results and experimental measurements that are available in the literature. Good agreement between predictions and measurements is obtained.
The evolution of methods for noise prediction of high speed rotors and propellers in the time domain
NASA Technical Reports Server (NTRS)
Farassat, F.
1986-01-01
Linear wave equation models which have been used over the years at NASA Langley for describing noise emissions from high speed rotating blades are summarized. The noise sources are assumed to lie on a moving surface, and analysis of the situation has been based on the Ffowcs Williams-Hawkings (FW-H) equation. Although the equation accounts for two surface and one volume source, the NASA analyses have considered only the surface terms. Several variations on the FW-H model are delineated for various types of applications, noting the computational benefits of removing the frequency dependence of the calculations. Formulations are also provided for compact and noncompact sources, and features of Long's subsonic integral equation and Farassat's high speed integral equation are discussed. The selection of subsonic or high speed models is dependent on the Mach number of the blade surface where the source is located.
NASA Astrophysics Data System (ADS)
Khan, T.; Perlinger, J. A.; Urban, N. R.
2017-12-01
Certain toxic, persistent, bioaccumulative, and semivolatile compounds known as atmosphere-surface exchangeable pollutants, or ASEPs, are emitted into the environment by primary sources, are transported and deposited to water surfaces, and can later be re-emitted, causing the water to act as a secondary source. Polychlorinated biphenyl (PCB) compounds, a class of ASEPs, are of major concern in the Laurentian Great Lakes because of their historical use, primarily as additives to oils and industrial fluids, and their discharge from industrial sources. Following the ban on production in the U.S. in 1979, atmospheric concentrations of PCBs in the Lake Superior region decreased rapidly. Subsequently, PCB concentrations in the lake surface water also reached near-equilibrium as the atmospheric levels of PCBs declined. However, previous studies on long-term PCB levels and trends in lake trout and walleye suggested that the initial rate of decline of PCB concentrations in fish has leveled off in Lake Superior. In this study, a dynamic multimedia flux model was developed with the objective of investigating the observed leveling off of PCB concentrations in Lake Superior fish. The model structure consists of two water layers (the epilimnion and the hypolimnion) and the surface mixed sediment layer, with atmospheric deposition as the primary external pathway of PCB inputs to the lake. The model was applied to different PCB congeners spanning a range of hydrophobicity and volatility. Using this model, we compare the long-term trends in predicted PCB concentrations in different environmental media with relevant available measurements for Lake Superior. We examine the seasonal depositional and exchange patterns and the relative importance of different process terms, and provide the most probable source of the currently observed PCB levels in Lake Superior fish. In addition, we evaluate the role of current atmospheric PCB levels in sustaining the observed fish concentrations and appraise the need for continued atmospheric PCB monitoring by the Great Lakes Integrated Atmospheric Deposition Network. By combining the modeled lake and biota response times resulting from atmospheric PCB inputs, we predict the time scale for safe fish consumption in Lake Superior.
NASA Astrophysics Data System (ADS)
Bilbro, Griff L.; Hou, Danqiong; Yin, Hong; Trew, Robert J.
2009-02-01
We have quantitatively modeled the conduction current and charge storage of an HFET in terms of its physical dimensions and material properties. For DC or small-signal RF operation, no adjustable parameters are necessary to predict the terminal characteristics of the device. Linear performance measures such as small-signal gain and input admittance can be predicted directly from the geometric structure and material properties assumed for the device design. We have validated our model at low frequency against experimental I-V measurements and against two-dimensional device simulations. We discuss our recent extension of the model to include a larger class of electron velocity-field curves. We also discuss the recent reformulation of the model to facilitate its implementation in commercial large-signal high-frequency circuit simulators. Large-signal RF operation is more complex. First, the highest CW microwave power is fundamentally bounded by a brief, reversible channel breakdown in each RF cycle. Second, the highest experimental measurements of efficiency, power, or linearity always require harmonic load pull and possibly also harmonic source pull. Presently, our model accounts for these facts with an adjustable breakdown voltage and with adjustable load and source impedances for the fundamental frequency and its harmonics. This has allowed us to validate the model for large-signal RF conditions by simultaneously fitting experimental measurements of output power, gain, and power-added efficiency of real devices. We show that the resulting model can be used to compare alternative device designs in terms of their large-signal performance, such as their output power at 1 dB gain compression or their third-order intercept points. In addition, the model provides insight into new device physics features enabled by the unprecedented current and voltage levels of AlGaN/GaN HFETs, including non-ohmic resistance in the source access regions and partial depletion of the 2DEG in the drain access region.
Hierarchical Ensemble Methods for Protein Function Prediction
2014-01-01
Protein function prediction is a complex multiclass multilabel classification problem, characterized by multiple issues such as the incompleteness of the available annotations, the integration of multiple sources of high-dimensional biomolecular data, the imbalance of several functional classes, and the difficulty of univocally determining negative examples. Moreover, the hierarchical relationships between functional classes that characterize both the Gene Ontology and FunCat taxonomies motivate the development of hierarchy-aware prediction methods, which have shown significantly better performance than hierarchy-unaware "flat" prediction methods. In this paper, we provide a comprehensive review of hierarchical methods for protein function prediction based on ensembles of learning machines. According to this general approach, a separate learning machine is trained to learn a specific functional term, and the resulting predictions are then assembled in a "consensus" ensemble decision, taking into account the hierarchical relationships between classes. The main hierarchical ensemble methods proposed in the literature are discussed in the context of existing computational methods for protein function prediction, highlighting their characteristics, advantages, and limitations. Open problems of this exciting research area of computational biology are finally considered, outlining novel perspectives for future research. PMID:25937954
Sources of Uncertainty and the Interpretation of Short-Term Fluctuations
NASA Astrophysics Data System (ADS)
Lewandowsky, S.; Risbey, J.; Cowtan, K.; Rahmstorf, S.
2016-12-01
The alleged significant slowdown in global warming during the first decade of the 21st century, and the appearance of a discrepancy between models and observations, has attracted considerable research attention. We trace the history of this research and show how its conclusions were shaped by several sources of uncertainty and ambiguity about models and observations. We show that as those sources of uncertainty were gradually eliminated by further research, insufficient evidence remained to infer any discrepancy between models and observations or a significant slowing of warming. Specifically, we show that early research had to contend with uncertainties about coverage biases in the global temperature record and biases in the sea surface temperature observations which turned out to have exaggerated the extent of slowing. In addition, uncertainties in the observed forcings were found to have exaggerated the mismatch between models and observations. Further sources of uncertainty that were ultimately eliminated involved the use of incommensurate sea surface temperature data between models and observations and a tacit interpretation of model projections as predictions or forecasts. After all those sources of uncertainty were eliminated, the most recent research finds little evidence for an unusual slowdown or a discrepancy between models and observations. We discuss whether these different kinds of uncertainty could have been anticipated or managed differently, and how one can apply those lessons to future short-term fluctuations in warming.
NASA Astrophysics Data System (ADS)
Bieringer, Paul E.; Rodriguez, Luna M.; Vandenberghe, Francois; Hurst, Jonathan G.; Bieberbach, George; Sykes, Ian; Hannan, John R.; Zaragoza, Jake; Fry, Richard N.
2015-12-01
Accurate simulations of the atmospheric transport and dispersion (AT&D) of hazardous airborne materials rely heavily on the source term parameters necessary to characterize the initial release and on the meteorological conditions that drive the downwind dispersion. In many cases the source parameters are not known and are consequently based on rudimentary assumptions. This is particularly true of accidental releases and the intentional releases associated with terrorist incidents. When available, meteorological observations are often not representative of the conditions at the location of the release, and the use of these non-representative meteorological conditions can result in significant errors in the hazard assessments downwind of the sensors, even when the other source parameters are accurately characterized. Here, we describe a computationally efficient methodology to characterize both the release source parameters and the low-level winds (e.g. winds near the surface) required to produce a refined downwind hazard. This methodology, known as the Variational Iterative Refinement Source Term Estimation (STE) Algorithm (VIRSA), consists of a combination of modeling systems: a back-trajectory based source inversion method, a forward Gaussian puff dispersion model, and a variational refinement algorithm that uses both a simple forward AT&D model that is a surrogate for the more complex Gaussian puff model and a formal adjoint of this surrogate model. The back-trajectory based method is used to calculate a "first guess" source estimate based on the available observations of the airborne contaminant plume and atmospheric conditions. The variational refinement algorithm is then used to iteratively refine the first-guess STE parameters and meteorological variables. The algorithm has been evaluated across a wide range of scenarios of varying complexity. It has been shown to improve the source location estimate by several hundred percent (normalized by the distance from the source to the closest sampler) and to improve mass estimates by several orders of magnitude. Furthermore, it can operate in scenarios with inconsistencies between the wind and airborne contaminant sensor observations, adjusting the wind to provide a better match between the hazard prediction and the observations.
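The refinement step pairs a cheap surrogate forward model with an optimizer that adjusts source parameters until predicted and observed concentrations agree. The sketch below stands in for that idea with a steady Gaussian plume surrogate and a generic scipy minimizer in place of the formal adjoint; the sensor layout, dispersion coefficients, and "observations" are all synthetic.

    import numpy as np
    from scipy.optimize import minimize

    u = 3.0  # wind speed [m/s] along +x, taken as known here (an assumption)

    def plume(params, xy):
        # Ground-level steady Gaussian plume surrogate (synthetic sigma laws)
        x0, y0, q = params
        dx = np.maximum(xy[:, 0] - x0, 1.0)   # downwind distance, clipped
        dy = xy[:, 1] - y0
        sig_y, sig_z = 0.08 * dx, 0.06 * dx
        return (q / (2 * np.pi * sig_y * sig_z * u)
                * np.exp(-0.5 * (dy / sig_y) ** 2))

    sensors = np.array([[200.0, 10.0], [400.0, -30.0], [600.0, 40.0], [800.0, 0.0]])
    truth = np.array([100.0, -20.0, 2.0])   # hidden source: x0 [m], y0 [m], Q [g/s]
    obs = plume(truth, sensors)             # synthetic sensor readings

    def misfit(params):
        return np.sum((plume(params, sensors) - obs) ** 2)

    first_guess = np.array([0.0, 0.0, 1.0])  # e.g., from back-trajectories
    est = minimize(misfit, first_guess, method="Nelder-Mead").x
    print("refined source estimate:", est.round(2))

In VIRSA the surrogate's adjoint supplies the gradient of exactly this kind of misfit, and the meteorological variables are refined together with the source parameters rather than held fixed as here.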
Assessing risk of non-compliance of phosphorus standards for lakes in England and Wales
NASA Astrophysics Data System (ADS)
Duethmann, D.; Anthony, S.; Carvalho, L.; Spears, B.
2009-04-01
High population densities, use of inorganic fertilizer, and intensive livestock agriculture have increased phosphorus loads to lakes, and accelerated eutrophication is a major pressure for many lakes. The EC Water Framework Directive (WFD) requires that good chemical and ecological quality be restored in all surface water bodies by 2015. Total phosphorus (TP) standards for lakes in England and Wales have been agreed recently, and our aim was to estimate what percentage of lakes in England and Wales is at risk of failing these standards. With measured lake phosphorus concentrations available for only a small number of lakes, such an assessment had to be model based. The study also presents a source apportionment of phosphorus inputs into lakes. Phosphorus loads were estimated from a range of sources including agricultural loads, sewage effluents, septic tanks, diffuse urban sources, atmospheric deposition, groundwater, and bank erosion. Lake phosphorus concentrations were predicted using the Vollenweider model, and the model framework was satisfactorily tested against available observed lake concentration data. Even though predictions for individual lakes remain uncertain, results for a population of lakes are considered sufficiently robust. A scenario analysis was carried out to investigate to what extent reductions in phosphorus loads would increase the number of lakes achieving good ecological status in terms of TP standards. Applying the model to all lakes in England and Wales greater than 1 ha, it was calculated that under current conditions roughly two thirds of the lakes would fail good ecological status with respect to phosphorus. According to our estimates, agricultural phosphorus loads are the most frequent dominant source across catchments, but diffuse urban runoff is also important for many lakes, and sewage effluents are the most frequent dominant source for large lake catchments greater than 100 km². Evaluation in terms of total load can, however, be misleading as to which sources need to be tackled by catchment management: sewage effluents are responsible for the majority of the total load, yet are the dominant source in only a small number of larger lake catchments. If loads from all sources were halved, this would potentially increase the number of complying lakes to two thirds, but it would require substantial measures to reduce phosphorus inputs to lakes. For agriculture, the required changes would have to go beyond improvements in agricultural practice and would need to include reducing the intensity of land use. The time required for many lakes to respond to reduced nutrient loading is likely to extend beyond the current timelines of the WFD owing to internal loading and biological resistance.
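The Vollenweider model used for the lake predictions has, in one common form, in-lake total phosphorus P = L / (q_s (1 + sqrt(tau))), with L the areal phosphorus loading, q_s = z / tau the areal hydraulic load for mean depth z, and tau the water residence time. A minimal sketch with invented catchment numbers:

    def vollenweider_tp(load_mg_m2_yr, mean_depth_m, residence_yr):
        # In-lake total phosphorus [mg/m3] from the Vollenweider-type model
        qs = mean_depth_m / residence_yr   # areal hydraulic load [m/yr]
        return load_mg_m2_yr / (qs * (1.0 + residence_yr ** 0.5))

    # Invented example: 500 mg P/m2/yr load, 5 m mean depth, 1-yr residence time
    tp = vollenweider_tp(500.0, 5.0, 1.0)
    print("predicted TP = %.0f mg/m3" % tp)  # compared against the TP standard

Halving the load in this sketch halves the predicted concentration, which is the linearity that underlies the halved-loads scenario reported above.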
Intraseasonal Variability in the Atmosphere-Ocean Climate System. Second Edition
NASA Technical Reports Server (NTRS)
Lau, William K. M.; Waliser, Duane E.
2011-01-01
Understanding and predicting the intraseasonal variability (ISV) of the ocean and atmosphere is crucial to improving long-range environmental forecasts and the reliability of climate change projections through climate models. This updated, comprehensive and authoritative second edition has a balance of observation, theory and modeling and provides a single source of reference for all those interested in this important multi-faceted natural phenomenon and its relation to major short-term climatic variations.
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Saumyadip; Abraham, John
2012-07-01
The unsteady flamelet progress variable (UFPV) model has been proposed by Pitsch and Ihme ["An unsteady/flamelet progress variable method for LES of nonpremixed turbulent combustion," AIAA Paper No. 2005-557, 2005] for modeling the averaged/filtered chemistry source terms in Reynolds-averaged simulations and large eddy simulations of non-premixed combustion. In the UFPV model, a look-up table of source terms is generated as a function of mixture fraction Z, scalar dissipation rate χ, and progress variable C by solving the unsteady flamelet equations. The assumption is that the unsteady flamelet represents the evolution of the reacting mixing layer in the non-premixed flame. We assess the accuracy of the model in predicting autoignition and flame development in compositionally stratified n-heptane/air mixtures using direct numerical simulations (DNS). The focus of this work is primarily on assessing the accuracy of the probability density functions (PDFs) employed for obtaining averaged source terms. The performance of commonly employed presumed functions, such as the Dirac delta distribution, the β distribution, and the statistically most likely distribution (SMLD) approach, in approximating the shapes of the PDFs of the reactive and conserved scalars is evaluated. For unimodal distributions, it is observed that functions that need two-moment information, e.g., the β distribution and the SMLD approach with two-moment closure, are able to reasonably approximate the actual PDF. As the distribution becomes multimodal, higher-moment information is required. Differences are observed between the ignition trends obtained from DNS and those predicted by the look-up table, especially for smaller gradients where the flamelet assumption becomes less applicable. The formulation assumes that the shape of the χ(Z) profile can be modeled by an error function that remains unchanged in the presence of heat release; we show that this assumption is not accurate.
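The presumed-PDF step being tested works as follows: given the mean and variance of mixture fraction from the flow solution, fit a beta distribution by moment matching and integrate the flamelet source term against it. A minimal sketch (the source-term profile S(Z) is a made-up placeholder, not an n-heptane flamelet table entry):

    import numpy as np
    from scipy.stats import beta

    def beta_params(z_mean, z_var):
        # Moment-matched shape parameters; requires z_var < z_mean * (1 - z_mean)
        g = z_mean * (1.0 - z_mean) / z_var - 1.0
        return z_mean * g, (1.0 - z_mean) * g

    def averaged_source(z_mean, z_var, source):
        a, b = beta_params(z_mean, z_var)
        z = np.linspace(1e-6, 1.0 - 1e-6, 2001)
        dz = z[1] - z[0]
        return np.sum(source(z) * beta.pdf(z, a, b)) * dz  # E[S(Z)] under the PDF

    # Placeholder source-term profile peaking near a stoichiometric Z of ~0.06
    S = lambda z: np.exp(-(((z - 0.06) / 0.02) ** 2))
    print(averaged_source(z_mean=0.06, z_var=4e-4, source=S))

The DNS comparison in the paper asks precisely whether this presumed shape (delta, beta, or SMLD) matches the actual scalar PDF; the averaging above is reliable only while the PDF stays unimodal.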
Madjidi, Faramarz; Behroozy, Ali
2014-01-01
Exposure to visible light and near infrared (NIR) radiation in the wavelength region of 380 to 1400 nm may cause thermal retinal injury. In this analysis, the effective spectral radiance of a hot source is replaced by its temperature in the exposure limit values for the 380-1400 nm region. This article describes the development and implementation of a computer code to predict the temperatures corresponding to the exposure limits proposed by the American Conference of Governmental Industrial Hygienists (ACGIH). Viewing duration and the apparent diameter of the source were inputs to the computer code. In the first stage, an infinite series was created for the calculation of spectral radiance by integration of Planck's law. In the second stage, for the calculation of effective spectral radiance, the initial terms of this infinite series were selected and the integration was performed by multiplying these terms by a weighting factor R(λ) over the wavelength region 380-1400 nm. In the third stage, the computer code found the source temperature that emits the same effective spectral radiance. As a result, based only on measuring the source temperature and accounting for the exposure time and the apparent diameter of the source, it is possible to decide whether exposure to visible and NIR radiation in any 8-hr workday is permissible. The substitution of source temperature for effective spectral radiance provides a convenient way to evaluate exposure to visible light and NIR radiation.
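The three-stage procedure can be sketched end to end: evaluate Planck's spectral radiance, weight it by a hazard function R(λ) over 380-1400 nm, and invert for the blackbody temperature whose weighted radiance equals a given exposure limit. The weighting used below is a crude placeholder, not the ACGIH R(λ) table, and the limit value is invented.

    import numpy as np

    h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

    def planck_radiance(lam, T):
        # Blackbody spectral radiance [W m-2 sr-1 m-1] at wavelength lam [m]
        return 2.0 * h * c**2 / lam**5 / (np.exp(h * c / (lam * kB * T)) - 1.0)

    lam = np.linspace(380e-9, 1400e-9, 2000)
    R = np.where(lam < 700e-9, 1.0, 700e-9 / lam)  # placeholder hazard weighting

    def effective_radiance(T):
        return np.sum(planck_radiance(lam, T) * R) * (lam[1] - lam[0])

    def temperature_for_limit(L_limit, lo=500.0, hi=6000.0):
        # Bisection works because effective radiance rises monotonically with T
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if effective_radiance(mid) < L_limit else (lo, mid)
        return 0.5 * (lo + hi)

    print("permissible source temperature ~ %.0f K"
          % temperature_for_limit(1e6))           # invented limit [W m-2 sr-1]

With the real ACGIH weighting and the duration- and size-dependent limit, the same bisection yields the maximum permissible source temperature for a given viewing time and apparent diameter, which is the decision rule the abstract describes.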
NASA Astrophysics Data System (ADS)
Murru, Maura; Falcone, Giuseppe; Console, Rodolfo
2016-04-01
The present study is carried out in the framework of the INGV Center for Seismic Hazard (CPS), under the agreement signed in 2015 with the Department of Civil Protection to develop a new seismic hazard model for the country that can update the current reference (MPS04-S1; zonesismiche.mi.ingv.it and esse1.mi.ingv.it) released between 2004 and 2006. In this initiative, we participate with the Long-Term Stress Transfer (LTST) Model to provide the annual occurrence rate of a seismic event over the entire Italian territory, from a minimum magnitude of Mw 4.5, considering bins of 0.1 magnitude units on geographical cells of 0.1° x 0.1°. Our methodology is based on the fusion of a statistical time-dependent renewal model (Brownian Passage Time, BPT; Matthews et al., 2002) with a physical model that considers the permanent stress change imposed on a seismogenic source by the earthquakes that occur on surrounding sources. For each considered catalog (historical, instrumental, and individual seismogenic sources) we determined a distinct rate value for each 0.1° x 0.1° cell for the next 50 yrs. If a cell falls within one of the sources in question, we adopted the respective rate value, which refers only to the characteristic-event magnitude; this rate value is divided by the number of grid cells that fall on the horizontal projection of the source. If instead a cell falls outside any seismogenic source, we used the average rate obtained from the historical and instrumental catalogs, following the method of Frankel (1995). The annual occurrence rate was computed for each of the three considered distributions (Poisson, BPT, and BPT with inclusion of stress transfer).
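For the BPT renewal component, the conditional probability that a source ruptures in the next Delta-T years, given the time elapsed since its last characteristic event, follows from integrating the BPT density. A small sketch (mean recurrence, aperiodicity, and elapsed time are illustrative values only):

    from math import exp, pi, sqrt
    from scipy.integrate import quad

    mu, alpha = 700.0, 0.5   # mean recurrence [yr] and aperiodicity (illustrative)

    def bpt_pdf(t):
        # Brownian Passage Time density (Matthews et al., 2002)
        return (sqrt(mu / (2.0 * pi * alpha**2 * t**3))
                * exp(-((t - mu) ** 2) / (2.0 * mu * alpha**2 * t)))

    def conditional_prob(elapsed, window):
        # P(event in [elapsed, elapsed + window] | quiet through elapsed)
        num, _ = quad(bpt_pdf, elapsed, elapsed + window)
        den, _ = quad(bpt_pdf, elapsed, float("inf"))
        return num / den

    print("50-yr conditional probability: %.3f" % conditional_prob(400.0, 50.0))

Dividing such a probability by the window length gives the kind of annualized, cell-by-cell rate the LTST model reports, with the stress-transfer step perturbing the elapsed-time clock of each source.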
Evaluating Emergent Constraints for Equilibrium Climate Sensitivity
Caldwell, Peter M.; Zelinka, Mark D.; Klein, Stephen A.
2018-04-23
Emergent constraints are quantities that are observable from current measurements and have skill predicting future climate. Here, this study explores 19 previously proposed emergent constraints related to equilibrium climate sensitivity (ECS; the global-average equilibrium surface temperature response to CO2 doubling). Several constraints are shown to be closely related, emphasizing the importance of careful understanding of proposed constraints. A new method is presented for decomposing the correlation between an emergent constraint and ECS into terms related to physical processes and geographical regions. Using this decomposition, one can determine whether the processes and regions explaining the correlation with ECS correspond to the physical explanation offered for the constraint. Shortwave cloud feedback is generally found to be the dominant contributor to correlations with ECS because it is the largest source of intermodel spread in ECS. In all cases, correlation results from interaction between a variety of terms, reflecting the complex nature of ECS and the fact that feedback terms and forcing are themselves correlated with each other. For 4 of the 19 constraints, the originally proposed explanation for correlation is borne out by our analysis. These four constraints all predict relatively high climate sensitivity. The credibility of six other constraints is called into question owing to correlation with ECS coming mainly from unexpected sources and/or lack of robustness to changes in ensembles. Another six constraints lack a testable explanation and hence cannot be confirmed. Lastly, the fact that this study casts doubt upon more constraints than it confirms highlights the need for caution when identifying emergent constraints from small ensembles.
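A toy numpy sketch of the decomposition idea: because covariance is linear, cov(x, ECS) splits exactly across the additive terms that compose ECS, so each process's share of the correlation can be reported. The ensemble, coefficients, and "feedback" terms below are entirely synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30  # synthetic ensemble of climate models

# Synthetic intermodel spread in three feedback-related terms (W m^-2 K^-1)
sw_cloud = rng.normal(0.3, 0.40, n)   # shortwave cloud feedback: widest spread
lapse    = rng.normal(-0.5, 0.15, n)
albedo   = rng.normal(0.35, 0.10, n)
ecs = 1.2 * sw_cloud + 0.8 * lapse + 0.6 * albedo + rng.normal(0, 0.1, n)

x = sw_cloud + rng.normal(0, 0.2, n)  # a hypothetical emergent constraint

# Covariance is linear in its second argument, so cov(x, ECS) splits exactly
# across the additive terms that make up ECS (plus a residual).
terms = {"sw cloud": 1.2 * sw_cloud, "lapse rate": 0.8 * lapse,
         "albedo": 0.6 * albedo}
terms["residual"] = ecs - sum(terms.values())
total = np.cov(x, ecs)[0, 1]
for name, t in terms.items():
    print(f"{name:>10}: {np.cov(x, t)[0, 1] / total:+.2f} of cov(x, ECS)")
```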
Measurements and Predictions of the Noise from Three-Stream Jets
NASA Technical Reports Server (NTRS)
Henderson, Brenda S.; Leib, Stewart J.; Wernet, Mark P.
2015-01-01
An experimental and numerical investigation of the noise produced by high-subsonic and supersonic three-stream jets was conducted. The exhaust system consisted of externally-mixed-convergent nozzles and an external plug. Bypass- and tertiary-to-core area ratios between 1.0 and 2.5, and 0.4 and 1.0, respectively, were studied. Axisymmetric and offset tertiary nozzles were investigated for heated and unheated conditions. For axisymmetric configurations, the addition of the third stream was found to reduce peak- and high-frequency acoustic levels in the peak-jet-noise direction, with greater reductions at the lower bypass-to-core area ratios. For the offset configurations, an offset duct was found to decrease acoustic levels on the thick side of the tertiary nozzle relative to those produced by the simulated two-stream jet with up to 8 dB mid-frequency noise reduction at large angles to the jet inlet axis. Noise reduction in the peak-jet-noise direction was greater for supersonic core speeds than for subsonic core speeds. The addition of a tertiary nozzle insert used to divert the third-stream jet to one side of the nozzle system provided no noise reduction. Noise predictions are presented for selected cases using a method based on an acoustic analogy with mean flow interaction effects accounted for using a Green's function, computed in terms of its coupled azimuthal modes for the offset cases, and a source model previously used for round and rectangular jets. Comparisons of the prediction results with data show that the noise model predicts the observed increase in low-frequency noise with the introduction of a third, axisymmetric stream, but not the high-frequency reduction. For an offset third stream, the model predicts the observed trend of decreased sound levels on the thick side of the jet compared with the thin side, but the predicted azimuthal variations are much less than those seen in the data. Also, the shift of the spectral peak to lower frequencies with increasing polar angle is over-predicted. For an offset third stream with a heated core, it is shown that including the enthalpy-flux source terms in the acoustic analogy model improves predictions compared with those obtained using only the momentum flux.
Predicting Football Matches Results using Bayesian Networks for English Premier League (EPL)
NASA Astrophysics Data System (ADS)
Razali, Nazim; Mustapha, Aida; Yatim, Faiz Ahmad; Aziz, Ruhaya Ab
2017-08-01
The issue of modeling association football prediction has become increasingly popular in the last few years, and many different prediction models have been proposed with the aim of evaluating the attributes that lead a football team to lose, draw, or win a match. Three types of approaches have been considered for predicting football match results: statistical approaches, machine learning approaches, and Bayesian approaches. Lately, many studies on football prediction models have used Bayesian approaches. This paper proposes Bayesian Networks (BNs) to predict the results of football matches in terms of a home win (H), an away win (A), or a draw (D). The English Premier League (EPL) for the three seasons 2010-2011, 2011-2012, and 2012-2013 was selected and reviewed. K-fold cross-validation was used to test the accuracy of the prediction model. The required football data were sourced from a legitimate site at http://www.football-data.co.uk. The BNs achieved a predictive accuracy of 75.09% on average across the three seasons. It is hoped that the results can serve as a benchmark for future research in predicting football match results.
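A hedged sketch of the workflow (discretized match features, a probabilistic H/D/A classifier, k-fold cross-validation). Since the paper's BN structure is not reproduced here, scikit-learn's CategoricalNB is used as a simple naive Bayes stand-in for a BN, and the features and labels are synthetic.

```python
import numpy as np
from sklearn.naive_bayes import CategoricalNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 1140  # three EPL seasons of 380 matches each

# Hypothetical discretized features, e.g. home/away recent form and ranking bands (0-2)
X = rng.integers(0, 3, size=(n, 4))
# Synthetic labels: 0 = home win (H), 1 = draw (D), 2 = away win (A)
y = rng.choice([0, 1, 2], size=n, p=[0.46, 0.26, 0.28])

model = CategoricalNB()  # naive Bayes as a stand-in for the paper's BN
scores = cross_val_score(model, X, y, cv=10)  # 10-fold cross-validation
print(f"mean accuracy: {scores.mean():.3f}")
```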
Short-term prediction of solar energy in Saudi Arabia using automated-design fuzzy logic systems
Almaraashi, Majid
2017-01-01
Solar energy is considered one of the main sources of renewable energy in the near future. However, solar energy and other renewable energy sources have a drawback related to the difficulty of predicting their availability in the near future. This problem affects optimal exploitation of solar energy, especially in connection with other resources. Therefore, reliable solar energy prediction models are essential to solar energy management and economics. This paper presents work aimed at designing reliable models to predict the global horizontal irradiance (GHI) for the next day at 8 stations in Saudi Arabia. The designed models are based on computational intelligence methods of automated-design fuzzy logic systems. The fuzzy logic systems are designed and optimized with two models using fuzzy c-means clustering (FCM) and simulated annealing (SA) algorithms. The first model uses FCM based on the subtractive clustering algorithm to automatically design the predictor fuzzy rules from data. The second model uses FCM followed by a simulated annealing algorithm to enhance the prediction accuracy of the fuzzy logic system. The objective of the predictor is to accurately predict next-day global horizontal irradiance (GHI) using previous-day meteorological and solar radiation observations. The proposed models use observations of 10 variables of measured meteorological and solar radiation data to build the model. The experimentation and results of the prediction are detailed, where the root mean square error of the prediction was approximately 88% for the second model tuned by simulated annealing, compared to 79.75% accuracy using the first model. These results demonstrate good modeling accuracy for the second model, even though the training and testing of the proposed models were carried out using spatially and temporally independent data. PMID:28806754
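A sketch of the second model's tuning stage: a generic simulated-annealing loop minimizing prediction RMSE. The predictor here is a stand-in linear map from 10 inputs to next-day GHI rather than an actual fuzzy rule base, and all data are synthetic.

```python
import numpy as np

def simulated_annealing(loss, x0, steps=2000, t0=1.0, cooling=0.995, seed=0):
    """Generic SA loop: perturb parameters, accept worse moves with prob exp(-d/T)."""
    rng = np.random.default_rng(seed)
    x, fx = x0.copy(), loss(x0)
    best, fbest = x.copy(), fx
    T = t0
    for _ in range(steps):
        cand = x + rng.normal(0, 0.1, size=x.shape)
        fc = loss(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x.copy(), fx
        T *= cooling  # geometric cooling schedule
    return best, fbest

# Stand-in "predictor": linear map from 10 meteorological inputs to next-day GHI
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 10))
y = X @ rng.normal(size=10) + rng.normal(0, 0.1, size=200)
rmse = lambda w: np.sqrt(np.mean((X @ w - y) ** 2))
w, err = simulated_annealing(rmse, np.zeros(10))
print(f"tuned RMSE: {err:.3f}")
```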
NASA Technical Reports Server (NTRS)
Kraft, R. E.
1996-01-01
The objective of this effort is to develop an analytical model for the coupling of active noise control (ANC) piston-type actuators that are mounted flush to the inner and outer walls of an annular duct to the modes in the duct generated by the actuator motion. The analysis will be used to couple the ANC actuators to the modal analysis propagation computer program for the annular duct, to predict the effects of active suppression of fan-generated engine noise sources. This combined program will then be available to assist in the design or evaluation of ANC systems in fan engine annular exhaust ducts. An analysis has been developed to predict the modes generated in an annular duct due to the coupling of flush-mounted ring actuators on the inner and outer walls of the duct. The analysis has been combined with a previous analysis for the coupling of modes to a cylindrical duct in a FORTRAN computer program to perform the computations. The method includes the effects of uniform mean flow in the duct. The program can be used for design or evaluation purposes for active noise control hardware for turbofan engines. Predictions for some sample cases modeled after the geometry of the NASA Lewis ANC Fan indicate very efficient coupling in both the inlet and exhaust ducts for the m = 6 spinning mode at frequencies where only a single radial mode is cut-on. Radial mode content in higher order cut-off modes at the source plane and the required actuator displacement amplitude to achieve 110 dB SPL levels in the desired mode were predicted. Equivalent cases with and without flow were examined for the cylindrical and annular geometry, and little difference was found for a duct flow Mach number of 0.1. The actuator ring coupling program will be adapted as a subroutine to the cylindrical duct modal analysis and the exhaust duct modal analysis. This will allow the fan source to be defined in terms of characteristic modes at the fan source plane and predict the propagation to the arbitrarily-located ANC source plane. The actuator velocities can then be determined to generate the anti-phase mode. The resulting combined fan source/ANC pressure can then be calculated at any desired wall sensor position. The actuator velocities can be determined manually or using a simulation of a control system feedback loop. This will provide a very useful ANC system design and evaluation tool.
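A small sketch related to the cut-on analysis mentioned above: the radial eigenvalues of hard-walled annular-duct modes are roots of the cross-characteristic equation in Bessel-function derivatives, and with a uniform mean flow of Mach number M the cutoff frequency scales by sqrt(1 - M^2). The geometry and flow values are hypothetical, not the NASA Lewis ANC Fan's.

```python
import numpy as np
from scipy.special import jvp, yvp
from scipy.optimize import brentq

def annular_radial_eigenvalues(m, r_i, r_o, n_modes=3):
    """Radial eigenvalues kappa of hard-walled annular-duct modes (m, n):
    roots of J'_m(k r_o) Y'_m(k r_i) - J'_m(k r_i) Y'_m(k r_o) = 0."""
    f = lambda k: jvp(m, k * r_o) * yvp(m, k * r_i) - jvp(m, k * r_i) * yvp(m, k * r_o)
    ks = np.linspace(1e-3, 60.0 / (r_o - r_i), 5000)
    vals = f(ks)
    roots = []
    for a, b, fa, fb in zip(ks[:-1], ks[1:], vals[:-1], vals[1:]):
        if fa * fb < 0:  # sign change brackets a root
            roots.append(brentq(f, a, b))
            if len(roots) == n_modes:
                break
    return np.array(roots)

# Hypothetical exhaust-duct geometry; m = 6 spinning mode, mean-flow Mach 0.1
c, M = 343.0, 0.1
kappa = annular_radial_eigenvalues(m=6, r_i=0.3, r_o=0.6)
f_cutoff = kappa * c * np.sqrt(1.0 - M**2) / (2.0 * np.pi)
print(f_cutoff)  # a radial mode is cut-on only above its cutoff frequency
```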
Mohammed, Emad A.; Naugler, Christopher
2017-01-01
Background: Demand forecasting is the area of predictive analytics devoted to predicting future volumes of services or consumables. Fair understanding and estimation of how demand will vary facilitates the optimal utilization of resources. In a medical laboratory, accurate forecasting of future demand, that is, test volumes, can increase efficiency and facilitate long-term laboratory planning. Importantly, in an era of utilization management initiatives, accurately predicted volumes compared to the realized test volumes can form a precise way to evaluate utilization management initiatives. Laboratory test volumes are often highly amenable to forecasting by time-series models; however, the statistical software needed to do this is generally either expensive or highly technical. Method: In this paper, we describe an open-source web-based software tool for time-series forecasting and explain how to use it as a demand forecasting tool in clinical laboratories to estimate test volumes. Results: This tool has three different models, that is, Holt-Winters multiplicative, Holt-Winters additive, and simple linear regression. Moreover, these models are ranked and the best one is highlighted. Conclusion: This tool will allow anyone with historic test volume data to model future demand. PMID:28400996
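A sketch of the tool's three-model comparison using statsmodels' Holt-Winters implementations and a simple linear fit, ranked here by in-sample RMSE (the tool's actual ranking criterion is not specified in the abstract). The monthly test-volume series is synthetic.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Hypothetical monthly test volumes: 5 years with trend, seasonality, and noise
rng = np.random.default_rng(3)
t = np.arange(60)
y = 1000 + 8 * t + 120 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 40, 60)
y = np.clip(y, 1, None)  # the multiplicative model needs positive data

rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))

fits = {
    "Holt-Winters additive": ExponentialSmoothing(
        y, trend="add", seasonal="add", seasonal_periods=12).fit(),
    "Holt-Winters multiplicative": ExponentialSmoothing(
        y, trend="add", seasonal="mul", seasonal_periods=12).fit(),
}
scores = {name: rmse(f.fittedvalues, y) for name, f in fits.items()}
coef = np.polyfit(t, y, 1)  # simple linear regression
scores["linear regression"] = rmse(np.polyval(coef, t), y)

for name, s in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{name}: RMSE = {s:.1f}")  # best-ranked model listed first
```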
A critical review of principal traffic noise models: Strategies and implications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garg, Naveen (Department of Mechanical, Production and Industrial Engineering, Delhi Technological University, Delhi 110042; E-mail: ngarg@mail.nplindia.ernet.in); Maji, Sagar
2014-04-01
The paper presents an exhaustive comparison of principal traffic noise models adopted in recent years in developed nations. The comparison is drawn on the basis of technical attributes including source modelling and sound propagation algorithms. Although the characterization of the source in terms of rolling and propulsion noise, in conjunction with advanced numerical methods for sound propagation, has significantly reduced the uncertainty in traffic noise predictions, the approach followed is quite complex and requires specialized mathematical skills, which is sometimes quite cumbersome for town planners. Also, it is sometimes difficult to choose the best approach when a variety of solutions have been proposed. This paper critically reviews all these aspects of the recent models developed and adopted in some countries and also discusses the strategies followed and the implications of these models. Highlights: • Principal traffic noise models developed are reviewed. • Sound propagation algorithms used in traffic noise models are compared. • Implications of models are discussed.
Low Data Drug Discovery with One-Shot Learning
2017-01-01
Recent advances in machine learning have made significant contributions to drug discovery. Deep neural networks in particular have been demonstrated to provide significant boosts in predictive power when inferring the properties and activities of small-molecule compounds (Ma, J. et al. J. Chem. Inf. Model. 2015, 55, 263-274; PMID 25635324). However, the applicability of these techniques has been limited by the requirement for large amounts of training data. In this work, we demonstrate how one-shot learning can be used to significantly lower the amounts of data required to make meaningful predictions in drug discovery applications. We introduce a new architecture, the iterative refinement long short-term memory, that, when combined with graph convolutional neural networks, significantly improves learning of meaningful distance metrics over small molecules. We open source all models introduced in this work as part of DeepChem, an open-source framework for deep learning in drug discovery (Ramsundar, B. deepchem.io. https://github.com/deepchem/deepchem, 2016). PMID:28470045
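A minimal sketch of the matching idea behind one-shot learning: a query compound is classified by softmax-weighted similarity to a tiny labeled support set. The random embeddings below stand in for the paper's graph-convolutional features, and this is not DeepChem's API.

```python
import numpy as np

def cosine_sim(a, B):
    """Cosine similarity between one query vector and each row of B."""
    return (B @ a) / (np.linalg.norm(B, axis=1) * np.linalg.norm(a) + 1e-12)

def one_shot_predict(query, support_x, support_y, temperature=1.0):
    """Matching-network style prediction: a softmax over similarities to the
    (tiny) support set gives a weighted vote over the support labels."""
    s = cosine_sim(query, support_x) / temperature
    w = np.exp(s - s.max())
    w /= w.sum()
    return float(w @ support_y)  # probability of the positive class

# Stand-in embeddings for a 1-shot-per-class task (graph-conv features in the paper)
rng = np.random.default_rng(4)
support_x = rng.normal(size=(2, 64))           # one active, one inactive compound
support_y = np.array([1.0, 0.0])
query = support_x[0] + rng.normal(0, 0.3, 64)  # query near the active compound
print(one_shot_predict(query, support_x, support_y))
```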
Solar Corona/Wind Composition and Origins of the Solar Wind
NASA Astrophysics Data System (ADS)
Lepri, S. T.; Gilbert, J. A.; Landi, E.; Shearer, P.; von Steiger, R.; Zurbuchen, T.
2014-12-01
Measurements from ACE and Ulysses have revealed a multifaceted solar wind, with distinctly different kinetic and compositional properties depending on the source region of the wind. One of the major outstanding issues in heliophysics concerns the origin and predictability of the quasi-stationary slow solar wind. While the fast solar wind is now proven to originate within large polar coronal holes, the source of the slow solar wind remains particularly elusive and has been the subject of long debate, leading to models that are stationary as well as reconnection based, such as interchange or so-called S-web models. Our talk will focus on observational constraints on solar wind sources and their evolution during the solar cycle. In particular, we will point out long-term variations of wind composition and dynamic properties, particularly focused on the abundance of elements with low First Ionization Potential (FIP), which have been routinely measured on both the ACE and Ulysses spacecraft. We will use these in situ observations, and remote sensing data where available, to provide constraints on solar wind origin during the solar cycle and on their correspondence to predictions from models of the solar wind.
NASA Astrophysics Data System (ADS)
Muhammad, Ario; Goda, Katsuichiro
2018-03-01
This study investigates the impact of model complexity in source characterization and digital elevation model (DEM) resolution on the accuracy of tsunami hazard assessment and fatality estimation through a case study in Padang, Indonesia. Two types of earthquake source models, i.e. complex and uniform slip models, are adopted by considering three resolutions of DEMs, i.e. 150 m, 50 m, and 10 m. For each of the three grid resolutions, 300 complex source models are generated using new statistical prediction models of earthquake source parameters developed from extensive finite-fault models of past subduction earthquakes, whilst 100 uniform slip models are constructed with variable fault geometry without slip heterogeneity. The results highlight that significant changes to tsunami hazard and fatality estimates are observed with regard to earthquake source complexity and grid resolution. Coarse resolution (i.e. 150 m) leads to inaccurate tsunami hazard prediction and fatality estimation, whilst 50-m and 10-m resolutions produce similar results. However, velocity and momentum flux are sensitive to the grid resolution and hence, at least 10-m grid resolution needs to be implemented when considering flow-based parameters for tsunami hazard and risk assessments. In addition, the results indicate that the tsunami hazard parameters and fatality number are more sensitive to the complexity of earthquake source characterization than the grid resolution. Thus, the uniform models are not recommended for probabilistic tsunami hazard and risk assessments. Finally, the findings confirm that uncertainties of tsunami hazard level and fatality in terms of depth, velocity and momentum flux can be captured and visualized through the complex source modeling approach. From tsunami risk management perspectives, this indeed creates big data, which are useful for making effective and robust decisions.
Efthimiou, George C; Bartzis, John G; Berbekar, Eva; Hertwig, Denise; Harms, Frank; Leitl, Bernd
2015-06-26
The capability to predict short-term maximum individual exposure is very important for several applications including, for example, deliberate/accidental releases of hazardous substances, odour fluctuations, or material flammability level exceedance. Recently, the authors proposed a simple approach relating maximum individual exposure to parameters such as the fluctuation intensity and the concentration integral time scale. In the first part of this study (Part I), the methodology was validated against field measurements, which are governed by the natural variability of atmospheric boundary conditions. In Part II of this study, an in-depth validation of the approach is performed using reference data recorded under truly stationary and well-documented flow conditions. For this reason, a boundary-layer wind-tunnel experiment was used. The experimental dataset includes 196 time-resolved concentration measurements which capture the dispersion from a continuous point source within an urban model of semi-idealized complexity. The data analysis allowed the improvement of an important model parameter. The model performed very well in predicting the maximum individual exposure, with a factor of two of observations (FAC2) equal to 95%. For large time intervals, an exponential correction term has been introduced in the model based on the experimental observations. The new model is capable of predicting all time intervals, giving an overall factor of two of observations equal to 100%.
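A small sketch of the "factor of two of observations" (FAC2) score quoted above, assuming it is the usual fraction of predictions falling within a factor of two of the observations; the numbers are invented.

```python
import numpy as np

def fac2(pred, obs):
    """Fraction of predictions within a factor of two of the observations."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    ratio = pred / obs
    return float(np.mean((ratio >= 0.5) & (ratio <= 2.0)))

# Hypothetical maximum-exposure predictions vs. wind-tunnel observations
obs = np.array([1.2, 0.8, 2.5, 4.0, 0.3])
pred = np.array([1.0, 1.5, 2.0, 9.0, 0.4])
print(fac2(pred, obs))  # 0.8: four of five within a factor of two
```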
NASA Astrophysics Data System (ADS)
Schliep, E. M.; Gelfand, A. E.; Holland, D. M.
2015-12-01
There is considerable demand for accurate air quality information in human health analyses. The sparsity of ground monitoring stations across the United States motivates the need for advanced statistical models to predict air quality metrics, such as PM2.5, at unobserved sites. Remote sensing technologies have the potential to expand our knowledge of PM2.5 spatial patterns beyond what we can predict from current PM2.5 monitoring networks. Data from satellites have an additional advantage in not requiring extensive emission inventories necessary for most atmospheric models that have been used in earlier data fusion models for air pollution. Statistical models combining monitoring station data with satellite-obtained aerosol optical thickness (AOT), also referred to as aerosol optical depth (AOD), have been proposed in the literature with varying levels of success in predicting PM2.5. The benefit of using AOT is that satellites provide complete gridded spatial coverage. However, the challenges involved with using it in fusion models are (1) the correlation between the two data sources varies both in time and in space, (2) the data sources are temporally and spatially misaligned, and (3) there is extensive missingness in the monitoring data and also in the satellite data due to cloud cover. We propose a hierarchical autoregressive spatially varying coefficients model to jointly model the two data sources, which addresses the foregoing challenges. Additionally, we offer formal model comparison for competing models in terms of model fit and out of sample prediction of PM2.5. The models are applied to daily observations of PM2.5 and AOT in the summer months of 2013 across the conterminous United States. Most notably, during this time period, we find small in-sample improvement incorporating AOT into our autoregressive model but little out-of-sample predictive improvement.
Influence of heat conducting substrates on explosive crystallization in thin layers
NASA Astrophysics Data System (ADS)
Schneider, Wilhelm
2017-09-01
Crystallization in a thin, initially amorphous layer is considered. The layer is in thermal contact with a substrate of very large dimensions. The energy equation of the layer contains source and sink terms. The source term is due to liberation of latent heat in the crystallization process, while the sink term is due to conduction of heat into the substrate. To determine the latter, the heat diffusion equation for the substrate is solved by applying Duhamel's integral. Thus, the energy equation of the layer becomes a heat diffusion equation with a time integral as an additional term. The latter term indicates that the heat loss due to the substrate depends on the history of the process. To complete the set of equations, the crystallization process is described by a rate equation for the degree of crystallization. The governing equations are then transformed to a moving co-ordinate system in order to analyze crystallization waves that propagate with invariant properties. Dual solutions are found by an asymptotic expansion for large activation energies of molecular diffusion. By introducing suitable variables, the results can be presented in a universal form that comprises the influence of all non-dimensional parameters that govern the process. Of particular interest for applications is the prediction of a critical heat loss parameter for the existence of crystallization waves with invariant properties.
Spatio-temporal modeling of chronic PM10 exposure for the Nurses' Health Study
NASA Astrophysics Data System (ADS)
Yanosky, Jeff D.; Paciorek, Christopher J.; Schwartz, Joel; Laden, Francine; Puett, Robin; Suh, Helen H.
2008-06-01
Chronic epidemiological studies of airborne particulate matter (PM) have typically characterized the chronic PM exposures of their study populations using city- or county-wide ambient concentrations, which limit the studies to areas where nearby monitoring data are available and which ignore within-city spatial gradients in ambient PM concentrations. To provide more spatially refined and precise chronic exposure measures, we used a Geographic Information System (GIS)-based spatial smoothing model to predict monthly outdoor PM10 concentrations in the northeastern and midwestern United States. This model included monthly smooth spatial terms and smooth regression terms of GIS-derived and meteorological predictors. Using cross-validation and other pre-specified selection criteria, terms for distance to road by road class, urban land use, block group and county population density, point- and area-source PM10 emissions, elevation, wind speed, and precipitation were found to be important determinants of PM10 concentrations and were included in the final model. Final model performance was strong (cross-validation R2=0.62), with little bias (-0.4 μg m-3) and high precision (6.4 μg m-3). The final model (with monthly spatial terms) performed better than a model with seasonal spatial terms (cross-validation R2=0.54). The addition of GIS-derived and meteorological predictors improved predictive performance over spatial smoothing (cross-validation R2=0.51) or inverse distance weighted interpolation (cross-validation R2=0.29) methods alone and increased the spatial resolution of predictions. The model performed well in both rural and urban areas, across seasons, and across the entire time period. The strong model performance demonstrates its suitability as a means to estimate individual-specific chronic PM10 exposures for large populations.
NASA Astrophysics Data System (ADS)
Georgiou, Katerina; Abramoff, Rose; Harte, John; Riley, William; Torn, Margaret
2017-04-01
Climatic, atmospheric, and land-use changes all have the potential to alter soil microbial activity via abiotic effects on soil or mediated by changes in plant inputs. Recently, many promising microbial models of soil organic carbon (SOC) decomposition have been proposed to advance understanding and prediction of climate and carbon (C) feedbacks. Most of these models, however, exhibit unrealistic oscillatory behavior and SOC insensitivity to long-term changes in C inputs. Here we diagnose the sources of instability in four models that span the range of complexity of these recent microbial models, by sequentially adding complexity to a simple model to include microbial physiology, a mineral sorption isotherm, and enzyme dynamics. We propose a formulation that introduces density-dependence of microbial turnover, which acts to limit population sizes and reduce oscillations. We compare these models to results from 24 long-term C-input field manipulations, including the Detritus Input and Removal Treatment (DIRT) experiments, to show that there are clear metrics that can be used to distinguish and validate the inherent dynamics of each model structure. We find that widely used first-order models and microbial models without density-dependence cannot readily capture the range of long-term responses observed across the DIRT experiments as a direct consequence of their model structures. The proposed formulation improves predictions of long-term C-input changes, and implies greater SOC storage associated with CO2-fertilization-driven increases in C inputs over the coming century compared to common microbial models. Finally, we discuss our findings in the context of improving microbial model behavior for inclusion in Earth System Models.
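An illustrative sketch of the key modification: a two-pool microbial model whose biomass turnover scales as B**beta, with beta = 1 recovering conventional linear turnover and beta = 2 the proposed density dependence. The parameter values are arbitrary and meant only to show the qualitative difference in transients after a perturbed initial state.

```python
import numpy as np
from scipy.integrate import solve_ivp

def microbial_model(t, y, I, vmax, km, eps, k, beta):
    """Two-pool SOC (S) / microbial biomass (B) model; turnover = k*B**beta.
    beta = 1: conventional linear turnover; beta = 2: density-dependent."""
    S, B = y
    uptake = vmax * B * S / (km + S)
    dS = I - uptake + k * B**beta          # dead microbes return to SOC
    dB = eps * uptake - k * B**beta
    return [dS, dB]

# Illustrative parameters; start away from equilibrium and watch the transient
for beta in (1, 2):
    k = 0.02 if beta == 1 else 0.01
    sol = solve_ivp(microbial_model, (0, 200), [100.0, 2.0],
                    args=(2.0, 0.4, 250.0, 0.3, k, beta), max_step=0.5)
    print(f"beta={beta}: SOC range {sol.y[0].min():.1f}-{sol.y[0].max():.1f}, "
          f"final {sol.y[0, -1]:.1f}")
```

Note the instructive property of the beta = 1 case: its equilibrium SOC is set entirely by the microbial parameters (S* = km*(k/eps)/(vmax - k/eps)) and is independent of the input rate I, which is exactly the input insensitivity the abstract criticizes.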
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davison,Brian H.
2000-12-31
Biofiltration systems can be used for treatment of volatile organic compounds (VOCs); however, the systems are poorly understood and are normally operated as "black boxes". Common operational problems associated with biofilters include fouling, deactivation, and overgrowth, all of which make them ineffective for continuous, long-term use. The objective of this investigation was to develop generic methods for long-term stable operation, in particular by using selective limitation of supplemental nutrients while maintaining high activity. As part of this effort, we have provided a deeper fundamental understanding of the important biological and transport mechanisms in biodestruction of sparingly soluble VOCs and have extended this approach and mathematical models to additional systems of high-priority EM relevance: direct degradation and cometabolic degradation of priority pollutants such as BTEX and chlorinated organics. Innovative aspects of this project included development of a user-friendly two-dimensional predictive model/program for MS Windows 95/98/2000 to elucidate mass transfer and kinetic limitations in these systems, isolation of a unique microorganism capable of using sparingly soluble organic and chloroorganic VOCs as its sole carbon and energy source, and making long-term growth possible by successfully decoupling growth and degradation metabolisms in operating trickle-bed bioreactors.
Glied, Sherry; Zaylor, Abigail
2015-07-01
The authors assess how Medicare financing and projections of future costs have changed since 2000. They also assess the impact of legislative reforms on the sources and levels of financing and compare cost forecasts made at different times. Although the aging U.S. population and rising health care costs are expected to increase the share of gross domestic product devoted to Medicare, changes made in the program over the past decade have helped stabilize Medicare's financial outlook--even as benefits have been expanded. Long-term forecasting uncertainty should make policymakers and beneficiaries wary of dramatic changes to the program in the near term that are intended to alter its long-term forecast: the range of error associated with cost forecasts rises as the forecast window lengthens. Instead, policymakers should focus on the immediate policy window, taking steps to reduce the current burden of Medicare costs by containing spending today.
NASA Astrophysics Data System (ADS)
Hu, Jianlin; Jathar, Shantanu; Zhang, Hongliang; Ying, Qi; Chen, Shu-Hua; Cappa, Christopher D.; Kleeman, Michael J.
2017-04-01
Organic aerosol (OA) is a major constituent of ultrafine particulate matter (PM0.1). Recent epidemiological studies have identified associations between PM0.1 OA and premature mortality and low birth weight. In this study, the source-oriented UCD/CIT model was used to simulate the concentrations and sources of primary organic aerosols (POA) and secondary organic aerosols (SOA) in PM0.1 in California for a 9-year (2000-2008) modeling period with 4 km horizontal resolution to provide more insights about PM0.1 OA for health effect studies. As a related quality control, predicted monthly average concentrations of fine particulate matter (PM2.5) total organic carbon at six major urban sites had mean fractional bias of -0.31 to 0.19 and mean fractional errors of 0.4 to 0.59. The predicted ratio of PM2.5 SOA/OA was lower than estimates derived from chemical mass balance (CMB) calculations by a factor of 2-3, which suggests the potential effects of processes such as POA volatility, additional SOA formation mechanisms, and missing sources. OA in PM0.1, the focus size fraction of this study, is dominated by POA. Wood smoke is found to be the single biggest source of PM0.1 OA in winter in California, while meat cooking, mobile emissions (gasoline and diesel engines), and other anthropogenic sources (mainly solvent usage and waste disposal) are the most important sources in summer. Biogenic emissions are predicted to be the largest PM0.1 SOA source, followed by mobile sources and other anthropogenic sources, but these rankings are sensitive to the SOA model used in the calculation. Air pollution control programs aiming to reduce PM0.1 OA concentrations should consider controlling solvent usage, waste disposal, and mobile emissions in California, but these findings should be revisited after the latest science is incorporated into the SOA exposure calculations. The spatial distributions of SOA associated with different sources are not sensitive to the choice of SOA model, although the absolute amount of SOA can change significantly. Therefore, the spatial distributions of PM0.1 POA and SOA over the 9-year study period provide useful information for epidemiological studies to further investigate the associations with health outcomes.
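For reference, a sketch of the mean fractional bias/error statistics quoted above, in their commonly used form MFB = (2/N) Σ (M - O)/(M + O); the concentrations are invented.

```python
import numpy as np

def mfb_mfe(model, obs):
    """Mean fractional bias and mean fractional error, as commonly used to
    benchmark PM model predictions against monitor observations."""
    m, o = np.asarray(model, float), np.asarray(obs, float)
    frac = 2.0 * (m - o) / (m + o)
    return float(np.mean(frac)), float(np.mean(np.abs(frac)))

# Hypothetical monthly-average organic carbon concentrations (ug/m3) at one site
obs = np.array([3.2, 2.8, 4.1, 5.0, 3.6])
mod = np.array([2.9, 3.1, 3.0, 4.2, 3.8])
print(mfb_mfe(mod, obs))
```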
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwantes, Jon M.
Kelly Fitzgerald assisted with laboratory testing for an ongoing R&D project known as Electrochemically Modulated Separation (EMS) for on-line rapid preseparations of actinides prior to mass spectrometry analysis. Ryne Burgess used SCALE 5.1 ORIGEN-ARP to predict isotope libraries for the Units 1, 2, and 3 reactors and the Unit 4 spent fuel pool, for comparison against measurements of environmental samples collected at the site in order to identify the source terms of the accident. Comparison of the cesium 134/137 and cesium 136/137 ratios observed in environmental samples with ORIGEN-ARP predictions indicated that the Unit 4 Spent Fuel Pool did not significantly contribute to radionuclide release during the Fukushima Daiichi accident.
NASA Technical Reports Server (NTRS)
Tiwari, S. N.; Lakshmanan, B.
1993-01-01
A high-speed shear layer is studied using a compressibility-corrected Reynolds stress turbulence model which employs a newly developed model for the pressure-strain correlation. The MacCormack explicit predictor-corrector method is used for solving the governing equations and the turbulence transport equations. The stiffness arising from the source terms in the turbulence equations is handled by a semi-implicit numerical technique. Results obtained using the new model show a sharper reduction in growth rate with increasing convective Mach number. Some improvements were also noted in the prediction of the normalized streamwise stress and Reynolds shear stress. The computed results are in good agreement with the experimental data.
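A scalar sketch of the semi-implicit (point-implicit) source treatment mentioned above: linearizing the stiff source about the current state allows stable time steps far beyond the explicit limit. The model problem and coefficients are illustrative.

```python
def point_implicit_step(u, dt, source, dsource_du):
    """Semi-implicit (point-implicit) update for du/dt = S(u):
    from u_{n+1} = u_n + dt*[S(u_n) + dS/du*(u_{n+1} - u_n)] one gets
    u_{n+1} = u_n + dt*S(u_n) / (1 - dt*dS/du), stable for stiff sinks."""
    return u + dt * source(u) / (1.0 - dt * dsource_du(u))

# Stiff model problem: S(u) = -1000*(u - 1); explicit Euler needs dt < 2e-3
S = lambda u: -1000.0 * (u - 1.0)
dS = lambda u: -1000.0
u = 5.0
for _ in range(10):
    u = point_implicit_step(u, dt=0.01, source=S, dsource_du=dS)
print(u)  # relaxes stably toward 1.0 despite dt >> explicit stability limit
```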
Outlook for alternative energy sources. [aviation fuels
NASA Technical Reports Server (NTRS)
Card, M. E.
1980-01-01
Predictions are made concerning the development of alternative energy sources in the light of the present national energy situation. Particular emphasis is given to the impact of alternative fuels development on aviation fuels. The future outlook for aircraft fuels is that for the near term, there possibly will be no major fuel changes, but minor specification changes may be possible if supplies decrease. In the midterm, a broad cut fuel may be used if current development efforts are successful. As synfuel production levels increase beyond the 1990's there may be some mixtures of petroleum-based and synfuel products with the possibility of some shale distillate and indirect coal liquefaction products near the year 2000.
A Flexible Spatio-Temporal Model for Air Pollution with Spatial and Spatio-Temporal Covariates.
Lindström, Johan; Szpiro, Adam A; Sampson, Paul D; Oron, Assaf P; Richards, Mark; Larson, Tim V; Sheppard, Lianne
2014-09-01
The development of models that provide accurate spatio-temporal predictions of ambient air pollution at small spatial scales is of great importance for the assessment of potential health effects of air pollution. Here we present a spatio-temporal framework that predicts ambient air pollution by combining data from several different monitoring networks and deterministic air pollution model(s) with geographic information system (GIS) covariates. The model presented in this paper has been implemented in an R package, SpatioTemporal, available on CRAN. The model is used by the EPA-funded Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air) to produce estimates of ambient air pollution; MESA Air uses the estimates to investigate the relationship between chronic exposure to air pollution and cardiovascular disease. In this paper we use the model to predict long-term average concentrations of NOx in the Los Angeles area during a ten-year period. Predictions are based on measurements from the EPA Air Quality System, MESA Air specific monitoring, and output from a source dispersion model for traffic-related air pollution (Caline3QHCR). Accuracy in predicting long-term average concentrations is evaluated using an elaborate cross-validation setup that accounts for a sparse spatio-temporal sampling pattern in the data and adjusts for temporal effects. The predictive ability of the model is good, with cross-validated R2 of approximately 0.7 at subject sites. Replacing four geographic covariate indicators of traffic density with the Caline3QHCR dispersion model output resulted in very similar prediction accuracy from a more parsimonious and more interpretable model. Adding traffic-related geographic covariates to the model that included Caline3QHCR did not further improve the prediction accuracy.
Summer drought predictability over Europe: empirical versus dynamical forecasts
NASA Astrophysics Data System (ADS)
Turco, Marco; Ceglar, Andrej; Prodhomme, Chloé; Soret, Albert; Toreti, Andrea; Doblas-Reyes, Francisco J.
2017-08-01
Seasonal climate forecasts could be an important planning tool for farmers, government and insurance companies that can lead to better and timely management of seasonal climate risks. However, climate seasonal forecasts are often under-used, because potential users are not well aware of the capabilities and limitations of these products. This study aims at assessing the merits and caveats of a statistical empirical method, the ensemble streamflow prediction system (ESP, an ensemble based on reordering historical data) and an operational dynamical forecast system, the European Centre for Medium-Range Weather Forecasts—System 4 (S4) in predicting summer drought in Europe. Droughts are defined using the Standardized Precipitation Evapotranspiration Index for the month of August integrated over 6 months. Both systems show useful and mostly comparable deterministic skill. We argue that this source of predictability is mostly attributable to the observed initial conditions. S4 shows only higher skill in terms of ability to probabilistically identify drought occurrence. Thus, currently, both approaches provide useful information and ESP represents a computationally fast alternative to dynamical prediction applications for drought prediction.
Libert, Marie; Schütz, Marta Kerber; Esnault, Loïc; Féron, Damien; Bildstein, Olivier
2014-06-01
This study emphasizes different experimental approaches and provides perspectives to apprehend biocorrosion phenomena in the specific disposal environment by investigating microbial activity with regard to the modification of corrosion rate, which in turn can have an impact on the safety of radioactive waste geological disposal. It is found that iron-reducing bacteria are able to use corrosion products such as iron oxides and "dihydrogen" as new energy sources, especially in the disposal environment which contains low amounts of organic matter. Moreover, in the case of sulphate-reducing bacteria, the results show that mixed aerobic and anaerobic conditions are the most hazardous for stainless steel materials, a situation which is likely to occur in the early stage of a geological disposal. Finally, an integrated methodological approach is applied to validate the understanding of the complex processes and to design experiments aiming at the acquisition of kinetic data used in long term predictive modelling of biocorrosion processes. © 2013.
Localized Enzymatic Degradation of Polymers: Physics and Scaling Laws
NASA Astrophysics Data System (ADS)
Lalitha Sridhar, Shankar; Vernerey, Franck
2018-03-01
Biodegradable polymers are naturally abundant in living matter and have led to great advances in controlling environmental pollution due to synthetic polymer products, harnessing renewable energy from biofuels, and in the field of biomedicine. One of the most prevalent mechanisms of biodegradation involves enzyme-catalyzed depolymerization by biological agents. Despite numerous studies dedicated to understanding polymer biodegradation in different environments, a simple model that predicts the macroscopic behavior (mass and structural loss) in terms of microphysical processes (enzyme transport and reaction) is lacking. An interesting phenomenon occurs when an enzyme source (released by a biological agent) attacks a tight polymer mesh that restricts free diffusion. A fuzzy interface separating the intact and fully degraded polymer propagates away from the source and into the polymer as the enzymes diffuse and react in time. Understanding the characteristics of this interface will provide crucial insight into the biodegradation process and potential ways to precisely control it. In this work, we present a centrosymmetric model of biodegradation by characterizing the moving fuzzy interface in terms of its speed and width. The model predicts that the characteristics of this interface are governed by two time scales, namely the polymer degradation and enzyme transport times, which in turn depend on four main polymer and enzyme properties. A key finding of this work is simple scaling laws that can be used to guide biodegradation of polymers in different applications.
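A rough 1D sketch of the moving fuzzy interface: enzyme diffuses from a constant source and degrades the polymer at a first-order rate, and the position and width of the degraded/intact front are read off the polymer-density profile. All parameter values are invented.

```python
import numpy as np

# 1D sketch: enzyme c diffuses from a constant source at x=0 and degrades
# polymer density p at rate k*c*p; the degraded/intact interface moves inward.
D, k = 1.0e-10, 5.0e-3          # enzyme diffusivity (m^2/s), degradation rate
nx, L = 400, 2.0e-3             # grid points, domain length (m)
dx = L / nx
dt = 0.4 * dx**2 / D            # within the explicit diffusion stability limit
c, p = np.zeros(nx), np.ones(nx)
for step in range(20000):
    c[0] = 1.0                           # enzyme source (normalized concentration)
    lap = (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    lap[0] = lap[-1] = 0.0               # crude fixed-value boundaries
    c += dt * D * lap
    p *= np.exp(-k * c * dt)             # first-order enzymatic degradation

front = np.flatnonzero(p < 0.5)          # cells already more than half degraded
print(f"front position ~ {front[-1] * dx * 1e3:.2f} mm" if front.size
      else "no front yet")
width = np.flatnonzero((p > 0.1) & (p < 0.9))
print(f"fuzzy-interface width ~ {width.size * dx * 1e3:.2f} mm")
```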
On Acoustic Source Specification for Rotor-Stator Interaction Noise Prediction
NASA Technical Reports Server (NTRS)
Nark, Douglas M.; Envia, Edmane; Burley, Casey L.
2010-01-01
This paper describes the use of measured source data to assess the effects of acoustic source specification on rotor-stator interaction noise predictions. Specifically, the acoustic propagation and radiation portions of a recently developed coupled computational approach are used to predict tonal rotor-stator interaction noise from a benchmark configuration. In addition to the use of full measured data, randomization of source mode relative phases is also considered for specification of the acoustic source within the computational approach. Comparisons with sideline noise measurements are performed to investigate the effects of various source descriptions on both inlet and exhaust predictions. The inclusion of additional modal source content is shown to have a much greater influence on the inlet results. Reasonable agreement between predicted and measured levels is achieved for the inlet, as well as the exhaust when shear layer effects are taken into account. For the number of trials considered, phase randomized predictions follow statistical distributions similar to those found in previous statistical source investigations. The shape of the predicted directivity pattern relative to measurements also improved with phase randomization, having predicted levels generally within one standard deviation of the measured levels.
Nonlinear synthesis of infrasound propagation through an inhomogeneous, absorbing atmosphere.
de Groot-Hedlin, C D
2012-08-01
An accurate and efficient method to predict infrasound amplitudes from large explosions in the atmosphere is required for diverse source types, including bolides, volcanic eruptions, and nuclear and chemical explosions. A finite-difference, time-domain approach is developed to solve a set of nonlinear fluid dynamic equations for total pressure, temperature, and density fields rather than acoustic perturbations. Three key features for the purpose of synthesizing nonlinear infrasound propagation in realistic media are that it includes gravitational terms, it allows for acoustic absorption, including molecular vibration losses at frequencies well below the molecular vibration frequencies, and the environmental models are constrained to have axial symmetry, allowing a three-dimensional simulation to be reduced to two dimensions. Numerical experiments are performed to assess the algorithm's accuracy and the effect of source amplitudes and atmospheric variability on infrasound waveforms and shock formation. Results show that infrasound waveforms steepen and their associated spectra are shifted to higher frequencies for nonlinear sources, leading to enhanced infrasound attenuation. Results also indicate that nonlinear infrasound amplitudes depend strongly on atmospheric temperature and pressure variations. The solution for total field variables and insertion of gravitational terms also allows for the computation of other disturbances generated by explosions, including gravity waves.
An Empirical Temperature Variance Source Model in Heated Jets
NASA Technical Reports Server (NTRS)
Khavaran, Abbas; Bridges, James
2012-01-01
An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is written subsequently using a Green's function method, while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determines the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.
A radio source occultation experiment with comet Austin 1982g, with unusual results
NASA Technical Reports Server (NTRS)
De Pater, I.; Ip, W.-H.
1984-01-01
A radio source occultation by comet Austin 1982g was observed on September 15-16, 1982. A change in the apparent position of 1242 + 41 by 1.3 arcsec occurred when the source was 220,000 km away from the cometary ion tail. If this change was due to refraction by the cometary plasma, it indicates an electron density of the plasma of about 10,000/cu cm. When the radio source was on the other side of the plasma tail, at a distance of 230,000 km, the position angle of the electric vector of the radio source changed gradually over about 140 deg within two hours. This observation cannot be explained in terms of ionospheric Faraday rotation, and results from either an intrinsic change in the radio source or Faraday rotation in the cometary plasma due to a change in the direction and/or strength of the magnetic field. In the latter case, the cometary coma must have an electron density and a magnetic field strength orders of magnitude larger than current theories predict.
En route noise levels from propfan test assessment airplane
NASA Technical Reports Server (NTRS)
Garber, Donald P.; Willshire, William L., Jr.
1994-01-01
The en route noise test was designed to characterize propagation of propfan noise from cruise altitudes to the ground. In-flight measurements of propfan source levels and directional patterns were made by a chase plane flying in formation with the propfan test assessment (PTA) airplane. Ground noise measurements were taken during repeated flights over a distributed microphone array. The microphone array on the ground was used to provide ensemble-averaged estimates of mean flyover noise levels, establish confidence limits for those means, and measure propagation-induced noise variability. Even for identical nominal cruise conditions, peak sound levels for individual overflights varied substantially about the average, particularly when overflights were performed on different days. Large day-to-day variations in peak level measurements appeared to be caused by large day-to-day differences in propagation conditions and tended to obscure small variations arising from operating conditions. A parametric evaluation of the sensitivity of this prediction method to weather measurement and source level uncertainties was also performed. In general, predictions showed good agreement with measurements. However, the method was unable to predict short-term variability of ensemble-averaged data within individual overflights. Although variations in absorption appear to be the dominant factor in variations of peak sound levels recorded on the ground, accurate predictions of those levels require that a complete description of operational conditions be taken into account. The comprehensive and integrated methods presented in this paper have adequately predicted ground-measured sound levels. On average, peak sound levels were predicted within 3 dB for each of the three different cruise conditions.
Flight Test Results: CTAS Cruise/Descent Trajectory Prediction Accuracy for En route ATC Advisories
NASA Technical Reports Server (NTRS)
Green, S.; Grace, M.; Williams, D.
1999-01-01
The Center/TRACON Automation System (CTAS), under development at NASA Ames Research Center, is designed to assist controllers with the management and control of air traffic transitioning to/from congested airspace. This paper focuses on the transition from the en route environment to high-density terminal airspace under a time-based arrival-metering constraint. Two flight tests were conducted at the Denver Air Route Traffic Control Center (ARTCC) to study trajectory-prediction accuracy, the key to accurate Decision Support Tool advisories such as conflict detection/resolution and fuel-efficient metering conformance. In collaboration with NASA Langley Research Center, these tests were part of an overall effort to research systems and procedures for the integration of CTAS and flight management systems (FMS). The Langley Transport Systems Research Vehicle Boeing 737 airplane flew a combined total of 58 cruise-arrival trajectory runs while following CTAS clearance advisories. Actual trajectories of the airplane were compared to CTAS and FMS predictions to measure trajectory-prediction accuracy and identify the primary sources of error for both. The research airplane was used to evaluate several levels of cockpit automation, ranging from conventional avionics to a performance-based vertical navigation (VNAV) FMS. Trajectory-prediction accuracy was analyzed with respect to both ARTCC radar tracking and GPS-based aircraft measurements. This paper presents detailed results describing the trajectory accuracy and error sources. Although differences were found in both accuracy and error sources, CTAS accuracy was comparable to the FMS in terms of both meter-fix arrival-time performance (in support of metering) and 4D-trajectory prediction (key to conflict prediction). Overall arrival-time errors (mean plus standard deviation) were measured to be approximately 24 seconds during the first flight test (23 runs) and 15 seconds during the second flight test (25 runs). The major source of error during these tests was found to be the predicted winds aloft used by CTAS. Position and velocity estimates of the airplane provided to CTAS by the ATC Host radar tracker were found to be a relatively insignificant error source for the trajectory conditions evaluated. Airplane performance modeling errors within CTAS were found to not significantly affect arrival-time errors when the constrained descent procedures were used. The most significant effect related to the flight guidance was observed to be the cross-track and turn-overshoot errors associated with conventional VOR guidance. Lateral navigation (LNAV) guidance significantly reduced both the cross-track and turn-overshoot errors. Pilot procedures and VNAV guidance were found to significantly reduce the vertical profile errors associated with atmospheric and aircraft performance model errors.
Multi-Temporal Decomposed Wind and Load Power Models for Electric Energy Systems
NASA Astrophysics Data System (ADS)
Abdel-Karim, Noha
This thesis is motivated by the recognition that sources of uncertainty in electric power systems are multifold and may have potentially far-reaching effects. In the past, only the system load forecast was considered to be the main challenge. More recently, however, the uncertain price of electricity and hard-to-predict power produced by renewable resources, such as wind and solar, are making the operating and planning environment much more challenging. Near-real-time power imbalances are compensated by means of frequency regulation and generally require fast-responding, costly resources. Because of this, more accurate forecasts and look-ahead scheduling would reduce the need for expensive power balancing. Similarly, long-term planning and seasonal maintenance need to take into account the long-term demand forecast as well as how short-term generation scheduling is done. The better the demand forecast, the more efficient planning will be as well. Moreover, computer algorithms for scheduling and planning are essential in helping system operators decide what to schedule and planners what to build. This is needed given the overall complexity created by the differing abilities of generation technologies to adjust their power output, by demand uncertainties, and by network delivery constraints. Given the growing presence of major uncertainties, it is likely that the main control applications will use more probabilistic approaches. Today's predominantly deterministic methods will be replaced by methods which account for key uncertainties as decisions are made. It is well understood that although demand and wind power cannot be predicted with very high accuracy, taking predictions into consideration and scheduling in a look-ahead way over several time horizons generally results in more efficient and reliable utilization than when decisions are made assuming deterministic, often worst-case scenarios. This change in approach will ultimately require new electricity market rules capable of providing the right incentives to manage uncertainties and of differentiating various technologies according to the rate at which they can respond to ever-changing conditions. Given the overall need for modeling uncertainties in electric energy systems, we consider in this thesis the problem of multi-temporal modeling of wind and demand power in particular. Historical data are used to derive prediction models for several future time horizons. The short-term prediction models derived can be used for look-ahead economic dispatch and unit commitment, while the long-term annual predictive models can be used for investment planning. As expected, the accuracy of such predictive models depends on the time horizons over which the predictions are made, as well as on the nature of the uncertain signals. It is shown that predictive models obtained using the same general modeling approaches result in different accuracy for wind than for demand power. In what follows, we introduce several models which have qualitatively different patterns, ranging from hourly to annual. We first transform historic time-stamped data into a Fourier transform (FT) representation. The frequency-domain data representation is used to decompose the wind and load power signals and to derive predictive models relevant for short-term and long-term predictions using spectral extraction techniques. The short-term results are then interpreted as a linear predictive coding (LPC) model, whose accuracy is analyzed.
Next, a new Markov-Based Sensitivity Model (MBSM) for short-term prediction is proposed, and the dispatch costs of uncertainty under different predictive models are derived and compared. Moreover, the Discrete Markov Process (DMP) representation is applied to help assess the probabilities of the most likely short-, medium- and long-term states and the related multi-temporal risks. In addition, this thesis discusses the operational impacts of wind power integration under different scenario levels by performing more than 9,000 AC Optimal Power Flow (ACOPF) runs. The effects of both wind and load variations on system constraints and costs are presented. The limitations of DC Optimal Power Flow (DCOPF) versus ACOPF are highlighted by the convergence problems that arise from the effect of wind power on line flows and net power injections. By studying the effect of wind power on line flows, we found that the divergence problem arises in areas with a high share of wind and hydro generation capacity (cheap generation). (Abstract shortened by UMI.)
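Two minimal sketches of techniques named in this abstract, with synthetic data and hypothetical parameters throughout; they illustrate the general ideas (frequency-domain decomposition followed by an LPC-style short-term model, and a discrete Markov process for state probabilities), not the thesis's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(24 * 365)                    # one year of hourly samples
load = (100 + 20 * np.sin(2 * np.pi * t / 24)
        + 10 * np.sin(2 * np.pi * t / (24 * 365))
        + rng.normal(0, 3, t.size))

# Frequency-domain decomposition: keep the K strongest spectral components.
K = 5
spec = np.fft.rfft(load - load.mean())
keep = np.argsort(np.abs(spec))[-K:]
trimmed = np.zeros_like(spec)
trimmed[keep] = spec[keep]
periodic = np.fft.irfft(trimmed, n=load.size) + load.mean()

# LPC-style short-term model: fit an AR(p) to the residual by least squares.
resid = load - periodic
p = 6
X = np.column_stack([resid[p - k - 1 : -k - 1] for k in range(p)])
y = resid[p:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# One-step-ahead residual forecast; a full forecast would add the periodic
# component extrapolated from the retained spectral lines.
next_resid = coef @ resid[-1:-p - 1:-1]
print("one-step-ahead residual forecast:", round(float(next_resid), 2))
```

A discrete Markov process for multi-step state probabilities might be estimated along these lines, again with hypothetical states:

```python
import numpy as np

# Hypothetical discretized wind-power states observed hourly:
# 0 = low, 1 = medium, 2 = high.
states = np.array([0, 0, 1, 2, 2, 1, 0, 1, 1, 2, 2, 2, 1, 0, 0, 1])
n = 3

# Estimate the transition matrix by counting observed transitions.
P = np.zeros((n, n))
for a, b in zip(states[:-1], states[1:]):
    P[a, b] += 1
P /= P.sum(axis=1, keepdims=True)

# k-step-ahead state probabilities from the last observed state.
current = np.zeros(n)
current[states[-1]] = 1.0
for k in (1, 6, 24):
    print(k, "steps ahead:", np.round(current @ np.linalg.matrix_power(P, k), 3))
```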
Long-term fitness consequences of early environment in a long-lived ungulate
Festa-Bianchet, Marco; Pelletier, Fanie
2017-01-01
Cohort effects can be a major source of heterogeneity and play an important role in population dynamics. Silver-spoon effects, when environmental quality at birth improves future performance regardless of the adult environment, can induce strong lagged responses on population growth. Alternatively, the external predictive adaptive response (PAR) hypothesis predicts that organisms will adjust their developmental trajectory and physiology during early life in anticipation of expected adult conditions but has rarely been assessed in wild species. We used over 40 years of detailed individual monitoring of bighorn ewes (Ovis canadensis) to quantify long-term cohort effects on survival and reproduction. We then tested both the silver-spoon and the PAR hypotheses. Cohort effects involved a strong interaction between birth and current environments: reproduction and survival were lowest for ewes that were born and lived at high population densities. This interaction, however, does not support the PAR hypothesis because individuals with matching high-density birth and adult environments had reduced fitness. Instead, individuals born at high density had overall lower lifetime fitness suggesting a silver-spoon effect. Early-life conditions can induce long-term changes in fitness components, and their effects on cohort fitness vary according to adult environment. PMID:28424347
Learning in Noise: Dynamic Decision-Making in a Variable Environment
Gureckis, Todd M.; Love, Bradley C.
2009-01-01
In engineering systems, noise is a curse, obscuring important signals and increasing the uncertainty associated with measurement. However, the negative effects of noise and uncertainty are not universal. In this paper, we examine how people learn sequential control strategies given different sources and amounts of feedback variability. In particular, we consider people’s behavior in a task where short- and long-term rewards are placed in conflict (i.e., the best option in the short-term is worst in the long-term). Consistent with a model based on reinforcement learning principles (Gureckis & Love, in press), we find that learners differentially weight information predictive of the current task state. In particular, when cues that signal state are noisy and uncertain, we find that participants’ ability to identify an optimal strategy is strongly impaired relative to equivalent amounts of uncertainty that obscure the rewards/valuations of those states. In other situations, we find that noise and uncertainty in reward signals may paradoxically improve performance by encouraging exploration. Our results demonstrate how experimentally-manipulated task variability can be used to test predictions about the mechanisms that learners engage in dynamic decision making tasks. PMID:20161328
Gardner, Benjamin
2015-01-01
The term 'habit' is widely used to predict and explain behaviour. This paper examines use of the term in the context of health-related behaviour, and explores how the concept might be made more useful. A narrative review is presented, drawing on a scoping review of 136 empirical studies and 8 literature reviews undertaken to document usage of the term 'habit', and methods to measure it. A coherent definition of 'habit', and proposals for improved methods for studying it, were derived from findings. Definitions of 'habit' have varied in ways that are often implicit and not coherently linked with an underlying theory. A definition is proposed whereby habit is a process by which a stimulus generates an impulse to act as a result of a learned stimulus-response association. Habit-generated impulses may compete or combine with impulses and inhibitions arising from other sources, including conscious decision-making, to influence responses, and need not generate behaviour. Most research on habit is based on correlational studies using self-report measures. Adopting a coherent definition of 'habit', and a wider range of paradigms, designs and measures to study it, may accelerate progress in habit theory and application.
NASA Astrophysics Data System (ADS)
Caldararu, Silvia; Purves, Drew W.; Smith, Matthew J.
2017-04-01
Improving international food security under a changing climate and increasing human population will be greatly aided by improving our ability to modify, understand and predict crop growth. What we predominantly have at our disposal are either process-based models of crop physiology or statistical analyses of yield datasets, both of which suffer from various sources of error. In this paper, we present a generic process-based crop model (PeakN-crop v1.0) which we parametrise using a Bayesian model-fitting algorithm against three different data sources: space-based vegetation indices, eddy-covariance productivity measurements and regional crop yields. We show that the model parametrised without data, based on prior knowledge of the parameters, can largely capture the observed behaviour, but the data-constrained model greatly improves the model fit and reduces prediction uncertainty. We investigate the extent to which each dataset contributes to the model performance and show that while all data improve on the prior model fit, the satellite-based data and crop yield estimates are particularly important for reducing model error and uncertainty. Despite these improvements, we conclude that there are still significant knowledge gaps in terms of available data for model parametrisation, but our study can help indicate the necessary data collection to improve our predictions of crop yields and crop responses to environmental changes.
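A minimal sketch of Bayesian model fitting in the style described, assuming a toy logistic growth curve with a single free parameter, a Gaussian prior, and a random-walk Metropolis sampler; the model, prior and data are hypothetical stand-ins, not PeakN-crop v1.0.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "crop growth" model: logistic biomass curve with one free rate parameter r.
def model(r, t):
    return 10.0 / (1.0 + np.exp(-r * (t - 50.0)))

t_obs = np.linspace(0, 100, 20)
y_obs = model(0.08, t_obs) + rng.normal(0, 0.4, t_obs.size)   # synthetic data

def log_post(r):
    if r <= 0:
        return -np.inf
    log_prior = -0.5 * ((r - 0.1) / 0.05) ** 2                # Gaussian prior on r
    log_like = -0.5 * np.sum(((y_obs - model(r, t_obs)) / 0.4) ** 2)
    return log_prior + log_like

# Random-walk Metropolis sampling of the posterior.
r, lp, samples = 0.1, None, []
lp = log_post(r)
for _ in range(5000):
    prop = r + rng.normal(0, 0.01)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        r, lp = prop, lp_prop
    samples.append(r)

post = np.array(samples[1000:])                               # discard burn-in
print(f"posterior mean r = {post.mean():.3f} +/- {post.std():.3f}")
```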
Visuomotor adaptation needs a validation of prediction error by feedback error
Gaveau, Valérie; Prablanc, Claude; Laurent, Damien; Rossetti, Yves; Priot, Anne-Emmanuelle
2014-01-01
The processes underlying short-term plasticity induced by visuomotor adaptation to a shifted visual field are still debated. Two main sources of error can induce motor adaptation: reaching feedback errors, which correspond to visually perceived discrepancies between hand and target positions, and errors between predicted and actual visual reafferences of the moving hand. These two sources of error are closely intertwined and difficult to disentangle, as both the target and the reaching limb are simultaneously visible. Accordingly, the goal of the present study was to clarify the relative contributions of these two types of errors during a pointing task under prism-displaced vision. In the “terminal feedback error” condition, subjects were allowed to view their hand only at movement end, simultaneously with the target. In the “movement prediction error” condition, viewing of the hand was limited to the movement duration, in the absence of any visual target, so that error signals arose solely from comparisons between predicted and actual reafferences of the hand. To prevent intentional corrections of errors, a subthreshold, progressive stepwise increase in prism deviation was used, so that subjects remained unaware of the visual deviation applied in both conditions. An adaptive aftereffect was observed in the “terminal feedback error” condition only. As long as subjects remained unaware of the optical deviation and attributed pointing errors to themselves, prediction error alone was insufficient to induce adaptation. These results indicate a critical role of hand-to-target feedback error signals in visuomotor adaptation; consistent with recent neurophysiological findings, they suggest that a combination of feedback and prediction error signals is necessary for eliciting aftereffects. They also suggest that feedback error updates the prediction of reafferences when a visual perturbation is introduced gradually and cognitive factors are eliminated or strongly attenuated. PMID:25408644
The interaction of sound with a poroelastic ground
NASA Astrophysics Data System (ADS)
Hickey, C. J.
2012-12-01
An airborne acoustic wave impinging on the surface of the ground provides a good mechanical source for investigating the near surface. Since the ground is porous, the impinging sound wave induces motion of the fluid within the pores as well as vibrating the solid framework. The most complete understanding of the interaction of airborne sound with the ground is obtained by treating the ground as a poroelastic or poroviscoelastic medium. This treatment predicts that three types of waves can propagate in a ground with a deformable framework: two compressional waves, the fast (Type I) wave and the slow (Type II) wave, and one shear wave. Model calculations of the energy partition at an air-soil interface predict that most of the energy is partitioned into the Type II compressional wave, less into the Type I compressional wave, and little into the shear wave. However, when measuring the solid motion of the soil, one must consider how much of that wave energy appears as motion of the solid frame. The deformation associated with the Type II compressional wave has only a small contribution from the solid component, whereas the bulk deformation of the Type I compressional wave has a solid-to-fluid deformation ratio of approximately one. This modeling suggests that the soil solid velocity induced by an acoustic source is associated with the Type I compressional wave. In other words, the airborne source is simply an inefficient seismic source.
NASA Astrophysics Data System (ADS)
Malusà, Marco G.; Wang, Jiangang; Garzanti, Eduardo; Liu, Zhi-Chao; Villa, Igor M.; Wittmann, Hella
2017-10-01
Detrital thermochronology is often employed to assess the evolutionary stage of an entire orogenic belt using the lag-time approach, i.e., the difference between the cooling and depositional ages of detrital mineral grains preserved in a stratigraphic succession. The contribution of different eroding sources to the final sediment sink is controlled by several factors, including the short-term erosion rate and the mineral fertility of the eroded bedrock. Here, we use apatite fertility data and cosmogenic-derived erosion rates in the Po river catchment (Alps-Apennines) to calculate the expected percentage of apatite grains supplied to the modern Po delta from the major Alpine and Apenninic eroding sources. We test these predictions against a dataset of trace-element and Nd-isotope signatures on 871 apatite grains from 14 modern sand samples, and we use apatite fission-track data to validate our geochemical approach to provenance discrimination. We found that apatite grains shed from different sources are geochemically distinct. Apatites from the Lepontine dome in the Central Alps show relative HREE enrichment, lower concentrations of Ce and U, and higher 147Sm/144Nd ratios compared to apatites derived from the External Massifs. The derived provenance budgets point to a dominant apatite contribution to the Po delta from the high-fertility Lepontine dome, consistent with the range independently predicted from cosmogenic-nuclide and mineral-fertility data. Our results demonstrate that the single-mineral record in the final sediment sink can be largely determined by high-fertility source rocks exposed in rapidly eroding areas within the drainage. This implies that the detrital thermochronology record may reflect processes affecting relatively small parts of the orogenic system under consideration. A reliable approach to lag-time analysis would thus benefit from an independent provenance discrimination of dated mineral grains, which may allow many previous interpretations of detrital thermochronology datasets in terms of orogen-wide steady state to be profitably reconsidered.
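A minimal sketch of the provenance-budget logic, assuming each source's expected apatite flux scales with its area, cosmogenic erosion rate and apatite fertility; all names and numbers are hypothetical, not the paper's data.

```python
# Hypothetical source areas (km^2), cosmogenic erosion rates (mm/yr) and
# apatite fertility (grains per gram of eroded bedrock), for illustration.
sources = {
    "Lepontine dome":   dict(area=4000.0, erosion=0.60, fertility=120.0),
    "External massifs": dict(area=6000.0, erosion=0.45, fertility=30.0),
    "Apennines":        dict(area=9000.0, erosion=0.30, fertility=15.0),
}

# Expected apatite flux of each source: area x erosion rate x fertility.
flux = {name: s["area"] * s["erosion"] * s["fertility"] for name, s in sources.items()}
total = sum(flux.values())
for name, f in flux.items():
    print(f"{name}: {100 * f / total:.1f}% of apatite grains at the sink")
```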
Estimating Sources and Fluxes of Dissolved and Particulate Organic Matter in UK Rivers
NASA Astrophysics Data System (ADS)
Adams, Jessica; Tipping, Edward; Quinton, John; Old, Gareth
2014-05-01
Over the past two centuries, pools and fluxes of carbon, nitrogen and phosphorus in UK ecosystems have been altered by intensification of agriculture, land use change and atmospheric pollution, leading to acidification and eutrophication of surface waters. In addition, climate change is now also predicted to substantially impact these systems. The CEH Long Term Large Scale (LTLS) project therefore aims to simulate the pools and fluxes of carbon, nitrogen and phosphorus and their stoichiometry during the cycling process. Through the N14C model, the release of C, N and P via drainage water and erosion will be simulated using historical climate data and tested against contemporary data. For the contemporary data, water samples from four UK catchments (Ribble, Wiltshire Avon, Conwy, Dee) were collected at the tidal limit of each river, including a combination of high- and low-flow samples targeted using 5-day forecasts and local weather-station data. These samples were filtered, centrifuged and sent to the NERC radiocarbon facility for analysis by accelerator mass spectrometry (AMS) to obtain both PO14C and DO14C data. Radiocarbon provides a unique and powerful way of estimating long-term turnover rates of organic matter, and has proven to be an invaluable tool for studying upland terrestrial and aquatic systems. It has, however, scarcely been used in larger, lowland river systems. Since the riverine organic matter captured is likely to have originated from terrestrial and riparian sources, the radiocarbon data will be a rigorous test of the model's ability to simulate the coupling of erosion and leaching processes, and the stoichiometric relationships between C:N:P.
Richardson, Claire; Rutherford, Shannon; Agranovski, Igor
2018-06-01
Given the significance of mining as a source of particulates, accurate characterization of emissions is important for the development of appropriate emission estimation techniques for use in modeling predictions and to inform regulatory decisions. The currently available emission estimation methods for Australian open-cut coal mines relate primarily to total suspended particulates and PM 10 (particulate matter with an aerodynamic diameter <10 μm), and limited data are available relating to the PM 2.5 (<2.5 μm) size fraction. To provide an initial analysis of the appropriateness of the currently available emission estimation techniques, this paper presents results of sampling completed at three open-cut coal mines in Australia. The monitoring data demonstrate that the particulate size fraction varies for different mining activities, and that the region in which the mine is located influences the characteristics of the particulates emitted to the atmosphere. The proportion of fine particulates in the sample increased with distance from the source, with the coarse fraction being a more significant proportion of total suspended particulates close to the source of emissions. In terms of particulate composition, the results demonstrate that the particulate emissions are predominantly sourced from naturally occurring geological material, and coal comprises less than 13% of the overall emissions. The size fractionation exhibited by the sampling data sets is similar to that adopted in current Australian emission estimation methods but differs from the size fractionation presented in the U.S. Environmental Protection Agency methodology. Development of region-specific emission estimation techniques for PM 10 and PM 2.5 from open-cut coal mines is necessary to allow accurate prediction of particulate emissions to inform regulatory decisions and for use in modeling predictions. Comprehensive air quality monitoring was undertaken, and corresponding recommendations were provided.
Shibata, Tomoyuki; Solo-Gabriele, Helena M; Sinigalliano, Christopher D; Gidley, Maribeth L; Plano, Lisa R W; Fleisher, Jay M; Wang, John D; Elmir, Samir M; He, Guoqing; Wright, Mary E; Abdelzaher, Amir M; Ortega, Cristina; Wanless, David; Garza, Anna C; Kish, Jonathan; Scott, Troy; Hollenbeck, Julie; Backer, Lorraine C; Fleming, Lora E
2010-11-01
The objectives of this work were to compare enterococci (ENT) measurements based on the membrane filter method, ENT(MF), with alternatives that can provide faster results, including alternative enterococci methods (e.g., chromogenic substrate (CS) and quantitative polymerase chain reaction (qPCR)), and results from regression models based upon environmental parameters that can be measured in real time. ENT(MF) were also compared to source tracking markers (Staphylococcus aureus, Bacteroidales human and dog markers, and Catellicoccus gull marker) in an effort to interpret the variability of the signal. Results showed that concentrations of enterococci based upon MF (<2 to 3320 CFU/100 mL) were significantly different from the CS and qPCR methods (p < 0.01). The correlations between MF and CS (r = 0.58, p < 0.01) were stronger than between MF and qPCR (r ≤ 0.36, p < 0.01). Enterococci levels by MF, CS, and qPCR methods were positively correlated with turbidity and tidal height. Enterococci by MF and CS were also inversely correlated with solar radiation, but enterococci by qPCR were not. The regression model based on environmental variables provided fair qualitative predictions of enterococci by MF in real time for daily geometric mean levels, but not for individual samples. Overall, ENT(MF) was not significantly correlated with source tracking markers, with the exception of samples collected during one storm event. The inability of the regression model to predict ENT(MF) levels for individual samples is likely due to the different sources of ENT impacting the beach at any given time, making it particularly difficult to predict short-term variability of ENT(MF) from environmental parameters.
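A minimal sketch of a regression model of the kind described, assuming log enterococci counts are regressed on real-time environmental predictors; the variable names, coefficient signs and synthetic data only mimic the reported correlations and are not the study's fitted model.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200

# Hypothetical predictors measurable in real time.
turbidity = rng.gamma(2.0, 2.0, n)          # NTU
tide = rng.uniform(-0.5, 1.5, n)            # m above datum
solar = rng.uniform(0, 1000, n)             # W/m^2

# Synthetic response mimicking the reported signs: positive for turbidity
# and tidal height, negative for solar radiation.
log_ent = 1.0 + 0.15 * turbidity + 0.8 * tide - 0.001 * solar + rng.normal(0, 0.5, n)

# Ordinary least squares fit and in-sample R^2.
X = np.column_stack([np.ones(n), turbidity, tide, solar])
beta, *_ = np.linalg.lstsq(X, log_ent, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((log_ent - pred) ** 2) / np.sum((log_ent - log_ent.mean()) ** 2)
print("coefficients:", np.round(beta, 4), " R^2 =", round(float(r2), 2))
```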
Development, Evaluation, and Application of a Primary Aerosol Model.
Wang, I T; Chico, T; Huang, Y H; Farber, R J
1999-09-01
The Segmented-Plume Primary Aerosol Model (SPPAM) has been developed over the past several years. The earlier model development goals were simply to generalize the widely used Industrial Source Complex Short-Term (ISCST) model to simulate plume transport and dispersion under light wind conditions and to handle a large number of roadway or line sources. The goals have since been expanded to include development of an improved algorithm for effective plume transport velocity, more accurate and efficient line and area source dispersion algorithms, and, recently, a more realistic and computationally efficient algorithm for plume depletion due to particle dry deposition. A performance evaluation of SPPAM has been carried out using the 1983 PNL dual tracer experimental data. The results show the model predictions to be in good agreement with observations in both plume advection-dispersion and particulate matter (PM) depletion by dry deposition. For PM 2.5 impact analysis, SPPAM has been applied to the Rubidoux area of California. Emission sources included in the modeling analysis are paved road dust, diesel vehicular exhaust, gasoline vehicular exhaust, and tire wear particles from a large number of roadways in Rubidoux and surrounding areas. For the selected modeling periods, the predicted primary PM 2.5 to primary PM 10 concentration ratios for the Rubidoux sampling station are in the range of 0.39-0.46. The organic fractions of the primary PM 2.5 impacts are estimated to be at least 34-41%. Detailed modeling results indicate that the relatively high organic fractions are primarily due to the proximity of heavily traveled roadways north of the sampling station. The predictions are influenced by a number of factors; principal among them are the receptor locations relative to major roadways, the volume and composition of traffic on these roadways, and the prevailing meteorological conditions.
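SPPAM itself is not reproduced here; as a sketch of the model family it generalizes, below is the textbook steady-state Gaussian plume concentration for a single point source with ground reflection via an image source. All parameter values are hypothetical, and real line/area sources would integrate or segment this kernel along the roadway.

```python
import numpy as np

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration (g/m^3) at crosswind offset y
    and height z, for source strength q (g/s), wind speed u (m/s) and effective
    release height h (m); ground reflection is handled with an image source.
    sigma_y and sigma_z are dispersion coefficients evaluated at the receptor's
    downwind distance."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - h)**2 / (2 * sigma_z**2))
                + np.exp(-(z + h)**2 / (2 * sigma_z**2)))
    return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Hypothetical example: ground-level receptor 200 m downwind of a roadway segment.
c = gaussian_plume(q=0.5, u=2.0, y=0.0, z=1.5, h=2.0, sigma_y=20.0, sigma_z=10.0)
print(f"concentration = {c * 1e6:.1f} ug/m^3")
```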
NASA Astrophysics Data System (ADS)
Riley, W. J.; Zhu, Q.; Tang, J.
2016-12-01
The land models integrated in Earth System Models (ESMs) are critical components necessary to predict soil carbon dynamics and carbon-climate interactions under a changing climate. Yet these models have been shown to have poor predictive power when compared with observations, and they ignore many processes known to the observational communities to influence above- and belowground carbon dynamics. Here I will report work to tightly couple observations and perturbation experiment results with the development of an ESM land model (ALM), focusing on nutrient constraints on the terrestrial C cycle. Using high-frequency flux tower observations and short-term nitrogen and phosphorus perturbation experiments, we show that conceptualizing plant and soil microbe interactions as a multi-substrate, multi-competitor kinetic network allows for accurate prediction of nutrient acquisition. Next, using multiple-year FACE and fertilization response observations at many forest sites, we show that capturing the observed responses requires representation of dynamic allocation to respond to the resulting stresses. Integrating the mechanisms implied by these observations into ALM leads to much lower observational bias and to very different predictions of long-term soil and aboveground C stocks and dynamics, and therefore C-climate feedbacks. I describe how these types of observational constraints are being integrated into the open-source International Land Model Benchmarking (ILAMB) package, and end with the argument that consolidating observations of all sorts for easy use by modelers is an important goal for improving C-climate feedback predictions.
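A minimal sketch of the multi-substrate, multi-competitor kinetic idea, loosely in the style of equilibrium-chemistry-approximation (ECA) competitive uptake; the denominator structure shown and every parameter value are assumptions for illustration, not the ALM implementation.

```python
import numpy as np

# S: substrate concentrations (e.g., NH4, NO3, PO4); E: competitor abundances
# (e.g., plant roots, microbes); K[i, j]: affinity of consumer j for substrate i;
# V[i, j]: maximum specific uptake rate. All values are hypothetical.
S = np.array([1.0, 0.5, 0.2])
E = np.array([2.0, 1.0])
K = np.array([[0.3, 0.1],
              [0.5, 0.2],
              [0.4, 0.4]])
V = np.ones((3, 2))

# ECA-style uptake: each substrate-consumer flux is throttled both by all
# substrates competing for that consumer and by all consumers competing for
# that substrate.
F = np.empty_like(K)
for i in range(S.size):
    for j in range(E.size):
        denom = 1.0 + np.sum(S / K[:, j]) + np.sum(E / K[i, :])
        F[i, j] = V[i, j] * E[j] * S[i] / (K[i, j] * denom)
print(np.round(F, 4))
```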
Yo, Chia-Hung; Lee, Si-Huei; Chang, Shy-Shin; Lee, Matthew Chien-Hung; Lee, Chien-Chang
2014-02-20
We performed a systematic review and meta-analysis of studies on high-sensitivity C-reactive protein (hs-CRP) assays to see whether these tests are predictive of atrial fibrillation (AF) recurrence after cardioversion. Systematic review and meta-analysis. PubMed, EMBASE and Cochrane databases as well as a hand search of the reference lists in the retrieved articles from inception to December 2013. This review selected observational studies in which the measurements of serum CRP were used to predict AF recurrence. An hs-CRP assay was defined as any CRP test capable of measuring serum CRP to below 0.6 mg/dL. We summarised test performance characteristics with the use of forest plots, hierarchical summary receiver operating characteristic curves and bivariate random effects models. Meta-regression analysis was performed to explore the source of heterogeneity. We included nine qualifying studies comprising a total of 347 patients with AF recurrence and 335 controls. A CRP level higher than the optimal cut-off point was an independent predictor of AF recurrence after cardioversion (summary adjusted OR: 3.33; 95% CI 2.10 to 5.28). The estimated pooled sensitivity and specificity for hs-CRP was 71.0% (95% CI 63% to 78%) and 72.0% (61% to 81%), respectively. Most studies used a CRP cut-off point of 1.9 mg/L to predict long-term AF recurrence (77% sensitivity, 65% specificity), and 3 mg/L to predict short-term AF recurrence (73% sensitivity, 71% specificity). hs-CRP assays are moderately accurate in predicting AF recurrence after successful cardioversion.
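A minimal sketch of one step of such a meta-analysis: pooling study-level adjusted odds ratios on the log scale with inverse-variance (fixed-effect) weights. The per-study numbers are hypothetical, and the paper's bivariate random-effects model is more involved than this.

```python
import numpy as np

# Hypothetical per-study adjusted odds ratios with 95% CIs (lower, upper).
or_est = np.array([2.5, 4.0, 3.1, 2.2, 5.6])
ci_lo = np.array([1.2, 1.8, 1.5, 0.9, 2.0])
ci_hi = np.array([5.2, 8.9, 6.4, 5.4, 15.7])

# Inverse-variance pooling on the log-OR scale (fixed-effect model).
log_or = np.log(or_est)
se = (np.log(ci_hi) - np.log(ci_lo)) / (2 * 1.96)   # SE recovered from the CI
w = 1 / se**2
pooled = np.sum(w * log_or) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
print(f"summary OR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96 * pooled_se):.2f}"
      f" to {np.exp(pooled + 1.96 * pooled_se):.2f})")
```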
A partial differential equation for pseudocontact shift.
Charnock, G T P; Kuprov, Ilya
2014-10-07
It is demonstrated that pseudocontact shift (PCS), viewed as a scalar or a tensor field in three dimensions, obeys an elliptic partial differential equation with a source term that depends on the Hessian of the unpaired electron probability density. The equation enables straightforward PCS prediction and analysis in systems with delocalized unpaired electrons, particularly for the nuclei located in their immediate vicinity. It is also shown that the probability density of the unpaired electron may be extracted, using a regularization procedure, from PCS data.
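The actual PCS equation has a source term built from the Hessian of the unpaired electron probability density; purely as a generic illustration of solving an elliptic equation with a source term on a grid, here is a minimal Jacobi-iteration sketch for a 2-D Poisson problem with an assumed Gaussian source and zero Dirichlet boundaries.

```python
import numpy as np

# Solve laplacian(u) = f on the unit square with u = 0 on the boundary,
# as a stand-in for an elliptic equation with a source term.
n = 65
h = 1.0 / (n - 1)
x = np.linspace(0, 1, n)
X, Y = np.meshgrid(x, x, indexing="ij")
f = np.exp(-((X - 0.5)**2 + (Y - 0.5)**2) / 0.01)   # hypothetical source term

u = np.zeros((n, n))
for _ in range(5000):                               # Jacobi iteration
    u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                            + u[1:-1, 2:] + u[1:-1, :-2]
                            - h**2 * f[1:-1, 1:-1])
print("u at centre:", round(float(u[n // 2, n // 2]), 6))
```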
Orbital Debris Quarterly News. Volume 13; No. 1
NASA Technical Reports Server (NTRS)
Liou, J.-C. (Editor); Shoots, Debi (Editor)
2009-01-01
Topics discussed include: new debris from a decommissioned satellite with a nuclear power source; debris from the destruction of the Fengyun-1C meteorological satellite; quantitative analysis of the European Space Agency's Automated Transfer Vehicle 'Jules Verne' reentry event; microsatellite impact tests; solar cycle 24 predictions and other long-term projections; and the geosynchronous (GEO) environment for the Orbital Debris Engineering Model (ORDEM2008). Abstracts from the NASA Orbital Debris Program Office, examining satellite reentry risk assessments and statistical issues for uncontrolled reentry hazards, are also included.
The NASA Space Radiation Health Program
NASA Technical Reports Server (NTRS)
Schimmerling, W.; Sulzman, F. M.
1994-01-01
The NASA Space Radiation Health Program is a part of the Life Sciences Division in the Office of Space Science and Applications (OSSA). The goal of the Space Radiation Health Program is development of scientific bases for assuring adequate radiation protection in space. A proposed research program will determine long-term health risks from exposure to cosmic rays and other radiation. Ground-based animal models will be used to predict risk of exposures at varying levels from various sources and the safe levels for manned space flight.
Ma-Kellams, Christine; Bishop, Brianna; Zhang, Mei Fong; Villagrana, Brian
2017-01-01
To what extent could "Big Data" predict the results of the 2016 U.S. presidential election better than more conventional sources of aggregate measures? To test this idea, the present research used Google search trends versus other forms of state-level data (i.e., both behavioral measures like the incidence of hate crimes, hate groups, and police brutality and implicit measures like Implicit Association Test (IAT) data) to predict each state's popular vote for the 2016 presidential election. Results demonstrate that, when taken in isolation, zero-order correlations reveal that prevalence of hate groups, prevalence of hate crimes, Google searches for racially charged terms (i.e., related to White supremacy groups, racial slurs, and the Nazi movement), and political conservatism were all significant predictors of popular support for Trump. However, subsequent hierarchical regression analyses show that when these predictors are considered simultaneously, only Google search data for historical White supremacy terms (e.g., "Adolf Hitler") uniquely predicted election outcomes earlier and beyond political conservatism. Thus, Big Data, in the form of Google search, emerged as a more potent predictor of political behavior than other aggregate measures, including implicit attitudes and behavioral measures of racial bias. Implications for the role of racial bias in the 2016 presidential election in particular and the utility of Google search data more generally are discussed.
Tabaton, Massimo; Odetti, Patrizio; Cammarata, Sergio; Borghi, Roberta; Monacelli, Fiammetta; Caltagirone, Carlo; Bossù, Paola; Buscema, Massimo; Grossi, Enzo
2010-01-01
The search for markers that are able to predict the conversion of amnestic mild cognitive impairment (aMCI) to Alzheimer's disease (AD) is crucial for early mechanistic therapies. Using artificial neural networks (ANNs), 22 variables that are known risk factors of AD were analyzed in 80 patients with aMCI, over a period spanning at least 2 years. The cases were chosen from 195 aMCI subjects recruited by four Italian Alzheimer's disease units. Glucose metabolism disorder, female gender, and the apolipoprotein E epsilon3/epsilon4 genotype were found to be the biological variables with high relevance for predicting the conversion of aMCI. Scores on attention and short-term memory tests were also predictors. Surprisingly, the plasma concentration of amyloid-beta (42) had a low predictive value. The results support the utility of ANN analysis as a new tool for the interpretation of data from heterogeneous and distinct sources.
The scope and control of attention: Sources of variance in working memory capacity.
Chow, Michael; Conway, Andrew R A
2015-04-01
Working memory capacity is a strong positive predictor of many cognitive abilities, across various domains. The pattern of positive correlations across domains has been interpreted as evidence for a unitary source of inter-individual differences in behavior. However, recent work suggests that there are multiple sources of variance contributing to working memory capacity. The current study (N = 71) investigates individual differences in the scope and control of attention, in addition to the number and resolution of items maintained in working memory. Latent variable analyses indicate that the scope and control of attention reflect independent sources of variance and each account for unique variance in general intelligence. Also, estimates of the number of items maintained in working memory are consistent across tasks and related to general intelligence whereas estimates of resolution are task-dependent and not predictive of intelligence. These results provide insight into the structure of working memory, as well as intelligence, and raise new questions about the distinction between number and resolution in visual short-term memory.
Trailing Edge Noise Prediction Based on a New Acoustic Formulation
NASA Technical Reports Server (NTRS)
Casper, J.; Farassat, F.
2002-01-01
A new analytic result in acoustics called 'Formulation 1B,' proposed by Farassat, is used to compute broadband trailing edge noise from an unsteady surface pressure distribution on a thin airfoil in the time domain. This formulation is a new solution of the Ffowcs Williams-Hawkings equation with the loading source term, and has been shown in previous research to provide time domain predictions of broadband noise that are in excellent agreement with experiment. Furthermore, this formulation lends itself readily to rotating reference frames and statistical analysis of broadband trailing edge noise. Formulation 1B is used to calculate the far field noise radiated from the trailing edge of a NACA 0012 airfoil in low Mach number flows, using both analytical and experimental data on the airfoil surface. The results are compared to analytical results and experimental measurements that are available in the literature. Good agreement between predictions and measurements is obtained.
The acoustic field of a point source in a uniform boundary layer over an impedance plane
NASA Technical Reports Server (NTRS)
Zorumski, W. E.; Willshire, W. L., Jr.
1986-01-01
The acoustic field of a point source in a boundary layer above an impedance plane is investigated analytically using Obukhov quasi-potential functions, extending the normal-mode theory of Chunchuzov (1984) to account for the effects of finite ground-plane impedance and source height. The solution is found to be asymptotic to the surface-wave term studied by Wenzel (1974) in the limit of vanishing wind speed, suggesting that normal-mode theory can be used to model the effects of an atmospheric boundary layer on infrasonic sound radiation. Model predictions are derived for noise-generation data obtained by Willshire (1985) at the Medicine Bow wind-turbine facility. Long-range downwind propagation is found to behave as a cylindrical wave, with attenuation proportional to the wind speed, the boundary-layer displacement thickness, the real part of the ground admittance, and the square of the frequency.
Empirical calibration of the near-infrared Ca II triplet - III. Fitting functions
NASA Astrophysics Data System (ADS)
Cenarro, A. J.; Gorgas, J.; Cardiel, N.; Vazdekis, A.; Peletier, R. F.
2002-02-01
Using a near-infrared stellar library of 706 stars with a wide coverage of atmospheric parameters, we study the behaviour of the CaII triplet strength in terms of effective temperature, surface gravity and metallicity. Empirical fitting functions for recently defined line-strength indices, namely CaT*, CaT and PaT, are provided. These functions can be easily implemented into stellar population models to provide accurate predictions for integrated CaII strengths. We also present a thorough study of the various error sources and their relation to the residuals of the derived fitting functions. Finally, the derived functional forms and the behaviour of the predicted CaII are compared with those of previous works in the field.
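A minimal sketch of deriving an empirical fitting function by least squares, assuming a low-order polynomial in theta = 5040/Teff, log g and [Fe/H]; the functional form, chosen terms and synthetic data are illustrative assumptions, not the paper's published fitting functions.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300

# Hypothetical stellar atmospheric parameters.
theta = 5040.0 / rng.uniform(3500, 7000, n)      # theta = 5040 / Teff
logg = rng.uniform(0.5, 5.0, n)
feh = rng.uniform(-2.5, 0.5, n)

# Synthetic CaT-like index strength with noise.
cat = 6.0 + 2.0 * theta - 0.3 * logg + 1.2 * feh + 0.5 * theta * feh \
      + rng.normal(0, 0.15, n)

# Fit a low-order polynomial fitting function by least squares and report
# the rms residual, the basic quality metric for such functions.
X = np.column_stack([np.ones(n), theta, logg, feh, theta * feh])
coef, *_ = np.linalg.lstsq(X, cat, rcond=None)
rms = np.sqrt(np.mean((cat - X @ coef) ** 2))
print("coefficients:", np.round(coef, 3), " rms residual:", round(float(rms), 3))
```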
Determination of near and far field acoustics for advanced propeller configurations
NASA Technical Reports Server (NTRS)
Korkan, K. D.; Jaeger, S. M.; Kim, J. H.
1989-01-01
A method has been studied for predicting the acoustic field of the SR-3 transonic propfan using flow data generated by two versions of the NASPROP-E computer code. Since the flow fields calculated by the solvers include the shock-wave system of the propeller, the nonlinear quadrupole noise source term is included along with the monopole and dipole noise sources in the calculation of the acoustic near field. Acoustic time histories in the near field are determined by transforming the azimuthal coordinate in the rotating, blade-fixed coordinate system to the time coordinate in a nonrotating coordinate system. Fourier analysis of the pressure time histories is used to obtain the frequency spectra of the near-field noise.
Helicopter external noise prediction and reduction
NASA Astrophysics Data System (ADS)
Lewy, Serge
Helicopter external noise is a major challenge for manufacturers, in both the civil and military domains. The strongest acoustic sources are due to the main rotor. Two flight conditions are analyzed in detail because the radiated sound is then very loud and very impulsive: (1) high-speed flight, with large thickness and shear terms on the advancing-blade side; and (2) descent flight, with blade-vortex interaction at certain rates of descent. In both cases, computational results have been obtained and tests of new blade designs have been conducted in wind tunnels. These studies prove that large noise reductions can be achieved. It is shown in conclusion, however, that the other acoustic sources (tail rotor, turboshaft engines) must not be neglected in defining a quiet helicopter.
The myths of 'big data' in health care.
Jacofsky, D J
2017-12-01
'Big data' is a term for data sets that are so large or complex that traditional data processing applications are inadequate. Billions of dollars have been spent on attempts to build predictive tools from large sets of poorly controlled healthcare metadata. Companies often sell reports at a physician or facility level based on various flawed data sources, and comparative websites of 'publicly reported data' purport to educate the public. Physicians should be aware of concerns and pitfalls seen in such data definitions, data clarity, data relevance, data sources and data cleaning when evaluating analytic reports from metadata in health care. Cite this article: Bone Joint J 2017;99-B:1571-6. ©2017 The British Editorial Society of Bone & Joint Surgery.
Application of CFD (Fluent) to LNG spills into geometrically complex environments.
Gavelli, Filippo; Bullister, Edward; Kytomaa, Harri
2008-11-15
Recent discussions on the fate of LNG spills into impoundments have suggested that the commonly used combination of SOURCE5 and DEGADIS to predict the flammable vapor dispersion distances is not accurate, as it does not account for vapor entrainment by wind. SOURCE5 assumes the vapor layer to grow upward uniformly in the form of a quiescent saturated gas cloud that ultimately spills over impoundment walls. The rate of spillage is then used as the source term for DEGADIS. A more rigorous approach to predicting the flammable vapor dispersion distance is to use a computational fluid dynamics (CFD) model. CFD codes can take into account the physical phenomena that govern the fate of LNG spills into impoundments, such as the mixing between air and the evaporated gas. Before a CFD code can be proposed as an alternate method for the prediction of flammable vapor cloud distances, it has to be validated with proper experimental data. This paper describes the use of Fluent, a widely used commercial CFD code, to simulate one of the tests in the "Falcon" series of LNG spill tests. The "Falcon" test series was the only series that specifically addressed the effects of impoundment walls and construction obstructions on the behavior and dispersion of the vapor cloud. Most other tests, such as the Coyote and Burro series, involved spills onto water and relatively flat ground. The paper discusses the critical parameters necessary for a CFD model to accurately predict the behavior of a cryogenic spill in a geometrically complex domain, and presents comparisons between the gas concentrations measured during the Falcon-1 test and those predicted using Fluent. Finally, the paper discusses the effect vapor barriers have in containing part of the spill, thereby shortening the ignitable vapor cloud and therefore the required hazard area. This issue was addressed by comparing the Falcon-1 simulation (spill into the impoundment) with the simulation of an identical spill without any impoundment walls or obstacles within the impoundment area.
Dinov, Ivo D; Heavner, Ben; Tang, Ming; Glusman, Gustavo; Chard, Kyle; Darcy, Mike; Madduri, Ravi; Pa, Judy; Spino, Cathie; Kesselman, Carl; Foster, Ian; Deutsch, Eric W; Price, Nathan D; Van Horn, John D; Ames, Joseph; Clark, Kristi; Hood, Leroy; Hampstead, Benjamin M; Dauer, William; Toga, Arthur W
2016-01-01
A unique archive of Big Data on Parkinson's Disease is collected, managed and disseminated by the Parkinson's Progression Markers Initiative (PPMI). The integration of such complex and heterogeneous Big Data from multiple sources offers unparalleled opportunities to study the early stages of prevalent neurodegenerative processes, track their progression and quickly identify the efficacies of alternative treatments. Many previous human and animal studies have examined the relationship of Parkinson's disease (PD) risk to trauma, genetics, environment, co-morbidities, or lifestyle. The defining characteristics of Big Data (large size, incongruency, incompleteness, complexity, multiplicity of scales, and heterogeneity of information-generating sources) all pose challenges to the classical techniques for data management, processing, visualization and interpretation. We propose, implement, test and validate complementary model-based and model-free approaches for PD classification and prediction. To explore PD risk using Big Data methodology, we jointly processed complex PPMI imaging, genetics, clinical and demographic data. Collective representation of the multi-source data facilitates the aggregation and harmonization of complex data elements. This enables joint modeling of the complete data, leading to the development of Big Data analytics, predictive synthesis, and statistical validation. Using heterogeneous PPMI data, we developed a comprehensive protocol for end-to-end data characterization, manipulation, processing, cleaning, analysis and validation. Specifically, we (i) introduce methods for rebalancing imbalanced cohorts, (ii) utilize a wide spectrum of classification methods to generate consistent and powerful phenotypic predictions, and (iii) generate reproducible machine-learning based classification that enables the reporting of model parameters and diagnostic forecasting based on new data. We evaluated several complementary model-based predictive approaches, which failed to generate accurate and reliable diagnostic predictions. However, the results of several machine-learning based classification methods indicated significant power to predict Parkinson's disease in the PPMI subjects (consistent accuracy, sensitivity, and specificity exceeding 96%, confirmed using statistical n-fold cross-validation). Clinical (e.g., Unified Parkinson's Disease Rating Scale (UPDRS) scores), demographic (e.g., age), genetics (e.g., rs34637584, chr12), and derived neuroimaging biomarker (e.g., cerebellum shape index) data all contributed to the predictive analytics and diagnostic forecasting. Model-free Big Data machine learning-based classification methods (e.g., adaptive boosting, support vector machines) can outperform model-based techniques in terms of predictive precision and reliability (e.g., forecasting patient diagnosis). We observed that statistical rebalancing of cohort sizes yields better discrimination of group differences, specifically for predictive analytics based on heterogeneous and incomplete PPMI data. UPDRS scores play a critical role in predicting diagnosis, which is expected based on the clinical definition of Parkinson's disease. Even without longitudinal UPDRS data, however, the accuracy of model-free machine learning based classification is over 80%.
The methods, software and protocols developed here are openly shared and can be employed to study other neurodegenerative disorders (e.g., Alzheimer's, Huntington's, amyotrophic lateral sclerosis), as well as for other predictive Big Data analytics applications.
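A minimal sketch of two of the steps named above (cohort rebalancing and n-fold cross-validated, model-free classification), using synthetic data and a generic boosting classifier; this is not the PPMI pipeline, and all sizes and settings are assumptions.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)

# Synthetic imbalanced cohorts: 400 patients vs 100 controls, 20 features.
X_pd = rng.normal(0.5, 1.0, (400, 20))
X_hc = rng.normal(0.0, 1.0, (100, 20))

# Rebalance by oversampling the minority cohort with replacement. (In a real
# analysis, rebalancing should happen inside each training fold to avoid
# leaking duplicated samples into the test folds.)
idx = rng.integers(0, 100, 400)
X = np.vstack([X_pd, X_hc[idx]])
y = np.array([1] * 400 + [0] * 400)

# n-fold cross-validated accuracy of a model-free classifier.
scores = cross_val_score(AdaBoostClassifier(n_estimators=100), X, y, cv=5)
print("5-fold accuracy:", scores.round(3), "mean =", round(float(scores.mean()), 3))
```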
Pesaranghader, Ahmad; Matwin, Stan; Sokolova, Marina; Beiko, Robert G
2016-05-01
Measures of protein functional similarity are essential tools for function prediction, evaluation of protein-protein interactions (PPIs) and other applications. Several existing methods perform comparisons between proteins based on the semantic similarity of their GO terms; however, these measures are highly sensitive to modifications in the topological structure of GO, tend to be focused on specific analytical tasks and concentrate on the GO terms themselves rather than considering their textual definitions. We introduce simDEF, an efficient method for measuring semantic similarity of GO terms using their GO definitions, which is based on the Gloss Vector measure commonly used in natural language processing. The simDEF approach builds optimized definition vectors for all relevant GO terms, and expresses the similarity of a pair of proteins as the cosine of the angle between their definition vectors. Relative to existing similarity measures, when validated on a yeast reference database, simDEF improves correlation with sequence homology by up to 50%, shows a correlation improvement of more than 4% with gene expression in the biological process hierarchy of GO, and increases PPI predictability by more than 2.5% in F1 score for the molecular function hierarchy. Datasets, results and source code are available at http://kiwi.cs.dal.ca/Software/simDEF. Contact: ahmad.pgh@dal.ca or beiko@cs.dal.ca. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
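A minimal sketch of the core operation described (cosine of the angle between two definition vectors); the vectors are hypothetical placeholders, and the aggregation of term-term scores into a protein-level score follows whatever convention the paper adopts.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two definition vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Hypothetical optimized definition vectors for two GO terms.
vec_a = np.array([0.8, 0.1, 0.0, 0.3, 0.5])
vec_b = np.array([0.7, 0.0, 0.2, 0.4, 0.4])
print(f"semantic similarity = {cosine(vec_a, vec_b):.3f}")
```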
Predicting who will drop out of nursing courses: a machine learning exercise.
Moseley, Laurence G; Mead, Donna M
2008-05-01
The concepts of causation and prediction are different, and have different implications for practice. This distinction is applied here to studies of the problem of student attrition (although it is more widely applicable). Studies of attrition from nursing courses have tended to concentrate on causation, trying, largely unsuccessfully, to elicit what causes drop out. However, the problem may more fruitfully be cast in terms of predicting who is likely to drop out. One powerful method for attempting to make predictions is rule induction. This paper reports the use of the Answer Tree package from SPSS for that purpose. The main data set consisted of 3978 records on 528 nursing students, split into a training set and a test set. The source was standard university student records. The method obtained 84% sensitivity, 70% specificity, and 94% accuracy on previously unseen cases. The method requires large amounts of high quality data. When such data are available, rule induction offers a way to reduce attrition. It would be desirable to compare its results with those of predictions made by tutors using more informal conventional methods.
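Answer Tree was an SPSS decision-tree (rule-induction) package; a minimal analogous sketch with a generic decision-tree learner is shown below, reporting sensitivity, specificity and accuracy on held-out cases. The features, labels and split are synthetic stand-ins for the student-record data, not the study's variables.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(6)
n = 528

# Hypothetical student-record features: age, entry grade, attendance rate.
X = np.column_stack([rng.uniform(18, 45, n),
                     rng.normal(60, 10, n),
                     rng.uniform(0.5, 1.0, n)])
drop_out = (X[:, 2] + rng.normal(0, 0.15, n) < 0.72).astype(int)  # synthetic label

# Induce rules on a training set, evaluate on previously unseen cases.
X_tr, X_te, y_tr, y_te = train_test_split(X, drop_out, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
pred = tree.predict(X_te)

tp = np.sum((pred == 1) & (y_te == 1)); fn = np.sum((pred == 0) & (y_te == 1))
tn = np.sum((pred == 0) & (y_te == 0)); fp = np.sum((pred == 1) & (y_te == 0))
print(f"sensitivity={tp/(tp+fn):.2f} specificity={tn/(tn+fp):.2f} "
      f"accuracy={(tp+tn)/len(y_te):.2f}")
```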
Using high frequency CDOM hyperspectral absorption to fingerprint river water sources
NASA Astrophysics Data System (ADS)
Beckler, J. S.; Kirkpatrick, G. J.; Dixon, L. K.; Milbrandt, E. C.
2016-12-01
Quantifying riverine carbon transfer from land to sea is complicated by variability in dissolved organic carbon (DOC), closely-related dissolved organic matter (DOM) and chromophoric dissolved organic matter (CDOM) concentrations, as well as in the composition of the freshwater end members of multiple drainage basins and seasons. Discrete measurements in estuaries have difficulty resolving convoluted upstream watershed dynamics. Optical measurements, however, can provide more continuous data regarding the molecular composition and concentration of the CDOM as it relates to river flow, tidal mixing, and salinity, and may be used to fingerprint source waters. For the first time, long-term, hyperspectral CDOM measurements were obtained on filtered Caloosahatchee River estuarine waters using an in situ, long-pathlength spectrophotometric instrument, the Optical Phytoplankton Discriminator (OPD). Through a collaborative monitoring effort among partners within the Gulf of Mexico Coastal Ocean Observing System (GCOOS), ancillary measurements of fluorescent DOM (FDOM) and water quality parameters were also obtained from co-located instrumentation at high frequency. Optical properties demonstrated both short-term (hourly) tidal variations and long-term (daily to weekly) variations corresponding to changes in riverine flow and salinity. The optical properties of the river waters are demonstrated to be a dilution-adjusted linear combination of the optical properties of the source waters comprising the overall composition (e.g. Lake Okeechobee, watershed drainage basins, Gulf of Mexico). Overall, these techniques are promising as a tool to more accurately constrain the carbon flux to the ocean and to predict the optical quality of coastal waters.
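The statement that the river's optical properties are a dilution-adjusted linear combination of source-water spectra suggests a linear unmixing; here is a minimal nonnegative least-squares sketch, assuming exponential CDOM absorption spectra for three hypothetical end members (the shapes, slopes and mixture are invented for illustration).

```python
import numpy as np
from scipy.optimize import nnls

wavelengths = np.arange(400, 701, 10.0)

# Hypothetical CDOM absorption spectra, a(l) = a0 * exp(-S * (l - 400)).
def cdom(a0, slope):
    return a0 * np.exp(-slope * (wavelengths - 400.0))

endmembers = np.column_stack([cdom(8.0, 0.016),    # e.g., Lake Okeechobee
                              cdom(15.0, 0.018),   # e.g., watershed drainage
                              cdom(0.5, 0.012)])   # e.g., Gulf of Mexico

# Synthetic mixed estuarine spectrum: a 50/30/20 mixture plus noise.
mixed = endmembers @ np.array([0.5, 0.3, 0.2])
mixed += np.random.default_rng(7).normal(0, 0.01, mixed.size)

# Recover the nonnegative source fractions from the observed spectrum.
fractions, _ = nnls(endmembers, mixed)
print("recovered source fractions:", fractions.round(3))
```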
Kakouros, Nikolaos; Gluckman, Tyler J; Conte, John V; Kickler, Thomas S; Laws, Katherine; Barton, Bruce A; Rade, Jeffrey J
2017-11-02
Systemic thromboxane generation, not suppressible by standard aspirin therapy and likely arising from nonplatelet sources, increases the risk of atherothrombosis and death in patients with cardiovascular disease. In the RIGOR (Reduction in Graft Occlusion Rates) study, greater nonplatelet thromboxane generation occurred early compared with late after coronary artery bypass graft surgery, although only the latter correlated with graft failure. We hypothesize that a similar differential association exists between nonplatelet thromboxane generation and long-term clinical outcome. Five-year outcome data were analyzed for 290 RIGOR subjects taking aspirin with suppressed platelet thromboxane generation. Multivariable modeling was performed to define the relative predictive value of the urine thromboxane metabolite, 11-dehydrothromboxane B 2 (11-dhTXB 2 ), measured 3 days versus 6 months after surgery on the composite end point of death, myocardial infarction, revascularization or stroke, and death alone. 11-dhTXB 2 measured 3 days after surgery did not independently predict outcome, whereas 11-dhTXB 2 >450 pg/mg creatinine measured 6 months after surgery predicted the composite end point (adjusted hazard ratio, 1.79; P =0.02) and death (adjusted hazard ratio, 2.90; P =0.01) at 5 years compared with lower values. Additional modeling revealed 11-dhTXB 2 measured early after surgery associated with several markers of inflammation, in contrast to 11-dhTXB 2 measured 6 months later, which highly associated with oxidative stress. Long-term nonplatelet thromboxane generation after coronary artery bypass graft surgery is a novel risk factor for 5-year adverse outcome, including death. In contrast, nonplatelet thromboxane generation in the early postoperative period appears to be driven predominantly by inflammation and did not independently predict long-term clinical outcome. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.
A first look at global flash drought: long term change and short term predictability
NASA Astrophysics Data System (ADS)
Yuan, Xing; Wang, Linying; Ji, Peng
2017-04-01
"Flash drought" became popular after the unexpected 2012 central USA drought, mainly due to its rapid development, low predictability and devastating impacts on water resources and crop yields. A pilot study by Mo and Lettenmaier (2015) found that flash drought, based on a definition of concurrent heat extreme, soil moisture deficit and evapotranspiration (ET) enhancement at pentad scale, were in decline over USA during recent 100 years. Meanwhile, a recent work indicated that the occurrence of flash drought in China was doubled during the past 30 years, where a severe flash drought in the summer of 2013 ravaged 13 provinces in southern China. As global warming increases the frequency of heat waves and accelerates the hydrological cycle, the flash drought is expected to increase in general, but its trend might also be affected by interannual to decadal climate oscillations. To consolidate the hotspots of flash drought and the effects of climate change on flash drought, a global inventory is being conducted by using multi-source observations (in-situ, satellite and reanalysis), CMIP5 historical simulations and future projections under different forcing scenarios, as well as global land surface hydrological modeling for key variables including surface air temperature, soil moisture and ET. In particular, a global picture of the flash drought distribution, the contribution of naturalized and anthropogenic forcings to global flash drought change, and the risk of global flash drought in the future, will be presented. Besides investigating the long-term change of flash drought, providing reliable early warning is also essential to developing adaptation strategies. While regional drought early warning systems have been emerging in recent decade, forecasting of flash drought is still at an exploratory stage due to limited understanding of flash drought predictability. Here, a set of sub-seasonal to seasonal (S2S) hindcast datasets are being used to assess the short term predictability of flash drought via a perfect model assumption.
Global circulation as the main source of cloud activity on Titan
Rodriguez, S.; Le, Mouelic S.; Rannou, P.; Tobie, G.; Baines, K.H.; Barnes, J.W.; Griffith, C.A.; Hirtzig, M.; Pitman, K.M.; Sotin, Christophe; Brown, R.H.; Buratti, B.J.; Clark, R.N.; Nicholson, P.D.
2009-01-01
Clouds on Titan result from the condensation of methane and ethane and, as on other planets, are primarily structured by circulation of the atmosphere. At present, cloud activity mainly occurs in the southern (summer) hemisphere, arising near the pole and at mid-latitudes from cumulus updrafts triggered by surface heating and/or local methane sources, and at the north (winter) pole, resulting from the subsidence and condensation of ethane-rich air into the colder troposphere. General circulation models predict that this distribution should change with the seasons on a 15-year timescale, and that clouds should develop under certain circumstances at temperate latitudes (40°) in the winter hemisphere. The models, however, have hitherto been poorly constrained and their long-term predictions have not yet been observationally verified. Here we report that the global spatial cloud coverage on Titan is in general agreement with the models, confirming that cloud activity is mainly controlled by the global circulation. The non-detection of clouds at latitude 40° N and the persistence of the southern clouds while the southern summer is ending are, however, both contrary to predictions. This suggests that Titan's equator-to-pole thermal contrast is overestimated in the models and that its atmosphere responds to the seasonal forcing with a greater inertia than expected. © 2009 Macmillan Publishers Limited. All rights reserved.
Heterogeneity of long-history migration predicts emotion recognition accuracy.
Wood, Adrienne; Rychlowska, Magdalena; Niedenthal, Paula M
2016-06-01
Recent work (Rychlowska et al., 2015) demonstrated the power of a relatively new cultural dimension, historical heterogeneity, in predicting cultural differences in the endorsement of emotion expression norms. Historical heterogeneity describes the number of source countries that have contributed to a country's present-day population over the last 500 years. People in cultures originating from a large number of source countries may have historically benefited from greater and clearer emotional expressivity, because they lacked a common language and well-established social norms. We therefore hypothesized that in addition to endorsing more expressive display rules, individuals from heterogeneous cultures will also produce facial expressions that are easier to recognize by people from other cultures. By reanalyzing cross-cultural emotion recognition data from 92 papers and 82 cultures, we show that emotion expressions of people from heterogeneous cultures are more easily recognized by observers from other cultures than are the expressions produced in homogeneous cultures. Heterogeneity influences expression recognition rates alongside the individualism-collectivism of the perceivers' culture, as more individualistic cultures were more accurate in emotion judgments than collectivistic cultures. This work reveals the present-day behavioral consequences of long-term historical migration patterns and demonstrates the predictive power of historical heterogeneity.
Near Field Modeling for the Maule Tsunami from DART, GPS and Finite Fault Solutions (Invited)
NASA Astrophysics Data System (ADS)
Arcas, D.; Chamberlin, C.; Lagos, M.; Ramirez-Herrera, M.; Tang, L.; Wei, Y.
2010-12-01
The earthquake and tsunami of February 27, 2010 in central Chile rekindled interest in developing techniques to predict the impact of near-field tsunamis along the Chilean coastline. Following the earthquake, several initiatives were proposed to increase the density of seismic, pressure, and motion sensors along the South American trench, in order to provide field data that could be used to estimate tsunami impact on the coast. However, the precise use of those data in the elaboration of a quantitative assessment of coastal tsunami damage has not been clarified. The present work makes use of seismic, Deep-ocean Assessment and Reporting of Tsunamis (DART®) system, and GPS measurements obtained during the Maule earthquake to initialize a number of tsunami inundation models along the rupture area by expressing different versions of the seismic crustal deformation in terms of NOAA's tsunami unit source functions. Translation of all available real-time data into a feasible tsunami source is essential in near-field tsunami impact prediction, in which an impact assessment must be generated under very stringent time constraints. Inundation results from each source are then contrasted with field and tide gauge data by comparing arrival time, maximum wave height, maximum inundation, and tsunami decay rate, using field data collected by the authors.
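One common way to express observations in terms of precomputed unit source functions is a least-squares fit of their weights to the recorded waveforms. The sketch below is an illustration of that general technique under assumed, synthetic inputs (the matrix `G` of unit-source synthetics and the data vector `d` are placeholders), not NOAA's operational inversion.

```python
import numpy as np

# d: stacked observed DART time series, shape (n_samples,)
# G: stacked unit-source synthetics, shape (n_samples, n_unit_sources)
rng = np.random.default_rng(1)
G = rng.normal(size=(500, 6))                  # placeholder synthetics
true_w = np.array([0.0, 2.5, 1.0, 0.0, 0.3, 0.0])
d = G @ true_w + 0.05 * rng.normal(size=500)   # synthetic "observations"

# A plain least-squares solve is shown for simplicity; in practice
# non-negative weights (slip magnitudes) are often enforced.
w, *_ = np.linalg.lstsq(G, d, rcond=None)
print("estimated unit-source weights:", np.round(w, 2))
```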
NASA Astrophysics Data System (ADS)
Dunlap, L.; Li, C.; Dickerson, R. R.; Krotkov, N. A.
2015-12-01
Weather systems, particularly mid-latitude wave cyclones, are known to play an important role in the short-term variation of near-surface air pollution. Ground measurements and model simulations have demonstrated that stagnant air and minimal precipitation associated with high pressure systems are conducive to pollutant accumulation. With the passage of a cold front, built-up pollution is transported downwind of the emission sources or washed out by precipitation. This concept is important when studying long-term changes in the spatio-temporal pollution distribution, but it has not been studied in detail from space. In this study, we focus on East Asia (especially industrialized eastern China), where numerous large power plants and other point sources, as well as area sources, emit large amounts of SO2, an important gaseous pollutant and a precursor of aerosols. Using data from the Aura Ozone Monitoring Instrument (OMI), we show that such weather-driven distributions can indeed be discerned from satellite data by utilizing probability distribution functions (PDFs) of SO2 column content. These PDFs are multimodal and give insight into the background pollution level at a given location and the contribution from local and upwind emission sources. From these PDFs it is possible to determine the frequency with which a given region has SO2 loading that exceeds the background amount. By comparing the OMI-observed long-term change in this frequency with meteorological data, we can gain insight into the effects of climate change (e.g., the weakening of the Asian monsoon) on regional air quality. Such insight allows for better interpretation of satellite measurements as well as better prediction of future pollution distributions as a changing climate gives way to changing weather patterns.
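A hedged sketch of how a multimodal column-amount PDF might be characterized with a Gaussian mixture and an exceedance frequency read off; the component count, units, and threshold rule below are illustrative assumptions, not values from the study.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Synthetic SO2 columns (DU): a clean background mode plus a polluted mode
so2 = np.concatenate([rng.normal(0.2, 0.1, 800), rng.normal(1.5, 0.5, 200)])

gmm = GaussianMixture(n_components=2, random_state=0).fit(so2.reshape(-1, 1))
means = gmm.means_.ravel()
bg_idx = means.argmin()                       # background ~ lowest-mean mode
threshold = means[bg_idx] + 3 * np.sqrt(gmm.covariances_.ravel()[bg_idx])
freq = (so2 > threshold).mean()
print(f"background ~ {means[bg_idx]:.2f} DU, exceedance frequency = {freq:.1%}")
```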
NASA Astrophysics Data System (ADS)
Head, J. W., III
2016-12-01
Improved 3D global simulations (GCMs) of the early martian climate have found that for atmospheric pressures greater than a fraction of a bar, atmospheric-surface thermal coupling occurs and the adiabatic cooling effect (ACE) causes temperatures in the southern uplands to fall significantly below the global average. Long-term climate evolution simulations indicate that in these circumstances, water ice is transported to the highlands from low-lying regions for a wide range of obliquities. Conditions are too cold (MAT ~225 K) to permit the presence of long-term surface liquid water, including streams, lakes, and oceans. This "Late Noachian Icy Highlands" (LNIH) equilibrium state predicts: 1) a global permafrost layer, 2) a horizontally stratified hydrological cycle/system, 3) thick ice deposits in the southern uplands, 4) an extended water ice cap on the southern pole, and 5) no rainfall, streams, lakes, or oceans. The majority of these predictions are in direct conflict with the observed fluvial/lacustrine geologic record. Can non-equilibrium conditions in an LNIH scenario explain these conflicts by transient heating and melting of the LNIH? As steps in the comprehensive testing of the LNIH model, we explore the predictions for geologic settings and processes in both equilibrium and non-equilibrium climate states. We assess the following sources of disequilibrium: 1) Top-down heating and melting: a) impact cratering, b) extrusive/explosive volcanism, and c) short-term emission of greenhouse gases. 2) Bottom-up heating and melting: a) enhanced regional-global geothermal gradients, and b) thick ice accumulation sufficient to cause/sustain basal melting, wet-based glaciation, and runoff. We assess these disequilibrium mechanisms in terms of: 1) the altitude dependence of melting, 2) melting duration, 3) volumes of meltwater produced, 4) predicted locations of meltwater production, and 5) comparison to the distribution of fluvial/lacustrine features. We find that the Late Noachian Icy Highlands climate model cannot be reconciled with observations unless punctuated non-equilibrium conditions occur. We show that the best candidates for LNIH disequilibrium conditions involve top-down heating and melting episodes whose durations cumulatively sum to tens of thousands to millions of years.
A Comparison of Synoptic Classification Methods for Application to Wind Power Prediction
NASA Astrophysics Data System (ADS)
Fowler, P.; Basu, S.
2008-12-01
Wind energy is a highly variable resource. To make it competitive with other sources of energy for integration on the power grid, at the very least a day-ahead forecast of power output must be available. In many grid operations worldwide, next-day power output is scheduled in 30-minute intervals and grid management routinely occurs in real time. Maintenance and repairs require costly time to complete and must be scheduled along with normal operations, and revenue is dependent on the reliability of the entire system. In other words, there is financial and managerial benefit to short-term prediction of wind power. One approach to short-term forecasting is to combine a data-centric method, such as an artificial neural network, with a physically based approach like numerical weather prediction (NWP). The key is in associating high-dimensional NWP model output with the most appropriately trained neural network. Because neural networks perform best in the situations they are designed for, one can hypothesize that if similar recurring states can be identified in historical weather data, these data can be used to train multiple custom-designed neural networks to be called upon by numerical prediction. Identifying similar recurring states may offer insight into how a neural network forecast can be improved, but amassing that knowledge and utilizing it efficiently in the time required for power prediction would be difficult for a human to master, which shows the advantage of classification. Classification methods are important tools for short-term forecasting because they can be unsupervised, objective, and computationally quick. They primarily involve categorizing data sets into dominant weather classes, but there are numerous ways to define a class and great variety in the interpretation of the results. In the present study a collection of classification methods is applied to a sampling of atmospheric variables from the North American Regional Reanalysis data set. The results are discussed in relation to their use for short-term wind power forecasting by neural networks.
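As an illustrative sketch of the general idea (not the study's actual pipeline), the following pairs an unsupervised classifier with per-class regressors: reanalysis-like feature vectors are clustered with k-means, a small neural network is trained for each synoptic class, and a new forecast is routed to the network whose class the day's NWP output falls into. All names, dimensions, and data are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 8))                    # synoptic feature vectors
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=1000)    # stand-in for wind power output

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
labels = kmeans.labels_

# One custom-trained network per dominant weather class
nets = {}
for c in range(4):
    idx = labels == c
    nets[c] = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                           random_state=0).fit(X[idx], y[idx])

# Day-ahead use: classify tomorrow's NWP output, then apply that class's net
x_new = rng.normal(size=(1, 8))
c_new = kmeans.predict(x_new)[0]
print("class", c_new, "-> power forecast", nets[c_new].predict(x_new))
```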
Prediction of long-term transverse creep compliance in high-temperature IM7/LaRC-RP46 composites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, F.G.; Potter, B.D.
1994-12-31
An experimental study is performed to predict the long-term tensile transverse creep compliance of high-temperature IM7/LaRC-RP46 composites from short-term creep and recovery tests. The short-term tests were conducted at various stress levels and various fixed temperatures. The predictive nonlinear viscoelastic model developed by Schapery, together with an experimental procedure, was used to predict the long-term results in terms of a master curve extrapolated from the short-term tests.
Analysis of the Source and Ground Motions from the 2017 M8.2 Tehuantepec and M7.1 Puebla Earthquakes
NASA Astrophysics Data System (ADS)
Melgar, D.; Sahakian, V. J.; Perez-Campos, X.; Quintanar, L.; Ramirez-Guzman, L.; Spica, Z.; Espindola, V. H.; Ruiz-Angulo, A.; Cabral-Cano, E.; Baltay, A.; Geng, J.
2017-12-01
The September 2017 Tehuantepec and Puebla earthquakes were intra-slab earthquakes that together caused significant damage in broad regions of Mexico, including the states of Oaxaca, Chiapas, Morelos, Puebla, Mexico, and Mexico City. Ground motions in Mexico City have approximately the same angle of incidence from both earthquakes and potentially sample similar paths close to the city. We examine site effects and source terms by analysis of residuals between Ground-Motion Prediction Equations (GMPEs) and observed ground motions for both of these events at stations from the Servicio Sismológico Nacional, Instituto de Ingeniería, and Instituto de Geofísica Red del Valle de Mexico networks. GMPEs are a basis for seismic design, but they also provide median ground motion values to act as a basis for comparison of individual earthquakes and site responses. First, we perform finite-fault slip inversions, for Tehuantepec with high-rate GPS, static GPS, tide gauge, and DART buoy data, and for Puebla with high-rate GPS and strong motion data. Using the distance from the stations with ground motion observations to the derived slip models, we use the GMPEs of Garcia et al. (2005), Zhao et al. (2006), and Abrahamson, Silva, and Kamai (2014) to compute predicted values of peak ground acceleration and velocity (PGA and PGV) and response spectral accelerations (SA). Residuals between observed and predicted ground motion parameters are then computed for each recording and decomposed into event and site components using a mixed-effects regression. We analyze these residuals as an adjustment away from median ground motions in the region to glean information about earthquake source properties, as well as local site response in and outside of the Mexico City basin. The event and site terms are then compared with available values of stress drop for the two earthquakes and Vs30 values for the sites, respectively. This analysis is useful in determining which GMPE is most appropriate in the central Mexico region, which is important for future ground motion studies and rapid-response products such as ShakeMap.
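A simplified stand-in for the residual decomposition (a sequential partition into per-event and per-station means rather than the authors' full mixed-effects regression), with hypothetical column names and synthetic data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "event": rng.integers(0, 2, 400),      # 0 = Tehuantepec, 1 = Puebla (toy)
    "site": rng.integers(0, 40, 400),
})
df["resid"] = rng.normal(size=400)         # ln(observed) - ln(GMPE prediction)

# Event term: mean residual per earthquake
event_term = df.groupby("event")["resid"].transform("mean")
# Site term: mean of the event-corrected residual per station
site_term = (df["resid"] - event_term).groupby(df["site"]).transform("mean")
df["remaining"] = df["resid"] - event_term - site_term

print(df.groupby("event")["resid"].mean())           # event terms
print("residual std before/after:",
      df["resid"].std().round(2), df["remaining"].std().round(2))
```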
NASA Technical Reports Server (NTRS)
Mark, W. D.
1979-01-01
Application of the transfer function approach to predict the resulting interior noise contribution requires gearbox vibration sources and paths to be characterized in the frequency domain. Tooth-face deviations from perfect involute surfaces were represented in terms of Legendre polynomials, which may be directly interpreted in terms of tooth-spacing errors, mean and random deviations associated with involute slope and fullness, lead mismatch and crowning, and analogous higher-order components. The contributions of these components to the spectrum of the static transmission error are discussed and illustrated using a set of measurements made on a pair of helicopter spur gears. The general methodology presented is applicable to both spur and helical gears.
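As a hedged illustration of such a representation (the mapping of low-order coefficients to specific error types below is an assumption for this sketch, and the deviation profile is synthetic), a measured tooth-face deviation can be expanded in Legendre polynomials with NumPy:

```python
import numpy as np

# Toy tooth-surface deviation profile sampled across the face width and
# expanded in Legendre polynomials on [-1, 1]; low-order coefficients map
# roughly onto spacing error (P0), slope/lead mismatch (P1), crowning (P2).
x = np.linspace(-1, 1, 101)
deviation = 2e-6 + 5e-6 * x + 3e-6 * (1.5 * x**2 - 0.5)   # metres

coeffs = np.polynomial.legendre.legfit(x, deviation, deg=4)
print("Legendre coefficients (m):", np.round(coeffs, 8))
```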
One-dimensional pion, kaon, and proton femtoscopy in Pb-Pb collisions at √sNN = 2.76 TeV
NASA Astrophysics Data System (ADS)
Adam, J.; Adamová, D.; Aggarwal, M. M.; Aglieri Rinella, G.; Agnello, M.; Agrawal, N.; Ahammed, Z.; Ahn, S. U.; Aimo, I.; Aiola, S.; Ajaz, M.; Akindinov, A.; Alam, S. N.; Aleksandrov, D.; Alessandro, B.; Alexandre, D.; Alfaro Molina, R.; Alici, A.; Alkin, A.; Alme, J.; Alt, T.; Altinpinar, S.; Altsybeev, I.; Alves Garcia Prado, C.; Andrei, C.; Andronic, A.; Anguelov, V.; Anielski, J.; Antičić, T.; Antinori, F.; Antonioli, P.; Aphecetche, L.; Appelshäuser, H.; Arcelli, S.; Armesto, N.; Arnaldi, R.; Arsene, I. C.; Arslandok, M.; Audurier, B.; Augustinus, A.; Averbeck, R.; Azmi, M. D.; Bach, M.; Badalà, A.; Baek, Y. W.; Bagnasco, S.; Bailhache, R.; Bala, R.; Baldisseri, A.; Baltasar Dos Santos Pedrosa, F.; Baral, R. C.; Barbano, A. M.; Barbera, R.; Barile, F.; Barnaföldi, G. G.; Barnby, L. S.; Barret, V.; Bartalini, P.; Barth, K.; Bartke, J.; Bartsch, E.; Basile, M.; Bastid, N.; Basu, S.; Bathen, B.; Batigne, G.; Batista Camejo, A.; Batyunya, B.; Batzing, P. C.; Bearden, I. G.; Beck, H.; Bedda, C.; Behera, N. K.; Belikov, I.; Bellini, F.; Bello Martinez, H.; Bellwied, R.; Belmont, R.; Belmont-Moreno, E.; Belyaev, V.; Bencedi, G.; Beole, S.; Berceanu, I.; Bercuci, A.; Berdnikov, Y.; Berenyi, D.; Bertens, R. A.; Berzano, D.; Betev, L.; Bhasin, A.; Bhat, I. R.; Bhati, A. K.; Bhattacharjee, B.; Bhom, J.; Bianchi, L.; Bianchi, N.; Bianchin, C.; Bielčík, J.; Bielčíková, J.; Bilandzic, A.; Biswas, R.; Biswas, S.; Bjelogrlic, S.; Blanco, F.; Blau, D.; Blume, C.; Bock, F.; Bogdanov, A.; Bøggild, H.; Boldizsár, L.; Bombara, M.; Book, J.; Borel, H.; Borissov, A.; Borri, M.; Bossú, F.; Botje, M.; Botta, E.; Böttger, S.; Braun-Munzinger, P.; Bregant, M.; Breitner, T.; Broker, T. A.; Browning, T. A.; Broz, M.; Brucken, E. J.; Bruna, E.; Bruno, G. E.; Budnikov, D.; Buesching, H.; Bufalino, S.; Buncic, P.; Busch, O.; Buthelezi, Z.; Butt, J. B.; Buxton, J. T.; Caffarri, D.; Cai, X.; Caines, H.; Calero Diaz, L.; Caliva, A.; Calvo Villar, E.; Camerini, P.; Carena, F.; Carena, W.; Castillo Castellanos, J.; Castro, A. J.; Casula, E. A. R.; Cavicchioli, C.; Ceballos Sanchez, C.; Cepila, J.; Cerello, P.; Cerkala, J.; Chang, B.; Chapeland, S.; Chartier, M.; Charvet, J. L.; Chattopadhyay, S.; Chattopadhyay, S.; Chelnokov, V.; Cherney, M.; Cheshkov, C.; Cheynis, B.; Chibante Barroso, V.; Chinellato, D. D.; Chochula, P.; Choi, K.; Chojnacki, M.; Choudhury, S.; Christakoglou, P.; Christensen, C. H.; Christiansen, P.; Chujo, T.; Chung, S. U.; Chunhui, Z.; Cicalo, C.; Cifarelli, L.; Cindolo, F.; Cleymans, J.; Colamaria, F.; Colella, D.; Collu, A.; Colocci, M.; Conesa Balbastre, G.; Conesa Del Valle, Z.; Connors, M. E.; Contreras, J. G.; Cormier, T. M.; Corrales Morales, Y.; Cortés Maldonado, I.; Cortese, P.; Cosentino, M. R.; Costa, F.; Crochet, P.; Cruz Albino, R.; Cuautle, E.; Cunqueiro, L.; Dahms, T.; Dainese, A.; Danu, A.; Das, D.; Das, I.; Das, S.; Dash, A.; Dash, S.; de, S.; de Caro, A.; de Cataldo, G.; de Cuveland, J.; de Falco, A.; de Gruttola, D.; De Marco, N.; de Pasquale, S.; Deisting, A.; Deloff, A.; Dénes, E.; D'Erasmo, G.; di Bari, D.; di Mauro, A.; di Nezza, P.; Diaz Corchero, M. A.; Dietel, T.; Dillenseger, P.; Divià, R.; Djuvsland, Ø.; Dobrin, A.; Dobrowolski, T.; Domenicis Gimenez, D.; Dönigus, B.; Dordic, O.; Dubey, A. K.; Dubla, A.; Ducroux, L.; Dupieux, P.; Ehlers, R. 
J.; Elia, D.; Engel, H.; Erazmus, B.; Erdemir, I.; Erhardt, F.; Eschweiler, D.; Espagnon, B.; Estienne, M.; Esumi, S.; Eum, J.; Evans, D.; Evdokimov, S.; Eyyubova, G.; Fabbietti, L.; Fabris, D.; Faivre, J.; Fantoni, A.; Fasel, M.; Feldkamp, L.; Felea, D.; Feliciello, A.; Feofilov, G.; Ferencei, J.; Fernández Téllez, A.; Ferreiro, E. G.; Ferretti, A.; Festanti, A.; Feuillard, V. J. G.; Figiel, J.; Figueredo, M. A. S.; Filchagin, S.; Finogeev, D.; Fionda, F. M.; Fiore, E. M.; Fleck, M. G.; Floris, M.; Foertsch, S.; Foka, P.; Fokin, S.; Fragiacomo, E.; Francescon, A.; Frankenfeld, U.; Fuchs, U.; Furget, C.; Furs, A.; Fusco Girard, M.; Gaardhøje, J. J.; Gagliardi, M.; Gago, A. M.; Gallio, M.; Gangadharan, D. R.; Ganoti, P.; Gao, C.; Garabatos, C.; Garcia-Solis, E.; Gargiulo, C.; Gasik, P.; Germain, M.; Gheata, A.; Gheata, M.; Ghosh, P.; Ghosh, S. K.; Gianotti, P.; Giubellino, P.; Giubilato, P.; Gladysz-Dziadus, E.; Glässel, P.; Gomez Ramirez, A.; González-Zamora, P.; Gorbunov, S.; Görlich, L.; Gotovac, S.; Grabski, V.; Graczykowski, L. K.; Graham, K. L.; Grelli, A.; Grigoras, A.; Grigoras, C.; Grigoriev, V.; Grigoryan, A.; Grigoryan, S.; Grinyov, B.; Grion, N.; Grosse-Oetringhaus, J. F.; Grossiord, J.-Y.; Grosso, R.; Guber, F.; Guernane, R.; Guerzoni, B.; Gulbrandsen, K.; Gulkanyan, H.; Gunji, T.; Gupta, A.; Gupta, R.; Haake, R.; Haaland, Ø.; Hadjidakis, C.; Haiduc, M.; Hamagaki, H.; Hamar, G.; Hansen, A.; Harris, J. W.; Hartmann, H.; Harton, A.; Hatzifotiadou, D.; Hayashi, S.; Heckel, S. T.; Heide, M.; Helstrup, H.; Herghelegiu, A.; Herrera Corral, G.; Hess, B. A.; Hetland, K. F.; Hilden, T. E.; Hillemanns, H.; Hippolyte, B.; Hosokawa, R.; Hristov, P.; Huang, M.; Humanic, T. J.; Hussain, N.; Hussain, T.; Hutter, D.; Hwang, D. S.; Ilkaev, R.; Ilkiv, I.; Inaba, M.; Ionita, C.; Ippolitov, M.; Irfan, M.; Ivanov, M.; Ivanov, V.; Izucheev, V.; Jacobs, P. M.; Jadlovska, S.; Jahnke, C.; Jang, H. J.; Janik, M. A.; Jayarathna, P. H. S. Y.; Jena, C.; Jena, S.; Jimenez Bustamante, R. T.; Jones, P. G.; Jung, H.; Jusko, A.; Kalinak, P.; Kalweit, A.; Kamin, J.; Kang, J. H.; Kaplin, V.; Kar, S.; Karasu Uysal, A.; Karavichev, O.; Karavicheva, T.; Karpechev, E.; Kebschull, U.; Keidel, R.; Keijdener, D. L. D.; Keil, M.; Khan, K. H.; Khan, M. M.; Khan, P.; Khan, S. A.; Khanzadeev, A.; Kharlov, Y.; Kileng, B.; Kim, B.; Kim, D. W.; Kim, D. J.; Kim, H.; Kim, J. S.; Kim, M.; Kim, M.; Kim, S.; Kim, T.; Kirsch, S.; Kisel, I.; Kiselev, S.; Kisiel, A.; Kiss, G.; Klay, J. L.; Klein, C.; Klein, J.; Klein-Bösing, C.; Kluge, A.; Knichel, M. L.; Knospe, A. G.; Kobayashi, T.; Kobdaj, C.; Kofarago, M.; Kollegger, T.; Kolojvari, A.; Kondratiev, V.; Kondratyeva, N.; Kondratyuk, E.; Konevskikh, A.; Kopcik, M.; Kouzinopoulos, C.; Kovalenko, O.; Kovalenko, V.; Kowalski, M.; Kox, S.; Koyithatta Meethaleveedu, G.; Kral, J.; Králik, I.; Kravčáková, A.; Krelina, M.; Kretz, M.; Krivda, M.; Krizek, F.; Kryshen, E.; Krzewicki, M.; Kubera, A. M.; Kučera, V.; Kugathasan, T.; Kuhn, C.; Kuijer, P. G.; Kulakov, I.; Kumar, J.; Kumar, L.; Kurashvili, P.; Kurepin, A.; Kurepin, A. B.; Kuryakin, A.; Kushpil, S.; Kweon, M. J.; Kwon, Y.; La Pointe, S. L.; La Rocca, P.; Lagana Fernandes, C.; Lakomov, I.; Langoy, R.; Lara, C.; Lardeux, A.; Lattuca, A.; Laudi, E.; Lea, R.; Leardini, L.; Lee, G. R.; Lee, S.; Legrand, I.; Lemmon, R. C.; Lenti, V.; Leogrande, E.; León Monzón, I.; Leoncino, M.; Lévai, P.; Li, S.; Li, X.; Lien, J.; Lietava, R.; Lindal, S.; Lindenstruth, V.; Lippmann, C.; Lisa, M. A.; Ljunggren, H. M.; Lodato, D. F.; Loenne, P. 
I.; Loggins, V. R.; Loginov, V.; Loizides, C.; Lopez, X.; López Torres, E.; Lowe, A.; Luettig, P.; Lunardon, M.; Luparello, G.; Luz, P. H. F. N. D.; Maevskaya, A.; Mager, M.; Mahajan, S.; Mahmood, S. M.; Maire, A.; Majka, R. D.; Malaev, M.; Maldonado Cervantes, I.; Malinina, L.; Mal'Kevich, D.; Malzacher, P.; Mamonov, A.; Manceau, L.; Manko, V.; Manso, F.; Manzari, V.; Marchisone, M.; Mareš, J.; Margagliotti, G. V.; Margotti, A.; Margutti, J.; Marín, A.; Markert, C.; Marquard, M.; Martin, N. A.; Martin Blanco, J.; Martinengo, P.; Martínez, M. I.; Martínez García, G.; Martinez Pedreira, M.; Martynov, Y.; Mas, A.; Masciocchi, S.; Masera, M.; Masoni, A.; Massacrier, L.; Mastroserio, A.; Masui, H.; Matyja, A.; Mayer, C.; Mazer, J.; Mazzoni, M. A.; McDonald, D.; Meddi, F.; Menchaca-Rocha, A.; Meninno, E.; Mercado Pérez, J.; Meres, M.; Miake, Y.; Mieskolainen, M. M.; Mikhaylov, K.; Milano, L.; Milosevic, J.; Minervini, L. M.; Mischke, A.; Mishra, A. N.; Miśkowiec, D.; Mitra, J.; Mitu, C. M.; Mohammadi, N.; Mohanty, B.; Molnar, L.; Montaño Zetina, L.; Montes, E.; Morando, M.; Moreira de Godoy, D. A.; Moretto, S.; Morreale, A.; Morsch, A.; Muccifora, V.; Mudnic, E.; Mühlheim, D.; Muhuri, S.; Mukherjee, M.; Mulligan, J. D.; Munhoz, M. G.; Murray, S.; Musa, L.; Musinsky, J.; Nandi, B. K.; Nania, R.; Nappi, E.; Naru, M. U.; Nattrass, C.; Nayak, K.; Nayak, T. K.; Nazarenko, S.; Nedosekin, A.; Nellen, L.; Ng, F.; Nicassio, M.; Niculescu, M.; Niedziela, J.; Nielsen, B. S.; Nikolaev, S.; Nikulin, S.; Nikulin, V.; Noferini, F.; Nomokonov, P.; Nooren, G.; Noris, J. C. C.; Norman, J.; Nyanin, A.; Nystrand, J.; Oeschler, H.; Oh, S.; Oh, S. K.; Ohlson, A.; Okatan, A.; Okubo, T.; Olah, L.; Oleniacz, J.; Oliveira da Silva, A. C.; Oliver, M. H.; Onderwaater, J.; Oppedisano, C.; Ortiz Velasquez, A.; Oskarsson, A.; Otwinowski, J.; Oyama, K.; Ozdemir, M.; Pachmayer, Y.; Pagano, P.; Paić, G.; Pajares, C.; Pal, S. K.; Pan, J.; Pandey, A. K.; Pant, D.; Papcun, P.; Papikyan, V.; Pappalardo, G. S.; Pareek, P.; Park, W. J.; Parmar, S.; Passfeld, A.; Paticchio, V.; Patra, R. N.; Paul, B.; Peitzmann, T.; Pereira da Costa, H.; Pereira de Oliveira Filho, E.; Peresunko, D.; Pérez Lara, C. E.; Perez Lezama, E.; Peskov, V.; Pestov, Y.; Petráček, V.; Petrov, V.; Petrovici, M.; Petta, C.; Piano, S.; Pikna, M.; Pillot, P.; Pinazza, O.; Pinsky, L.; Piyarathna, D. B.; Płoskoń, M.; Planinic, M.; Pluta, J.; Pochybova, S.; Podesta-Lerma, P. L. M.; Poghosyan, M. G.; Polichtchouk, B.; Poljak, N.; Poonsawat, W.; Pop, A.; Porteboeuf-Houssais, S.; Porter, J.; Pospisil, J.; Prasad, S. K.; Preghenella, R.; Prino, F.; Pruneau, C. A.; Pshenichnov, I.; Puccio, M.; Puddu, G.; Pujahari, P.; Punin, V.; Putschke, J.; Qvigstad, H.; Rachevski, A.; Raha, S.; Rajput, S.; Rak, J.; Rakotozafindrabe, A.; Ramello, L.; Raniwala, R.; Raniwala, S.; Räsänen, S. S.; Rascanu, B. T.; Rathee, D.; Read, K. F.; Real, J. S.; Redlich, K.; Reed, R. J.; Rehman, A.; Reichelt, P.; Reidt, F.; Ren, X.; Renfordt, R.; Reolon, A. R.; Reshetin, A.; Rettig, F.; Revol, J.-P.; Reygers, K.; Riabov, V.; Ricci, R. A.; Richert, T.; Richter, M.; Riedler, P.; Riegler, W.; Riggi, F.; Ristea, C.; Rivetti, A.; Rocco, E.; Rodríguez Cahuantzi, M.; Rodriguez Manso, A.; Røed, K.; Rogochaya, E.; Rohr, D.; Röhrich, D.; Romita, R.; Ronchetti, F.; Ronflette, L.; Rosnet, P.; Rossi, A.; Roukoutakis, F.; Roy, A.; Roy, C.; Roy, P.; Rubio Montero, A. J.; Rui, R.; Russo, R.; Ryabinkin, E.; Ryabov, Y.; Rybicki, A.; Sadovsky, S.; Šafařík, K.; Sahlmuller, B.; Sahoo, P.; Sahoo, R.; Sahoo, S.; Sahu, P. 
K.; Saini, J.; Sakai, S.; Saleh, M. A.; Salgado, C. A.; Salzwedel, J.; Sambyal, S.; Samsonov, V.; Sanchez Castro, X.; Šándor, L.; Sandoval, A.; Sano, M.; Santagati, G.; Sarkar, D.; Scapparone, E.; Scarlassara, F.; Scharenberg, R. P.; Schiaua, C.; Schicker, R.; Schmidt, C.; Schmidt, H. R.; Schuchmann, S.; Schukraft, J.; Schulc, M.; Schuster, T.; Schutz, Y.; Schwarz, K.; Schweda, K.; Scioli, G.; Scomparin, E.; Scott, R.; Seeder, K. S.; Seger, J. E.; Sekiguchi, Y.; Sekihata, D.; Selyuzhenkov, I.; Senosi, K.; Seo, J.; Serradilla, E.; Sevcenco, A.; Shabanov, A.; Shabetai, A.; Shadura, O.; Shahoyan, R.; Shangaraev, A.; Sharma, A.; Sharma, N.; Shigaki, K.; Shtejer, K.; Sibiriak, Y.; Siddhanta, S.; Sielewicz, K. M.; Siemiarczuk, T.; Silvermyr, D.; Silvestre, C.; Simatovic, G.; Simonetti, G.; Singaraju, R.; Singh, R.; Singha, S.; Singhal, V.; Sinha, B. C.; Sinha, T.; Sitar, B.; Sitta, M.; Skaali, T. B.; Slupecki, M.; Smirnov, N.; Snellings, R. J. M.; Snellman, T. W.; Søgaard, C.; Soltz, R.; Song, J.; Song, M.; Song, Z.; Soramel, F.; Sorensen, S.; Spacek, M.; Spiriti, E.; Sputowska, I.; Spyropoulou-Stassinaki, M.; Srivastava, B. K.; Stachel, J.; Stan, I.; Stefanek, G.; Steinpreis, M.; Stenlund, E.; Steyn, G.; Stiller, J. H.; Stocco, D.; Strmen, P.; Suaide, A. A. P.; Sugitate, T.; Suire, C.; Suleymanov, M.; Sultanov, R.; Šumbera, M.; Symons, T. J. M.; Szabo, A.; Szanto de Toledo, A.; Szarka, I.; Szczepankiewicz, A.; Szymanski, M.; Takahashi, J.; Tanaka, N.; Tangaro, M. A.; Tapia Takaki, J. D.; Tarantola Peloni, A.; Tarhini, M.; Tariq, M.; Tarzila, M. G.; Tauro, A.; Tejeda Muñoz, G.; Telesca, A.; Terasaki, K.; Terrevoli, C.; Teyssier, B.; Thäder, J.; Thomas, D.; Tieulent, R.; Timmins, A. R.; Toia, A.; Trogolo, S.; Trubnikov, V.; Trzaska, W. H.; Tsuji, T.; Tumkin, A.; Turrisi, R.; Tveter, T. S.; Ullaland, K.; Uras, A.; Usai, G. L.; Utrobicic, A.; Vajzer, M.; Vala, M.; Valencia Palomo, L.; Vallero, S.; van der Maarel, J.; van Hoorne, J. W.; van Leeuwen, M.; Vanat, T.; Vande Vyvre, P.; Varga, D.; Vargas, A.; Vargyas, M.; Varma, R.; Vasileiou, M.; Vasiliev, A.; Vauthier, A.; Vechernin, V.; Veen, A. M.; Veldhoen, M.; Velure, A.; Venaruzzo, M.; Vercellin, E.; Vergara Limón, S.; Vernet, R.; Verweij, M.; Vickovic, L.; Viesti, G.; Viinikainen, J.; Vilakazi, Z.; Villalobos Baillie, O.; Vinogradov, A.; Vinogradov, L.; Vinogradov, Y.; Virgili, T.; Vislavicius, V.; Viyogi, Y. P.; Vodopyanov, A.; Völkl, M. A.; Voloshin, K.; Voloshin, S. A.; Volpe, G.; von Haller, B.; Vorobyev, I.; Vranic, D.; Vrláková, J.; Vulpescu, B.; Vyushin, A.; Wagner, B.; Wagner, J.; Wang, H.; Wang, M.; Wang, Y.; Watanabe, D.; Watanabe, Y.; Weber, M.; Weber, S. G.; Wessels, J. P.; Westerhoff, U.; Wiechula, J.; Wikne, J.; Wilde, M.; Wilk, G.; Wilkinson, J.; Williams, M. C. S.; Windelband, B.; Winn, M.; Yaldo, C. G.; Yang, H.; Yang, P.; Yano, S.; Yin, Z.; Yokoyama, H.; Yoo, I.-K.; Yurchenko, V.; Yushmanov, I.; Zaborowska, A.; Zaccolo, V.; Zaman, A.; Zampolli, C.; Zanoli, H. J. C.; Zaporozhets, S.; Zardoshti, N.; Zarochentsev, A.; Závada, P.; Zaviyalov, N.; Zbroszczyk, H.; Zgura, I. S.; Zhalov, M.; Zhang, H.; Zhang, X.; Zhang, Y.; Zhao, C.; Zhigareva, N.; Zhou, D.; Zhou, Y.; Zhou, Z.; Zhu, H.; Zhu, J.; Zhu, X.; Zichichi, A.; Zimmermann, A.; Zimmermann, M. B.; Zinovjev, G.; Zyzak, M.; Alice Collaboration
2015-11-01
The size of the particle emission region in high-energy collisions can be deduced using the femtoscopic correlations of particle pairs at low relative momentum. Such correlations arise due to quantum statistics and Coulomb and strong final state interactions. In this paper, results are presented from femtoscopic analyses of π±π±, K±K±, K0SK0S, pp, and p̄p̄ correlations from Pb-Pb collisions at √sNN = 2.76 TeV by the ALICE experiment at the LHC. One-dimensional radii of the system are extracted from correlation functions in terms of the invariant momentum difference of the pair. The comparison of the measured radii with the predictions from a hydrokinetic model is discussed. The pion and kaon source radii display a monotonic decrease with increasing average pair transverse mass mT, which is consistent with hydrodynamic model predictions for central collisions. The kaon and proton source sizes can be reasonably described by approximate mT scaling.
Thermoelastic stress in oceanic lithosphere due to hotspot reheating
NASA Technical Reports Server (NTRS)
Zhu, Anning; Wiens, Douglas A.
1991-01-01
The effect of hotspot reheating on the intraplate stress field is investigated by modeling the three-dimensional thermal stress field produced by nonuniform temperature changes in an elastic plate. Temperature perturbations are calculated assuming that the lithosphere is heated by a source in the lower part of the thermal lithosphere. A thermal stress model for the elastic lithosphere is calculated by superposing the stress fields resulting from temperature changes in small individual elements. The stress in an elastic plate resulting from a temperature change in each small element is expressed as an infinite series, wherein each term is a source or an image modified from a closed-form half-space solution. The thermal stress solution is applied to midplate swells in oceanic lithosphere with various thermal structures and plate velocities. The results predict a stress field with a maximum deviatoric stress on the order of 100 MPa covering a broad area around the hotspot plume. The predicted principal stress orientations show a complicated geographical pattern, with horizontal extension perpendicular to the hotspot track at shallow depths and compression along the track near the bottom of the elastic lithosphere.
A numerical study on dual-phase-lag model of bio-heat transfer during hyperthermia treatment.
Kumar, P; Kumar, Dinesh; Rai, K N
2015-01-01
The success of hyperthermia in the treatment of cancer depends on the precise prediction and control of temperature, so understanding the temperature distribution within living biological tissues is essential for hyperthermia treatment planning. In this paper, the dual-phase-lag model of bio-heat transfer is studied using a Gaussian-distributed source term under a generalized boundary condition during hyperthermia treatment. An approximate analytical solution of the problem is obtained by a finite-element wavelet Galerkin method that uses the Legendre wavelet as a basis function. Multi-resolution analysis of the Legendre wavelet localizes small-scale variations of the solution and allows fast switching of functional bases. The whole analysis is presented in dimensionless form. The dual-phase-lag model is compared with the Pennes and thermal wave models of bio-heat transfer, and large differences are found in the temperature at the hyperthermia position and in the time needed to achieve the hyperthermia temperature as the value of τT increases. Particular cases in which the surface is subjected to boundary conditions of the first, second, and third kind are discussed in detail. The use of the dual-phase-lag model of bio-heat transfer and of the finite-element wavelet Galerkin solution method enables precise prediction of temperature, and the Gaussian-distributed source term helps control the temperature during hyperthermia treatment, making this study more useful for clinical applications.
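For orientation, a minimal sketch of the simpler classical Pennes bio-heat equation with a Gaussian heating source, solved by an explicit finite-difference scheme; this is a baseline illustration, not the paper's dual-phase-lag wavelet Galerkin method, and all tissue parameters are nominal assumed values.

```python
import numpy as np

L, nx, dt, t_end = 0.05, 101, 0.05, 300.0      # 5 cm slab; seconds
x = np.linspace(0.0, L, nx); dx = x[1] - x[0]
k, rho, c = 0.5, 1050.0, 3600.0                # W/m/K, kg/m^3, J/kg/K (nominal)
wb, cb, Ta = 0.5, 3770.0, 37.0                 # perfusion, blood heat cap., deg C
Q = 1e5 * np.exp(-((x - L / 2) ** 2) / (2 * 0.005 ** 2))  # Gaussian source, W/m^3

T = np.full(nx, 37.0)
alpha = k / (rho * c)
assert alpha * dt / dx**2 < 0.5                # explicit-scheme stability check
for _ in range(int(t_end / dt)):
    lap = np.zeros(nx)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    # diffusion + perfusion heat sink + external Gaussian heating
    T += dt * (alpha * lap + (wb * cb * (Ta - T) + Q) / (rho * c))
    T[0] = T[-1] = 37.0                        # body-temperature boundaries
print(f"peak temperature after {t_end:.0f} s: {T.max():.1f} deg C")
```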
Stochastic Analysis of Orbital Lifetimes of Spacecraft
NASA Technical Reports Server (NTRS)
Sasamoto, Washito; Goodliff, Kandyce; Cornelius, David
2008-01-01
A document discusses (1) a Monte-Carlo-based methodology for probabilistic prediction and analysis of orbital lifetimes of spacecraft and (2) Orbital Lifetime Monte Carlo (OLMC), a Fortran computer program consisting of a previously developed long-term orbit propagator integrated with a Monte Carlo engine. OLMC enables modeling of variances of key physical parameters that affect orbital lifetimes through the use of probability distributions. These parameters include altitude, speed, and flight-path angle at insertion into orbit; solar flux; and launch delays. The products of OLMC are predicted lifetimes (durations above specified minimum altitudes) for a user-specified number of cases. Histograms generated from such predictions can be used to determine the probabilities that spacecraft will satisfy lifetime requirements. The document discusses uncertainties that affect the modeling of orbital lifetimes. Issues of repeatability, smoothness of distributions, and code run time are considered for the purpose of establishing values of code-specific parameters and the number of Monte Carlo runs. Results from test cases are interpreted as demonstrating that solar-flux predictions are primary sources of variations in predicted lifetimes. Therefore, it is concluded, multiple sets of predictions should be utilized to fully characterize the lifetime range of a spacecraft.
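A toy Monte Carlo lifetime study in the same spirit (emphatically not the OLMC propagator): altitude decays under a crude exponential atmosphere whose density scales with solar flux, and insertion altitude and flux are drawn from assumed distributions; every constant here is a placeholder chosen only to make the histogram behave plausibly.

```python
import numpy as np

rng = np.random.default_rng(5)

def lifetime_days(h0_km, flux, h_min=200.0):
    """Days until altitude falls below h_min under a toy drag-decay model."""
    h, days = h0_km, 0.0
    while h > h_min and days < 20000:
        rho = 1e-11 * np.exp(-(h - 200.0) / 60.0) * (flux / 150.0)  # kg/m^3, crude
        h -= 8.64e4 * 5e6 * rho        # decay per day ~ density (toy coefficient)
        days += 1.0
    return days

samples = np.array([lifetime_days(rng.normal(450.0, 10.0),   # insertion alt (km)
                                  rng.uniform(80.0, 220.0))  # solar flux (sfu)
                    for _ in range(2000)])
print("P(lifetime >= 3 yr) =", np.mean(samples >= 3 * 365.25))
```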
NASA Astrophysics Data System (ADS)
Qiu, Sihang; Chen, Bin; Wang, Rongxiao; Zhu, Zhengqiu; Wang, Yuan; Qiu, Xiaogang
2018-04-01
Hazardous gas leak accidents pose a potential threat to human beings. Predicting atmospheric dispersion and estimating its source have become increasingly important in emergency management. Current dispersion prediction and source estimation models cannot satisfy the requirements of emergency management because they do not provide both high efficiency and high accuracy at the same time. In this paper, we develop a fast and accurate dispersion prediction and source estimation method based on an artificial neural network (ANN), particle swarm optimization (PSO), and expectation maximization (EM). The method uses a large number of pre-determined scenarios to train the ANN for dispersion prediction, so that the ANN can predict the concentration distribution accurately and efficiently. PSO and EM are applied to estimate the source parameters, which effectively accelerates convergence. The method is verified against the Indianapolis field study with an SF6 release source, and the results demonstrate its effectiveness.
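A hedged sketch of the source-estimation half of such a scheme: a bare-bones particle swarm recovers (release rate, source location) by matching a very simplified steady Gaussian plume to synthetic sensor readings. For speed, the paper replaces the forward model with a trained ANN; here the plume model itself is optimized directly, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

def plume(q, xs, ys, x, y, u=3.0, k=0.3):
    """Very simplified steady Gaussian plume (arbitrary units)."""
    dx = np.maximum(x - xs, 1e-3)
    sig = k * dx
    return q / (u * sig) * np.exp(-((y - ys) ** 2) / (2 * sig ** 2)) * (x > xs)

# Synthetic sensors measuring a release at q=5, (xs, ys)=(10, 2)
X, Y = rng.uniform(15, 60, 25), rng.uniform(-20, 20, 25)
obs = plume(5.0, 10.0, 2.0, X, Y) * (1 + 0.05 * rng.normal(size=25))

def misfit(p):
    return np.sum((plume(p[0], p[1], p[2], X, Y) - obs) ** 2)

# Bare-bones particle swarm over (q, xs, ys)
n, w, c1, c2 = 40, 0.7, 1.5, 1.5
pos = rng.uniform([0, 0, -10], [20, 14, 10], size=(n, 3))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([misfit(p) for p in pos])
for _ in range(200):
    g = pbest[pbest_f.argmin()]                       # global best
    vel = (w * vel + c1 * rng.random((n, 1)) * (pbest - pos)
                   + c2 * rng.random((n, 1)) * (g - pos))
    pos += vel
    f = np.array([misfit(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
print("estimated (q, xs, ys):", np.round(pbest[pbest_f.argmin()], 2))
```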
Development of a linearized unsteady aerodynamic analysis for cascade gust response predictions
NASA Technical Reports Server (NTRS)
Verdon, Joseph M.; Hall, Kenneth C.
1990-01-01
A method for predicting the unsteady aerodynamic response of a cascade of airfoils to entropic, vortical, and acoustic gust excitations is being developed. Here, the unsteady flow is regarded as a small perturbation of a nonuniform isentropic and irrotational steady background flow. A splitting technique is used to decompose the linearized unsteady velocity into rotational and irrotational parts, leading to equations for the complex amplitudes of the linearized unsteady entropy, rotational velocity, and velocity potential that are coupled only sequentially. The entropic and rotational velocity fluctuations are described by transport equations for which closed-form solutions in terms of the mean-flow drift and stream functions can be determined. The potential fluctuation is described by an inhomogeneous convected wave equation in which the source term depends on the rotational velocity field, and is determined using finite-difference procedures. The analytical and numerical techniques used to determine the linearized unsteady flow are outlined. Results are presented to indicate the status of the solution procedure and to demonstrate the impact of blade geometry and mean blade loading on the aerodynamic response of cascades to vortical gust excitations. The analysis described herein leads to very efficient predictions of cascade unsteady aerodynamic response phenomena, making it useful for turbomachinery aeroelastic and aeroacoustic design applications.
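A schematic, hedged transcription of the splitting the abstract describes, in the spirit of classical Goldstein-type analyses; the notation, signs, and normalization below are assumptions and may differ from the paper's exact formulation.

```latex
% Splitting of the linearized unsteady velocity into rotational and
% irrotational (potential) parts
\mathbf{u}' = \mathbf{u}_R + \nabla\phi
% Entropy fluctuation convected by the mean flow (closed-form solution
% expressible via mean-flow drift and stream functions)
\frac{\bar{D} s'}{\bar{D} t} = 0
% Inhomogeneous convected wave equation for the potential, with a source
% term driven by the rotational velocity field
\frac{\bar{D}}{\bar{D}t}\!\left(\frac{1}{\bar{c}^{2}}
  \frac{\bar{D}\phi}{\bar{D}t}\right)
 - \frac{1}{\bar{\rho}}\,\nabla\!\cdot\!\left(\bar{\rho}\,\nabla\phi\right)
 = \frac{1}{\bar{\rho}}\,\nabla\!\cdot\!\left(\bar{\rho}\,\mathbf{u}_R\right)
```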
Technologies for Turbofan Noise Reduction
NASA Technical Reports Server (NTRS)
Huff, Dennis
2005-01-01
An overview of NASA's engine noise research since 1992 is given for subsonic commercial aircraft applications. Highlights are included from the Advanced Subsonic Technology (AST) Noise Reduction Program and the Quiet Aircraft Technology (QAT) project, with emphasis on engine source noise reduction. Noise reduction goals of 10 EPNdB by 2007 and 20 EPNdB by 2022 are reviewed. Fan and jet noise technologies are highlighted from the AST program, including higher bypass ratio propulsion, scarf inlets, forward-swept fans, swept/leaned stators, chevron nozzles, noise prediction methods, and active noise control for fans. Source diagnostic tests for fans and jets completed over the past few years are presented, showing how new flow measurement methods such as Particle Image Velocimetry (PIV) have played a key role in understanding turbulence, the noise generation process, and how to improve noise prediction methods. Tests focused on source decomposition have helped identify which engine components need further noise reduction. The role of Computational AeroAcoustics (CAA) for fan noise prediction is presented. Advanced noise reduction methods, such as Herschel-Quincke tubes and trailing edge blowing for fan noise, that are currently being pursued in the QAT program are also presented. Highlights are shown from engine validation and flight demonstrations that were done in the late 1990s with Pratt & Whitney on their PW4098 engine and Honeywell on their TFE-731-60 engine. Finally, future propulsion configurations currently being studied that show promise toward meeting NASA's long-term goal of 20 dB noise reduction are shown, including a Dual Fan Engine concept on a Blended Wing Body aircraft.
Clarke, M G; Kennedy, K P; MacDonagh, R P
2009-01-01
To develop a clinical prediction model enabling the calculation of an individual patient's life expectancy (LE) and survival probability based on age, sex, and comorbidity, for use in the joint decision-making process regarding medical treatment. A computer software program was developed with a team of 3 clinicians, 2 professional actuaries, and 2 professional computer programmers, incorporating statistical, spreadsheet, and database-access design methods. Data sources included life insurance industry actuarial rating factor tables (public and private domain), Government Actuary Department UK life tables, professional actuarial sources, and evidence-based medical literature. The main outcome measures were numerical and graphical display of comorbidity-adjusted LE and of 5-, 10-, and 15-year survival probability, in addition to generic UK population LE. Nineteen medical conditions that impacted significantly on LE in actuarial terms and were commonly encountered in clinical practice were incorporated in the final model. Numerical and graphical representations of statistical predictions of LE and survival probability were successfully generated for patients with either no comorbidity or a combination of the 19 medical conditions included. Validation and testing, including actuarial peer review, confirmed consistency with the data sources utilized. The evidence-based actuarial data utilized in this computer program represent a valuable resource for use in the clinical decision-making process, where an accurate objective assessment of patient LE can so often make the difference between patients being offered or denied medical and surgical treatment. Ongoing development to incorporate additional comorbidities and enable Web-based access will enhance its use further.
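A hedged sketch of the underlying actuarial arithmetic: apply a comorbidity hazard multiplier to a life table, then read off adjusted life expectancy and survival probabilities. The Gompertz-style mortality rates and the hazard ratio below are illustrative stand-ins, not the program's proprietary actuarial tables.

```python
import numpy as np

age = np.arange(60, 111)
qx = np.clip(0.0005 * np.exp(0.095 * (age - 20)), 0, 1)   # annual death prob.

def le_and_survival(qx, hazard_ratio=1.0, horizons=(5, 10, 15)):
    q_adj = np.clip(qx * hazard_ratio, 0, 1)   # comorbidity-loaded mortality
    surv = np.cumprod(1 - q_adj)               # S(t) at end of each year
    le = surv.sum() + 0.5                      # curtate LE + half-year correction
    return le, {h: surv[h - 1] for h in horizons}

for hr in (1.0, 2.0):                          # no comorbidity vs. one comorbidity
    le, s = le_and_survival(qx, hr)
    print(f"HR={hr}: LE={le:.1f} yr,", {h: round(p, 2) for h, p in s.items()})
```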
Accuracy-preserving source term quadrature for third-order edge-based discretization
NASA Astrophysics Data System (ADS)
Nishikawa, Hiroaki; Liu, Yi
2017-09-01
In this paper, we derive a family of source term quadrature formulas for preserving third-order accuracy of the node-centered edge-based discretization for conservation laws with source terms on arbitrary simplex grids. A three-parameter family of source term quadrature formulas is derived, and as a subset, a one-parameter family of economical formulas is identified that does not require second derivatives of the source term. Among the economical formulas, a unique formula is then derived that does not require gradients of the source term at neighbor nodes, thus leading to a significantly smaller discretization stencil for source terms. All the formulas derived in this paper do not require a boundary closure, and therefore can be directly applied at boundary nodes. Numerical results are presented to demonstrate third-order accuracy at interior and boundary nodes for one-dimensional grids and linear triangular/tetrahedral grids over straight and curved geometries.
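The accuracy claims above are the kind usually verified with a grid-refinement study. The helper below shows the standard computation of observed order of accuracy from errors on successively refined grids; it is a generic verification utility with made-up example numbers, not the paper's discretization.

```python
import numpy as np

def observed_order(h, err):
    """Observed order p from e ~ C*h^p on a sequence of refined grids."""
    h, err = np.asarray(h, float), np.asarray(err, float)
    return np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])

# Example: errors shrinking by ~8x per grid halving indicate third order
h = np.array([0.1, 0.05, 0.025, 0.0125])
err = np.array([1.0e-3, 1.26e-4, 1.58e-5, 1.97e-6])
print(np.round(observed_order(h, err), 2))   # ~[2.99, 3.00, 3.00]
```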
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gowardhan, Akshay; Neuscamman, Stephanie; Donetti, John
Aeolus is an efficient three-dimensional computational fluid dynamics code, based on the finite volume method, developed for predicting the transport and dispersion of contaminants in a complex urban area. It solves the time-dependent incompressible Navier-Stokes equation on a regular Cartesian staggered grid using a fractional step method, and it solves a scalar transport equation for temperature using the Boussinesq approximation. The model also includes a Lagrangian dispersion model for predicting the transport and dispersion of atmospheric contaminants. The model can be run in an efficient Reynolds Averaged Navier-Stokes (RANS) mode with a run time of several minutes, or in a more detailed Large Eddy Simulation (LES) mode with a run time of hours for a typical simulation. This report describes the model components, including details on the physics models used in the code, as well as several model validation efforts. Aeolus wind and dispersion predictions are compared to field data from the Joint Urban Field Trials 2003 conducted in Oklahoma City (Allwine et al. 2004), including both continuous and instantaneous releases. Newly implemented Aeolus capabilities include a decay chain model and an explosive Radiological Dispersal Device (RDD) source term; these capabilities are described. Aeolus predictions using the buoyant explosive RDD source are validated against two experimental data sets: the Green Field explosive cloud rise experiments conducted in Israel (Sharon et al. 2012) and the Full-Scale RDD Field Trials conducted in Canada (Green et al. 2016).
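A minimal sketch of the Lagrangian random-walk idea behind such dispersion models (vastly simpler than Aeolus itself): particles are advected by a mean wind and spread by random displacements consistent with an assumed eddy diffusivity. Wind, diffusivity, and the sampling box are all illustrative values.

```python
import numpy as np

rng = np.random.default_rng(7)
n, dt, steps = 5000, 1.0, 600                 # particles, time step (s), steps
u, K = np.array([3.0, 0.5]), 5.0              # mean wind (m/s), diffusivity (m^2/s)

pos = np.zeros((n, 2))                        # all particles released at origin
for _ in range(steps):
    # advection by the mean wind plus a diffusive random walk
    pos += u * dt + np.sqrt(2.0 * K * dt) * rng.normal(size=(n, 2))

# Crude concentration proxy: particle count in a downwind sampling box
box = (np.abs(pos[:, 0] - 1800.0) < 50.0) & (np.abs(pos[:, 1] - 300.0) < 50.0)
print(f"particles in box: {box.sum()} of {n}")
```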
NASA Astrophysics Data System (ADS)
Guarnaccia, Claudio; Quartieri, Joseph; Tepedino, Carmine
2017-06-01
One of the most hazardous physical polluting agents, considering its effects on human health, is acoustical noise. Airports are a strong source of acoustical noise, due to airplane turbines, the aerodynamic noise of transits, acceleration and braking during the take-off and landing phases of aircraft, road traffic around the airport, etc. The monitoring and prediction of the acoustical level emitted by airports can be very useful for assessing the impact on human health and activities. In the airport noise scenario, thanks to flight scheduling, the predominant sources may have a periodic behaviour. Thus, a Time Series Analysis approach can be adopted, considering that a general trend and a seasonal behaviour can be highlighted and used to build a predictive model. In this paper, two different approaches are adopted, and two predictive models are constructed and tested. The first model is based on deterministic decomposition and is built by composing the trend, that is, the long-term behaviour, the seasonality, that is, the periodic component, and the random variations. The second model is based on a seasonal autoregressive moving average, and it belongs to the stochastic class of models. The two models are fitted to an acoustical level dataset collected close to the Nice (France) international airport. Results are encouraging and show good prediction performance for both adopted strategies. A residual analysis is performed in order to quantify the features of the forecasting error.
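A hedged sketch of the stochastic (seasonal ARMA) approach on a synthetic daily equivalent-level series with a weekly flight-schedule cycle; the model orders and the seasonal period below are illustrative assumptions, not the orders fitted in the paper.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(8)
days = np.arange(365)
leq = (65.0 + 0.002 * days                      # slow trend (dB)
       + 3.0 * np.sin(2 * np.pi * days / 7.0)   # weekly seasonality
       + rng.normal(0, 0.8, days.size))         # random variations

# Seasonal ARIMA: weekly period of 7 days, illustrative (1,0,1)x(1,1,1,7)
model = SARIMAX(leq, order=(1, 0, 1), seasonal_order=(1, 1, 1, 7))
fit = model.fit(disp=False)
print("14-day forecast (dB):", np.round(fit.forecast(14), 1))
```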
Public perception of rural environmental quality: Moving towards a multi-pollutant approach
NASA Astrophysics Data System (ADS)
Cantuaria, Manuella Lech; Brandt, Jørgen; Løfstrøm, Per; Blanes-Vidal, Victoria
2017-12-01
Most environmental epidemiology studies have examined pollutants individually. Multi-pollutant approaches have recently gained recognition, but to the best of our knowledge, no study to date has specifically investigated exposures to multiple air pollutants in rural environments. In this paper we characterized and quantified residential exposures to air pollutant mixtures in rural populations, provided a better understanding of the relationships between air pollutant mixtures and annoyance responses to environmental stressors, particularly odor, and quantified their predictive abilities. We used validated and highly spatially resolved atmospheric modeling of 14 air pollutants for four rural areas of Denmark; the annoyance responses considered were annoyance due to odor, noise, dust, smoke, and vibrations. We found significant associations between odor annoyance and principal components predominantly described by nitrate (NO3-), ammonium (NH4+), particulate matter (PM10 and PM2.5), and NH3, which are usually related to agricultural emission sources. Among these components, NH3 showed the lowest error when comparing observed population data and predicted probabilities. The combination of these compounds in a predictive model resulted in the most accurate model, able to correctly predict 66% of odor annoyance responses. Furthermore, noise annoyance was found to be significantly associated with traffic-related air pollutants. In general terms, our results suggest that emissions from the agricultural and livestock production sectors are the main contributors to environmental annoyance, but they also identify traffic and biomass burning as potential sources of annoyance.
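An illustrative sketch of a multi-pollutant annoyance model of this general shape: principal components of modeled pollutant concentrations feed a logistic regression for odor annoyance. The data, the driving components, and the component count are synthetic assumptions, not the study's fitted model.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(9)
X = rng.normal(size=(500, 14))                 # 14 modeled pollutant exposures
logit = 1.2 * X[:, 0] + 0.8 * X[:, 3] - 0.5    # driven by NH3/NH4-like axes (toy)
y = (rng.random(500) < 1 / (1 + np.exp(-logit))).astype(int)

clf = make_pipeline(StandardScaler(), PCA(n_components=4),
                    LogisticRegression()).fit(X, y)
print("fraction of annoyance responses correctly predicted:",
      round(clf.score(X, y), 2))
```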
NASA Astrophysics Data System (ADS)
Schuerger, Andrew C.; Richards, Jeffrey T.
2006-09-01
Plant-based life support systems that utilize bioregenerative technologies have been proposed for long-term human missions to both the Moon and Mars. Bioregenerative life support systems will utilize higher plants to regenerate oxygen, water, and edible biomass for crews, and are likely to significantly lower the 'equivalent system mass' of crewed vehicles. As part of an ongoing effort to begin the development of an automatic remote sensing system to monitor plant health in bioregenerative life support modules, we tested the efficacy of seven artificial illumination sources for the remote detection of plant stresses. A cohort of pepper plants (Capsicum annuum L.) was grown for 42 days at 25 °C, 70% relative humidity, and 300 μmol m-2 s-1 of photosynthetically active radiation (PAR; from 400 to 700 nm). Plants were grown under nutritional stresses induced by irrigating subsets of the plants with 100, 50, 25, or 10% of a standard nutrient solution. Reflectance spectra of the healthy and stressed plants were collected under seven artificial lamps: two tungsten halogen lamps, plus high pressure sodium, metal halide, fluorescent, microwave, and red/blue light emitting diode (LED) sources. Results indicated that several common algorithms used to estimate biomass and leaf chlorophyll content were effective in predicting plant stress under all seven illumination sources. However, the two types of tungsten halogen lamps and the microwave illumination source yielded linear models with the lowest residuals, and thus the highest predictive capabilities, of all lamps tested. The illumination sources with the least predictive capabilities were the red/blue LEDs and fluorescent lamps. Although the red/blue LEDs yielded the highest residuals for linear models derived from the remote sensing data, the LED arrays used in these experiments were optimized for plant productivity and not for the collection of remote sensing data. Thus, we propose that if adjusted to optimize the collection of remote sensing information from plants, LEDs remain the best candidates for illumination sources for monitoring plant stresses in bioregenerative life support systems.
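A hedged sketch of the kind of analysis described: compute a simple NDVI-style index from leaf reflectance spectra and regress it against nutrient-solution strength to quantify how well the index tracks stress. The band choices and synthetic spectra are illustrative assumptions, not the study's algorithms or data.

```python
import numpy as np

rng = np.random.default_rng(10)
wl = np.arange(400, 901)                       # wavelength grid (nm)
levels = np.repeat([100, 50, 25, 10], 10)      # % of standard nutrient solution

def ndvi(spectrum):
    red = spectrum[np.abs(wl - 680) < 10].mean()
    nir = spectrum[np.abs(wl - 800) < 10].mean()
    return (nir - red) / (nir + red)

# Synthetic spectra: red reflectance rises (less chlorophyll) as stress grows
spectra = [0.05 + 0.4 * (wl > 700)
           + (1 - lv / 100) * 0.1 * (np.abs(wl - 680) < 40)
           + 0.01 * rng.normal(size=wl.size) for lv in levels]
idx = np.array([ndvi(s) for s in spectra])

coef = np.polyfit(levels, idx, 1)              # linear model: index vs. nutrient %
pred = np.polyval(coef, levels)
print("R^2 =", round(1 - np.var(idx - pred) / np.var(idx), 2))
```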
Pietz, Kenneth; Petersen, Laura A
2007-01-01
Objectives. To compare the ability of two diagnosis-based risk adjustment systems and health self-report to predict short- and long-term mortality. Data Sources/Study Setting. Data were obtained from the Department of Veterans Affairs (VA) administrative databases. The study population was 78,164 VA beneficiaries at eight medical centers during fiscal year (FY) 1998, 35,337 of whom completed the 36-Item Short Form Health Survey for Veterans (SF-36V). Study Design. We tested the ability of Diagnostic Cost Groups (DCGs), Adjusted Clinical Groups (ACGs), the SF-36V Physical Component Score (PCS) and Mental Component Score (MCS), and eight SF-36V scales to predict 1-year and 2–5-year all-cause mortality. The additional predictive value of adding PCS and MCS to ACGs and DCGs was also evaluated. Logistic regression models were compared using Akaike's information criterion, the c-statistic, and the Hosmer-Lemeshow test. Principal Findings. The c-statistics for the eight scales combined with age and gender were 0.766 for 1-year mortality and 0.771 for 2–5-year mortality. For DCGs with age and gender, the c-statistics for 1-year and 2–5-year mortality were 0.778 and 0.771, respectively. Adding PCS and MCS to the DCG model increased the c-statistics to 0.798 for 1-year and 0.784 for 2–5-year mortality. Conclusions. The DCG model showed slightly better performance than the eight-scale model in predicting 1-year mortality, but the two models showed similar performance for 2–5-year mortality. Health self-report may add health risk information beyond age, gender, and diagnosis for predicting longer-term mortality.
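An illustrative sketch of the model-comparison step: fit two logistic models for mortality, one with a diagnosis-based risk score alone and one adding self-reported component scores, and compare c-statistics (areas under the ROC curve). The data and effect sizes are synthetic, not the VA cohort.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(11)
n = 5000
dcg = rng.normal(size=n)                       # diagnosis-based risk score
pcs, mcs = rng.normal(size=n), rng.normal(size=n)
p = 1 / (1 + np.exp(-(-3 + 1.0 * dcg - 0.6 * pcs - 0.3 * mcs)))
died = (rng.random(n) < p).astype(int)

base = LogisticRegression().fit(dcg[:, None], died)
full = LogisticRegression().fit(np.c_[dcg, pcs, mcs], died)
c_base = roc_auc_score(died, base.predict_proba(dcg[:, None])[:, 1])
c_full = roc_auc_score(died, full.predict_proba(np.c_[dcg, pcs, mcs])[:, 1])
print(f"c (risk score only): {c_base:.3f}, c (adding PCS/MCS): {c_full:.3f}")
```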
The circadian profile of epilepsy improves seizure forecasting.
Karoly, Philippa J; Ung, Hoameng; Grayden, David B; Kuhlmann, Levin; Leyde, Kent; Cook, Mark J; Freestone, Dean R
2017-08-01
It is now established that epilepsy is characterized by periodic dynamics that increase seizure likelihood at certain times of day, and which are highly patient-specific. However, these dynamics are not typically incorporated into seizure prediction algorithms due to the difficulty of estimating patient-specific rhythms from relatively short-term or unreliable data sources. This work outlines a novel framework to develop and assess seizure forecasts, and demonstrates that the predictive power of forecasting models is improved by circadian information. The analyses used long-term, continuous electrocorticography from nine subjects, recorded for an average of 320 days each. We used a large amount of out-of-sample data (a total of 900 days for algorithm training, and 2879 days for testing), enabling the most extensive post hoc investigation into seizure forecasting. We compared the results of an electrocorticography-based logistic regression model, a circadian probability, and a combined electrocorticography and circadian model. For all subjects, clinically relevant seizure prediction results were significant, and the addition of circadian information (combined model) maximized performance across a range of outcome measures. These results represent a proof-of-concept for implementing a circadian forecasting framework, and provide insight into new approaches for improving seizure prediction algorithms. The circadian framework adds very little computational complexity to existing prediction algorithms, and can be implemented using current-generation implant devices, or even non-invasively via surface electrodes using a wearable application. The ability to improve seizure prediction algorithms through straightforward, patient-specific modifications provides promise for increased quality of life and improved safety for patients with epilepsy.
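A hedged sketch of one simple way to fold circadian information into a logistic forecaster: encode time of day as sin/cos features alongside an ECoG-derived feature, then compare in-sample fit with and without the circadian terms. The features, effect sizes, and the comparison metric are illustrative assumptions, not the paper's model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(12)
n = 20000
hour = rng.uniform(0, 24, n)
ecog = rng.normal(size=n)                       # ECoG-derived feature (toy)
phase = 2 * np.pi * hour / 24.0
p = 1 / (1 + np.exp(-(-5 + 1.2 * ecog + 1.5 * np.cos(phase - np.pi))))
seiz = (rng.random(n) < p).astype(int)          # synthetic seizure labels

X_ecog = ecog[:, None]
X_comb = np.c_[ecog, np.sin(phase), np.cos(phase)]
for name, X in [("ECoG only", X_ecog), ("ECoG + circadian", X_comb)]:
    m = LogisticRegression().fit(X, seiz)
    proba = m.predict_proba(X)
    ll = np.mean(seiz * np.log(proba[:, 1]) + (1 - seiz) * np.log(proba[:, 0]))
    print(name, "mean log-likelihood:", round(ll, 4))
```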
NASA Astrophysics Data System (ADS)
Phelan, Thomas J.; Abriola, Linda M.; Gibson, Jenny L.; Smits, Kathleen M.; Christ, John A.
2015-12-01
In-situ bioremediation, a widely applied treatment technology for source zones contaminated with dense non-aqueous phase liquids (DNAPLs), has proven economical and reasonably efficient for long-term management of contaminated sites. Successful application of this remedial technology, however, requires an understanding of the complex interaction of transport, mass transfer, and biotransformation processes. The bioenhancement factor, which represents the ratio of DNAPL mass transfer under microbially active conditions to that which would occur under abiotic conditions, is commonly used to quantify the effectiveness of a particular bioremediation remedy. To date, little research has been directed towards the development and validation of methods to predict bioenhancement factors under conditions representative of real sites. This work extends an existing, first-order, bioenhancement factor expression to systems with zero-order and Monod kinetics, representative of many source-zone scenarios. The utility of this model for predicting the bioenhancement factor for previously published laboratory and field experiments is evaluated. This evaluation demonstrates the applicability of these simple bioenhancement factors for preliminary experimental design and analysis, and for assessment of dissolution enhancement in ganglia-contaminated source zones. For ease of application, a set of nomographs is presented that graphically depicts the dependence of bioenhancement factor on physicochemical properties. Application of these nomographs is illustrated using data from a well-documented field site. Results suggest that this approach can successfully capture field-scale, as well as column-scale, behavior. Sensitivity analyses reveal that bioenhanced dissolution will critically depend on in-situ biomass concentrations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paul L. Wichlacz
2003-09-01
This source-term summary document is intended to describe the current understanding of contaminant source terms and the conceptual model for potential source-term release to the environment at the Idaho National Engineering and Environmental Laboratory (INEEL), as presented in published INEEL reports. The document presents a generalized conceptual model of the sources of contamination and describes the general categories of source terms, primary waste forms, and factors that affect the release of contaminants from the waste form into the vadose zone and Snake River Plain Aquifer. Where the information has previously been published and is readily available, summaries of the inventory of contaminants are also included. Uncertainties that affect the estimation of the source term release are also discussed where they have been identified by the Source Term Technical Advisory Group. Areas in which additional information is needed (i.e., research needs) are also identified.
Short-term variability in body weight predicts long-term weight gain
Lowe, Michael R; Feig, Emily H; Winter, Samantha R; Stice, Eric
2015-01-01
Background: Body weight in lower animals and humans is highly stable despite a very large flux in energy intake and expenditure over time. Conversely, the existence of higher-than-average variability in weight may indicate a disruption in the mechanisms responsible for homeostatic weight regulation. Objective: In a sample chosen for weight-gain proneness, we evaluated whether weight variability over a 6-mo period predicted subsequent weight change from 6 to 24 mo. Design: A total of 171 nonobese women were recruited to participate in this longitudinal study in which weight was measured 4 times over 24 mo. The initial 3 weights were used to calculate weight variability with the use of a root mean square error approach to assess fluctuations in weight independent of trajectory. Linear regression analysis was used to examine whether weight variability in the initial 6 mo predicted weight change 18 mo later. Results: Greater weight variability significantly predicted amount of weight gained. This result was unchanged after control for baseline body mass index (BMI) and BMI change from baseline to 6 mo and for measures of disinhibition, restrained eating, and dieting. Conclusions: Elevated weight variability in young women may signal the degradation of body weight regulatory systems. In an obesogenic environment this may eventuate in accelerated weight gain, particularly in those with a genetic susceptibility toward overweight. Future research is needed to evaluate the reliability of weight variability as a predictor of future weight gain and the sources of its predictive effect. The trial on which this study is based is registered at clinicaltrials.gov as NCT00456131.
Predicting DNAPL Source Zone and Plume Response Using Site-Measured Characteristics
2017-05-19
Final report for SERDP Project ER-1613, "Predicting DNAPL Source Zone and Plume Response Using Site-Measured Characteristics," covering 2007-2017. The report draws on the historical record of concentration and head measurements, particularly in the near-source region, using the currently available data for each site considered.
Gardner, Benjamin
2015-01-01
The term ‘habit’ is widely used to predict and explain behaviour. This paper examines use of the term in the context of health-related behaviour, and explores how the concept might be made more useful. A narrative review is presented, drawing on a scoping review of 136 empirical studies and 8 literature reviews undertaken to document usage of the term ‘habit’, and methods to measure it. A coherent definition of ‘habit’, and proposals for improved methods for studying it, were derived from findings. Definitions of ‘habit’ have varied in ways that are often implicit and not coherently linked with an underlying theory. A definition is proposed whereby habit is a process by which a stimulus generates an impulse to act as a result of a learned stimulus-response association. Habit-generated impulses may compete or combine with impulses and inhibitions arising from other sources, including conscious decision-making, to influence responses, and need not generate behaviour. Most research on habit is based on correlational studies using self-report measures. Adopting a coherent definition of ‘habit’, and a wider range of paradigms, designs and measures to study it, may accelerate progress in habit theory and application. PMID:25207647
Evaluating bacterial gene-finding HMM structures as probabilistic logic programs.
Mørk, Søren; Holmes, Ian
2012-03-01
Probabilistic logic programming offers a powerful way to describe and evaluate structured statistical models. To investigate the practicality of probabilistic logic programming for structure learning in bioinformatics, we undertook a simplified bacterial gene-finding benchmark in PRISM, a probabilistic dialect of Prolog. We evaluate Hidden Markov Model structures for bacterial protein-coding gene potential, including a simple null model structure, three structures based on existing bacterial gene finders and two novel model structures. We test standard versions as well as ADPH length modeling and three-state versions of the five model structures. The models are all represented as probabilistic logic programs and evaluated using the PRISM machine learning system in terms of statistical information criteria and gene-finding prediction accuracy, in two bacterial genomes. Neither of our implementations of the two currently most used model structures is best performing in terms of statistical information criteria or prediction performance, suggesting that better-fitting models might be achievable. The source code of all PRISM models, data and additional scripts are freely available for download at: http://github.com/somork/codonhmm. Supplementary data are available at Bioinformatics online.
Methods for nuclear air-cleaning-system accident-consequence assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrae, R.W.; Bolstad, J.W.; Gregory, W.S.
1982-01-01
This paper describes a multilaboratory research program that is directed toward addressing many questions that analysts face when performing air cleaning accident consequence assessments. The program involves developing analytical tools and supportive experimental data that will be useful in making more realistic assessments of accident source terms within and up to the atmospheric boundaries of nuclear fuel cycle facilities. The types of accidents considered in this study include fires, explosions, spills, tornadoes, criticalities, and equipment failures. The main focus of the program is developing an accident analysis handbook (AAH). We describe the contents of the AAH, which include descriptions of selected nuclear fuel cycle facilities, process unit operations, source-term development, and accident consequence analyses. Three computer codes designed to predict gas and material propagation through facility air cleaning systems are described. These computer codes address accidents involving fires (FIRAC), explosions (EXPAC), and tornadoes (TORAC). The handbook relies on many illustrative examples to show the analyst how to approach accident consequence assessments. We use the FIRAC code and a hypothetical fire scenario to illustrate the accident analysis capability.
Verification and Improvement of Flamelet Approach for Non-Premixed Flames
NASA Technical Reports Server (NTRS)
Zaitsev, S.; Buriko, Yu.; Guskov, O.; Kopchenov, V.; Lubimov, D.; Tshepin, S.; Volkov, D.
1997-01-01
Studies in the mathematical modeling of high-speed turbulent combustion have received renewed attention in recent years. A review of the fundamentals and approaches, with an extensive bibliography, was presented by Bray, Libby and Williams. In order to obtain accurate predictions for turbulent combustible flows, the effects of turbulent fluctuations on the chemical source terms should be taken into account. Averaging the chemical source terms requires the use of a probability density function (PDF) model. Two main approaches currently dominate high-speed combustion modeling. In the first approach, the form of the PDF is assumed, based on the intuition of the modelers (see, for example, Spiegler et al.; Girimaji; Baurle et al.). The second approach is much more elaborate and is based on the solution of an evolution equation for the PDF. This approach was proposed by S. Pope for incompressible flames. Recently, it was modified for the modeling of compressible flames in studies by Farschi; Hsu; Hsu, Raji and Norris; and Eifer and Kollman. Its realization in CFD is, however, extremely expensive computationally because of the high dimensionality of the PDF evolution equation (Baurle, Hsu, Hassan).
Predictability effects in auditory scene analysis: a review
Bendixen, Alexandra
2014-01-01
Many sound sources emit signals in a predictable manner. The idea that predictability can be exploited to support the segregation of one source's signal emissions from the overlapping signals of other sources has been expressed for a long time. Yet experimental evidence for a strong role of predictability within auditory scene analysis (ASA) has been scarce. Recently, there has been an upsurge in experimental and theoretical work on this topic resulting from fundamental changes in our perspective on how the brain extracts predictability from series of sensory events. Based on effortless predictive processing in the auditory system, it becomes more plausible that predictability would be available as a cue for sound source decomposition. In the present contribution, empirical evidence for such a role of predictability in ASA will be reviewed. It will be shown that predictability affects ASA both when it is present in the sound source of interest (perceptual foreground) and when it is present in other sound sources that the listener wishes to ignore (perceptual background). First evidence pointing toward age-related impairments in the latter capacity will be addressed. Moreover, it will be illustrated how effects of predictability can be shown by means of objective listening tests as well as by subjective report procedures, with the latter approach typically exploiting the multi-stable nature of auditory perception. Critical aspects of study design will be delineated to ensure that predictability effects can be unambiguously interpreted. Possible mechanisms for a functional role of predictability within ASA will be discussed, and an analogy with the old-plus-new heuristic for grouping simultaneous acoustic signals will be suggested. PMID:24744695
Tests and comparisons of gravity models.
NASA Technical Reports Server (NTRS)
Marsh, J. G.; Douglas, B. C.
1971-01-01
Optical observations of the GEOS satellites were used to obtain orbital solutions with different sets of geopotential coefficients. The solutions were compared before and after modification to high order terms (necessary because of resonance) and were then analyzed by comparing subsequent observations with predicted trajectories. The most important source of error in orbit determination and prediction for the GEOS satellites is the effect of resonance found in most published sets of geopotential coefficients. Modifications to the sets yield greatly improved orbits in most cases. The results of these comparisons suggest that with the best optical tracking systems and gravity models, satellite position error due to gravity model uncertainty can reach 50-100 m during a heavily observed 5-6 day orbital arc. If resonant coefficients are estimated, the uncertainty is reduced considerably.
NASA Technical Reports Server (NTRS)
Carlson, T. N.
1986-01-01
A review is presented of numerical models which were developed to interpret thermal IR data and to identify the governing parameters and surface energy fluxes recorded in the images. Analytic, predictive, diagnostic and empirical models are described. The limitations of each type of modeling approach are explored in terms of the error sources and inherent constraints due to theoretical or measurement limitations. Sample results of regional-scale soil moisture or evaporation patterns derived from the Heat Capacity Mapping Mission and GOES satellite data through application of the predictive model devised by Carlson (1981) are discussed. The analysis indicates that pattern recognition will probably be highest when data are collected over flat, arid, sparsely vegetated terrain. The soil moisture data then obtained may be accurate to within 10-20 percent.
Assessment of seismic hazard in the North Caucasus
NASA Astrophysics Data System (ADS)
Ulomov, V. I.; Danilova, T. I.; Medvedeva, N. S.; Polyakova, T. P.; Shumilina, L. S.
2007-07-01
The seismicity of the North Caucasus is the highest in the European part of Russia. The detection of potential seismic sources here and long-term prediction of earthquakes are extremely important for the assessment of seismic hazard and seismic risk in this densely populated and industrially developed region of the country. The seismogenic structures of the Iran-Caucasus-Anatolia and Central Asia regions, adjacent to European Russia, are the subjects of this study. These structures are responsible for the specific features of regional seismicity and for the geodynamic interaction with adjacent areas of the Scythian and Turan platforms. The most probable potential sources of earthquakes with magnitudes M = 7.0 ± 0.2 and 7.5 ± 0.2 in the North Caucasus are located. The possible macroseismic effect of one of them is assessed.
Theoretical and experimental aspects of laser cutting with a direct diode laser
NASA Astrophysics Data System (ADS)
Costa Rodrigues, G.; Pencinovsky, J.; Cuypers, M.; Duflou, J. R.
2014-10-01
Recent developments in beam coupling techniques have made it possible to scale up the power of diode lasers with a laser beam quality suitable for laser cutting of metal sheets. In this paper a prototype Direct Diode Laser (DDL) source (BPP of 22 mm-mrad) is analyzed in terms of efficiency and cutting performance and compared with two established technologies, CO2 and fiber lasers. An analytical model based on absorption calculations is used to predict the performance of the studied laser source, with good agreement with experimental results. Furthermore, results of fusion cutting of stainless steel and aluminium alloys as well as oxygen cutting of structural steel are presented, demonstrating that industrially relevant cutting speeds with high cutting quality can now be achieved with DDL.
Model falsifiability and climate slow modes
NASA Astrophysics Data System (ADS)
Essex, Christopher; Tsonis, Anastasios A.
2018-07-01
The most advanced climate models are actually modified meteorological models attempting to capture climate in meteorological terms. This seems a straightforward matter of raw computing power applied to large enough sources of current data. Some believe that models have succeeded in capturing climate in this manner. But have they? This paper outlines difficulties with this picture that derive from the finite representation of our computers, and the fundamental unavailability of future data instead. It suggests that alternative windows onto the multi-decadal timescales are necessary in order to overcome the issues raised for practical problems of prediction.
Operational Earthquake Forecasting: Proposed Guidelines for Implementation (Invited)
NASA Astrophysics Data System (ADS)
Jordan, T. H.
2010-12-01
The goal of operational earthquake forecasting (OEF) is to provide the public with authoritative information about how seismic hazards are changing with time. During periods of high seismic activity, short-term earthquake forecasts based on empirical statistical models can attain nominal probability gains in excess of 100 relative to the long-term forecasts used in probabilistic seismic hazard analysis (PSHA). Prospective experiments are underway by the Collaboratory for the Study of Earthquake Predictability (CSEP) to evaluate the reliability and skill of these seismicity-based forecasts in a variety of tectonic environments. How such information should be used for civil protection is by no means clear, because even with hundredfold increases, the probabilities of large earthquakes typically remain small, rarely exceeding a few percent over forecasting intervals of days or weeks. Civil protection agencies have been understandably cautious in implementing formal procedures for OEF in this sort of “low-probability environment.” Nevertheless, the need to move more quickly towards OEF has been underscored by recent experiences, such as the 2009 L’Aquila earthquake sequence and other seismic crises in which an anxious public has been confused by informal, inconsistent earthquake forecasts. Whether scientists like it or not, rising public expectations for real-time information, accelerated by the use of social media, will require civil protection agencies to develop sources of authoritative information about the short-term earthquake probabilities. In this presentation, I will discuss guidelines for the implementation of OEF informed by my experience on the California Earthquake Prediction Evaluation Council, convened by CalEMA, and the International Commission on Earthquake Forecasting, convened by the Italian government following the L’Aquila disaster. (a) Public sources of information on short-term probabilities should be authoritative, scientific, open, and timely, and they need to convey the epistemic uncertainties in the operational forecasts. (b) Earthquake probabilities should be based on operationally qualified, regularly updated forecasting systems. All operational procedures should be rigorously reviewed by experts in the creation, delivery, and utility of earthquake forecasts. (c) The quality of all operational models should be evaluated for reliability and skill by retrospective testing, and the models should be under continuous prospective testing in a CSEP-type environment against established long-term forecasts and a wide variety of alternative, time-dependent models. (d) Short-term models used in operational forecasting should be consistent with the long-term forecasts used in PSHA. (e) Alert procedures should be standardized to facilitate decisions at different levels of government and among the public, based in part on objective analysis of costs and benefits. (f) In establishing alert procedures, consideration should also be made of the less tangible aspects of value-of-information, such as gains in psychological preparedness and resilience. Authoritative statements of increased risk, even when the absolute probability is low, can provide a psychological benefit to the public by filling information vacuums that can lead to informal predictions and misinformation.
NASA Astrophysics Data System (ADS)
Maksimovic, C.
2012-04-01
The effects of climate change and increasing urbanisation call for a new paradigm for efficient planning, management and retrofitting of urban developments to increase resilience to climate change and to maximize ecosystem services. Improved management of urban floods from all sources is required. The time scale of well-documented fluvial and coastal floods allows for timely response, but surface (pluvial) flooding caused by intense local storms has not been given appropriate attention (Pitt Review, UK). Urban surface flood prediction requires fine-scale data and model resolutions, and has to be tackled locally by combining central inputs (meteorological services) with the efforts of local entities. Although a significant breakthrough in the modelling of pluvial flooding has been made, there is a need to further enhance short-term prediction of both rainfall and surface flooding. These issues are dealt with in the EU Interreg project Rain Gain (RG). A breakthrough in urban flood mitigation can only be achieved by the combined effects of advanced planning, design, construction and management of urban water (blue) assets in interaction with urban vegetated (green) assets. Changes in the design and operation of blue and green assets, currently operating as two separate systems, are urgently required. The associated gaps in knowledge and technology will be addressed by EIT Climate-KIC's Blue Green Dream (BGD) project. The RG and BGD projects provide synergy between the "decoupled" blue and green systems to deliver multiple benefits for urban amenity, flood management, the heat island effect, biodiversity, and resilience to drought and thus energy requirements, increasing the quality of urban life at lower cost. Urban pluvial flood management will address two priority areas: short-term rainfall forecasting and short-term surface flood forecasting. A spatial resolution of the short-term rainfall forecast below 0.5 km2 and a lead time of a few hours are needed. Improvements are achievable by combining data from raingauge networks with C-band and X-band radars, numerical weather prediction (NWP) and pluvial flood prediction models. The RG project deals with merging these technologies and providing synergy among them. The combined effects of BG technologies can eliminate the risk of surface flooding for low-return-period events and reduce it by 50-80% for high return periods. Demonstration technology testing sites for both the BGD and RG projects will be established in five participating countries. Decision support systems will enhance full-scale implementation of both BGD and RG project deliverables. A BGD efficiency rating scheme, training guidelines and e-learning tools will be developed. The experimental/demo sites for BGD and RG technology development and testing in Rotterdam, Paris, Berlin, Leuven and London, the concepts of the RG and BGD projects, and the initial results will be presented in the paper.
Energy resources - cornucopia or empty barrel?
McCabe, P.J.
1998-01-01
Over the last 25 yr, considerable debate has continued about the future supply of fossil fuel. On one side are those who believe we are rapidly depleting resources and that the resulting shortages will have a profound impact on society. On the other side are those who see no impending crisis because long-term trends are for cheaper prices despite rising production. The concepts of resources and reserves have historically created considerable misunderstanding in the minds of many nongeologists. Hubbert-type predictions of energy production assume that there is a finite supply of energy that is measurable; however, estimates of resources and reserves are inventories of the amounts of a fossil fuel perceived to be available over some future period of time. As those resources/reserves are depleted over time, additional amounts of fossil fuels are inventoried. Throughout most of this century, for example, crude oil reserves in the United States have represented a 10-14-yr supply. For the last 50 yr, crude oil resource estimates have represented about a 60-70-yr supply for the United States. Division of reserve or resource estimates by current or projected annual consumption is therefore circular in reasoning and can lead to highly erroneous conclusions. Production histories of fossil fuels are driven more by demand than by the geologic abundance of the resource. Examination of some energy resources with well-documented histories leads to two conceptual models that relate production to price. The closed-market model assumes that there is only one source of energy available. Although the price initially may fall because of economies of scale, in the long term prices rise as the energy source is depleted and it becomes progressively more expensive to extract. By contrast, the open-market model assumes that there is a variety of available energy sources and that competition among them leads to long-term stable or falling prices. At the moment, the United States and the world approximate the open-market model, but in the long run the supply of fossil fuel is finite, and prices inevitably will rise unless alternate energy sources substitute for fossil energy supplies; however, there appears little reason to suspect that long-term price trends will rise significantly over the next few decades. This paper examines these historic trends and clarifies the foundations on which one may build one's predictions.
Stratified wakes, the high Froude number approximation, and potential flow
NASA Astrophysics Data System (ADS)
Vasholz, David P.
2011-12-01
Properties of a steady wake generated by a body moving uniformly at constant depth through a stratified fluid are studied as a function of two parameters inserted into the linearized equations of motion. The first parameter, μ, multiplies the along-track gradient term in the source equation. When formal solutions for an arbitrary buoyancy frequency profile are written as eigenfunction expansions, one finds that the limit μ → 0 corresponds to a high Froude number approximation accompanied by a substantial reduction in the complexity of the calculation. For μ = 1, upstream effects are present and the eigenvalues correspond to critical speeds above which transverse waves disappear for any given mode. For sufficiently high modes, the high Froude number approximation is valid. The second parameter multiplies the square of the buoyancy frequency term in the linearized conservation of mass equation and enables direct comparisons with the limit of potential flow. Detailed results are given for the simplest possible profile, in which the buoyancy frequency is independent of depth; emphasis is placed upon quantities that can, in principle, be measured in a laboratory experiment. The vertical displacement field is written in terms of a stratified wake form factor H, which is the sum of a wavelike contribution that is non-zero downstream and an evanescent contribution that appears symmetrically upstream and downstream. First- and second-order cross-track moments of H are analyzed. First-order results predict enhanced upstream vertical displacements. Second-order results expand upon previous predictions of wavelike resonances and also predict evanescent resonance effects.
The effects of shared information on semantic calculations in the gene ontology.
Bible, Paul W; Sun, Hong-Wei; Morasso, Maria I; Loganantharaj, Rasiah; Wei, Lai
2017-01-01
The structured vocabulary that describes gene function, the gene ontology (GO), serves as a powerful tool in biological research. One application of GO in computational biology calculates semantic similarity between two concepts to make inferences about the functional similarity of genes. A class of term similarity algorithms explicitly calculates the shared information (SI) between concepts then substitutes this calculation into traditional term similarity measures such as Resnik, Lin, and Jiang-Conrath. Alternative SI approaches, when combined with ontology choice and term similarity type, lead to many gene-to-gene similarity measures. No thorough investigation has been made into the behavior, complexity, and performance of semantic methods derived from distinct SI approaches. We apply bootstrapping to compare the generalized performance of 57 gene-to-gene semantic measures across six benchmarks. Considering the number of measures, we additionally evaluate whether these methods can be leveraged through ensemble machine learning to improve prediction performance. Results showed that the choice of ontology type most strongly influenced performance across all evaluations. Combining measures into an ensemble classifier reduces cross-validation error beyond any individual measure for protein interaction prediction. This improvement resulted from information gained through the combination of ontology types as ensemble methods within each GO type offered no improvement. These results demonstrate that multiple SI measures can be leveraged for machine learning tasks such as automated gene function prediction by incorporating methods from across the ontologies. To facilitate future research in this area, we developed the GO Graph Tool Kit (GGTK), an open source C++ library with Python interface (github.com/paulbible/ggtk).
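As a rough illustration of how a shared-information (SI) choice feeds a term similarity measure, the Python sketch below computes Resnik similarity on a tiny hypothetical DAG, taking SI to be the information content of the most informative common ancestor. The ontology, probabilities, and term names are all invented; GGTK itself is a C++ library and is not used here.

```python
import math

# Toy GO-like DAG: child -> parents (all names hypothetical)
PARENTS = {
    "root": set(),
    "metabolic": {"root"}, "transport": {"root"},
    "lipid_metab": {"metabolic"}, "ion_transport": {"transport"},
    "lipid_transport": {"metabolic", "transport"},
}
# Annotation probabilities p(term); in practice estimated from a corpus
P = {"root": 1.0, "metabolic": 0.5, "transport": 0.4,
     "lipid_metab": 0.1, "ion_transport": 0.15, "lipid_transport": 0.05}

def ancestors(term):
    out, stack = {term}, [term]
    while stack:
        for p in PARENTS[stack.pop()]:
            if p not in out:
                out.add(p)
                stack.append(p)
    return out

def ic(term):
    """Information content; the root carries none."""
    return 0.0 if P[term] >= 1.0 else -math.log(P[term])

def resnik(t1, t2):
    """Resnik similarity: shared information taken as the IC of the
    most informative common ancestor (one common SI choice)."""
    common = ancestors(t1) & ancestors(t2)
    return max(ic(c) for c in common)

print(resnik("lipid_metab", "lipid_transport"))  # share 'metabolic' -> 0.693
print(resnik("lipid_metab", "ion_transport"))    # share only 'root' -> 0.0
```

Swapping in a different SI calculation while keeping the same Resnik, Lin, or Jiang-Conrath shell is exactly the combinatorial space of measures the paper benchmarks.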
Xia, Yongqiu; Weller, Donald E; Williams, Meghan N; Jordan, Thomas E; Yan, Xiaoyuan
2016-11-15
Export coefficient models (ECMs) are often used to predict nutrient sources and sinks in watersheds because ECMs can flexibly incorporate processes and have minimal data requirements. However, ECMs do not quantify uncertainties in model structure, parameters, or predictions; nor do they account for spatial and temporal variability in land characteristics, weather, and management practices. We applied Bayesian hierarchical methods to address these problems in ECMs used to predict nitrate concentration in streams. We compared four model formulations, a basic ECM and three models with additional terms to represent competing hypotheses about the sources of error in ECMs and about spatial and temporal variability of coefficients: an ADditive Error Model (ADEM), a SpatioTemporal Parameter Model (STPM), and a Dynamic Parameter Model (DPM). The DPM incorporates a first-order random walk to represent spatial correlation among parameters and a dynamic linear model to accommodate temporal correlation. We tested the modeling approach in a proof of concept using watershed characteristics and nitrate export measurements from watersheds in the Coastal Plain physiographic province of the Chesapeake Bay drainage. Among the four models, the DPM was the best--it had the lowest mean error, explained the most variability (R 2 = 0.99), had the narrowest prediction intervals, and provided the most effective tradeoff between fit complexity (its deviance information criterion, DIC, was 45.6 units lower than any other model, indicating overwhelming support for the DPM). The superiority of the DPM supports its underlying hypothesis that the main source of error in ECMs is their failure to account for parameter variability rather than structural error. Analysis of the fitted DPM coefficients for cropland export and instream retention revealed some of the factors controlling nitrate concentration: cropland nitrate exports were positively related to stream flow and watershed average slope, while instream nitrate retention was positively correlated with nitrate concentration. By quantifying spatial and temporal variability in sources and sinks, the DPM provides new information to better target management actions to the most effective times and places. Given the wide use of ECMs as research and management tools, our approach can be broadly applied in other watersheds and to other materials. Copyright © 2016 Elsevier Ltd. All rights reserved.
Assessing pathogen risk to swimmers at non-sewage impacted recreational beaches.
Schoen, Mary E; Ashbolt, Nicholas J
2010-04-01
The risk of gastrointestinal illness to swimmers from fresh sewage and non-sewage fecal sources at recreational beaches was predicted using quantitative microbial risk assessment (QMRA). The QMRA estimated the probability of illness for accidental ingestion of recreational water with a specific concentration of fecal indicator bacteria, here the geometric mean enterococci limit of 35 cfu per 100 mL, from either a mixture of sources or an individual source. Using seagulls as an example non-sewage fecal source, the predicted median probability of illness was less than the illness benchmark of 0.01. When the fecal source was changed to poorly treated sewage, a relatively small difference between the median probability of illness and the illness benchmark was predicted. For waters impacted by a mixture of seagull and sewage waste, the dominant source of fecal indicator was not always the predicted dominant source of risk.
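A minimal Monte Carlo QMRA in Python along the lines described might look as follows. The pathogen-to-indicator ratio, ingestion volume distribution, and exponential dose-response parameter are all assumed placeholder values, not those of the study; only the 35 cfu per 100 mL enterococci limit comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

entero = 35.0     # cfu per 100 mL, geometric-mean limit (from the abstract)
# Pathogens per indicator cfu for a given fecal source (assumed distribution)
ratio = rng.lognormal(mean=np.log(1e-4), sigma=1.0, size=n)
# Water ingested per swimming event, mL (assumed distribution)
volume_ml = rng.lognormal(mean=np.log(30.0), sigma=0.5, size=n)

dose = entero / 100.0 * ratio * volume_ml       # pathogens ingested
r = 0.02                                        # dose-response parameter (assumed)
p_ill = 1.0 - np.exp(-r * dose)                 # exponential dose-response

print(f"median P(illness) = {np.median(p_ill):.2e}")
print(f"exceeds 0.01 benchmark in {np.mean(p_ill > 0.01):.1%} of iterations")
```

Re-running the simulation with a sewage-like ratio distribution in place of the seagull-like one reproduces the kind of source-to-source comparison the abstract reports.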
Anticipatory versus reactive spatial attentional bias to threat.
Gladwin, Thomas E; Möbius, Martin; McLoughlin, Shane; Tyndall, Ian
2018-05-10
Dot-probe or visual probe tasks (VPTs) are used extensively to measure attentional biases. A novel variant termed the cued VPT (cVPT) was developed to focus on the anticipatory component of attentional bias. This study aimed to establish an anticipatory attentional bias to threat using the cVPT and compare its split-half reliability with a typical dot-probe task. A total of 120 students performed the cVPT task and dot-probe tasks. Essentially, the cVPT uses cues that predict the location of pictorial threatening stimuli, but on trials on which probe stimuli are presented the pictures do not appear. Hence, actual presentation of emotional stimuli did not affect responses. The reliability of the cVPT was higher at most cue-stimulus intervals and was .56 overall. A clear anticipatory attentional bias was found. In conclusion, the cVPT may be of methodological and theoretical interest. Using visually neutral predictive cues may remove sources of noise that negatively impact reliability. Predictive cues are able to bias response selection, suggesting a role of predicted outcomes in automatic processes. © 2018 The British Psychological Society.
Neural imaging to track mental states while using an intelligent tutoring system.
Anderson, John R; Betts, Shawn; Ferris, Jennifer L; Fincham, Jon M
2010-04-13
Hemodynamic measures of brain activity can be used to interpret a student's mental state when they are interacting with an intelligent tutoring system. Functional magnetic resonance imaging (fMRI) data were collected while students worked with a tutoring system that taught an algebra isomorph. A cognitive model predicted the distribution of solution times from measures of problem complexity. Separately, a linear discriminant analysis used fMRI data to predict whether or not students were engaged in problem solving. A hidden Markov algorithm merged these two sources of information to predict the mental states of students during problem-solving episodes. The algorithm was trained on data from 1 day of interaction and tested with data from a later day. In terms of predicting what state a student was in during a 2-s period, the algorithm achieved 87% accuracy on the training data and 83% accuracy on the test data. The results illustrate the importance of integrating the bottom-up information from imaging data with the top-down information from a cognitive model.
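The merging step can be sketched as a standard HMM forward pass: the cognitive model supplies the top-down transition structure, while the fMRI discriminant supplies bottom-up per-scan likelihoods. The Python sketch below uses invented matrices and three hypothetical states; it illustrates the technique, not the paper's calibrated model.

```python
import numpy as np

# Hypothetical 3-state transition matrix; in the paper this structure
# would come from the cognitive model's predicted solution times
A = np.array([[0.90, 0.08, 0.02],
              [0.05, 0.90, 0.05],
              [0.02, 0.08, 0.90]])
pi = np.array([0.8, 0.1, 0.1])    # initial state distribution (assumed)

def forward_decode(likelihoods):
    """Forward pass of an HMM: combine the top-down transition model with
    bottom-up per-scan state likelihoods (e.g. from a linear discriminant
    on fMRI data) into filtered state probabilities per 2-s scan."""
    alpha = pi * likelihoods[0]
    alpha /= alpha.sum()
    path = [alpha]
    for lik in likelihoods[1:]:
        alpha = (alpha @ A) * lik
        alpha /= alpha.sum()
        path.append(alpha)
    return np.array(path)

# Toy classifier outputs P(scan | state) for six 2-s scans
lik = np.array([[0.7, 0.2, 0.1], [0.6, 0.3, 0.1], [0.2, 0.6, 0.2],
                [0.1, 0.7, 0.2], [0.1, 0.3, 0.6], [0.2, 0.2, 0.6]])
print(forward_decode(lik).argmax(axis=1))   # most probable state per scan
```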
QCD nature of dark energy at finite temperature: Cosmological implications
NASA Astrophysics Data System (ADS)
Azizi, K.; Katırcı, N.
2016-05-01
The Veneziano ghost field has been proposed as an alternative source of dark energy, whose energy density is consistent with the cosmological observations. In this model, the energy density of the QCD ghost field is expressed in terms of QCD degrees of freedom at zero temperature. We extend this model to finite temperature in order to follow its predictions from late times back to the early universe. We depict the variations of the QCD parameters entering the calculations, the dark energy density, the equation of state, and the Hubble and deceleration parameters with temperature, from zero up to a critical temperature. We compare our results with the observations and with theoretical predictions for the different eras. It is found that this model consistently describes the universe from quark condensation up to now, and its predictions are not in tension with those of standard cosmology. The EoS parameter of dark energy is dynamical and evolves from -1/3 in the presence of radiation to -1 at late times. The finite-temperature ghost dark energy predictions for the Hubble parameter fit well with those of ΛCDM and with observations at late times.
Short time ahead wind power production forecast
NASA Astrophysics Data System (ADS)
Sapronova, Alla; Meissner, Catherine; Mana, Matteo
2016-09-01
An accurate prediction of wind power output is crucial for efficient coordination of cooperative energy production from different sources. Long-horizon prediction (6 to 24 hours ahead) of wind power for onshore parks can be achieved by using a coupled model that bridges mesoscale weather prediction data and computational fluid dynamics. When a forecast for a shorter time horizon (less than one hour ahead) is required, the accuracy of a predictive model that utilizes hourly weather data decreases, because the higher-frequency fluctuations of the wind speed are lost when data are averaged over an hour. Since the wind speed can vary by up to 50% in magnitude over a period of 5 minutes, the higher-frequency variations of wind speed and direction have to be taken into account for an accurate short-horizon energy production forecast. In this work a new model for wind power production forecasting 5 to 30 minutes ahead is presented. The model is based on machine learning techniques and a categorization approach, and uses the historical park production time series together with the hourly numerical weather forecast.
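A minimal version of such a short-horizon model can be built from lagged 5-minute observations. The Python sketch below trains gradient boosting on synthetic park data; the power curve, lag count, and noise levels are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 2000                                   # 5-min samples (~1 week)

# Synthetic 5-min wind speed with high-frequency fluctuations (assumed)
t = np.arange(n)
wind = 8 + 2 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 1.5, n)
power = np.clip(wind, 0, 12) ** 3 / 12 ** 3    # crude normalized power curve

# Features: the last six 5-min lags of power and wind (30 min of history)
lags = 6
X = np.column_stack([power[i:n - lags + i] for i in range(lags)] +
                    [wind[i:n - lags + i] for i in range(lags)])
y = power[lags:]                               # 5-min-ahead target

split = int(0.8 * len(y))
model = GradientBoostingRegressor().fit(X[:split], y[:split])
pred = model.predict(X[split:])
print(f"RMSE (5-min ahead): {np.sqrt(np.mean((pred - y[split:]) ** 2)):.4f}")
```

In a real deployment the hourly numerical weather forecast would be appended to the lagged features, which is the coupling the abstract describes.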
Inductive matrix completion for predicting gene-disease associations.
Natarajan, Nagarajan; Dhillon, Inderjit S
2014-06-15
Most existing methods for predicting causal disease genes rely on specific type of evidence, and are therefore limited in terms of applicability. More often than not, the type of evidence available for diseases varies-for example, we may know linked genes, keywords associated with the disease obtained by mining text, or co-occurrence of disease symptoms in patients. Similarly, the type of evidence available for genes varies-for example, specific microarray probes convey information only for certain sets of genes. In this article, we apply a novel matrix-completion method called Inductive Matrix Completion to the problem of predicting gene-disease associations; it combines multiple types of evidence (features) for diseases and genes to learn latent factors that explain the observed gene-disease associations. We construct features from different biological sources such as microarray expression data and disease-related textual data. A crucial advantage of the method is that it is inductive; it can be applied to diseases not seen at training time, unlike traditional matrix-completion approaches and network-based inference methods that are transductive. Comparison with state-of-the-art methods on diseases from the Online Mendelian Inheritance in Man (OMIM) database shows that the proposed approach is substantially better-it has close to one-in-four chance of recovering a true association in the top 100 predictions, compared to the recently proposed Catapult method (second best) that has <15% chance. We demonstrate that the inductive method is particularly effective for a query disease with no previously known gene associations, and for predicting novel genes, i.e. genes that are previously not linked to diseases. Thus the method is capable of predicting novel genes even for well-characterized diseases. We also validate the novelty of predictions by evaluating the method on recently reported OMIM associations and on associations recently reported in the literature. Source code and datasets can be downloaded from http://bigdata.ices.utexas.edu/project/gene-disease. © The Author 2014. Published by Oxford University Press.
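The core of IMC is a bilinear form: the predicted association matrix is X W H^T Y^T, where X and Y hold disease and gene features. The numpy sketch below shows only the inductive scoring step for an unseen disease, with W and H as random placeholders standing in for learned factors.

```python
import numpy as np

rng = np.random.default_rng(7)
fd, fg, k = 50, 80, 10        # disease-feature, gene-feature, latent dims

# W and H would be learned by minimizing the squared error of
# X W H^T Y^T against observed associations, plus regularization;
# random placeholders are used here purely for illustration.
W = rng.normal(size=(fd, k))
H = rng.normal(size=(fg, k))

def score_disease(x_d, Y):
    """IMC prediction for one disease against all genes:
    s = (x_d^T W)(Y H)^T.  Because the model is inductive, x_d can
    describe a disease never seen during training."""
    return (x_d @ W) @ (Y @ H).T

x_new = rng.normal(size=fd)           # feature vector of an unseen disease
Y = rng.normal(size=(500, fg))        # feature matrix for 500 genes
scores = score_disease(x_new, Y)
print("top-5 candidate gene indices:", np.argsort(scores)[::-1][:5])
```

The inductive property the abstract emphasizes is visible here: scoring needs only the new disease's feature vector, not a retrained factorization.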
The Scaling of Broadband Shock-Associated Noise with Increasing Temperature
NASA Technical Reports Server (NTRS)
Miller, Steven A. E.
2013-01-01
A physical explanation for the saturation of broadband shock-associated noise (BBSAN) intensity with increasing jet stagnation temperature has eluded investigators. An explanation is proposed for this phenomenon with the use of an acoustic analogy. To isolate the relevant physics, the scaling of the BBSAN peak intensity level at the sideline observer location is examined. The equivalent source within the framework of an acoustic analogy for BBSAN is based on local field quantities at shock wave shear layer interactions. The equivalent source, combined with accurate calculations of the propagation of sound through the jet shear layer using an adjoint vector Green's function solver for the linearized Euler equations, allows for predictions that retain the scaling with respect to stagnation pressure and capture the saturation of BBSAN with increasing stagnation temperature. The sources and vector Green's function have arguments involving the steady Reynolds-Averaged Navier-Stokes solution of the jet. It is proposed that saturation of BBSAN with increasing jet temperature occurs due to a balance between the amplification of the sound propagating through the shear layer and the scaling of the source term.
NASA Astrophysics Data System (ADS)
Malviya, Devesh; Borage, Mangesh Balkrishna; Tiwari, Sunil
2017-12-01
This paper investigates the possibility of applying Resonant Immittance Converters (RICs) as a current source for the current-fed symmetrical Capacitor-Diode Voltage Multiplier (CDVM), with the LCL-T Resonant Converter (RC) as an example. Firstly, a detailed characterization of the current-fed symmetrical CDVM is carried out using repeated simulations, followed by normalization of the simulation results to derive closed-form curve-fit equations that predict the operating modes, output voltage and ripple in terms of the operating parameters. RICs, owing to their ability to convert a voltage source into a current source, are a natural candidate for realizing the current source for the current-fed symmetrical CDVM. Detailed analysis, optimization and design of the LCL-T RC with CDVM are performed in this paper, and a step-by-step design procedure for the CDVM and the converter is proposed. A 5-stage prototype symmetrical CDVM driven by an LCL-T RC to produce a 2.5 kV, 50 mA dc output is designed, built and tested to validate the findings of the analysis and simulation.
Retrieving definitional content for ontology development.
Smith, L; Wilbur, W J
2004-12-01
Ontology construction requires an understanding of the meaning and usage of its encoded concepts. While definitions found in dictionaries or glossaries may be adequate for many concepts, the actual usage in expert writing could be a better source of information for many others. The goal of this paper is to describe an automated procedure for finding definitional content in expert writing. The approach uses machine learning on phrasal features to learn when sentences in a book contain definitional content, as determined by their similarity to glossary definitions provided in the same book. The end result is not a concise definition of a given concept, but for each sentence, a predicted probability that it contains information relevant to a definition. The approach is evaluated automatically for terms with explicit definitions, and manually for terms with no available definition.
Computer model to simulate ionizing radiation effects correlates with experimental data
NASA Astrophysics Data System (ADS)
Perez-Poch, Antoni
Exposure to radiation from high-energy protons and other ionizing particles is a major challenge for long-term space missions. The specific effect of such radiation on hematopoietic cells is still not fully understood. A number of experiments have been conducted on the ground and in space. These experiments, on the one hand, measure the extent of damage via blood markers; on the other hand, they aim to quantify the correlation between the dose and energy of the radiation particles and their ability to impair hematopoietic stem and progenitor cell function. We present a computer model based on a neural network that assesses the relationship between dose, energy and the number of hits on a particular cell and the damage incurred by human marrow cells. Calibration of the network is performed with the experimental data available in the literature. Different sources of ionizing radiation at different doses (0-90 cGy) and along different patterns of long-term exposure scenarios are simulated. Results are shown for a continuous variation of doses and are compared with specific data available in the literature. Some predictions are inferred for long-term spaceflight scenarios, and the risk of jeopardizing a mission due to a major dysfunction of the bone marrow is calculated. The method has proved successful in reproducing specific experimental data. We also discuss the significance and validity of the predicted ionizing radiation effects in situations such as long-term missions over a continuous range of doses.
Respiratory syncytial virus tracking using internet search engine data.
Oren, Eyal; Frere, Justin; Yom-Tov, Eran; Yom-Tov, Elad
2018-04-03
Respiratory Syncytial Virus (RSV) is the leading cause of hospitalization in children less than 1 year of age in the United States. Internet search engine queries may provide high resolution temporal and spatial data to estimate and predict disease activity. After filtering an initial list of 613 symptoms using high-resolution Bing search logs, we used Google Trends data between 2004 and 2016 for a smaller list of 50 terms to build predictive models of RSV incidence for five states where long-term surveillance data was available. We then used domain adaptation to model RSV incidence for the 45 remaining US states. Surveillance data sources (hospitalization and laboratory reports) were highly correlated, as were laboratory reports with search engine data. The four terms which were most often statistically significantly correlated as time series with the surveillance data in the five state models were RSV, flu, pneumonia, and bronchiolitis. Using our models, we tracked the spread of RSV by observing the time of peak use of the search term in different states. In general, the RSV peak moved from south-east (Florida) to the north-west US. Our study represents the first time that RSV has been tracked using Internet data results and highlights successful use of search filters and domain adaptation techniques, using data at multiple resolutions. Our approach may assist in identifying spread of both local and more widespread RSV transmission and may be applicable to other seasonal conditions where comprehensive epidemiological data is difficult to collect or obtain.
Smith, B; Hassen, A; Hinds, M; Rice, D; Jones, D; Sauber, T; Iiams, C; Sevenich, D; Allen, R; Owens, F; McNaughton, J; Parsons, C
2015-03-01
The DE values of corn grain for pigs will differ among corn sources. More accurate prediction of DE may improve diet formulation and reduce diet cost. Corn grain sources (n = 83) were assayed with growing swine (20 kg) in DE experiments with total collection of feces, with 3-wk-old broiler chicks in nitrogen-corrected apparent ME (AME) trials, and with cecectomized adult roosters in nitrogen-corrected true ME (TME) studies. Additional AME data for the corn grain source set were generated based on an existing near-infrared transmittance prediction model (near-infrared transmittance-predicted AME [NIT-AME]). Corn source nutrient composition was determined by wet chemistry methods. These data were then used to 1) test the accuracy of predicting swine DE of individual corn sources based on available literature equations and nutrient composition and 2) develop models for predicting DE of sources from nutrient composition and the cross-species information gathered above (AME, NIT-AME, and TME). The overall measured DE, AME, NIT-AME, and TME values were 4,105 ± 11, 4,006 ± 10, 4,004 ± 10, and 4,086 ± 12 kcal/kg DM, respectively. Prediction models were developed using 80% of the corn grain sources; the remaining 20% were reserved for validation of the developed prediction equations. Literature equations based on nutrient composition proved imprecise for predicting corn DE; the root mean square error of prediction ranged from 105 to 331 kcal/kg, an error of 2.6 to 8.8%. Yet among the corn composition traits, 4-variable models developed in the current study provided adequate prediction of DE (model R2 ranging from 0.76 to 0.79 and root mean square error [RMSE] of 50 kcal/kg). When the prediction equations were tested using the validation set, these models had a 1 to 1.2% error of prediction. Simple linear equations from AME, NIT-AME, or TME provided an accurate prediction of DE for individual sources (R2 ranged from 0.65 to 0.73 and RMSE ranged from 50 to 61 kcal/kg). The percentage error of prediction based on the validation data set was greater (1.4%) for the TME model than for the NIT-AME or AME models (1 and 1.2%, respectively), indicating that swine DE values can be accurately predicted using AME or NIT-AME. In conclusion, regression equations developed from broiler measurements or from analyzed nutrient composition proved adequate to reliably predict the DE of commercially available corn hybrids for growing pigs.
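The simple linear calibration of DE from AME that the abstract reports can be mimicked as below. The synthetic data are centered on the reported means (DE ≈ 4,105 and AME ≈ 4,006 kcal/kg DM), but the slope, intercept, and noise level are invented, so the fitted coefficients should not be read as the study's.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic corn sources centered on the reported means; the linear
# relation and its noise are assumed for illustration only.
n = 83
ame = rng.normal(4006, 60, n)
de = 500 + 0.9 * ame + rng.normal(0, 50, n)

train, test = slice(0, 66), slice(66, n)        # ~80/20 split, as in the study
b1, b0 = np.polyfit(ame[train], de[train], 1)   # DE = b0 + b1 * AME
pred = b0 + b1 * ame[test]

rmse = np.sqrt(np.mean((pred - de[test]) ** 2))
pct_err = np.mean(np.abs(pred - de[test]) / de[test]) * 100
print(f"DE = {b0:.0f} + {b1:.3f} * AME;  RMSE = {rmse:.0f} kcal/kg;  "
      f"mean error = {pct_err:.1f}%")
```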
Distributed and Collaborative Software Analysis
NASA Astrophysics Data System (ADS)
Ghezzi, Giacomo; Gall, Harald C.
Throughout the years software engineers have come up with a myriad of specialized tools and techniques that focus on a certain type of
Dissolved Solids in Streams of the Conterminous United States
NASA Astrophysics Data System (ADS)
Anning, D. W.; Flynn, M.
2014-12-01
Studies have shown that excessive dissolved-solids concentrations in water can have adverse effects on the environment and on agricultural, municipal, and industrial water users. Such effects motivated the U.S. Geological Survey's National Water-Quality Assessment Program to develop a SPAtially-Referenced Regression on Watershed Attributes (SPARROW) model to improve the understanding of dissolved solids in streams of the United States. Using the SPARROW model, annual dissolved-solids loads from 2,560 water-quality monitoring stations were statistically related to several spatial datasets serving as surrogates for dissolved-solids sources and transport processes. Sources investigated in the model included geologic materials, road de-icers, urban lands, cultivated lands, and pasture lands. Factors affecting transport from these sources to streams in the model included climate, soil, vegetation, terrain, population, irrigation, and artificial-drainage characteristics. The SPARROW model was used to predict long-term mean annual conditions for dissolved-solids sources, loads, yields, and concentrations in about 66,000 stream reaches and corresponding incremental catchments nationwide. The estimated total amount of dissolved solids delivered to the Nation's streams is 272 million metric tons (Mt) annually, of which 194 million Mt (71%) are from geologic sources, 38 million Mt (14%) are from road de-icers, 18 million Mt (7%) are from pasture lands, 14 million Mt (5 %) are from urban lands, and 8 million Mt (3%) are from cultivated lands. The median incremental-catchment yield delivered to local streams is 26 metric tons per year per square kilometer [(Mt/yr)/km2]. Ten percent of the incremental catchments yield less than 4 (Mt/yr)/km2, and 10 percent yield more than 90 (Mt/yr)/km2. In 13% of the reaches, predicted flow-weighted concentrations exceed 500 mg/L—the U.S. Environmental Protection Agency secondary non-enforceable drinking-water standard.
Critical Source Area Delineation: The representation of hydrology in effective erosion modeling.
NASA Astrophysics Data System (ADS)
Fowler, A.; Boll, J.; Brooks, E. S.; Boylan, R. D.
2017-12-01
Despite decades of conservation and millions of conservation dollars, nonpoint source sediment loading associated with agricultural disturbance continues to be a significant problem in many parts of the world. Local and national conservation organizations are interested in targeting critical source areas for control strategy implementation. Currently, conservation practices are selected and located based on Revised Universal Soil Loss Equation (RUSLE) hillslope erosion modeling, and the Natural Resources Conservation Service will soon be transitioning to the Water Erosion Prediction Project (WEPP) model for the same purpose. We present an assessment of critical source areas targeted with RUSLE, WEPP, and a regionally validated hydrology model, the Soil Moisture Routing (SMR) model, to compare the location of critical areas for sediment loading and the effectiveness of control strategies. The three models are compared for the Palouse dryland cropping region of the inland Northwest, with uncalibrated analyses of the Kamiache watershed using publicly available soils, land-use, and long-term simulated climate data. Critical source areas were mapped, and the side-by-side comparison exposes the differences in the location and timing of runoff and erosion predictions. RUSLE results appear most sensitive to slope-driven processes associated with infiltration excess. SMR captured saturation-excess-driven runoff events located at the toe-slope position, while WEPP was able to capture both infiltration-excess and saturation-excess processes depending on soil type and management. A methodology is presented for down-scaling basin-level screening to the hillslope management scale for local control strategies. Information on the location of runoff and erosion, organized by the driving runoff mechanism, is critical for effective treatment and conservation.
SHM-Based Probabilistic Fatigue Life Prediction for Bridges Based on FE Model Updating
Lee, Young-Joo; Cho, Soojin
2016-01-01
Fatigue life prediction for a bridge should be based on the current condition of the bridge, and various sources of uncertainty, such as material properties, anticipated vehicle loads and environmental conditions, make the prediction very challenging. This paper presents a new approach for probabilistic fatigue life prediction for bridges using finite element (FE) model updating based on structural health monitoring (SHM) data. Recently, various types of SHM systems have been used to monitor and evaluate the long-term structural performance of bridges. For example, SHM data can be used to estimate the degradation of an in-service bridge, which makes it possible to update the initial FE model. The proposed method consists of three steps: (1) identifying the modal properties of a bridge, such as mode shapes and natural frequencies, based on the ambient vibration under passing vehicles; (2) updating the structural parameters of an initial FE model using the identified modal properties; and (3) predicting the probabilistic fatigue life using the updated FE model. The proposed method is demonstrated by application to a numerical model of a bridge, and the impact of FE model updating on the bridge fatigue life is discussed. PMID:26950125
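Step (2) of the proposed method, updating structural parameters so that model frequencies match the identified ones, reduces to a small least-squares problem. The Python sketch below updates two stiffness scale factors of a hypothetical 2-DOF shear model; masses, nominal stiffnesses, and "measured" frequencies are illustrative, not from the paper.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import least_squares

M = np.diag([1000.0, 800.0])                 # lumped masses, kg (assumed)

def K(theta):
    """Stiffness of a 2-DOF shear model, scaled by update factors theta."""
    k1, k2 = 5e6 * theta[0], 4e6 * theta[1]  # nominal stiffnesses, N/m (assumed)
    return np.array([[k1 + k2, -k2],
                     [-k2,      k2]])

def natural_freqs(theta):
    # Generalized eigenproblem K phi = lambda M phi; lambda = omega^2
    lam = eigh(K(theta), M, eigvals_only=True)
    return np.sqrt(lam) / (2 * np.pi)        # natural frequencies, Hz

# Frequencies "identified" from ambient vibration under passing vehicles
f_measured = np.array([5.1, 13.2])           # illustrative values

res = least_squares(lambda th: natural_freqs(th) - f_measured,
                    x0=[1.0, 1.0], bounds=(0.1, 2.0))
print("updated stiffness factors:", res.x.round(3))
print("model frequencies after updating:", natural_freqs(res.x).round(2))
```

The updated factors then feed the fatigue-life step: stress ranges computed from the updated FE model under traffic load reflect the degraded, as-monitored structure rather than the as-designed one.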
VICTORIA-92 pretest analyses of PHEBUS-FPT0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bixler, N.E.; Erickson, C.M.
FPT0 is the first of six tests that are scheduled to be conducted in an experimental reactor in Cadarache, France. The test apparatus consists of an in-pile fuel bundle, an upper plenum, a hot leg, a steam generator, a cold leg, and a small containment. Thus, the test is integral in the sense that it attempts to simulate all of the processes that would be operative in a severe nuclear accident. In FPT0, the fuel will be trace-irradiated; in subsequent tests high-burnup fuel will be used. This report discusses separate pretest analyses of the FPT0 fuel bundle and primary circuit that have been conducted using the USNRC's source term code, VICTORIA-92. Predictions for the release of fission product, control rod, and structural elements from the test section are compared with those given by CORSOR-M. In general, the releases predicted by VICTORIA-92 occur earlier than those predicted by CORSOR-M. The other notable difference is that U release is predicted to be on a par with that of the control rod elements; CORSOR-M predicts U release to be about 2 orders of magnitude greater.
NASA Technical Reports Server (NTRS)
Homicz, G. F.; Moselle, J. R.
1985-01-01
A hybrid numerical procedure is presented for the prediction of the aerodynamic and acoustic performance of advanced turboprops. A hybrid scheme is proposed which in principle leads to a consistent simultaneous prediction of both fields. In the inner flow a finite difference method, the Approximate-Factorization Alternating-Direction-Implicit (ADI) scheme, is used to solve the nonlinear Euler equations. In the outer flow the linearized acoustic equations are solved via a Boundary-Integral Equation (BIE) method. The two solutions are iteratively matched across a fictitious interface in the flow so as to maintain continuity. At convergence the resulting aerodynamic load prediction will automatically satisfy the appropriate free-field boundary conditions at the edge of the finite difference grid, while the acoustic predictions will reflect the back-reaction of the radiated field on the magnitude of the loading source terms, as well as refractive effects in the inner flow. The equations and logic needed to match the two solutions are developed and the computer program implementing the procedure is described. Unfortunately, no converged solutions were obtained, due to unexpectedly large running times. The reasons for this are discussed and several means to alleviate the situation are suggested.
NASA Astrophysics Data System (ADS)
D'Amico, Sebastiano; Akinci, Aybige; Pischiutta, Marta
2018-07-01
In this paper we characterize the high-frequency (1.0-10 Hz) seismic wave crustal attenuation and the source excitation in the Sicily Channel and surrounding regions using background seismicity from a weak-motion database. The data set includes 15 995 waveforms from earthquakes with local magnitude ranging from 2.0 to 4.5 recorded between 2006 and 2012. The observed and predicted ground motions from the weak-motion data are evaluated in several narrow frequency bands from 0.25 to 20.0 Hz. The filtered observed peaks are regressed to specify a proper functional form for the regional attenuation, excitation and site-specific terms separately. The results are then used to calibrate effective theoretical attenuation and source excitation models using random vibration theory. In the log-log domain, the regional seismic wave attenuation and the geometrical spreading coefficient are modelled together. The geometrical spreading coefficient g(r) is modelled with a bilinear piecewise functional form, given as g(r) ∝ r^-1.0 for short distances (r < 50 km) and g(r) ∝ r^-0.8 for larger distances (r > 50 km). A frequency-dependent quality factor, the inverse of the seismic attenuation parameter, Q(f) = 160(f/f_ref)^0.35 (where f_ref = 1.0 Hz), is combined with the geometrical spreading. The source excitation terms are defined at a selected reference distance with a magnitude-independent spectral roll-off parameter, κ ≈ 0.04 s, and with a Brune stress drop parameter increasing with moment magnitude, from Δσ = 2 MPa for Mw = 2.0 to Δσ = 13 MPa for Mw = 4.5. For events with M ≤ 4.5 (Mw = 4.5 being the maximum available in the data set), the stress parameters are obtained by correlating the empirical excitation/source spectra with the Brune spectral model as a function of magnitude. For larger magnitudes (Mw > 4.5), outside the range of the calibration data set where no recorded data are available, we extrapolate our results by calibrating the stress parameters of the Brune source spectrum against the Bindi et al. ground-motion prediction equation, selected as the reference model (hereafter ITA10). Finally, the weak-motion-based model parameters are used in a stochastic approach to predict a set of region-specific spectral ground-motion parameters (peak ground acceleration, peak ground velocity, and 0.3 and 1.0 Hz spectral acceleration) for a generic rock site as a function of distance between 10 and 250 km and magnitude between M 2.0 and M 7.0.
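Combining the calibrated pieces quoted above, relative spectral amplitudes can be predicted as geometrical spreading times anelastic decay, A(r, f) = g(r) exp(-pi f r / (Q(f) beta)). The Python sketch below uses the abstract's g(r) hinged at 50 km and Q(f) = 160(f/1 Hz)^0.35; the shear-wave velocity beta = 3.5 km/s is an assumed value.

```python
import numpy as np

BETA = 3.5            # shear-wave velocity, km/s (assumed)
R_REF, R_X = 1.0, 50.0  # reference distance and spreading hinge, km

def g(r):
    """Bilinear geometrical spreading from the abstract:
    r^-1.0 out to 50 km, r^-0.8 beyond (continuous at the hinge)."""
    r = np.asarray(r, dtype=float)
    near = (R_REF / r) ** 1.0
    far = (R_REF / R_X) ** 1.0 * (R_X / r) ** 0.8
    return np.where(r <= R_X, near, far)

def q(f):
    return 160.0 * f ** 0.35      # Q(f) with f_ref = 1 Hz, from the abstract

def rel_amplitude(r, f):
    """Fourier amplitude relative to the 1 km reference distance."""
    return g(r) * np.exp(-np.pi * f * r / (q(f) * BETA))

for r in (10, 50, 100, 250):
    print(f"r = {r:3d} km:",
          " ".join(f"{rel_amplitude(r, f):.2e}" for f in (1.0, 5.0, 10.0)))
```

Feeding such a spectral model, together with the κ and Brune source terms, through random vibration theory is the stochastic step the abstract describes for obtaining PGA and the other response quantities.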
A method for predicting static-to-flight effects on coaxial jet noise
NASA Astrophysics Data System (ADS)
Bryce, William D.; Chinoy, Cyrus B.
2016-08-01
Previously published work has provided a theoretical modelling of the jet noise from coaxial nozzle configurations in the form of component sources which can each be quantified in terms of modified single-stream jets. This modelling has been refined and extended to cover a wide range of the operating conditions of aircraft turbofan engines with separate exhaust flows, encompassing area ratios from 0.8 to 4. The objective has been to establish a basis for predicting the static-to-flight changes in coaxial jet noise by applying single-stream flight effects to each of the sources comprising the modelling of the coaxial jet noise under static conditions. Relatively few experimental test points are available for validation, although these do cover the full extent of the jet conditions and area ratios considered. The experimental results are limited in their frequency range by practical considerations, but the static-to-flight changes in the third-octave SPLs are predicted to within a standard deviation of 0.4 dB, although the complex effects of jet refraction and convection cause the errors to increase at low flight emission angles to the jet axis. The modelling also provides useful insights into the mechanisms involved in the generation of coaxial jet noise and has facilitated the identification of inadequacies in the experimental simulation of flight effects.
NASA Technical Reports Server (NTRS)
Farassat, F.; Farris, Mark
1999-01-01
There are several approaches to the prediction of the noise from sources on high speed surfaces. Two of these are the Kirchhoff and the Ffowcs Williams-Hawkings methods. It can be shown that both of these methods depend on the solution of the wave equation with mathematically similar inhomogeneous source terms. Two subsonic solutions, known as Formulations 1 and 1A of Langley, are simple and efficient for noise prediction. The supersonic solution, known as Formulation 3, is very complicated and difficult to code. Because of the complexity of the result, the computation time is longer than with the subsonic formulas. Furthermore, it is difficult to assess the accuracy of the noise prediction. We have been searching for a new and simpler supersonic formulation without these shortcomings. At the last AIAA Aeroacoustics Conference in Toulouse, Farassat, Dunn and Brentner presented a paper in which such a result was presented, called Formulation 4 of Langley. In this paper we will present two analytic tests of the validity of this formulation: 1) the noise from a dipole distribution on the unit circle whose strength varies radially with the square of the distance from the center, and 2) the noise from a dipole distribution on the unit sphere whose strength varies with the cosine of the angle from the polar axis. We will discuss the question of singularities of Formulation 4.
Cancellation of spurious arrivals in Green's function extraction and the generalized optical theorem
Snieder, R.; Van Wijk, K.; Haney, M.; Calvert, R.
2008-01-01
The extraction of the Green's function by cross correlation of waves recorded at two receivers now finds wide application. We show that for an arbitrary small scatterer, the cross terms of scattered waves give an unphysical wave with an arrival time that is independent of the source position. This constitutes an apparent inconsistency because theory predicts that such spurious arrivals do not arise after integration over a complete source aperture. This puzzling inconsistency can be resolved for an arbitrary scatterer by integrating the contribution of all sources in the stationary phase approximation to show that the stationary phase contributions to the source integral cancel the spurious arrival by virtue of the generalized optical theorem. This work constitutes an alternative derivation of that theorem. When the source aperture is incomplete, the spurious arrival is not canceled and could be misinterpreted as part of the Green's function. We give an example of how spurious arrivals provide information about the medium complementary to that given by the direct and scattered waves; the spurious waves can thus potentially be used to better constrain the medium. © 2008 The American Physical Society.
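The extraction step itself is a plain cross correlation. A toy sketch with synthetic traces (all arrival times and amplitudes hypothetical) shows how the physical differential arrival appears at one lag, while a cross term between a direct and a scattered wave produces a second lag that would not move with the source position — the spurious arrival discussed above:

```python
import numpy as np

fs = 1000.0                          # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)

def ricker(t, t0, f0=25.0):
    """Ricker wavelet centred at t0 — stands in for one arrival."""
    a = (np.pi * f0 * (t - t0))**2
    return (1.0 - 2.0 * a) * np.exp(-a)

# receiver A: direct arrival at 0.30 s
# receiver B: direct arrival at 0.45 s plus a scattered arrival at 0.70 s
rec_a = ricker(t, 0.30)
rec_b = ricker(t, 0.45) + 0.4 * ricker(t, 0.70)

xcorr = np.correlate(rec_b, rec_a, mode="full")
lags = np.arange(-len(t) + 1, len(t)) / fs

# physical lag: 0.45 - 0.30 = 0.15 s; cross-term lag: 0.70 - 0.30 = 0.40 s
for lag in (0.15, 0.40):
    i = np.argmin(np.abs(lags - lag))
    print(f"lag {lag:.2f} s  correlation {xcorr[i]:.3f}")
print("dominant lag:", lags[np.argmax(xcorr)], "s")
```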
Water Resource Assessment in KRS Reservoir Using Remote Sensing and GIS Modelling
NASA Astrophysics Data System (ADS)
Manubabu, V. H.; Gouda, K. C.; Bhat, N.; Reddy, A.
2014-12-01
Fresh water resources have become increasingly important in recent times owing to population growth, pollution, and over-exploitation of groundwater, among other factors. Efficient measures for recharging groundwater are largely lacking, while climatological impacts on water resources, such as global warming, are exacerbating water shortages at the same time as growing populations raise the demand for freshwater in agriculture, industry, and energy production. Analyzing future changes in regional water availability is therefore a necessary and challenging task, and assessing and predicting the fresh water present in a lake or reservoir is essential for better decision making on the optimal use of surface water. The present study provides a practical discussion of a methodology for assessing and predicting the amount of surface water available in the future using Remote Sensing (RS) data, Geographical Information System (GIS) techniques, and a General Circulation Model (GCM). The study focuses on one of the largest reservoirs in India, the Krishna Raja Sagara (KRS) reservoir in the state of Karnataka. Multispectral satellite images, IRS LISS III and Landsat 8, obtained from the open web portals NRSC-Bhuvan and NASA Earth Explorer respectively, are used to identify temporal changes in the water quantity of the reservoir for the period 2000 to 2014. Water volumes are calculated using the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global DEM over the reservoir basin. Hydrometeorological parameters are also studied using multi-source observational data, and empirical water budget models for the reservoir, in terms of rainfall, temperature, runoff, and water inflow and outflow, are developed and analyzed. Statistical analyses quantify the relation between reservoir water volume and the hydrological parameters (Figure 1). Finally, a GCM is used to predict major hydrometeorological parameters such as rainfall, and the future water availability, in terms of water volume, is simulated from these predictions using the empirical water budget model.
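As an illustration of the water-extent step, a common approach with multispectral images such as LISS III or Landsat 8 is the Normalized Difference Water Index (NDWI). The following is a minimal sketch, assuming the green and near-infrared bands are already loaded as reflectance arrays; the zero threshold and 30 m pixel size are conventional placeholders, not values from the study:

```python
import numpy as np

def water_area_km2(green, nir, pixel_size_m=30.0, threshold=0.0):
    """Classify water with NDWI = (G - NIR) / (G + NIR) and return its area."""
    ndwi = (green - nir) / np.maximum(green + nir, 1e-9)  # avoid divide-by-zero
    water_mask = ndwi > threshold                         # water pixels are NDWI-positive
    return water_mask.sum() * pixel_size_m**2 / 1e6

# synthetic 4-pixel scene: top row water-like (high green, low NIR), bottom row land
green = np.array([[0.30, 0.28], [0.10, 0.12]])
nir = np.array([[0.05, 0.06], [0.25, 0.30]])
print(water_area_km2(green, nir), "km^2")
```

Repeating the same classification on each image date gives the temporal water-extent series, which can then be converted to volume with a DEM of the reservoir basin.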
Density-driven transport of gas phase chemicals in unsaturated soils
NASA Astrophysics Data System (ADS)
Fen, Chiu-Shia; Sun, Yong-tai; Cheng, Yuen; Chen, Yuanchin; Yang, Whaiwan; Pan, Changtai
2018-01-01
Variations of gas phase density are responsible for advective and diffusive transport of organic vapors in unsaturated soils. Laboratory experiments were conducted to explore dense gas transport (sulfur hexafluoride, SF6) from different source densities through a nitrogen gas-dry soil column. Gas pressures and SF6 densities at transient state were measured along the soil column for three transport configurations (horizontal, vertically upward and vertically downward transport). These measurements and others reported in the literature were compared with simulation results obtained from two models based on different diffusion approaches: the dusty gas model (DGM) equations and a Fickian-type molar fraction-based diffusion expression. The results show that the DGM- and Fickian-based models predicted similar dense gas density profiles, which matched the measured data well for horizontal transport of dense gas at low to high source densities, although the pressure variations predicted in the soil column were opposite to the measurements. The pressure evolutions predicted by both models were similar in trend to the measured ones for vertical transport of dense gas. However, differences between the dense gas densities predicted by the DGM- and Fickian-based models were discernible for vertically upward transport of dense gas even at low source densities, and the DGM-based predictions matched the measured data better than the Fickian results did. For vertically downward transport, the dense gas densities predicted by both models were not greatly different from our experimental measurements, but were substantially greater than the observations obtained from the literature, especially at high source densities. Further research will be necessary to explore the factors affecting downward transport of dense gas in soil columns. Using the measured data to compute flux components of SF6 showed that the magnitudes of the diffusive flux component based on the Fickian-type diffusion expressions in terms of molar concentration, molar fraction and mass density fraction gradient were almost the same. However, they were more than 24% greater than the result computed with the mass fraction gradient, and more than twice the DGM-based result. As a consequence, the DGM-based total flux of SF6 was much smaller in magnitude than the Fickian result, not only for horizontal transport (diffusion-dominated) but also for vertical transport (advection and diffusion) of dense gas. In particular, the Fickian-based total flux was more than twice the magnitude of the DGM result for vertically upward transport of dense gas.
Collimator-free photon tomography
Dilmanian, F. Avraham; Barbour, Randall L.
1998-10-06
A method of uncollimated single photon emission computed tomography includes administering a radioisotope to a patient for producing gamma ray photons from a source inside the patient. Emissivity of the photons is measured externally of the patient with an uncollimated gamma camera at a plurality of measurement positions surrounding the patient for obtaining corresponding energy spectrums thereat. Photon emissivity at the plurality of measurement positions is predicted using an initial prediction of an image of the source. The predicted and measured photon emissivities are compared to obtain differences therebetween. Prediction and comparison is iterated by updating the image prediction until the differences are below a threshold for obtaining a final prediction of the source image.
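The predict-compare-update loop described above can be sketched with a linear forward model. This is a minimal illustration, assuming a known system matrix A mapping source voxels to measured emissivities and a multiplicative ML-EM-style update; the patent text does not specify the update rule, so that choice is an assumption:

```python
import numpy as np

def iterative_reconstruction(A, measured, n_iter=200):
    """Predict emissivity from the current image, compare to measurements,
    and update the image until the two agree (ML-EM-style loop)."""
    image = np.ones(A.shape[1])                 # initial prediction of the source
    sensitivity = A.sum(axis=0)                 # per-voxel normalisation
    for _ in range(n_iter):
        predicted = A @ image                   # forward-project current image
        ratio = measured / np.maximum(predicted, 1e-12)          # compare
        image *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)  # update
    return image

# toy 2-voxel source seen from 3 measurement positions around the patient
A = np.array([[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]])
true_image = np.array([4.0, 1.0])
print(iterative_reconstruction(A, A @ true_image).round(3))  # recovers [4, 1]
```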
Østergaard, Lauge; Adelborg, Kasper; Sundbøll, Jens; Pedersen, Lars; Loldrup Fosbøl, Emil; Schmidt, Morten
2018-05-30
The positive predictive value of an infective endocarditis diagnosis is approximately 80% in the Danish National Patient Registry. However, since infective endocarditis is a heterogeneous disease implying long-term intravenous treatment, we hypothesized that the positive predictive value varies by length of hospital stay. A total of 100 patients with first-time infective endocarditis in the Danish National Patient Registry were identified from January 2010 to December 2012 at the University Hospital of Aarhus and the regional hospitals of Herning and Randers. Medical records were reviewed. We calculated the positive predictive value according to admission length, and separately for patients with a cardiac implantable electronic device and a prosthetic heart valve, using the Wilson score method. Among the 92 medical records available for review, the majority of the patients had an admission length ⩾2 weeks. The positive predictive value increased with length of admission. In patients with admission length <2 weeks the positive predictive value was 65%, while it was 90% for admission length ⩾2 weeks. The positive predictive value was 81% for patients with a cardiac implantable electronic device and 87% for patients with a prosthetic valve. The positive predictive value of the infective endocarditis diagnosis in the Danish National Patient Registry is high for patients with admission length ⩾2 weeks. Using this algorithm, the Danish National Patient Registry provides a valid source for identifying infective endocarditis for research.
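For reference, the Wilson score interval used for these positive predictive values can be computed as follows (a minimal sketch; the 63-of-70 example counts are hypothetical, not taken from the study):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a proportion (e.g. a PPV)."""
    p = successes / n
    denom = 1.0 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# e.g. a PPV of 90% based on a hypothetical 63 of 70 confirmed diagnoses
print(wilson_interval(63, 70))
```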
Design of the Next Generation Aircraft Noise Prediction Program: ANOPP2
NASA Technical Reports Server (NTRS)
Lopes, Leonard V., Dr.; Burley, Casey L.
2011-01-01
The requirements, constraints, and design of NASA's next generation Aircraft NOise Prediction Program (ANOPP2) are introduced. Similar to its predecessor (ANOPP), ANOPP2 provides the U.S. Government with an independent aircraft system noise prediction capability that can be used as a stand-alone program or within larger trade studies that include performance, emissions, and fuel burn. The ANOPP2 framework is designed to facilitate the combination of acoustic approaches of varying fidelity for the analysis of noise from conventional and unconventional aircraft. ANOPP2 integrates noise prediction and propagation methods, including those found in ANOPP, into a unified system that is compatible for use within general aircraft analysis software. The design of the system is described in terms of its functionality and capability to perform predictions accounting for distributed sources, installation effects, and propagation through a non-uniform atmosphere including refraction and the influence of terrain. The philosophy of mixed fidelity noise prediction through the use of nested Ffowcs Williams and Hawkings surfaces is presented and specific issues associated with its implementation are identified. Demonstrations for a conventional twin-aisle and an unconventional hybrid wing body aircraft configuration are presented to show the feasibility and capabilities of the system. Isolated model-scale jet noise predictions are also presented using high-fidelity and reduced order models, further demonstrating ANOPP2's ability to provide predictions for model-scale test configurations.
2014-01-01
We present four models of solution free-energy prediction for druglike molecules utilizing cheminformatics descriptors and theoretically calculated thermodynamic values. We make predictions of solution free energy using physics-based theory alone and using machine learning/quantitative structure–property relationship (QSPR) models. We also develop machine learning models where the theoretical energies and cheminformatics descriptors are used as combined input. These models are used to predict solvation free energy. While direct theoretical calculation does not give accurate results in this approach, machine learning is able to give predictions with a root mean squared error (RMSE) of ∼1.1 log S units in a 10-fold cross-validation for our Drug-Like-Solubility-100 (DLS-100) dataset of 100 druglike molecules. We find that a model built using energy terms from our theoretical methodology as descriptors is marginally less predictive than one built on Chemistry Development Kit (CDK) descriptors. Combining both sets of descriptors allows a further but very modest improvement in the predictions. However, in some cases, this is a statistically significant enhancement. These results suggest that there is little complementarity between the chemical information provided by these two sets of descriptors, despite their different sources and methods of calculation. Our machine learning models are also able to predict the well-known Solubility Challenge dataset with an RMSE value of 0.9–1.0 log S units. PMID:24564264
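A minimal sketch of the 10-fold cross-validated RMSE evaluation described above, using scikit-learn with random placeholder data standing in for the DLS-100 descriptors and measured solubilities:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))        # placeholder for CDK + theoretical-energy descriptors
y = X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=100)   # placeholder log S values

model = RandomForestRegressor(n_estimators=200, random_state=0)
pred = cross_val_predict(model, X, y,
                         cv=KFold(n_splits=10, shuffle=True, random_state=0))
rmse = np.sqrt(np.mean((y - pred) ** 2))
print(f"10-fold CV RMSE: {rmse:.2f} log S units")
```

Swapping in the combined descriptor matrix versus either set alone is how the marginal complementarity reported above would be probed.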
Norman, L.M.; Guertin, D.P.; Feller, M.
2008-01-01
The development of new approaches for understanding processes of urban development and their environmental effects, as well as strategies for sustainable management, is essential in expanding metropolitan areas. This study illustrates the potential of linking urban growth and watershed models to identify problem areas and support long-term watershed planning. Sediment is a primary source of nonpoint-source pollution in surface waters. In urban areas, sediment is intermingled with other surface debris in transport. In an effort to forecast the effects of development on surface-water quality, changes predicted in urban areas by the SLEUTH urban growth model were applied in the context of erosion-sedimentation models (Universal Soil Loss Equation and Spatially Explicit Delivery Models). The models are used to simulate the effect of excluding hot-spot areas of erosion and sedimentation from future urban growth and to predict the impacts of alternative erosion-control scenarios. Ambos Nogales, meaning 'both Nogaleses,' is a name commonly used for the twin border cities of Nogales, Arizona and Nogales, Sonora, Mexico. The Ambos Nogales watershed has experienced a decrease in water quality as a result of urban development in the twin-city area. Population growth rates in Ambos Nogales are high and the resources set in place to accommodate the rapid population influx will soon become overburdened. Because of its remote location and binational governance, monitoring and planning across the border is compromised. One scenario described in this research portrays an improvement in water quality through the identification of high-risk areas using models that simulate their protection from development and replanting with native grasses, while permitting the predicted and inevitable growth elsewhere. This is meant to add to the body of knowledge about forecasting the impact potential of urbanization on sediment delivery to streams for sustainable development, which can be accomplished in a virtual environment. Copyright © 2008 by Bellwether Publishing, Ltd. All rights reserved.
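For reference, the Universal Soil Loss Equation itself is a simple product of factors; the sketch below uses illustrative (hypothetical) factor values to contrast exposed soil with a replanted native-grass cover:

```python
def usle(R, K, LS, C, P):
    """Annual soil loss A (t/ha/yr) = rainfall erosivity R * soil erodibility K
    * slope length-steepness LS * cover-management C * support practice P."""
    return R * K * LS * C * P

# hypothetical values for an urbanizing hillslope vs. the same slope under grass
print(usle(R=300, K=0.3, LS=1.5, C=0.2, P=1.0))    # exposed, disturbed soil
print(usle(R=300, K=0.3, LS=1.5, C=0.01, P=1.0))   # replanted native grass
```

The twenty-fold drop from the cover factor alone illustrates why protecting and replanting hot-spot areas changes the predicted sediment delivery so strongly.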
Kolbe, Jason J; VanMiddlesworth, Paul S; Losin, Neil; Dappen, Nathan; Losos, Jonathan B
2012-01-01
Global change is predicted to alter environmental conditions for populations in numerous ways; for example, invasive species often experience substantial shifts in climatic conditions during introduction from their native to non-native ranges. Whether these shifts elicit a phenotypic response, and how adaptation and phenotypic plasticity contribute to phenotypic change, are key issues for understanding biological invasions and how populations may respond to local climate change. We combined modeling, field data, and a laboratory experiment to test for changing thermal tolerances during the introduction of the tropical lizard Anolis cristatellus from Puerto Rico to Miami, Florida. Species distribution models and bioclimatic data analyses showed lower minimum temperatures, and greater seasonal and annual variation in temperature for Miami compared to Puerto Rico. Two separate introductions of A. cristatellus occurred in Miami about 12 km apart, one in South Miami and the other on Key Biscayne, an offshore island. As predicted from the shift in the thermal climate and the thermal tolerances of other Anolis species in Miami, laboratory acclimation and field acclimatization showed that the introduced South Miami population of A. cristatellus has diverged from its native-range source population by acquiring low-temperature acclimation ability. By contrast, the introduced Key Biscayne population showed little change compared to its source. Our analyses predicted an adaptive response for introduced populations, but our comparisons to native-range sources provided evidence for thermal plasticity in one introduced population but not the other. The rapid acquisition of thermal plasticity by A. cristatellus in South Miami may be advantageous for its long-term persistence there and expansion of its non-native range. Our results also suggest that the common assumption of no trait variation when modeling non-native species distributions is invalid. PMID:22957158
Using RSSCTs to predict field-scale GAC control of DBP formation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cummings, L.; Summers, R.S.
1994-06-01
The primary objective of this study was to evaluate the use of the rapid small-scale column test (RSSCT) for predicting the control of disinfection by-product (DBP) formation by granular activated carbon (GAC). DBP formation was assessed by using a simulated distribution system (SDS) test and measuring trihalomethanes and total organic halide in the influent and effluent of the laboratory- and field-scale columns. It was observed that for the water studied, the RSSCTs effectively predicted the nonabsorbable fraction, time to 50 percent breakthrough, and the shape of the breakthrough curve for DBP formation. The advantage of RSSCTs is that conclusions about the amenability of a GAC for DBP control can be reached in a short time period instead of at the end of a long-term pilot study. The authors recommend that similar studies be conducted with a range of source waters because the effectiveness of GAC is site-specific.
NASA Technical Reports Server (NTRS)
Findlay, J. T.; Kelly, G. M.; Mcconnell, J. G.; Compton, H. R.
1983-01-01
Longitudinal performance comparisons between flight derived and predicted values are presented for the first five NASA Space Shuttle Columbia flights. Though subsonic comparisons are emphasized, comparisons during the transonic and low supersonic regions of flight are included. Computed air data information based on the remotely sensed atmospheric measurements as well as in situ Orbiter Air Data System (ADS) measurements were incorporated. Each air data source provides for comparisons versus the predicted values from the LaRC data base. Principally, L/D, C_L, and C_D comparisons are presented, though some pitching moment results are included. Similarities in flight conditions and spacecraft configuration during the first five flights are discussed. Contributions from the various elements of the data base are presented and the overall differences observed between the flight and predicted values are discussed in terms of expected variations. A discussion on potential data base updates is presented based on the results from the five flights to date.
Marsden, O; Bogey, C; Bailly, C
2014-03-01
The feasibility of using numerical simulation of the fluid dynamics equations for the detailed description of long-range infrasound propagation in the atmosphere is investigated. The two-dimensional (2D) Navier-Stokes equations are solved via high-fidelity spatial finite differences and Runge-Kutta time integration, coupled with a shock-capturing filter procedure allowing large amplitudes to be studied. The accuracy of acoustic prediction over long distances with this approach is first assessed in the linear regime thanks to two test cases featuring an acoustic source placed above a reflective ground in a homogeneous and weakly inhomogeneous medium, solved for a range of grid resolutions. An atmospheric model which can account for realistic features affecting acoustic propagation is then described. A 2D study of the effect of source amplitude on signals recorded at ground level at varying distances from the source is carried out. Modifications both in terms of waveforms and arrival times are described.
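The space-time discretization described — high-order finite differences in space with Runge-Kutta time integration — can be illustrated on the 1D linear advection equation, a common stand-in for one-way acoustic propagation. This sketch uses a 4th-order central difference and classical RK4, not the paper's exact scheme:

```python
import numpy as np

n, L, c = 200, 1.0, 1.0        # grid points, domain length, wave speed
dx = L / n
x = np.arange(n) * dx
dt = 0.4 * dx / c              # CFL-limited time step

def rhs(u):
    """du/dt = -c du/dx with a 4th-order central difference (periodic domain)."""
    dudx = (-np.roll(u, -2) + 8 * np.roll(u, -1)
            - 8 * np.roll(u, 1) + np.roll(u, 2)) / (12 * dx)
    return -c * dudx

u = np.exp(-((x - 0.25) / 0.05) ** 2)   # Gaussian acoustic pulse
for _ in range(int(0.5 / c / dt)):      # propagate the pulse half a domain length
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
print("pulse centre now near x =", x[np.argmax(u)])
```

The full solver additionally needs the shock-capturing filter mentioned above to keep large-amplitude, nonlinear waveforms free of grid-scale oscillations.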
Cascading effects following intervention.
Patterson, Gerald R; Forgatch, Marion S; Degarmo, David S
2010-11-01
Four different sources for cascade effects were examined using 9-year process and outcome data from a randomized controlled trial of a preventive intervention using the Parent Management Training-Oregon Model (PMTO™). The social interaction learning model of child antisocial behavior serves as one basis for predicting change. A second source addresses the issue of comorbid relationships among clinical diagnoses. The third source, collateral changes, describes events in which changes in one family member correlate with changes in another. The fourth component is based on the long-term effects of reducing coercion and increasing positive interpersonal processes within the family. New findings from the 9-year follow-up show that mothers experienced benefits as measured by standard of living (i.e., income, occupation, education, and financial stress) and frequency of police arrests. It is assumed that PMTO reduces the level of coercion, which sets the stage for a massive increase in positive social interaction. In effect, PMTO alters the family environment and thereby opens doors to healthy new social environments.
Smith, David S; Jones, Benedict C; Allan, Kevin
2013-08-01
The functionalist memory perspective predicts that information of adaptive value may trigger specific processing modes. It was recently demonstrated that women's memory is sensitive to cues of male sexual dimorphism (i.e., masculinity) that convey information of adaptive value for mate choice because they signal health and genetic quality, as well as personality traits important in relationship contexts. Here, we show that individual differences in women's mating strategies predict the effect of facial masculinity cues upon memory, strengthening the case for functional design within memory. Using the revised socio-sexual orientation inventory, Experiment 1 demonstrates that women pursuing a short-term, uncommitted mating strategy have enhanced source memory for men with exaggerated versus reduced masculine facial features, an effect that reverses in women who favor long-term committed relationships. The reversal in the direction of the effect indicates that it does not reflect the sex typicality of male faces per se. The same pattern occurred within women's source memory for women's faces, implying that the memory bias does not reflect the perceived attractiveness of faces per se. In Experiment 2, we reran the experiment using men's faces to establish the reliability of the core finding and replicated Experiment 1's results. Masculinity cues may therefore trigger a specific mode within women's episodic memory. We discuss why this mode may be triggered by female faces and its possible role in mate choice. In so doing, we draw upon the encoding specificity principle and the idea that episodic memory limits the scope of stereotypical inferences about male behavior.
Paul, Susannah; Mgbere, Osaro; Arafat, Raouf; Yang, Biru; Santos, Eunice
2017-01-01
Objective: The objective was to forecast and validate prediction estimates of influenza activity in Houston, TX using four years of historical influenza-like illness (ILI) from three surveillance data capture mechanisms. Background: Using novel surveillance methods and historical data to estimate future trends of influenza-like illness can lead to early detection of influenza activity increases and decreases. Anticipating surges gives public health professionals more time to prepare and increase prevention efforts. Methods: Data were obtained from three surveillance systems, Flu Near You, ILINet, and hospital emergency center (EC) visits, with diverse data capture mechanisms. Autoregressive integrated moving average (ARIMA) models were fitted to data from each source for week 27 of 2012 through week 26 of 2016 and used to forecast influenza-like activity for the subsequent 10 weeks. Estimates were then compared to actual ILI percentages for the same period. Results: Forecasted estimates had wide confidence intervals that crossed zero. The forecasted trend direction differed by data source, resulting in a lack of consensus about future influenza activity. ILINet forecasted estimates and actual percentages had the least differences. ILINet performed best when forecasting influenza activity in Houston, TX. Conclusion: Though the three forecasted estimates did not agree on the trend directions, and thus were considered imprecise predictors of long-term ILI activity based on existing data, pooling predictions and careful interpretations may be helpful for short-term intervention efforts. Further work is needed to improve forecast accuracy considering the promise forecasting holds for seasonal influenza prevention and control, and pandemic preparedness.
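A minimal sketch of the fit-and-forecast step with statsmodels, using a synthetic weekly ILI series in place of the Houston data; the ARIMA order here is a placeholder, not the one used in the study:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
weeks = np.arange(208)                       # four seasons of weekly data
ili = 2.0 + 1.5 * np.sin(2 * np.pi * weeks / 52) + rng.normal(scale=0.3, size=208)

model = ARIMA(ili, order=(2, 0, 1)).fit()    # fit to the historical series
forecast = model.get_forecast(steps=10)      # forecast the subsequent 10 weeks
print(forecast.predicted_mean.round(2))
print(forecast.conf_int().round(2))          # the wide intervals noted above
```

Comparing `predicted_mean` against the held-out observed percentages, separately per data source, reproduces the validation design described in the methods.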
Iterative weighting of multiblock data in the orthogonal partial least squares framework.
Boccard, Julien; Rutledge, Douglas N
2014-02-27
The integration of multiple data sources has emerged as a pivotal aspect to assess complex systems comprehensively. This new paradigm requires the ability to separate common and redundant from specific and complementary information during the joint analysis of several data blocks. However, inherent problems encountered when analysing single tables are amplified with the generation of multiblock datasets. Finding the relationships between data layers of increasing complexity therefore constitutes a challenging task. In the present work, an algorithm is proposed for the supervised analysis of multiblock data structures. It associates the advantages of interpretability from the orthogonal partial least squares (OPLS) framework and the ability of common component and specific weights analysis (CCSWA) to weight each data table individually in order to grasp its specificities and efficiently handle the different sources of Y-orthogonal variation. Three applications are proposed for illustration purposes. A first example refers to a quantitative structure-activity relationship study aiming to predict the binding affinity of flavonoids toward the P-glycoprotein based on physicochemical properties. A second application concerns the integration of several groups of sensory attributes for overall quality assessment of a series of red wines. A third case study highlights the ability of the method to combine very large heterogeneous data blocks from Omics experiments in systems biology. Results were compared to the reference multiblock partial least squares (MBPLS) method to assess the performance of the proposed algorithm in terms of predictive ability and model interpretability. In all cases, ComDim-OPLS was demonstrated as a relevant data mining strategy for the simultaneous analysis of multiblock structures by accounting for specific variation sources in each dataset and providing a balance between predictive and descriptive purposes. Copyright © 2014 Elsevier B.V. All rights reserved.
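A much-simplified sketch of the multiblock idea: autoscale each block and normalise it to unit total variance before a joint PLS regression, so no block dominates purely through its size. The iterative block weighting of CCSWA/ComDim-OPLS goes beyond this fixed scaling, so the code below is only a conceptual approximation:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
block1 = rng.normal(size=(40, 15))          # e.g. physicochemical descriptors
block2 = rng.normal(size=(40, 50))          # e.g. sensory or Omics variables
y = block1[:, 0] + block2[:, :2].sum(axis=1) + rng.normal(scale=0.1, size=40)

def block_scale(X):
    """Autoscale columns, then normalise the block to unit Frobenius norm
    so each block enters the joint model with comparable weight."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    return Xs / np.linalg.norm(Xs)

X = np.hstack([block_scale(block1), block_scale(block2)])
model = PLSRegression(n_components=3).fit(X, y)
print("R^2 on training data:", round(model.score(X, y), 3))
```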
Probabilistic Seismic Hazard Analysis for Georgia
NASA Astrophysics Data System (ADS)
Tsereteli, N. S.; Varazanashvili, O.; Sharia, T.; Arabidze, V.; Tibaldi, A.; Bonali, F. L. L.; Russo, E.; Pasquaré Mariotto, F.
2017-12-01
Nowadays, seismic hazard studies are developed in terms of the calculation of Peak Ground Acceleration (PGA), Spectral Acceleration (SA), Peak Ground Velocity (PGV) and other recorded parameters. In the frame of the EMME project, PSH was calculated for Georgia using GMPEs chosen on the basis of selection criteria. In the frame of Project N 216758 (supported by the Shota Rustaveli National Science Foundation (SRNF)), PSH maps were estimated using a hybrid-empirical ground motion prediction equation developed for Georgia. Due to the paucity of seismically recorded information, in this work we focused our research on a more robust dataset related to macroseismic data, and attempted to calculate the probabilistic seismic hazard directly in terms of macroseismic intensity. For this reason, we started by calculating new intensity prediction equations (IPEs) for Georgia taking into account different sets, belonging to the same new database, as well as distances from the seismic source. With respect to the seismic source, in order to improve the quality of the results, we have also hypothesized the size of faults from empirical relations, and calculated new IPEs by considering Joyner-Boore and rupture distances in addition to epicentral and hypocentral distances. Finally, site conditions have been included as variables for IPE calculation. Regarding the database, we used a brand new revised set of macroseismic data and instrumental records for the significant earthquakes that struck Georgia between 1900 and 2002. In particular, a large amount of research and documents related to the macroseismic effects of individual earthquakes, stored in the archives of the Institute of Geophysics, were used as sources for the new macroseismic data. The latter are reported in the Medvedev-Sponheuer-Karnik macroseismic scale (MSK64). For each earthquake the magnitude, the focal depth and the epicenter location are also reported. An online version of the database, with the related metadata, has been produced for the 69 revised earthquakes and is available online (http://www.enguriproject.unimib.it/).
Integrating data types to enhance shoreline change assessments
NASA Astrophysics Data System (ADS)
Long, J.; Henderson, R.; Plant, N. G.; Nelson, P. R.
2016-12-01
Shorelines represent the variable boundary between terrestrial and marine environments. Assessment of geographic and temporal variability in shoreline position and related variability in shoreline change rates is an important part of studies and applications related to impacts from sea-level rise and storms. The results from these assessments are used to quantify future ecosystem services and coastal resilience and guide selection of appropriate coastal restoration and protection designs. But existing assessments typically fail to incorporate all available shoreline observations because they are derived from multiple data types and have different or unknown biases and uncertainties. Shoreline-change research and assessments often focus on either the long-term trajectory using sparse data over multiple decades or shorter-term evolution using data collected more frequently but over a shorter period of time. The combination of data collected with significantly different temporal resolution is not often considered. Also, differences in the definition of the shoreline metric itself can occur, whether using a single or multiple data source(s), due to variation in the signal being detected in the data (e.g. instantaneous land/water interface, swash zone, wrack line, or topographic contours). Previous studies have not explored whether more robust shoreline change assessments are possible if all available data are utilized and all uncertainties are considered. In this study, we test the hypothesis that incorporating all available shoreline data will both improve historical assessments and enhance the predictive capability of shoreline-change forecasts. Using over 250 observations of shoreline position at Dauphin Island, Alabama over the last century, we compare shoreline-change rates derived from individual data sources (airborne lidar, satellite, aerial photographs) with an assessment using the combination of all available data. Biases or simple uncertainties in the shoreline metric from different data types and varying temporal/spatial resolution of the data are examined. As part of this test, we also demonstrate application of data assimilation techniques to predict shoreline position by accurately including the uncertainty in each type of data.
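The heart of combining heterogeneous shoreline observations is weighting each one by its uncertainty. A minimal sketch of an inverse-variance weighted least-squares rate estimate, with hypothetical positions and per-source 1-sigma uncertainties (e.g. old maps and photos are less precise than lidar):

```python
import numpy as np

# survey years, shoreline positions (m), and 1-sigma uncertainties by data type
t = np.array([1950.0, 1972.0, 1998.0, 2005.0, 2015.0])
pos = np.array([0.0, -12.0, -30.0, -33.0, -40.0])    # hypothetical retreat
sigma = np.array([10.0, 8.0, 3.0, 1.0, 0.5])         # maps/photos vs. lidar

# weighted least squares: minimise sum(((pos - (a + b*t)) / sigma)^2)
A = np.vstack([np.ones_like(t), t]).T / sigma[:, None]
b = pos / sigma
coef, *_ = np.linalg.lstsq(A, b, rcond=None)
print(f"weighted shoreline change rate: {coef[1]:.3f} m/yr")
```

Because each residual is scaled by its own sigma, the precise recent observations steer the fitted rate without the noisy historical ones being discarded.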
SGP-1: Prediction and Validation of Homologous Genes Based on Sequence Alignments
Wiehe, Thomas; Gebauer-Jung, Steffi; Mitchell-Olds, Thomas; Guigó, Roderic
2001-01-01
Conventional methods of gene prediction rely on the recognition of DNA-sequence signals, the coding potential or the comparison of a genomic sequence with a cDNA, EST, or protein database. Reasons for limited accuracy in many circumstances are species-specific training and the incompleteness of reference databases. Lately, comparative genome analysis has attracted increasing attention. Several analysis tools that are based on human/mouse comparisons are already available. Here, we present a program for the prediction of protein-coding genes, termed SGP-1 (Syntenic Gene Prediction), which is based on the similarity of homologous genomic sequences. In contrast to most existing tools, the accuracy of SGP-1 depends little on species-specific properties such as codon usage or the nucleotide distribution. SGP-1 may therefore be applied to nonstandard model organisms in vertebrates as well as in plants, without the need for extensive parameter training. In addition to predicting genes in large-scale genomic sequences, the program may be useful to validate gene structure annotations from databases. To this end, SGP-1 output also contains comparisons between predicted and annotated gene structures in HTML format. The program can be accessed via a Web server at http://soft.ice.mpg.de/sgp-1. The source code, written in ANSI C, is available on request from the authors. PMID:11544202
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiaoning; Patton, Howard John; Chen, Ting
2016-03-25
This report offers predictions for the SPE-5 ground-motion and accelerometer array sites. These predictions pertain to the waveform and spectral amplitude at certain geophone sites using the Denny & Johnson source model and a source model derived from SPE data; the waveform, peak velocity and peak acceleration at accelerometer sites using the SPE source model and finite-difference simulation with the LLNL 3D velocity model; and the SPE-5 moment and corner frequency.
Doos, Lucy; Packer, Claire; Ward, Derek; Simpson, Sue; Stevens, Andrew
2016-03-10
Forecasting can support rational decision-making around the introduction and use of emerging health technologies and prevent investment in technologies that have limited long-term potential. However, forecasting methods need to be credible. We performed a systematic search to identify the methods used in forecasting studies to predict future health technologies within a 3-20-year timeframe. Identification and retrospective assessment of such methods potentially offer a route to more reliable prediction. A systematic search of the literature was conducted to identify studies reporting methods of forecasting in healthcare; no human participants were involved. The authors searched MEDLINE, EMBASE, PsychINFO and grey literature sources, and included articles published in English that reported their methods and a list of identified technologies. Eligible studies reported methods used to predict future health technologies within a 3-20-year timeframe together with an identified list of individual healthcare technologies. Commercially sponsored reviews, long-term futurology studies (with over 20-year timeframes) and speculative editorials were excluded. Fifteen studies met our inclusion criteria. Our results showed that the majority of studies (13/15) consulted experts either alone or in combination with other methods such as literature searching. Only 2 studies used more complex forecasting tools such as scenario building. The methodological fundamentals of formal 3-20-year prediction are consistent but vary in details. Further research needs to be conducted to ascertain whether the predictions made were accurate and whether accuracy varies by the methods used or by the types of technologies identified. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Development of genetic programming-based model for predicting oyster norovirus outbreak risks.
Chenar, Shima Shamkhali; Deng, Zhiqiang
2018-01-01
Oyster norovirus outbreaks pose increasing risks to human health and seafood industry worldwide but exact causes of the outbreaks are rarely identified, making it highly unlikely to reduce the risks. This paper presents a genetic programming (GP) based approach to identifying the primary cause of oyster norovirus outbreaks and predicting oyster norovirus outbreaks in order to reduce the risks. In terms of the primary cause, it was found that oyster norovirus outbreaks were controlled by cumulative effects of antecedent environmental conditions characterized by low solar radiation, low water temperature, low gage height (the height of water above a gage datum), low salinity, heavy rainfall, and strong offshore wind. The six environmental variables were determined by using Random Forest (RF) and Binary Logistic Regression (BLR) methods within the framework of the GP approach. In terms of predicting norovirus outbreaks, a risk-based GP model was developed using the six environmental variables and various combinations of the variables with different time lags. The results of local and global sensitivity analyses showed that gage height, temperature, and solar radiation were by far the three most important environmental predictors for oyster norovirus outbreaks, though other variables were also important. Specifically, very low temperature and gage height significantly increased the risk of norovirus outbreaks while high solar radiation markedly reduced the risk, suggesting that low temperature and gage height were associated with the norovirus source while solar radiation was the primary sink of norovirus. The GP model was utilized to hindcast daily risks of oyster norovirus outbreaks along the Northern Gulf of Mexico coast. The daily hindcasting results indicated that the GP model was capable of hindcasting all historical oyster norovirus outbreaks from January 2002 to June 2014 in the Gulf of Mexico with only two false positive outbreaks for the 12.5-year period. The performance of the GP model was characterized with the area under the Receiver Operating Characteristic curve of 0.86, the true positive rate (sensitivity) of 78.53% and the true negative rate (specificity) of 88.82%, respectively, demonstrating the efficacy of the GP model. The findings and results offered new insights into the oyster norovirus outbreaks in terms of source, sink, cause, and predictors. The GP model provided an efficient and effective tool for predicting potential oyster norovirus outbreaks and implementing management interventions to prevent or at least reduce norovirus risks to both the human health and the seafood industry. Copyright © 2017 Elsevier Ltd. All rights reserved.
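The performance metrics quoted above (ROC area, sensitivity, specificity) can be computed from daily risk outputs as sketched below; the outbreak labels and risk scores are synthetic stand-ins for the hindcast results:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(3)
outbreak = rng.integers(0, 2, size=500)                    # 1 = outbreak day (synthetic)
risk = np.clip(0.6 * outbreak + rng.normal(0.3, 0.2, 500), 0, 1)  # model risk score

predicted = (risk >= 0.5).astype(int)                      # apply a risk threshold
tn, fp, fn, tp = confusion_matrix(outbreak, predicted).ravel()
print("sensitivity:", round(tp / (tp + fn), 3))            # true positive rate
print("specificity:", round(tn / (tn + fp), 3))            # true negative rate
print("ROC AUC:", round(roc_auc_score(outbreak, risk), 3))
```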
Spatial variation and density-dependent dispersal in competitive coexistence.
Amarasekare, Priyanga
2004-01-01
It is well known that dispersal from localities favourable to a species' growth and reproduction (sources) can prevent competitive exclusion in unfavourable localities (sinks). What is perhaps less well known is that too much emigration can undermine the viability of sources and cause regional competitive exclusion. Here, I investigate two biological mechanisms that reduce the cost of dispersal to source communities. The first involves increasing the spatial variation in the strength of competition such that sources can withstand high rates of emigration; the second involves reducing emigration from sources via density-dependent dispersal. I compare how different forms of spatial variation and modes of dispersal influence source viability, and hence source-sink coexistence, under dominance and pre-emptive competition. A key finding is that, while spatial variation substantially reduces dispersal costs under both types of competition, density-dependent dispersal does so only under dominance competition. For instance, when spatial variation in the strength of competition is high, coexistence is possible (regardless of the type of competition) even when sources experience high emigration rates; when spatial variation is low, coexistence is restricted even under low emigration rates. Under dominance competition, density-dependent dispersal has a strong effect on coexistence. For instance, when the emigration rate increases with density at an accelerating rate (Type III density-dependent dispersal), coexistence is possible even when spatial variation is quite low; when the emigration rate increases with density at a decelerating rate (Type II density-dependent dispersal), coexistence is restricted even when spatial variation is quite high. Under pre-emptive competition, density-dependent dispersal has only a marginal effect on coexistence. Thus, the diversity-reducing effects of high dispersal rates persist under pre-emptive competition even when dispersal is density dependent, but can be significantly mitigated under dominance competition if density-dependent dispersal is Type III rather than Type II. These results lead to testable predictions about source-sink coexistence under different regimes of competition, spatial variation and dispersal. They identify situations in which density-independent dispersal provides a reasonable approximation to species' dispersal patterns, and those under which consideration of density-dependent dispersal is crucial to predicting long-term coexistence. PMID:15306322
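The contrast between density-independent and accelerating (Type III-like) emigration can be made concrete with a minimal two-patch source-sink model; this is an illustrative sketch, not the paper's model, and all parameter values are hypothetical:

```python
import numpy as np
from scipy.integrate import solve_ivp

def two_patch(t, n, r_source=0.5, r_sink=-0.2, K=100.0, m0=0.3, density_dep=True):
    """Logistic growth in the source, decline in the sink; the per-capita
    emigration rate m accelerates with source density if density_dep."""
    n1, n2 = n
    m = m0 * (n1 / K) ** 2 if density_dep else m0   # Type III-like vs. constant
    dn1 = r_source * n1 * (1 - n1 / K) - m * n1     # source loses emigrants
    dn2 = r_sink * n2 + m * n1                      # sink gains immigrants
    return [dn1, dn2]

for dd in (False, True):
    sol = solve_ivp(two_patch, (0, 200), [10.0, 1.0],
                    args=(0.5, -0.2, 100.0, 0.3, dd))
    print("density-dependent" if dd else "constant dispersal",
          "-> final densities:", sol.y[:, -1].round(1))
```

The density-dependent run keeps the source near its carrying capacity because emigration is low until the source is crowded, which is the mechanism for reduced dispersal cost described above.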
Environmental impact and risk assessments and key factors contributing to the overall uncertainties.
Salbu, Brit
2016-01-01
There is a significant number of nuclear and radiological sources that have contributed, are still contributing, or have the potential to contribute to radioactive contamination of the environment in the future. To protect the environment from radioactive contamination, impact and risk assessments are performed prior to or during a release event, short or long term after deposition, or prior to and after implementation of countermeasures. When environmental impact and risks are assessed, however, a series of factors will contribute to the overall uncertainties. To provide environmental impact and risk assessments, information on processes, kinetics and a series of input variables is needed. When problems such as variability, questionable assumptions, gaps in knowledge, extrapolations and poor conceptual model structures are added, a series of factors contribute to large and often unacceptable uncertainties in impact and risk assessments. Information on the source term and the release scenario is an essential starting point in impact and risk models; the source determines activity concentrations and atom ratios of radionuclides released, while the release scenario determines the physico-chemical forms of released radionuclides such as particle size distribution, structure and density. Releases will most often contain other contaminants such as metals, and due to interactions, contaminated sites should be assessed as a multiple stressor scenario. Following deposition, a series of stressors, interactions and processes will influence the ecosystem transfer of radionuclide species and thereby influence biological uptake (toxicokinetics) and responses (toxicodynamics) in exposed organisms. Due to the variety of biological species, extrapolation is frequently needed to fill gaps in knowledge, e.g., from effects to no effects, from effects in one organism to others, from one stressor to mixtures. Most toxicity tests are, however, performed as short-term exposures of adult organisms, ignoring sensitive life history stages of organisms and transgenerational effects. To link sources, ecosystem transfer and biological effects to future impact and risks, a series of models are usually interfaced, while uncertainty estimates are seldom given. The model predictions are, however, only valid within the boundaries of the overall uncertainties. Furthermore, the model predictions are only useful and relevant when uncertainties are estimated, communicated and understood. Among the key factors contributing most to uncertainties, the present paper focuses especially on structural uncertainties (model bias or discrepancies), as aspects such as particle releases, ecosystem dynamics, mixed exposure, sensitive life history stages and transgenerational effects are usually ignored in assessment models. Research focus on these aspects should significantly reduce the overall uncertainties in the impact and risk assessment of radioactively contaminated ecosystems. Copyright © 2015 Elsevier Ltd. All rights reserved.
Laser induced heat source distribution in bio-tissues
NASA Astrophysics Data System (ADS)
Li, Xiaoxia; Fan, Shifu; Zhao, Youquan
2006-09-01
During numerical simulation of laser-tissue thermal interaction, the light fluence rate distribution must be formulated and incorporated into the source term of the heat transfer equation. Usually the solution of the radiative transport equation is given for limiting conditions such as full absorption (Lambert-Beer law), full scattering (Kubelka-Munk theory), or dominant scattering (diffusion approximation). Outside these specific conditions, such solutions induce varying errors. The widely used Monte Carlo simulation (MCS) is more universal and exact, but it has difficulty dealing with dynamic parameters and fast simulation, and its area partition pattern has limits when the finite element method (FEM) is applied to solve the bio-heat transfer partial differential equation. Laser heat source plots from the above methods differ considerably from MCS. To address this problem, by analyzing the effects of different optical processes such as reflection, scattering and absorption on laser-induced heat generation in bio-tissue, a new approach was developed that combines a modified beam-broadening model and the diffusion approximation model. First, the scattering coefficient was replaced by the reduced scattering coefficient in the beam-broadening model, which is more reasonable when scattering is treated as anisotropic. Second, the attenuation coefficient was replaced by the effective attenuation coefficient in scattering-dominated turbid bio-tissue. The computational results of the modified method were compared with Monte Carlo simulation and showed that the model provides more reasonable predictions of the heat source term distribution than past methods. Such research is useful for explaining the physical characteristics of the heat source in the heat transfer equation, establishing an effective photo-thermal model, and providing a theoretical reference for related laser medicine experiments.
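A minimal sketch of the combined idea, assuming illustrative tissue optical coefficients: form the effective attenuation coefficient of diffusion theory from absorption and reduced scattering, and use it to build the volumetric heat source term for the bio-heat equation:

```python
import numpy as np

def heat_source(z, fluence0, mu_a, mu_s, g):
    """Volumetric heat source S(z) = mu_a * fluence(z), with the fluence decaying
    by the effective attenuation coefficient of diffusion theory."""
    mu_s_reduced = mu_s * (1.0 - g)                       # reduced scattering coefficient
    mu_eff = np.sqrt(3.0 * mu_a * (mu_a + mu_s_reduced))  # effective attenuation
    return mu_a * fluence0 * np.exp(-mu_eff * z)

z = np.linspace(0, 0.01, 5)       # depth into the tissue, m
# illustrative soft-tissue optics: mu_a = 100 1/m, mu_s = 10000 1/m, g = 0.9
print(heat_source(z, fluence0=1e4, mu_a=100.0, mu_s=1e4, g=0.9))  # W/m^3
```

The resulting S(z) profile is what enters the bio-heat transfer equation as its source term; the anisotropy factor g is exactly where the reduced-scattering substitution described above takes effect.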
77 FR 19740 - Water Sources for Long-Term Recirculation Cooling Following a Loss-of-Coolant Accident
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-02
... NUCLEAR REGULATORY COMMISSION [NRC-2010-0249] Water Sources for Long-Term Recirculation Cooling... Regulatory Guide (RG) 1.82, ``Water Sources for Long-Term Recirculation Cooling Following a Loss-of-Coolant... regarding the sumps and suppression pools that provide water sources for emergency core cooling, containment...
NASA Astrophysics Data System (ADS)
Chai, Tianfeng; Crawford, Alice; Stunder, Barbara; Pavolonis, Michael J.; Draxler, Roland; Stein, Ariel
2017-02-01
Currently, the National Oceanic and Atmospheric Administration (NOAA) National Weather Service (NWS) runs the HYSPLIT dispersion model with a unit mass release rate to predict the transport and dispersion of volcanic ash. The model predictions provide information for the Volcanic Ash Advisory Centers (VAAC) to issue advisories to meteorological watch offices, area control centers, flight information centers, and others. This research aims to provide quantitative forecasts of ash distributions generated by objectively and optimally estimating the volcanic ash source strengths, vertical distribution, and temporal variations using an observation-modeling inversion technique. In this top-down approach, a cost functional is defined to quantify the differences between the model predictions and the satellite measurements of column-integrated ash concentrations weighted by the model and observation uncertainties. Minimizing this cost functional by adjusting the sources provides the volcanic ash emission estimates. As an example, MODIS (Moderate Resolution Imaging Spectroradiometer) satellite retrievals of the 2008 Kasatochi volcanic ash clouds are used to test the HYSPLIT volcanic ash inverse system. Because the satellite retrievals include the ash cloud top height but not the bottom height, there are different model diagnostic choices for comparing the model results with the observed mass loadings. Three options are presented and tested. Although the emission estimates vary significantly with different options, the subsequent model predictions with the different release estimates all show decent skill when evaluated against the unassimilated satellite observations at later times. Among the three options, integrating over three model layers yields slightly better results than integrating from the surface up to the observed volcanic ash cloud top or using a single model layer. Inverse tests also show that including the ash-free region to constrain the model is not beneficial for the current case. In addition, extra constraints on the source terms can be given by explicitly enforcing no ash for the atmosphere columns above or below the observed ash cloud top height. However, in this case such extra constraints are not helpful for the inverse modeling. It is also found that simultaneously assimilating observations at different times produces better hindcasts than only assimilating the most recent observations.
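The inversion step can be sketched as a weighted least-squares problem. The code below assumes a precomputed transfer matrix H — one unit-release dispersion run per source height/time bin, mapping emissions to observed mass loadings — as a stand-in for the HYSPLIT runs, and solves for non-negative source strengths:

```python
import numpy as np
from scipy.optimize import nnls

# H[i, j]: mass loading at observation i per unit emission from source term j
rng = np.random.default_rng(4)
H = rng.uniform(0, 1, size=(30, 6))            # synthetic unit-release responses
true_sources = np.array([0.0, 5.0, 2.0, 0.0, 1.0, 0.0])
obs = H @ true_sources + rng.normal(scale=0.05, size=30)   # noisy "retrievals"
obs_sigma = 0.05                               # observation uncertainty weight

# minimise sum(((H q - obs) / sigma)^2) subject to q >= 0
q_est, _residual = nnls(H / obs_sigma, obs / obs_sigma)
print("estimated source terms:", q_est.round(2))
```

Dividing through by the observation uncertainty implements the weighting in the cost functional; stacking retrievals from several times into `obs` corresponds to the simultaneous assimilation found to give better hindcasts.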
Activity interference and noise annoyance
NASA Astrophysics Data System (ADS)
Hall, F. L.; Taylor, S. M.; Birnie, S. E.
1985-11-01
Debate continues over differences in the dose-response functions used to predict the annoyance at different sources of transportation noise. This debate reflects the lack of an accepted model of noise annoyance in residential communities. In this paper a model is proposed which is focussed on activity interference as a central component mediating the relationship between noise exposure and annoyance. This model represents a departure from earlier models in two important respects. First, single event noise levels (e.g., maximum levels, sound exposure level) constitute the noise exposure variables in place of long-term energy equivalent measures (e.g., 24-hour Leq or Ldn). Second, the relationships within the model are expressed as probabilistic rather than deterministic equations. The model has been tested by using acoustical and social survey data collected at 57 sites in the Toronto region exposed to aircraft, road traffic or train noise. Logit analysis was used to estimate two sets of equations. The first predicts the probability of activity interference as a function of event noise level. Four types of interference are included: indoor speech, outdoor speech, difficulty getting to sleep and awakening. The second set predicts the probability of annoyance as a function of the combination of activity interferences. From the first set of equations, it was possible to estimate a function for indoor speech interference only. In this case, the maximum event level was the strongest predictor. The lack of significant results for the other types of interference is explained by the limitations of the data. The same function predicts indoor speech interference for all three sources—road, rail and aircraft noise. The results for the second set of equations show strong relationships between activity interference and the probability of annoyance. Again, the parameters of the logit equations are similar for the three sources. A trial application of the model predicts a higher probability of annoyance for aircraft than for road traffic situations with the same 24-hour Leq. This result suggests that the model may account for previously reported source differences in annoyance.
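The two-stage structure can be sketched directly: one logistic equation maps the single-event noise level to the probability of indoor speech interference, and a second maps that interference to the probability of annoyance. All coefficients below are hypothetical placeholders, not the fitted values from the survey:

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def p_speech_interference(l_max, b0=-12.0, b1=0.15):
    """Stage 1: probability of indoor speech interference vs. max event level (dB)."""
    return logistic(b0 + b1 * l_max)

def p_annoyance(p_interference, c0=-2.0, c1=4.0):
    """Stage 2: probability of annoyance given the interference probability."""
    return logistic(c0 + c1 * p_interference)

for l_max in (60, 70, 80):   # single-event maximum levels
    pi = p_speech_interference(l_max)
    print(f"{l_max} dB -> P(interference) = {pi:.2f}, P(annoyed) = {p_annoyance(pi):.2f}")
```

Because the same stage-1 and stage-2 equations apply across road, rail, and aircraft noise in the model, source differences in annoyance emerge from the distribution of single-event levels rather than from source-specific coefficients.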
Relative Importance of H2 and H2S as Energy Sources for Primary Production in Geothermal Springs▿ †
D'Imperio, Seth; Lehr, Corinne R.; Oduro, Harry; Druschel, Greg; Kühl, Michael; McDermott, Timothy R.
2008-01-01
Geothermal waters contain numerous potential electron donors capable of supporting chemolithotrophy-based primary production. Thermodynamic predictions of energy yields for specific electron donor and acceptor pairs in such systems are available, although direct assessments of these predictions are rare. This study assessed the relative importance of dissolved H2 and H2S as energy sources for the support of chemolithotrophic metabolism in an acidic geothermal spring in Yellowstone National Park. H2S and H2 concentration gradients were observed in the outflow channel, and vertical H2S and O2 gradients were evident within the microbial mat. H2S levels and microbial consumption rates were approximately three orders of magnitude greater than those of H2. Hydrogenobaculum-like organisms dominated the bacterial component of the microbial community, and isolates representing three distinct 16S rRNA gene phylotypes (phylotype = 100% identity) were isolated and characterized. Within a phylotype, O2 requirements varied, as did energy source utilization: some isolates could grow only with H2S, some only with H2, while others could utilize either as an energy source. These metabolic phenotypes were consistent with in situ geochemical conditions measured using aqueous chemical analysis and in-field measurements made by using gas chromatography and microelectrodes. Pure-culture experiments with an isolate that could utilize H2S and H2 and that represented the dominant phylotype (70% of the PCR clones) showed that H2S and H2 were used simultaneously, without evidence of induction or catabolite repression, and at relative rate differences comparable to those measured in ex situ field assays. Under in situ-relevant concentrations, growth of this isolate with H2S was better than that with H2. The major conclusions drawn from this study are that phylogeny may not necessarily be reliable for predicting physiology and that H2S can dominate over H2 as an energy source in terms of availability, apparent in situ consumption rates, and growth-supporting energy. PMID:18641166
Surfzone alongshore advective accelerations: observations and modeling
NASA Astrophysics Data System (ADS)
Hansen, J.; Raubenheimer, B.; Elgar, S.
2014-12-01
The sources, magnitudes, and impacts of non-linear advective accelerations on alongshore surfzone currents are investigated with observations and a numerical model. Previous numerical modeling results have indicated that advective accelerations are an important contribution to the alongshore force balance, and are required to understand spatial variations in alongshore currents (which may result in spatially variable morphological change). However, most prior observational studies have neglected advective accelerations in the alongshore force balance. Using a numerical model (Delft3D) to predict optimal sensor locations, a dense array of 26 colocated current meters and pressure sensors was deployed between the shoreline and 3-m water depth over a 200 by 115 m region near Duck, NC in fall 2013. The array included 7 cross- and 3 alongshore transects. Here, observational and numerical estimates of the dominant forcing terms in the alongshore balance (pressure and radiation-stress gradients) and the advective acceleration terms will be compared with each other. In addition, the numerical model will be used to examine the force balance, including sources of velocity gradients, at a higher spatial resolution than possible with the instrument array. Preliminary numerical results indicate that at O(10-100 m) alongshore scales, bathymetric variations and the ensuing alongshore variations in the wave field and subsequent forcing are the dominant sources of the modeled velocity gradients and advective accelerations. Additional simulations and analysis of the observations will be presented. Funded by NSF and ASDR&E.
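For readers reproducing such balances from gridded output, the alongshore advective acceleration is the term u ∂v/∂x + v ∂v/∂y in the alongshore momentum equation. A minimal finite-difference sketch follows; the grid spacing and velocity fields are synthetic stand-ins, not the Duck array data.

```python
import numpy as np

# Synthetic depth-averaged velocity fields on a regular grid
# (x: cross-shore, y: alongshore); spacing in metres is illustrative.
dx, dy = 5.0, 10.0
x = np.arange(0.0, 200.0, dx)
y = np.arange(0.0, 115.0, dy)
X, Y = np.meshgrid(x, y, indexing="ij")
u = 0.3 * np.sin(2 * np.pi * Y / 115.0)          # cross-shore velocity (m/s)
v = 1.0 + 0.2 * np.cos(2 * np.pi * X / 200.0)    # alongshore velocity (m/s)

# Advective acceleration terms in the alongshore (y) momentum balance.
dv_dx = np.gradient(v, dx, axis=0)
dv_dy = np.gradient(v, dy, axis=1)
adv = u * dv_dx + v * dv_dy                      # m/s^2

print("max |advective acceleration|:", np.abs(adv).max())
```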
Auroral Proper Motion in the Era of AMISR and EMCCD
NASA Astrophysics Data System (ADS)
Semeter, J. L.
2016-12-01
The term "aurora" is a catch-all for luminosity produced by the deposition of magnetospheric energy in the outer atmosphere. The use of this single phenomenological term occludes the rich variety of sources and mechanisms responsible for the excitation. Among these are electron thermal conduction (SAR arcs), electrostatic potential fields ("inverted-V" aurora), wave-particle resonance (Alfvenic aurora, pulsating aurora), pitch-angle scattering (diffuse aurora), and direct injection of plasma sheet particles (PBIs, substorms). Much information about auroral energization has been derived from the energy spectrum of primary particles, which may be measured directly with an in situ detector or indirectly via analysis of the atmospheric response (e.g., auroral spectroscopy, tomography, ionization). Somewhat less emphasized has been the information in the B_perp dimension. Specifically, the scale-dependent motions of auroral forms in the rest frame of the ambient plasma provide a means of partitioning both the source region and the source mechanism. These results, in turn, affect ionospheric state parameters that control the M-I coupling process-most notably, the degree of structure imparted to the conductance field. This paper describes recent results enabled by the advent of two technologies: high frame-rate, high-resolution imaging detectors, and electronically steerable incoherent scatter radar (the AMISR systems). In addition to contributing to our understanding of the aurora, these results may be used in predictive models of multi-scale energy transfer within the disturbed geospace system.
A Unified Flash Flood Database across the United States
Gourley, Jonathan J.; Hong, Yang; Flamig, Zachary L.; Arthur, Ami; Clark, Robert; Calianno, Martin; Ruin, Isabelle; Ortel, Terry W.; Wieczorek, Michael; Kirstetter, Pierre-Emmanuel; Clark, Edward; Krajewski, Witold F.
2013-01-01
Despite flash flooding being one of the most deadly and costly weather-related natural hazards worldwide, individual datasets to characterize them in the United States are hampered by limited documentation and can be difficult to access. This study is the first of its kind to assemble, reprocess, describe, and disseminate a georeferenced U.S. database providing a long-term, detailed characterization of flash flooding in terms of spatiotemporal behavior and specificity of impacts. The database is composed of three primary sources: 1) the entire archive of automated discharge observations from the U.S. Geological Survey that has been reprocessed to describe individual flooding events, 2) flash-flooding reports collected by the National Weather Service from 2006 to the present, and 3) witness reports obtained directly from the public in the Severe Hazards Analysis and Verification Experiment during the summers 2008–10. Each observational data source has limitations; a major asset of the unified flash flood database is its collation of relevant information from a variety of sources that is now readily available to the community in common formats. It is anticipated that this database will be used for many diverse purposes, such as evaluating tools to predict flash flooding, characterizing seasonal and regional trends, and improving understanding of dominant flood-producing processes. We envision the initiation of this community database effort will attract and encompass future datasets.
NASA Astrophysics Data System (ADS)
Soulsby, Chris; Birkel, Christian; Geris, Josie; Tetzlaff, Doerthe
2016-04-01
Advances in the use of hydrological tracers and their integration into rainfall-runoff models are facilitating improved quantification of stream water age distributions. This is of fundamental importance to understanding water quality dynamics over both short and long time scales, particularly as water quality parameters are often associated with water sources of markedly different ages. For example, legacy nitrate pollution may reflect deeper waters that have resided in catchments for decades, whilst more dynamic parameters from anthropogenic sources (e.g., P, pathogens) are mobilised by very young (<1 day) near-surface water sources. It is increasingly recognised that the age distribution of stream water is non-stationary over both the short term (i.e., event dynamics) and the longer term (i.e., in relation to hydroclimatic variability). This provides a crucial context for interpreting water quality time series. Here, we will use longer-term (>5 year), high-resolution (daily) isotope time series in modelling studies for different catchments to show how variable stream water age distributions can result from hydroclimatic variability, and the implications for understanding water quality. We will also use examples from catchments undergoing rapid urbanisation to show how the resulting age distributions of stream water change in a predictable way as a result of modified flow paths. The implications for the management of water quality in urban catchments will be discussed.
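One common way to formalize such age distributions is to convolve the tracer input with a transit time distribution (TTD); a gamma-shaped TTD is a frequent stationary choice, although the non-stationary behaviour discussed above requires time-varying formulations. The sketch below is the stationary textbook version with invented parameters, not the authors' model.

```python
import numpy as np
from scipy.stats import gamma

# Daily tracer input (e.g., precipitation delta-18O) with seasonality + noise.
rng = np.random.default_rng(2)
n_days = 2000
c_in = (-8.0 + 2.0 * np.sin(2 * np.pi * np.arange(n_days) / 365.25)
        + rng.normal(0.0, 0.5, n_days))

# Gamma-shaped transit time distribution (shape and mean are placeholders).
shape, mean_tt = 0.7, 300.0                      # mean transit time ~300 days
ages = np.arange(1, 1500)
ttd = gamma.pdf(ages, a=shape, scale=mean_tt / shape)
ttd /= ttd.sum()                                 # discrete TTD weights

# Stream concentration: convolution of the input signal with the TTD.
c_stream = np.convolve(c_in, ttd, mode="full")[:n_days]
print("seasonal damping ratio:",
      np.std(c_stream[500:]) / np.std(c_in[500:]))
```

The damping of the input seasonal cycle is what tracer studies invert to estimate the TTD parameters in the first place.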
Point-source helicity injection for ST plasma startup in Pegasus
NASA Astrophysics Data System (ADS)
Redd, A. J.; Battaglia, D. J.; Bongard, M. W.; Fonck, R. J.; Schlossberg, D. J.
2009-11-01
Plasma current guns are used as point-source DC helicity injectors for forming non-solenoidal tokamak plasmas in the Pegasus Toroidal Experiment. Discharges driven by this injection scheme have achieved Ip ≥ 100 kA using Iinj ≤ 4 kA. They form at the outboard midplane, transition to a tokamak-like equilibrium, and continue to grow inward as Ip increases due to helicity injection and outer-PF induction. The maximum Ip is determined by helicity balance (injection rate vs. resistive dissipation) and a Taylor relaxation limit, in which Ip ∝ √(ITF·Iinj/w), where w is the radial thickness of the gun-driven edge. Preliminary experiments tentatively confirm these scalings with ITF, Iinj, and w, increasing confidence in this simple relaxation model. Adding solenoidal inductive drive during helicity injection can push Ip up to, but not beyond, the predicted relaxation limit, demonstrating that this is a hard performance limit. Present experiments are focused on increasing the injection voltage (i.e., the helicity injection rate) and reducing w. Near-term goals are to further test scalings predicted by the simple relaxation model and to study in detail the observed bursty n=1 activity correlated with rapid increases in Ip.
Modeling Aerodynamically Generated Sound of Helicopter Rotors
NASA Technical Reports Server (NTRS)
Brentner, Kenneth S.; Farassat, F.
2002-01-01
A great deal of progress has been made in the modeling of aerodynamically generated sound of rotors over the past decade. Although the modeling effort has focused on helicopter main rotors, the theory is generally valid for a wide range of rotor configurations. The Ffowcs Williams-Hawkings (FW-H) equation has been the foundation for much of the development. The monopole and dipole source terms of the FW-H equation account for the thickness and loading noise, respectively. Blade-vortex-interaction noise and broadband noise are important types of loading noise, hence much research has been directed toward the accurate modeling of these noise mechanisms. Both subsonic and supersonic quadrupole noise formulations have been developed for the prediction of high-speed impulsive noise. In an effort to eliminate the need to compute the quadrupole contribution, the FW-H equation has also been utilized on permeable surfaces surrounding all physical noise sources. Comparisons of the Kirchhoff formulation for moving surfaces with the FW-H equation have shown that the Kirchhoff formulation for moving surfaces can give erroneous results for aeroacoustic problems. Finally, significant progress has been made incorporating the rotor noise models into full vehicle noise prediction tools.
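For reference, the FW-H equation referred to above is commonly written in the following differential form (standard textbook notation, assuming a data surface f = 0 with |∇f| = 1; this is not an excerpt from the report):

```latex
\Box^2 p'(\mathbf{x},t)
  = \frac{\partial}{\partial t}\!\left[\rho_0 v_n\,\delta(f)\right]          % monopole: thickness noise
  - \frac{\partial}{\partial x_i}\!\left[\ell_i\,\delta(f)\right]            % dipole: loading noise
  + \frac{\partial^2}{\partial x_i\,\partial x_j}\!\left[T_{ij}\,H(f)\right] % quadrupole
```

Here v_n is the normal velocity of the surface, ℓ_i the local force per unit area exerted on the fluid, T_ij the Lighthill stress tensor, and H the Heaviside function; the first two terms give the thickness and loading noise mentioned in the abstract.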
A Framework of Covariance Projection on Constraint Manifold for Data Fusion.
Bakr, Muhammad Abu; Lee, Sukhan
2018-05-17
A general framework of data fusion is presented based on projecting the probability distribution of true states and measurements around the predicted states and actual measurements onto the constraint manifold. The constraint manifold represents the constraints to be satisfied among true states and measurements, which is defined in the extended space with all the redundant sources of data such as state predictions and measurements considered as independent variables. By the general framework, we mean that it is able to fuse any correlated data sources while directly incorporating constraints and identifying inconsistent data without any prior information. The proposed method, referred to here as the Covariance Projection (CP) method, provides an unbiased and optimal solution in the sense of minimum mean square error (MMSE), if the projection is based on the minimum weighted distance on the constraint manifold. The proposed method not only offers a generalization of the conventional formula for handling constraints and data inconsistency, but also provides a new insight into data fusion in terms of a geometric-algebraic point of view. Simulation results are provided to show the effectiveness of the proposed method in handling constraints and data inconsistency.
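Under a linear-Gaussian reading, the projection step amounts to finding the minimum-Mahalanobis-distance point on an affine constraint manifold {z : Az = b}. The sketch below illustrates that geometry for two redundant, correlated estimates of one state; it is an illustration of the idea under stated assumptions, not the authors' implementation.

```python
import numpy as np

def project_on_constraint(z_hat, P, A, b):
    """Minimum-Mahalanobis-distance projection of N(z_hat, P) onto {z: Az=b}."""
    S = A @ P @ A.T                      # covariance of the constraint residual
    K = P @ A.T @ np.linalg.inv(S)
    z_star = z_hat + K @ (b - A @ z_hat)  # projected (fused) state
    P_star = P - K @ A @ P                # fused covariance
    return z_star, P_star

# Two redundant scalar estimates of the same state, possibly correlated.
z_hat = np.array([1.0, 1.4])             # [estimate 1, estimate 2]
P = np.array([[0.10, 0.02],
              [0.02, 0.30]])             # joint covariance
A = np.array([[1.0, -1.0]])              # constraint: z1 - z2 = 0
b = np.array([0.0])

z_star, P_star = project_on_constraint(z_hat, P, A, b)
print("fused state:", z_star)            # both entries now equal
print("fused variance:", P_star[0, 0])
```

The pre-projection residual b - Az_hat, normalized by APA^T, also gives a natural consistency statistic for flagging inconsistent data sources.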
NASA Astrophysics Data System (ADS)
Eble, M. C.; uslu, B. U.; Wright, L.
2013-12-01
Synthetic tsunamis generated from source regions around the Pacific Basin are analyzed in terms of their relative impact on United States coastal locations. The region of tsunami origin is as important as the expected magnitude and the predicted inundation for understanding tsunami hazard. The NOAA Center for Tsunami Research has developed high-resolution tsunami models capable of predicting tsunami arrival time and wave amplitude at each location. These models have been used to conduct tsunami hazard assessments to assess maximum impact and tsunami inundation for use by local communities in education and evacuation map development. Hazard assessment studies conducted for Los Angeles, San Francisco, Crescent City, Hilo, and Apra Harbor are combined with results of tsunami forecast model development at each of seventy-five locations. A complete hazard assessment identifies every possible tsunami variation from a pre-computed propagation database. Study results indicate that the Eastern Aleutian Islands and Alaska are the most likely regions to produce the largest impact on the West Coast of the United States, while the East Philippines and Mariana Trench regions impact Apra Harbor, Guam. Hawaii appears to be impacted equally by South America, Alaska, and the Kuril Islands.
Lithium-Ion Batteries for Aerospace Applications
NASA Technical Reports Server (NTRS)
Surampudi, S.; Halpert, G.; Marsh, R. A.; James, R.
1999-01-01
This presentation reviews: (1) the goals and objectives, (2) the NASA and Air Force requirements, (3) the potential near-term missions, (4) the management approach, (5) the technical approach, and (6) the program road map. The objectives of the program include: (1) develop high specific energy and long life lithium ion cells and smart batteries for aerospace and defense applications, (2) establish domestic production sources, and (3) demonstrate technological readiness for various missions. The management approach is to encourage the teaming of universities, R&D organizations, and battery manufacturing companies, to build on existing commercial and government technology, and to develop two sources for manufacturing cells and batteries. The technical approach includes: (1) develop advanced electrode materials and electrolytes to achieve improved low temperature performance and long cycle life, (2) optimize cell design to improve specific energy, cycle life and safety, (3) establish manufacturing processes to ensure predictable performance, (4) develop aerospace lithium ion cells in various AH sizes and voltages, (5) develop electronics for smart battery management, (6) develop a performance database required for various applications, and (7) demonstrate technology readiness for the various missions. Charts which review the requirements for the Li-ion battery development program are presented.
The NASA Severe Thunderstorm Observations and Regional Modeling (NASA STORM) Project
NASA Technical Reports Server (NTRS)
Schultz, Christopher J.; Gatlin, Patrick N.; Lang, Timothy J.; Srikishen, Jayanthi; Case, Jonathan L.; Molthan, Andrew L.; Zavodsky, Bradley T.; Bailey, Jeffrey; Blakeslee, Richard J.; Jedlovec, Gary J.
2016-01-01
The NASA Severe Thunderstorm Observations and Regional Modeling (NASA STORM) project enhanced NASA’s severe weather research capabilities, building upon existing Earth science expertise at NASA Marshall Space Flight Center (MSFC). During this project, MSFC extended NASA’s ground-based lightning detection capacity to include a readily deployable lightning mapping array (LMA). NASA STORM also enabled NASA’s Short-term Prediction Research and Transition (SPoRT) Center to add convection-allowing ensemble modeling to its portfolio of regional numerical weather prediction (NWP) capabilities. As a part of NASA STORM, MSFC developed new open-source capabilities for analyzing and displaying weather radar observations integrated from both research and operational networks. These accomplishments enabled by NASA STORM are a step towards enhancing NASA’s capabilities for studying severe weather and position the agency for any future NASA-related severe storm field campaigns.
Isolation of Genetically Diverse Marburg Viruses from Egyptian Fruit Bats
Towner, Jonathan S.; Amman, Brian R.; Sealy, Tara K.; Carroll, Serena A. Reeder; Comer, James A.; Kemp, Alan; Swanepoel, Robert; Paddock, Christopher D.; Balinandi, Stephen; Khristova, Marina L.; Formenty, Pierre B. H.; Albarino, Cesar G.; Miller, David M.; Reed, Zachary D.; Kayiwa, John T.; Mills, James N.; Cannon, Deborah L.; Greer, Patricia W.; Byaruhanga, Emmanuel; Farnon, Eileen C.; Atimnedi, Patrick; Okware, Samuel; Katongole-Mbidde, Edward; Downing, Robert; Tappero, Jordan W.; Zaki, Sherif R.; Ksiazek, Thomas G.; Nichol, Stuart T.; Rollin, Pierre E.
2009-01-01
In July and September 2007, miners working in Kitaka Cave, Uganda, were diagnosed with Marburg hemorrhagic fever. The likely source of infection in the cave was Egyptian fruit bats (Rousettus aegyptiacus) based on detection of Marburg virus RNA in 31/611 (5.1%) bats, virus-specific antibody in bat sera, and isolation of genetically diverse virus from bat tissues. The virus isolates were collected nine months apart, demonstrating long-term virus circulation. The bat colony was estimated to be over 100,000 animals using mark and re-capture methods, predicting the presence of over 5,000 virus-infected bats. The genetically diverse virus genome sequences from bats and miners closely matched. These data indicate common Egyptian fruit bats can represent a major natural reservoir and source of Marburg virus with potential for spillover into humans. PMID:19649327
NASA Technical Reports Server (NTRS)
George, A. R.; Chou, S.-T.
1983-01-01
Experimental data on broadband noise from airfoils are compared, together with analytical methods, in order to identify the mechanisms of noise emission. Rotor noise is categorized into discrete frequency, impulsive, and broadband components, the last having a continuous spectrum originating from a random source. The results of computer simulations of different rotor blade types which produce broadband noise were compared with experimental data and among themselves in terms of the predicted spectra. Consideration was given to the overall sound pressure level, unsteady turbulence forces, rotational forces, inflow turbulence, self-generated turbulence, and turbulence in the flow. Data are presented for a helicopter rotor and a light aircraft propeller. The most significant sources were found to be inflow-turbulence-induced lift fluctuations for helicopter rotors and boundary-layer trailing-edge noise for large wind energy conversion systems.
Collimator-free photon tomography
Dilmanian, F.A.; Barbour, R.L.
1998-10-06
A method of uncollimated single photon emission computed tomography includes administering a radioisotope to a patient for producing gamma ray photons from a source inside the patient. Emissivity of the photons is measured externally of the patient with an uncollimated gamma camera at a plurality of measurement positions surrounding the patient for obtaining corresponding energy spectrums thereat. Photon emissivity at the plurality of measurement positions is predicted using an initial prediction of an image of the source. The predicted and measured photon emissivities are compared to obtain differences therebetween. Prediction and comparison is iterated by updating the image prediction until the differences are below a threshold for obtaining a final prediction of the source image. 6 figs.
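The predict-compare-update loop in the claim can be illustrated with a linear toy model and a Landweber-style iteration; the real method operates on energy spectra from an uncollimated camera, which this sketch ignores, and the system matrix here is purely synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linear forward model: y = A x, where x is the source image (flattened)
# and A maps emission to uncollimated detector responses. A is made up here;
# in practice it would encode geometry, attenuation, and scatter.
n_pix, n_det = 64, 200
A = rng.uniform(0.0, 1.0, (n_det, n_pix))
x_true = np.zeros(n_pix); x_true[20:28] = 5.0
y_meas = A @ x_true + rng.normal(0.0, 0.5, n_det)

# Predict emissivity from the current image, compare with measurements,
# update, and iterate until the differences fall below a threshold.
x = np.zeros(n_pix)                               # initial image prediction
step = 1.0 / np.linalg.norm(A, 2) ** 2            # Landweber step size
for k in range(5000):
    resid = y_meas - A @ x                        # predicted vs. measured
    if np.linalg.norm(resid) < 12.0:              # convergence threshold
        break
    x = np.clip(x + step * A.T @ resid, 0.0, None)  # keep emissions >= 0

print("iterations:", k, "final residual:", np.linalg.norm(y_meas - A @ x))
```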
NASA Astrophysics Data System (ADS)
Koliopanos, Filippos; Vasilopoulos, Georgios
2018-06-01
Aims: We study the temporal and spectral characteristics of SMC X-3 during its recent (2016) outburst to probe accretion onto highly magnetized neutron stars (NSs) at the Eddington limit. Methods: We obtained XMM-Newton observations of SMC X-3 and combined them with long-term observations by Swift. We performed a detailed analysis of the temporal and spectral behavior of the source, as well as its short- and long-term evolution. We have also constructed a simple toy model (based on robust theoretical predictions) in order to gain insight into the complex emission pattern of SMC X-3. Results: We confirm the pulse period of the system that has been derived by previous works and note that the pulse has a complex three-peak shape. We find that the pulsed emission is dominated by hard photons, while at energies below 1 keV, the emission does not pulsate. We furthermore find that the shape of the pulse profile and the short- and long-term evolution of the source light curve can be explained by invoking a combination of a "fan" and a "polar" beam. The results of our temporal study are supported by our spectroscopic analysis, which reveals a two-component emission, comprised of a hard power law and a soft thermal component. We find that the latter produces the bulk of the non-pulsating emission and is most likely the result of reprocessing the primary hard emission by optically thick material that partly obscures the central source. We also detect strong emission lines from highly ionized metals. The strength of the emission lines strongly depends on the phase. Conclusions: Our findings are in agreement with previous works. The energy and temporal evolution as well as the shape of the pulse profile and the long-term spectral evolution of the source are consistent with the expected emission pattern of the accretion column in the super-critical regime, while the large reprocessing region is consistent with the analysis of previously studied X-ray pulsars observed at high accretion rates. This reprocessing region is consistent with recently proposed theoretical and observational works that suggested that highly magnetized NSs occupy a considerable fraction of ultraluminous X-ray sources.
Initiation process of earthquakes and its implications for seismic hazard reduction strategy.
Kanamori, H
1996-04-30
For the average citizen and the public, "earthquake prediction" means "short-term prediction," a prediction of a specific earthquake on a relatively short time scale. Such prediction must specify the time, place, and magnitude of the earthquake in question with sufficiently high reliability. For this type of prediction, one must rely on some short-term precursors. Examinations of strain changes just before large earthquakes suggest that consistent detection of such precursory strain changes cannot be expected. Other precursory phenomena such as foreshocks and nonseismological anomalies do not occur consistently either. Thus, reliable short-term prediction would be very difficult. Although short-term predictions with large uncertainties could be useful for some areas if their social and economic environments can tolerate false alarms, such predictions would be impractical for most modern industrialized cities. A strategy for effective seismic hazard reduction is to take full advantage of the recent technical advancements in seismology, computers, and communication. In highly industrialized communities, rapid earthquake information is critically important for emergency services agencies, utilities, communications, financial companies, and media to make quick reports and damage estimates and to determine where emergency response is most needed. Long-term forecast, or prognosis, of earthquakes is important for development of realistic building codes, retrofitting existing structures, and land-use planning, but the distinction between short-term and long-term predictions needs to be clearly communicated to the public to avoid misunderstanding.
Miller, K A; Nelson, N J; Smith, H G; Moore, J A
2009-09-01
Reduced genetic diversity can result in short-term decreases in fitness and reduced adaptive potential, which may lead to an increased extinction risk. Therefore, maintaining genetic variation is important for the short- and long-term success of reintroduced populations. Here, we evaluate how founder group size and variance in male reproductive success influence the long-term maintenance of genetic diversity after reintroduction. We used microsatellite data to quantify the loss of heterozygosity and allelic diversity in the founder groups from three reintroductions of tuatara (Sphenodon), the sole living representatives of the reptilian order Rhynchocephalia. We then estimated the maintenance of genetic diversity over 400 years (approximately 10 generations) using population viability analyses. Reproduction of tuatara is highly skewed, with as few as 30% of males mating across years. Predicted losses of heterozygosity over 10 generations were low (1-14%), and populations founded with more animals retained a greater proportion of the heterozygosity and allelic diversity of their source populations and founder groups. Greater male reproductive skew led to greater predicted losses of genetic diversity over 10 generations, but only accelerated the loss of genetic diversity at small population size (<250 animals). A reduction in reproductive skew at low density may facilitate the maintenance of genetic diversity in small reintroduced populations. If reproductive skew is high and density-independent, larger founder groups could be released to achieve genetic goals for management.
Epidemiology of Multiple Myeloma in the Czech Republic.
Maluskova, D; Svobodová, I; Kucerova, M; Brozova, L; Muzik, J; Jarkovský, J; Hájek, R; Maisnar, V; Dusek, L
2017-01-01
Multiple myeloma (MM) is a cancer of plasma cells with an incidence of 4.8 cases per 100,000 population in the Czech Republic in 2014; the burden of MM in the Czech Republic is moderate when compared to other European countries. This work brings the latest information on MM epidemiology in the Czech population. The Czech National Cancer Registry is the basic source of data for the population-based evaluation of MM epidemiology. This database also makes it possible to assess patient survival and to predict probable short-term as well as long-term trends in the treatment burden of the entire population. According to the latest Czech National Cancer Registry data, there were 504 new cases of MM and 376 deaths from MM in 2014. Since 2004, there has been a 26.9% increase in MM incidence and an 8.3% increase in MM mortality. In 2014, there were 1,982 persons living with MM or a history of MM, corresponding to a 74.4% increase when compared to MM prevalence in 2004. The 5-year survival of patients treated in the period 2010-2014 was nearly 40%. The available data make it possible to analyse long-term trends in MM epidemiology and to predict the future treatment burden as well as treatment results.
Key words: multiple myeloma - epidemiology - Czech National Cancer Registry - Registry of Monoclonal Gammopathies - Czech Republic.
Choi, Moo Jin; Choi, Byung Tae; Shin, Hwa Kyoung; Shin, Byung Cheul; Han, Yoo Kyoung; Baek, Jin Ung
2015-01-01
The major objectives of this study were to provide a list of candidate antiaging medicinal herbs that have been widely utilized in Korean medicine and to organize preliminary data for the benefit of experimental and clinical researchers to develop new drug therapies by analyzing previous studies. “Dongeuibogam,” a representative source of the Korean medicine literature, was selected to investigate candidate antiaging medicinal herbs and to identify appropriate terms that describe the specific antiaging effects that these herbs are predicted to elicit. In addition, we aimed to review previous studies that referenced the selected candidate antiaging medicinal herbs. From our chosen source, “Dongeuibogam,” we were able to screen 102 terms describing antiaging effects, which were further classified into 11 subtypes. Ninety-seven candidate antiaging medicinal herbs were selected using the criterion that their antiaging effects were described using the same terms as those employed in “Dongeuibogam.” These candidates were classified into 11 subtypes. Of the 97 candidate antiaging medicinal herbs selected, 47 are widely used by Korean medical doctors in Korea and were selected for further analysis of their antiaging effects. Overall, we found an average of 7.7 previous studies per candidate herb that described their antiaging effects. PMID:25861371
Radiological analysis of plutonium glass batches with natural/enriched boron
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rainisch, R.
2000-06-22
The disposition of surplus plutonium inventories by the US Department of Energy (DOE) includes the immobilization of certain plutonium materials in a borosilicate glass matrix, also referred to as vitrification. This paper addresses source terms of plutonium masses immobilized in a borosilicate glass matrix where the glass components include both natural boron and enriched boron. The calculated source terms pertain to neutron and gamma source strength (particles per second) and source spectrum changes. The calculated source terms corresponding to natural boron and enriched boron are compared to determine the benefits (decrease in radiation source terms) of using enriched boron. The analysis of plutonium glass source terms shows that a large component of the neutron source terms is due to (α,n) reactions. The americium-241 and plutonium present in the glass emit alpha particles (α). These alpha particles interact with low-Z nuclides like B-11, B-10, and O-17 in the glass to produce neutrons. The low-Z nuclides are referred to as target particles. The reference glass contains 9.4 wt percent B2O3. Boron-11 was found to strongly support the (α,n) reactions in the glass matrix. B-11 has a natural abundance of over 80 percent. The (α,n) reaction rates for B-10 are lower than for B-11, and the analysis shows that the plutonium glass neutron source terms can be reduced by artificially enriching natural boron with B-10. The natural abundance of B-10 is 19.9 percent. Boron enriched to 96 wt percent B-10 or above can be obtained commercially. Since lower source terms imply lower dose rates to radiation workers handling the plutonium glass materials, it is important to know the achievable decrease in source terms as a result of boron enrichment. Plutonium materials are normally handled in glove boxes with shielded glass windows, and the work entails both extremity and whole-body exposures. Lowering the source terms of the plutonium batches will make the handling of these materials less difficult and will reduce radiation exposure to operating workers.
Azevedo Peixoto, Leonardo de; Laviola, Bruno Galvêas; Alves, Alexandre Alonso; Rosado, Tatiana Barbosa; Bhering, Leonardo Lopes
2017-01-01
Genome-wide selection (GWS) is a promising approach for improving selection accuracy in plant breeding, particularly in species with long life cycles, such as Jatropha. Therefore, the objectives of this study were to estimate the genetic parameters for grain yield (GY) and the weight of 100 seeds (W100S) using restricted maximum likelihood (REML); to compare the performance of GWS methods to predict GY and W100S; and to estimate how many markers are needed to train the GWS model to obtain maximum accuracy. Eight GWS models were compared in terms of predictive ability. The impact that marker density had on the predictive ability was investigated using a varying number of markers, from 2 to 1,248. Because the genetic variance between evaluated genotypes was significant, it was possible to obtain selection gain. All of the GWS methods tested in this study can be used to predict GY and W100S in Jatropha. A training model fitted using 1,000 and 800 markers is sufficient to capture the maximum genetic variance and, consequently, the maximum prediction ability of GY and W100S, respectively. This study demonstrated the applicability of genome-wide prediction to identify useful genetic sources of GY and W100S for Jatropha breeding. Further research is needed to confirm the applicability of the proposed approach to other complex traits.
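As a concrete illustration of marker-based prediction, the sketch below fits a ridge-regression model (one member of the shrinkage family used in GWS) to synthetic genotypes and reports cross-validated predictive ability; it is not the study's pipeline, and all sizes and effect distributions are invented.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(4)

# Synthetic genotypes (0/1/2 allele dosages) and a polygenic trait.
n_ind, n_mark = 200, 1000
M = rng.integers(0, 3, size=(n_ind, n_mark)).astype(float)
beta = rng.normal(0.0, 0.05, n_mark)              # small marker effects
y = M @ beta + rng.normal(0.0, 1.0, n_ind)        # phenotype, e.g. grain yield

# Predictive ability: correlation between observed and CV-predicted values.
model = Ridge(alpha=100.0)                        # shrinkage strength (placeholder)
y_hat = cross_val_predict(model, M, y, cv=5)
print("predictive ability:", np.corrcoef(y, y_hat)[0, 1])
```

Repeating this with subsets of markers of increasing size reproduces, in miniature, the density study described in the abstract.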
Prediction of preterm deliveries from EHG signals using machine learning.
Fergus, Paul; Cheung, Pauline; Hussain, Abir; Al-Jumeily, Dhiya; Dobbins, Chelsea; Iram, Shamaila
2013-01-01
There has been some improvement in the treatment of preterm infants, which has helped to increase their chance of survival. However, the rate of premature births is still globally increasing. As a result, this group of infants is most at risk of developing severe medical conditions that can affect the respiratory, gastrointestinal, immune, central nervous, auditory and visual systems. In extreme cases, this can also lead to long-term conditions such as cerebral palsy, mental retardation, and learning difficulties, as well as poor health and growth. In the US alone, the societal and economic cost of preterm births was estimated, in 2005, to be $26.2 billion per annum. In the UK, this value was close to £2.95 billion in 2009. Many believe that a better understanding of why preterm births occur, and a strategic focus on prevention, will help to improve the health of children and reduce healthcare costs. At present, most methods of preterm birth prediction are subjective. However, a strong body of evidence suggests the analysis of uterine electrical signals (Electrohysterography) could provide a viable way of diagnosing true labour and predicting preterm deliveries. Most Electrohysterography studies focus on true labour detection during the final seven days before labour. The challenge is to utilise Electrohysterography techniques to predict preterm delivery earlier in the pregnancy. This paper explores this idea further and presents a supervised machine learning approach that classifies term and preterm records, using an open source dataset containing 300 records (38 preterm and 262 term). The synthetic minority oversampling technique is used to oversample the minority preterm class, and cross-validation techniques are used to evaluate the dataset against other similar studies. Our approach shows an improvement on existing studies with 96% sensitivity, 90% specificity, and a 95% area under the curve value with 8% global error using the polynomial classifier.
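A sketch of the oversampling-plus-classifier setup follows. Placing SMOTE inside an imbalanced-learn pipeline ensures the minority class is oversampled only within training folds (oversampling before splitting would leak information into the evaluation). The features below are random stand-ins shaped like the 38/262 class split, not the EHG records themselves.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)

# Stand-in EHG features: 262 term (label 0) and 38 preterm (label 1) records.
X = np.vstack([rng.normal(0.0, 1.0, (262, 10)),
               rng.normal(0.7, 1.0, (38, 10))])
y = np.array([0] * 262 + [1] * 38)

# SMOTE inside the pipeline: the minority class is oversampled per fold only.
clf = Pipeline([
    ("smote", SMOTE(random_state=0)),
    ("svc", SVC(kernel="poly", degree=3, C=1.0)),   # a polynomial classifier
])
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("AUC per fold:", np.round(auc, 3))
```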
Cellular evidence for selfish spermatogonial selection in aged human testes.
Maher, G J; Goriely, A; Wilkie, A O M
2014-05-01
Owing to a recent trend for delayed paternity, the genomic integrity of spermatozoa of older men has become a focus of increased interest. Older fathers are at higher risk for their children to be born with several monogenic conditions collectively termed paternal age effect (PAE) disorders, which include achondroplasia, Apert syndrome and Costello syndrome. These disorders are caused by specific mutations originating almost exclusively from the male germline, in genes encoding components of the tyrosine kinase receptor/RAS/MAPK signalling pathway. These particular mutations, occurring randomly during mitotic divisions of spermatogonial stem cells (SSCs), are predicted to confer a selective/growth advantage on the mutant SSC. This selective advantage leads to a clonal expansion of the mutant cells over time, which generates mutant spermatozoa at levels significantly above the background mutation rate. This phenomenon, termed selfish spermatogonial selection, is likely to occur in all men. In rare cases, probably because of additional mutational events, selfish spermatogonial selection may lead to spermatocytic seminoma. The studies that initially predicted the clonal nature of selfish spermatogonial selection were based on DNA analysis, rather than the visualization of mutant clones in intact testes. In a recent study that aimed to identify these clones directly, we stained serial sections of fixed testes for expression of melanoma antigen family A4 (MAGEA4), a marker of spermatogonia. A subset of seminiferous tubules with an appearance and distribution compatible with the predicted mutant clones were identified. In these tubules, termed 'immunopositive tubules', there is an increased density of spermatogonia positive for markers related to selfish selection (FGFR3) and SSC self-renewal (phosphorylated AKT). Here we detail the properties of the immunopositive tubules and how they relate to the predicted mutant clones, as well as discussing the utility of identifying the potential cellular source of PAE mutations. © 2013 American Society of Andrology and European Academy of Andrology.
Kail, Jochem; Guse, Björn; Radinger, Johannes; Schröder, Maria; Kiesel, Jens; Kleinhans, Maarten; Schuurman, Filip; Fohrer, Nicola; Hering, Daniel; Wolter, Christian
2015-01-01
River biota are affected by global to reach-scale pressures, but most approaches for predicting the biota of rivers focus on river reach- or segment-scale processes and habitats. Moreover, these approaches do not consider long-term morphological changes that affect habitat conditions. In this study, a modelling framework was further developed and tested to assess the effect of pressures at different spatial scales on reach-scale habitat conditions and biota. Ecohydrological and 1D hydrodynamic models were used to predict discharge and water quality at the catchment scale and the resulting water level at the downstream end of a study reach. Long-term reach morphology was modelled using empirical regime equations, meander migration and 2D morphodynamic models. The respective flow and substrate conditions in the study reach were predicted using a 2D hydrodynamic model, and the suitability of these habitats was assessed with novel habitat models. In addition, dispersal models for fish and macroinvertebrates were developed to assess the re-colonization potential and to finally compare habitat suitability and the availability/ability of species to colonize these habitats. Applicability was tested and model performance was assessed by comparing observed and predicted conditions in the lowland Treene River in northern Germany. Technically, it was possible to link the different models, but future applications would benefit from the development of open-source software for all modelling steps to enable fully automated model runs. Future research needs concern the physical modelling of long-term morphodynamics, feedback of biota (e.g., macrophytes) on abiotic habitat conditions, species interactions, and empirical data on the hydraulic habitat suitability and dispersal abilities of macroinvertebrates. The modelling framework is flexible and allows for including additional models and investigating different research and management questions, e.g., in climate impact research as well as river restoration and management. PMID:26114430
French, N P; Clancy, D; Davison, H C; Trees, A J
1999-10-01
The transmission and control of Neospora caninum infection in dairy cattle were examined using deterministic and stochastic models. Parameter estimates were derived from recent studies conducted in the UK and from the published literature. Three routes of transmission were considered: maternal vertical transmission with a high probability (0.95), horizontal transmission from infected cattle within the herd, and horizontal transmission from an independent external source. Putative infection via pooled colostrum was used as an example of within-herd horizontal transmission, and the recent finding that the dog is a definitive host of N. caninum supported the inclusion of an external independent source of infection. The predicted amount of horizontal transmission required to maintain infection at levels commonly observed in field studies in the UK and elsewhere was consistent with that observed in studies of post-natal seroconversion (0.85-9.0 per 100 cow-years). A stochastic version of the model was used to simulate the spread of infection in herds of 100 cattle, with a mean infection prevalence similar to that observed in UK studies (around 20%). The distributions of infected and uninfected cattle corresponded closely to Normal distributions, with S.D.s of 6.3 and 7.0, respectively. Control measures were considered by altering birth, death and horizontal transmission parameters. A policy of annual culling of infected cattle very rapidly reduced the prevalence of infection, and was shown to be the most effective method of control in the short term. Not breeding replacements from infected cattle was also effective in the short term, particularly in herds with a higher turnover of cattle. However, the long-term effectiveness of these measures depended on the amount and source of horizontal infection. If the level of within-herd transmission was above a critical threshold, then a combination of reducing within-herd, and blocking external sources of transmission was required to permanently eliminate infection.
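A toy discrete-time version of such a herd model is sketched below. Only the vertical transmission probability (0.95) comes from the abstract; the horizontal transmission, replacement, and prevalence parameters are invented placeholders, and the annual-step structure is a simplification of the authors' models.

```python
import numpy as np

rng = np.random.default_rng(6)

def simulate(years=20, herd=100, prev0=0.2, p_vert=0.95,
             h_within=0.002, h_external=0.01, replace=0.25,
             cull_infected=False):
    """Toy annual-step model of N. caninum prevalence in a dairy herd."""
    infected = int(prev0 * herd)
    for _ in range(years):
        # Horizontal infection: escape each infected herd-mate and the
        # external (e.g., definitive-host) source independently.
        sus = herd - infected
        p_inf = 1.0 - (1.0 - h_within) ** infected * (1.0 - h_external)
        infected += rng.binomial(sus, p_inf)
        # Annual replacement: cull at random, or cull infected cows first.
        out = int(replace * herd)
        if cull_infected:
            culled = min(out, infected)
        else:
            culled = rng.hypergeometric(infected, herd - infected, out)
        infected -= culled
        remaining = herd - out
        # Replacements bred from randomly chosen remaining dams;
        # calves of infected dams are infected with probability p_vert.
        dams_inf = rng.hypergeometric(infected, remaining - infected, out)
        infected += rng.binomial(dams_inf, p_vert)
    return infected / herd

print("no control, final prevalence:", simulate())
print("cull infected, final prevalence:", simulate(cull_infected=True))
```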
de Valk, Josje M; Wnuk, Ewelina; Huisman, John L A; Majid, Asifa
2017-08-01
People appear to have systematic associations between odors and colors. Previous research has emphasized the perceptual nature of these associations, but little attention has been paid to what role language might play. It is possible odor-color associations arise through a process of labeling; that is, participants select a descriptor for an odor and then choose a color accordingly (e.g., banana odor → "banana" label → yellow). If correct, this would predict odor-color associations would differ as odor descriptions differ. We compared speakers of Dutch (who overwhelmingly describe odors by referring to the source; e.g., smells like banana) with speakers of Maniq and Thai (who also describe odors with dedicated, abstract smell vocabulary; e.g., musty), and tested whether the type of descriptor mattered for odor-color associations. Participants were asked to select a color that they associated with an odor on two separate occasions (to test for consistency), and finally to label the odors. We found the hunter-gatherer Maniq showed few, if any, consistent or accurate odor-color associations. More importantly, we found the types of descriptors used to name the smells were related to the odor-color associations. When people used abstract smell terms to describe odors, they were less likely to choose a color match, but when they described an odor with a source-based term, their color choices more accurately reflected the odor source, particularly when the odor source was named correctly (e.g., banana odor → yellow). This suggests language is an important factor in odor-color cross-modal associations.
NASA Technical Reports Server (NTRS)
Kelecy, Tom; Payne, Tim; Thurston, Robin; Stansbery, Gene
2007-01-01
A population of deep space objects is thought to be high area-to-mass ratio (AMR) debris having origins from sources in the geosynchronous orbit (GEO) belt. The typical AMR values have been observed to range anywhere from 1's to 10's of m²/kg, and hence, higher than average solar radiation pressure effects result in long-term migration of eccentricity (0.1-0.6) and inclination over time. However, the nature of the debris' orientation-dependent dynamics also results in time-varying solar radiation forces about the average, which complicate the short-term orbit determination processing. The orbit determination results are presented for several of these debris objects, and highlight their unique and varied dynamic attributes. Estimation of the solar pressure dynamics over time scales suitable for resolving the shorter-term dynamics improves the orbit estimation, and hence, the orbit predictions needed to conduct follow-up observations.
Towards an operational high-resolution air quality forecasting system at ECCC
NASA Astrophysics Data System (ADS)
Munoz-Alpizar, Rodrigo; Stroud, Craig; Ren, Shuzhan; Belair, Stephane; Leroyer, Sylvie; Souvanlasy, Vanh; Spacek, Lubos; Pavlovic, Radenko; Davignon, Didier; Moran, Moran
2017-04-01
Urban environments are particularly sensitive to weather, air quality (AQ), and climatic conditions. Despite the efforts made in Canada to reduce pollution in urban areas, AQ continues to be a concern for the population, especially during short-term episodes that could lead to exceedances of daily air quality standards. Furthermore, urban air pollution has long been associated with significant adverse health effects. In Canada, the large percentage of the population living in urban areas (about 81%, according to Canada's 2011 census) is exposed to elevated air pollution due to local emission sources. Thus, in order to improve the services offered to the Canadian public, Environment and Climate Change Canada has launched an initiative to develop a high-resolution air quality prediction capacity for urban areas in Canada. This presentation will show observed pollution trends (2010-2016) for Canadian mega-cities along with some preliminary high-resolution air quality modelling results. Short-term and long-term plans for urban AQ forecasting in Canada will also be described.
NASA Technical Reports Server (NTRS)
Stapfer, G.; Truscello, V. C.
1976-01-01
The successful utilization of a radioisotope thermoelectric generator (RTG) as the power source for spaceflight missions requires that the performance of such an RTG be predictable throughout the mission. Several mechanisms occur within the generator which tend to degrade the performance as a function of operating time. The impact which these mechanisms have on the available output power of an RTG depends primarily on such factors as time, temperature and self-limiting effects. The relative magnitudes, rates and temperature dependency of these various degradation mechanisms have been investigated separately by coupon experiments as well as 4-couple and 18-couple module experiments. This paper discusses the different individual mechanisms and summarizes their combined influence on the performance of an RTG. Also presented as part of the RTG long-term performance characteristics is the sensitivity of the available RTG output power to variations of the individual degradation mechanisms thus identifying the areas of greatest concern for a successful long-term mission.
NASA Astrophysics Data System (ADS)
Massa, Corrado
1996-03-01
The consequences of a cosmological Λ term varying as S^-2 in a spatially isotropic universe with scale factor S and conserved matter tensor are investigated. One finds a perpetually expanding universe with positive Λ and gravitational 'constant' G that increases with time. The 'hard' equation of state 3P > U (U the mass-energy density, P the scalar pressure) applied to the early universe leads to the expansion law S ∝ t (t the cosmic time), which solves the horizon problem with no need of inflation. The flatness problem is also resolved without inflation. The model does not affect the well known predictions on the cosmic light-element abundances which come from standard big bang cosmology. In the present, matter-dominated universe one finds dG/dt = 2ΛH/U (H is the Hubble parameter), which is consistent with observations provided Λ < 10^-57 cm^-2. Asymptotically (S → ∞) the Λ term equals GU/2, in agreement with other studies.
Snow, Mathew S; Snyder, Darin C; Clark, Sue B; Kelley, Morgan; Delmore, James E
2015-03-03
Radiometric and mass spectrometric analyses of Cs contamination in the environment can reveal the location of Cs emission sources, release mechanisms, modes of transport, prediction of future contamination migration, and attribution of contamination to specific generator(s) and/or process(es). The Subsurface Disposal Area (SDA) at Idaho National Laboratory (INL) represents a complicated case study for demonstrating the current capabilities and limitations to environmental Cs analyses. (137)Cs distribution patterns, (135)Cs/(137)Cs isotope ratios, known Cs chemistry at this site, and historical records enable narrowing the list of possible emission sources and release events to a single source and event, with the SDA identified as the emission source and flood transport of material from within Pit 9 and Trench 48 as the primary release event. These data combined allow refining the possible number of waste generators from dozens to a single generator, with INL on-site research and reactor programs identified as the most likely waste generator. A discussion on the ultimate limitations to the information that (135)Cs/(137)Cs ratios alone can provide is presented and includes (1) uncertainties in the exact date of the fission event and (2) possibility of mixing between different Cs source terms (including nuclear weapons fallout and a source of interest).
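A small worked example of why the ratio carries timing information: 135Cs is effectively stable on these time scales (half-life roughly 2.3 Myr) while 137Cs decays with a half-life near 30.1 years, so the 135Cs/137Cs atom ratio grows exponentially from its value at the fission event. The initial and measured ratios below are placeholders, not values from this study.

```python
import numpy as np

T_HALF_137 = 30.08                        # years, 137Cs half-life
LAM_137 = np.log(2.0) / T_HALF_137        # 135Cs decay is negligible here

def age_from_ratio(r_measured, r_initial):
    """Years since fission, from growth of the 135Cs/137Cs atom ratio."""
    return np.log(r_measured / r_initial) / LAM_137

# Placeholder numbers: an initial ratio set by the reactor/weapon source,
# and a ratio measured in environmental samples today.
print("apparent age (years):", age_from_ratio(r_measured=2.4, r_initial=0.6))
```

The two limitations named in the abstract map directly onto the two inputs: uncertainty in the fission date blurs r_initial's reference time, and mixing of source terms means r_measured may not correspond to any single r_initial.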
Searching for propeller-phase ULXs in the XMM-Newton Serendipitous Source Catalogue
NASA Astrophysics Data System (ADS)
Earnshaw, H. P.; Roberts, T. P.; Sathyaprakash, R.
2018-05-01
We search for transient sources in a sample of ultraluminous X-ray sources (ULXs) from the 3XMM-DR4 release of the XMM-Newton Serendipitous Source Catalogue in order to find candidate neutron star ULXs alternating between an accreting state and the propeller regime, in which the luminosity drops dramatically. By examining their fluxes and flux upper limits, we identify five ULXs that demonstrate long-term variability of over an order of magnitude. Using Chandra and Swift data to further characterize their light curves, we find that two of these sources are detected only once and could be X-ray binaries in outburst that only briefly reach ULX luminosities. Two others are consistent with being super-Eddington accreting sources with high levels of inter-observation variability. One source, M51 ULX-4, demonstrates apparent bimodal flux behaviour that could indicate the propeller regime. It has a hard X-ray spectrum, but no significant pulsations in its timing data, although with an upper limit of 10 per cent of the signal pulsed at ~1.5 Hz a pulsating ULX cannot be excluded, particularly if the pulsations are transient. By simulating XMM-Newton observations of a population of pulsating ULXs, we predict that there could be approximately 200 other bimodal ULXs that have not been observed sufficiently well by XMM-Newton to be identified as transient.
NASA Astrophysics Data System (ADS)
Castelo, A.; Mendioroz, A.; Celorrio, R.; Salazar, A.; López de Uralde, P.; Gorosmendi, I.; Gorostegui-Colinas, E.
2017-05-01
Lock-in vibrothermography is used to characterize vertical kissing and open cracks in metals. In this technique the crack heats up during ultrasound excitation due mainly to friction between the defect's faces. We have solved the inverse problem, consisting in determining the heat source distribution produced at cracks under amplitude-modulated ultrasound excitation, which is an ill-posed inverse problem. As a consequence, the minimization of the residual is unstable. We have stabilized the algorithm by introducing a penalty term based on the Total Variation functional. In the inversion, we combine amplitude and phase surface temperature data obtained at several modulation frequencies. Inversions of synthetic data with added noise indicate that compact heat sources are characterized accurately and that the particular upper contours can be retrieved for shallow heat sources. The overall shape of open and homogeneous semicircular strip-shaped heat sources representing open half-penny cracks can also be retrieved, but the reconstruction of the deeper end of the heat source loses contrast. Angle-, radius- and depth-dependent inhomogeneous heat flux distributions within these semicircular strips can also be qualitatively characterized. Reconstructions of experimental data taken on samples containing calibrated heat sources confirm the predictions from reconstructions of synthetic data. We also present inversions of experimental data obtained from a real welded Inconel 718 specimen. The results are in good qualitative agreement with the results of liquid penetrant testing.
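In one dimension the stabilized inversion looks like the sketch below: fit heat-source amplitudes q through a blurring forward operator with a smoothed Total Variation penalty. The real problem is two-dimensional and combines amplitude and phase data at several modulation frequencies; everything here (kernel, noise level, penalty weight) is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Ill-posed toy forward model: a Gaussian kernel K blurs the heat sources.
n = 60
xg = np.arange(n)
K = np.exp(-0.5 * ((xg[:, None] - xg[None, :]) / 4.0) ** 2)
q_true = np.zeros(n); q_true[25:35] = 1.0            # compact heat source
d = K @ q_true + rng.normal(0.0, 0.05, n)            # noisy "surface" data

def objective(q, lam=0.05, eps=1e-3):
    resid = K @ q - d
    tv = np.sum(np.sqrt(np.diff(q) ** 2 + eps ** 2))  # smoothed TV penalty
    return 0.5 * resid @ resid + lam * tv

res = minimize(objective, np.zeros(n), method="L-BFGS-B",
               bounds=[(0.0, None)] * n)              # heat sources >= 0
print("recovered support indices:", np.where(res.x > 0.1)[0])
```

Without the TV term (lam = 0) the same fit amplifies the noise into oscillatory sources, which is the instability the penalty is there to suppress.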
NASA Astrophysics Data System (ADS)
Perez, Pedro B.; Hamawi, John N.
2017-09-01
Nuclear power plant radiation protection design features are based on radionuclide source terms derived from conservative assumptions that envelope expected operating experience. Two parameters that significantly affect the radionuclide concentrations in the source term are the failed fuel fraction and the effective fission product appearance rate coefficients. The failed fuel fraction may be a regulatory-based assumption, as in the U.S. Appearance rate coefficients are not specified in regulatory requirements, but have been referenced to experimental data that is over 50 years old. No doubt the source terms are conservative, as demonstrated by operating experience that has included failed fuel, but they may be too conservative, leading to over-designed shielding for normal operations as an example. Design basis source term methodologies for normal operations had not advanced until EPRI published in 2015 an updated ANSI/ANS 18.1 source term basis document. Our paper revisits the fission product appearance rate coefficients as applied in the derivation of source terms following the original U.S. NRC NUREG-0017 methodology. New coefficients have been calculated based on recent EPRI results which demonstrate the conservatism in nuclear power plant shielding design.
Outlier Resistant Predictive Source Encoding for a Gaussian Stationary Nominal Source.
1987-09-18
…breakdown point and influence function. The proposed sequence of predictive encoders attains strictly positive breakdown point and uniformly bounded influence function, at the expense of increased mean difference-squared distortion and differential entropy, at the Gaussian nominal source.
Predictive Performance Assessment: Trait and State Dimensions Should not be Confused
NASA Astrophysics Data System (ADS)
Pattyn, N.; Migeotte, P.-F.; Morais, J.; Cluydts, R.; Soetens, E.; Meeusen, R.; de Schutter, G.; Nederhof, E.; Kolinsky, R.
2008-06-01
One of the major aims of performance investigation is to obtain a measure predicting real-life performance, in order to prevent the consequences of a potential decrement. Whereas the predictive validity of such assessment has been extensively described for long-term outcomes, as is the case for testing in a selection context, equivalent evidence is lacking regarding the short-term predictive value of cognitive testing, i.e., whether these results reflect real-life performance on an immediately subsequent task. In this series of experiments, we investigated both the medium-term and short-term predictive value of psychophysiological testing with regard to real-life performance in two operational settings: military student pilots with regard to their success on an evaluation flight, and special forces candidates with regard to their performance on their training course. Our results showed some relationships between test performance and medium-term outcomes. However, no short-term predictive value could be identified for cognitive testing, despite the fact that the physiological data showed interesting trends. We recommend a critical distinction between "state" and "trait" dimensions of performance with regard to the predictive value of testing.
Characterizing Uncertainty and Variability in PBPK Models ...
Mode-of-action based risk and safety assessments can rely upon tissue dosimetry estimates in animals and humans obtained from physiologically-based pharmacokinetic (PBPK) modeling. However, risk assessment also increasingly requires characterization of uncertainty and variability; such characterization for PBPK model predictions represents a continuing challenge to both modelers and users. Current practices show significant progress in specifying deterministic biological models and the non-deterministic (often statistical) models, estimating their parameters using diverse data sets from multiple sources, and using them to make predictions and characterize uncertainty and variability. The International Workshop on Uncertainty and Variability in PBPK Models, held Oct 31-Nov 2, 2006, sought to identify the state-of-the-science in this area and recommend priorities for research and changes in practice and implementation. For the short term, these include: (1) multidisciplinary teams to integrate deterministic and non-deterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through more complete documentation of the model structure(s) and parameter values, the results of sensitivity and other analyses, and supporting, discrepant, or excluded data. Longer-term needs include: (1) theoretic and practical methodological improvements …
NASA Technical Reports Server (NTRS)
Shapiro, I. I.; Counselman, C. C., III
1975-01-01
The uses of radar observations of planets and very-long-baseline radio interferometric observations of extragalactic objects to test theories of gravitation are described in detail, with special emphasis on sources of error. The accuracy achievable in these tests with data already obtained can be summarized in terms of: retardation of signal propagation (radar), deflection of radio waves (interferometry), advance of planetary perihelia (radar), gravitational quadrupole moment of the sun (radar), and time variation of the gravitational constant (radar). The analyses completed to date have yielded no significant disagreement with the predictions of general relativity.
Test Equal Bending by Gravity for Space and Time
NASA Astrophysics Data System (ADS)
Sweetser, Douglas
2009-05-01
For the simplest problem of gravity - a static, non-rotating, spherically symmetric source - the solution for spacetime bending around the Sun should be evenly split between time and space. That is true to first order in M/R and is confirmed by experiment. At second order, general relativity predicts different amounts of contribution from time and space without a physical justification. I show that an exponential metric is consistent with light bending to first order and measurably different at second order. All terms to all orders show equal contributions from space and time. Beautiful minimalism is Nature's way.
Three-dimensional calculations of rotor-airframe interaction in forward flight
NASA Technical Reports Server (NTRS)
Zori, Laith A. J.; Mathur, Sanjay R.; Rajagopalan, R. G.
1992-01-01
A method for analyzing the mutual aerodynamic interaction between a rotor and an airframe model has been developed. This technique models the rotor implicitly through the source terms of the momentum equations. A three-dimensional, incompressible, laminar, Navier-Stokes solver in cylindrical coordinates was developed for analyzing the rotor/airframe problem. The calculations are performed on a simplified model at an advance ratio of 0.1. The airframe surface pressure predictions are found to be in good agreement with wind tunnel test data. Results are presented for velocity and pressure field distributions in the wake of the rotor.
Dispersal, deposition and collective doses after the Chernobyl disaster.
Fairlie, Ian
2007-01-01
This article discusses the dispersal, deposition and collective doses of the radioactive fallout from the Chernobyl accident. It explains that, although Belarus, Ukraine and Russia were heavily contaminated by the Chernobyl fallout, more than half of the fallout was deposited outside these countries, particularly in Western Europe. Indeed, about 40 per cent of the surface area of Europe was contaminated. Collective doses are predicted to result in 30,000 to 60,000 excess cancer deaths throughout the northern hemisphere, mostly in western Europe. The article also estimates that the caesium-137 source term was about a third higher than official figures.
Light-assisted templated self assembly using photonic crystal slabs.
Mejia, Camilo A; Dutt, Avik; Povinelli, Michelle L
2011-06-06
We explore a technique which we term light-assisted templated self-assembly. We calculate the optical forces on colloidal particles over a photonic crystal slab. We show that exciting a guided resonance mode of the slab yields a resonantly-enhanced, attractive optical force. We calculate the lateral optical forces above the slab and predict that stably trapped periodic patterns of particles are dependent on wavelength and polarization. Tuning the wavelength or polarization of the light source may thus allow the formation and reconfiguration of patterns. We expect that this technique may be used to design all-optically reconfigurable photonic devices.
Modeling Scramjet Flows with Variable Turbulent Prandtl and Schmidt Numbers
NASA Technical Reports Server (NTRS)
Xiao, X.; Hassan, H. A.; Baurle, R. A.
2006-01-01
A complete turbulence model, where the turbulent Prandtl and Schmidt numbers are calculated as part of the solution and where averages involving chemical source terms are modeled, is presented. The ability to avoid the use of assumed or evolution Probability Distribution Functions (PDFs) results in a highly efficient algorithm for reacting flows. The predictions of the model are compared with two sets of experiments involving supersonic mixing and one involving supersonic combustion. The results demonstrate the need to consider turbulence/chemistry interactions in supersonic combustion. In general, good agreement with experiment is indicated.
LANDSAT 4 band 6 data evaluation
NASA Technical Reports Server (NTRS)
1983-01-01
Satellite data collected over Lake Ontario were processed and compared with observed surface temperature values. This involved computing apparent radiance values, from averaged digital count values, for each point where surface temperatures were known. These radiance values were then converted using the LOWTRAN 5A atmospheric propagation model, which was modified by incorporating a spectral response function for the LANDSAT band 6 sensors. A downwelled radiance term derived from LOWTRAN was included to account for reflected sky radiance. A blackbody equivalent source radiance was computed. Measured temperatures were plotted against the predicted temperatures. The RMS error between the data sets is 0.51 K.
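The processing chain described above reduces to a few lines of arithmetic. The sketch below is a hedged illustration, not the study's code: the count-to-radiance calibration and Planck-fit constants are example values for a thermal band, and the data are synthetic.

```python
import numpy as np

# Illustrative count -> radiance -> brightness-temperature chain with
# example calibration constants (not the study's actual values).
gain, offset = 0.0552, 1.238     # counts to W m^-2 sr^-1 um^-1 (example)
K1, K2 = 607.76, 1260.56         # band-average Planck-fit constants (example)

counts = np.array([128.0, 131.0, 125.0, 140.0])
T_measured = np.array([292.5, 293.8, 291.6, 298.2])   # synthetic ground truth, K

radiance = gain * counts + offset                 # apparent band radiance
T_predicted = K2 / np.log(K1 / radiance + 1.0)    # inverted Planck relation

rms = np.sqrt(np.mean((T_predicted - T_measured) ** 2))
print(f"RMS error: {rms:.2f} K")
```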
A Second Law Based Unstructured Finite Volume Procedure for Generalized Flow Simulation
NASA Technical Reports Server (NTRS)
Majumdar, Alok
1998-01-01
An unstructured finite volume procedure has been developed for steady and transient thermo-fluid dynamic analysis of fluid systems and components. The procedure is applicable to a flow network consisting of pipes and various fittings where the flow is assumed to be one dimensional. It can also be used to simulate flow in a component by modeling a multi-dimensional flow with the same numerical scheme. The flow domain is discretized into a number of interconnected control volumes located arbitrarily in space. The conservation equations for each control volume account for the transport of mass, momentum and entropy from the neighboring control volumes. In addition, they also include the sources of each conserved variable and time dependent terms. The source term of the entropy equation contains entropy generation due to heat transfer and fluid friction. Thermodynamic properties are computed from the equation of state of a real fluid. The system of equations is solved by a hybrid numerical method which is a combination of simultaneous Newton-Raphson and successive substitution schemes. The paper also describes the application and verification of the procedure by comparing its predictions with the analytical and numerical solutions of several benchmark problems.
Incorporating the eruptive history in a stochastic model for volcanic eruptions
NASA Astrophysics Data System (ADS)
Bebbington, Mark
2008-08-01
We show how a stochastic version of a general load-and-discharge model for volcanic eruptions can be implemented. The model tracks the history of the volcano through a quantity proportional to stored magma volume. Thus large eruptions can influence the activity rate for a considerable time afterwards, rather than only during the next repose as in the time-predictable model. The model can be fitted to data using point-process methods. Applied to flank eruptions of Mount Etna, it exhibits possible long-term quasi-cyclic behavior; applied to Mauna Loa, it shows a long-term decrease in activity. An extension to multiple interacting sources is outlined, which may be different eruption styles or locations, or different volcanoes. This can be used to identify an 'average interaction' between the sources. We find significant evidence that summit eruptions of Mount Etna are dependent on preceding flank eruptions, with each type of eruption triggering the other. Fitted to Mauna Loa and Kilauea, the model shows a marginally significant relationship between eruptions of Mauna Loa and Kilauea, consistent with the invasion of the latter's plumbing system by magma from the former.
NASA Astrophysics Data System (ADS)
Pulinets, S. A.; Ouzounov, D. P.; Karelin, A. V.; Davidenko, D. V.
2015-07-01
This paper describes the current understanding of the interaction between geospheres arising from a complex set of physical and chemical processes under the influence of ionization. The sources of ionization include the Earth's natural radioactivity and its intensification before earthquakes in seismically active regions, anthropogenic radioactivity caused by nuclear weapon testing and accidents at nuclear power plants and radioactive waste storage sites, the impact of galactic and solar cosmic rays, and active geophysical experiments using artificial ionization equipment. This approach treats the environment as an open complex system with dissipation, where the inherent processes can be considered within a synergistic framework. We demonstrate the synergy between the evolution of thermal and electromagnetic anomalies in the Earth's atmosphere, ionosphere, and magnetosphere. This makes it possible to determine the direction of the interaction process, which is especially important in applications related to short-term earthquake prediction. That is why the emphasis in this study is on the processes preceding the final stage of earthquake preparation; the effects of other ionization sources are used to demonstrate that the model is versatile and broadly applicable in geophysics.
Stafoggia, Massimo; Schwartz, Joel; Badaloni, Chiara; Bellander, Tom; Alessandrini, Ester; Cattani, Giorgio; De' Donato, Francesca; Gaeta, Alessandra; Leone, Gianluca; Lyapustin, Alexei; Sorek-Hamer, Meytar; de Hoogh, Kees; Di, Qian; Forastiere, Francesco; Kloog, Itai
2017-02-01
Health effects of air pollution, especially particulate matter (PM), have been widely investigated. However, most studies rely on a few monitors located in urban areas for short-term assessments, or on land-use/dispersion modelling for long-term evaluations, again mostly in cities. Recently, the availability of finely resolved satellite data has provided an opportunity to estimate daily concentrations of air pollutants over wide spatio-temporal domains. Italy lacks a robust, validated, high-resolution spatio-temporal model of particulate matter. The complex topography and the mixture of air from both natural and anthropogenic sources are great challenges that are difficult to address. We combined finely resolved data on Aerosol Optical Depth (AOD) from the Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm, ground-level PM10 measurements, land-use variables and meteorological parameters into a four-stage mixed model framework to derive estimates of daily PM10 concentrations on a 1-km2 grid over Italy for the years 2006-2012. We checked the performance of our models by applying 10-fold cross-validation (CV) for each year. Our models displayed good fit, with mean CV-R2 = 0.65 and little bias (average slope of predicted vs. observed PM10 = 0.99). Out-of-sample predictions were more accurate in Northern Italy (Po valley) and large conurbations (e.g. Rome), for background monitoring stations, and in the winter season. The resulting concentration maps showed the highest average PM10 levels in specific areas (the Po river valley and the main industrial and metropolitan areas), with decreasing trends over time. Our daily predictions of PM10 concentrations across the whole of Italy will allow, for the first time, estimation of the long-term and short-term effects of air pollution nationwide, even in areas lacking monitoring data.
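The two validation diagnostics quoted above (CV-R2 and the slope of predicted versus observed concentrations) can be reproduced generically with scikit-learn. The sketch below uses synthetic stand-ins for the AOD, meteorological, and land-use predictors and a single generic learner rather than the paper's four-stage mixed model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import KFold, cross_val_predict

# Synthetic stand-ins for predictors (AOD, meteorology, land use) and PM10.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = X @ np.array([3.0, 1.5, 0.5, 0.0, -1.0]) + rng.normal(0.0, 1.0, 500)

# 10-fold cross-validated predictions, as in the paper's CV scheme.
cv = KFold(n_splits=10, shuffle=True, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0)
y_hat = cross_val_predict(model, X, y, cv=cv)

print("CV-R2:", round(r2_score(y, y_hat), 2))
slope = LinearRegression().fit(y_hat.reshape(-1, 1), y).coef_[0]
print("slope of observed vs predicted:", round(slope, 2))
```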
Cosmic-ray record in solar system matter
NASA Technical Reports Server (NTRS)
Reedy, R. C.; Arnold, J. R.; Lal, D.
1983-01-01
The interaction of galactic cosmic rays (GCR) and solar cosmic rays (SCR) with bodies in the solar system is discussed, and what the record of that interaction reveals about the history of the solar system is considered. The influence of the energy, charge, and mass of the particles on the interaction is addressed, showing long-term average fluxes of solar protons, predicted production rates for heavy-nuclei tracks and various radionuclides as a function of depth in lunar rock, and integral fluxes of protons emitted by solar flares. The variation of the earth's magnetic field, the gardening of the lunar surface, and the source of meteorites and cosmic dust are studied using the cosmic ray record. The time variation of GCR, SCR, and VH and VVH nuclei is discussed for both the short and the long term.
Contaminant dispersal in bounded turbulent shear flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wallace, J.M.; Bernard, P.S.; Chiang, K.F.
The dispersion of smoke downstream of a line source at the wall and at y+ = 30 in a turbulent boundary layer has been predicted with a non-local model of the scalar fluxes ūc and v̄c. The predicted plume from the wall source has been compared to high-Schmidt-number experimental measurements using a combination of hot-wire anemometry, to obtain velocity component data, synchronously with concentration data obtained optically. The predicted plumes from the source at y+ = 30 and at the wall have also been compared to a low-Schmidt-number direct numerical simulation. Near the source, the non-local flux models give considerably better predictions than models which account solely for mean gradient transport. At a sufficient distance downstream, the gradient models give reasonably good predictions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zachara, John M.; Chen, Xingyuan; Murray, Chris
A tightly spaced well-field within a groundwater uranium (U) plume in the groundwater-surface water transition zone was monitored for a three-year period for groundwater elevation and dissolved solutes. The plume discharges to the Columbia River, which displays a dramatic spring stage surge resulting from mountain snowmelt. Groundwater exhibits a low hydrologic gradient and chemical differences with river water. River water intrudes the site in spring. Specific aims were to assess the impacts of river intrusion on dissolved uranium (Uaq), specific conductance (SpC), and other solutes, and to discriminate between transport, geochemical, and source term heterogeneity effects. Time series trends for Uaq and SpC were complex and displayed large temporal well-to-well variability as a result of water table elevation fluctuations, river water intrusion, and changes in groundwater flow directions. The wells were clustered into subsets exhibiting common temporal behaviors resulting from the intrusion dynamics of river water and the location of source terms. Concentration hot spots were observed in groundwater that varied in location with increasing water table elevation. Heuristic reactive transport modeling with PFLOTRAN demonstrated that mobilized U was transported between wells and source terms in complex trajectories, and was diluted as river water entered and exited the groundwater system. While uranium time-series concentration trends varied significantly from year to year as a result of climate-caused differences in the spring hydrograph, common and partly predictable response patterns were observed that were driven by water table elevation and by the extent and duration of the river water intrusion event.
Mason, L; Peters, E; Williams, S C; Kumari, V
2017-01-17
Little is known about the psychobiological mechanisms of cognitive behavioural therapy for psychosis (CBTp) and which specific processes are key in predicting favourable long-term outcomes. Following theoretical models of psychosis, this proof-of-concept study investigated whether the long-term recovery path of CBTp completers can be predicted by the neural changes in threat-based social affective processing that occur during CBTp. We followed up 22 participants who had undergone a social affective processing task during functional magnetic resonance imaging, along with self-report and clinician-administered symptom measures, before and after receiving CBTp. Monthly ratings of psychotic and affective symptoms were obtained retrospectively across 8 years since receiving CBTp, plus self-reported recovery at final follow-up. We investigated whether these long-term outcomes were predicted by CBTp-led changes in functional connections with the dorsal prefrontal cortex and amygdala during the processing of threatening and prosocial facial affect. Whereas long-term psychotic symptoms were predicted by changes in prefrontal connections during prosocial facial affective processing, long-term affective symptoms were predicted by threat-related amygdalo-inferior parietal lobule connectivity. Greater increases in dorsolateral prefrontal cortex connectivity with the amygdala following CBTp also predicted higher subjective ratings of recovery at long-term follow-up. These findings show that reorganisation occurring at the neural level following psychological therapy can predict the subsequent recovery path of people with psychosis across 8 years. This novel methodology shows promise for further studies with larger sample sizes, which are needed to better examine the sensitivity of psychobiological processes, in comparison to existing clinical measures, in predicting long-term outcomes.
Barbeta, Adrià; Mejía-Chang, Monica; Ogaya, Romà; Voltas, Jordi; Dawson, Todd E; Peñuelas, Josep
2015-03-01
Vegetation in water-limited ecosystems relies strongly on access to deep water reserves to withstand dry periods. Most of these ecosystems have shallow soils over deep groundwater reserves. Understanding the functioning and functional plasticity of species-specific root systems, and the patterns of, and differences in, the use of water sources under more frequent or intense droughts, is therefore necessary to properly predict the responses of seasonally dry ecosystems to future climate. We used stable isotopes to investigate the seasonal patterns of water uptake by a sclerophyll forest on sloped terrain with shallow soils. We assessed the effect of a long-term experimental drought (12 years) and the added impact of an extreme natural drought that produced widespread tree mortality and crown defoliation. The dominant species, Quercus ilex, Arbutus unedo and Phillyrea latifolia, all have dimorphic root systems enabling them to access different water sources in space and time. The plants extracted water mainly from the soil in the cold and wet seasons but increased their use of groundwater during the summer drought. Interestingly, the plants subjected to the long-term experimental drought shifted water uptake toward deeper (10-35 cm) soil layers during the wet season and reduced groundwater uptake in summer, indicating plasticity in the functional distribution of fine roots that dampened the effect of our experimental drought over the long term. An extreme drought in 2011, however, further reduced the contribution of deep soil layers and groundwater to transpiration, which resulted in greater crown defoliation in the drought-affected plants. This study suggests that extreme droughts aggravate moderate but persistent drier conditions (simulated by our manipulation) and may lead to the depletion of water from groundwater reservoirs and weathered bedrock, threatening the preservation of these Mediterranean ecosystems in their current structures and compositions.
Luyckx, Kim; Luyten, Léon; Daelemans, Walter; Van den Bulcke, Tim
2016-01-01
Objective: Enormous amounts of healthcare data are becoming increasingly accessible through the large-scale adoption of electronic health records. In this work, structured and unstructured (textual) data are combined to assign clinical diagnostic and procedural codes (specifically ICD-9-CM) to patient stays. We investigate whether integrating these heterogeneous data types improves prediction strength compared to using the data types in isolation. Methods: Two separate data integration approaches were evaluated. Early data integration combines features of several sources within a single model, and late data integration learns a separate model per data source and combines these predictions with a meta-learner. This is evaluated on data sources and clinical codes from a broad set of medical specialties. Results: When compared with the best individual prediction source, late data integration leads to improvements in predictive power (eg, overall F-measure increased from 30.6% to 38.3% for International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnostic codes), while early data integration is less consistent. The predictive strength strongly differs between medical specialties, both for ICD-9-CM diagnostic and procedural codes. Discussion: Structured data provides complementary information to unstructured data (and vice versa) for predicting ICD-9-CM codes. This can be captured most effectively by the proposed late data integration approach. Conclusions: We demonstrated that models using multiple electronic health record data sources systematically outperform models using data sources in isolation in the task of predicting ICD-9-CM codes over a broad range of medical specialties. PMID:26316458
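Late data integration as described here is essentially stacking: one base model per data source, with a meta-learner trained on their out-of-fold predictions. A minimal sketch on synthetic data follows; the feature split and models are illustrative, not the paper's pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

# Two pretend data sources: "structured" and "textual" feature blocks.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_struct, X_text = X[:, :8], X[:, 8:]

# One base model per source; out-of-fold probabilities prevent the
# meta-learner from seeing predictions made on their own training labels.
p_s = cross_val_predict(LogisticRegression(max_iter=1000), X_struct, y,
                        cv=5, method="predict_proba")[:, 1]
p_t = cross_val_predict(LogisticRegression(max_iter=1000), X_text, y,
                        cv=5, method="predict_proba")[:, 1]

# Meta-learner combines the per-source predictions (late integration).
meta_X = np.column_stack([p_s, p_t])
meta = LogisticRegression().fit(meta_X, y)
print("meta-learner training accuracy:", round(meta.score(meta_X, y), 2))
```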
Interannual, solar cycle, and trend terms in middle atmospheric temperature time series from HALOE
NASA Astrophysics Data System (ADS)
Remsberg, E. E.; Deaver, L. E.
2005-03-01
Temperature versus pressure or T(p) time series from the Halogen Occultation Experiment (HALOE) have been generated and analyzed for the period of 1991-2004 and for the mesosphere and upper stratosphere for latitude zones from 40N to 40S. Multiple linear regression (MLR) techniques were used for the analysis of the seasonal and the significant interannual and solar cycle (or decadal-scale) terms. An 11-yr solar cycle (SC) term of amplitude 0.5 to 1.7 K was found for the middle to upper mesosphere; its phase was determined by a Fourier fit to the de-seasonalized residual. This SC term is largest and has a lag of several years for northern hemisphere middle latitudes of the middle mesosphere, perhaps due to the interfering effects of wintertime wave dissipation. The SC response from the MLR models is weaker but essentially in-phase at low latitudes and in the southern hemisphere. An in-phase SC response term is also significant near the tropical stratopause with an amplitude of about 0.4 to 0.6 K, which is somewhat less than predicted from models. Both sub-biennial (688-dy) and QBO (800-dy) terms are resolved for the mid to upper stratosphere along with a decadal-scale term that is presumed to have a 13.5-yr period due to their predicted modulation. This decadal-scale term is out-of-phase with the SC during 1991-2004. However, the true nature and source of this term is still uncertain, especially at 5 hPa. Significant linear cooling trends ranging from -0.3 K to -1.1 K per decade were found in the tropical upper stratosphere and subtropical mesosphere. Trends have not emerged so far for the tropical mesosphere, so it is concluded that the cooling rates that have been resolved for the subtropics are likely upper limits. As HALOE-like measurements continue and their time series lengthen, it is anticipated that better accuracy can be achieved for these interannual, SC, and trend terms.
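The regression structure used in such analyses (seasonal harmonics, a solar-cycle term, and a linear trend fitted simultaneously) is easy to sketch. The example below is a hedged illustration on synthetic monthly temperatures, not the HALOE analysis itself.

```python
import numpy as np

# Synthetic monthly temperature series with an annual cycle, an 11-yr
# solar-cycle term, a linear trend, and noise.
rng = np.random.default_rng(0)
t = np.arange(0.0, 13.0, 1.0 / 12.0)                  # time in years
y = (240.0 + 3.0 * np.cos(2 * np.pi * t)
     + 0.8 * np.cos(2 * np.pi * t / 11.0)
     - 0.06 * t + rng.normal(0.0, 0.3, t.size))

# MLR design matrix: constant, trend, then cos/sin pairs for the
# annual, semiannual, and 11-yr (solar cycle) periods.
cols = [np.ones_like(t), t]
for period in (1.0, 0.5, 11.0):
    cols += [np.cos(2 * np.pi * t / period), np.sin(2 * np.pi * t / period)]
X = np.column_stack(cols)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

sc_amp = np.hypot(beta[6], beta[7])                   # solar-cycle amplitude
print(f"trend: {beta[1]:.3f} K/yr, solar-cycle amplitude: {sc_amp:.2f} K")
```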
Adjoint Sensitivity Method to Determine Optimal Set of Stations for Tsunami Source Inversion
NASA Astrophysics Data System (ADS)
Gusman, A. R.; Hossen, M. J.; Cummins, P. R.; Satake, K.
2017-12-01
We applied the adjoint sensitivity technique in tsunami science for the first time to determine an optimal set of stations for a tsunami source inversion. The adjoint sensitivity (AS) method has been used in numerical weather prediction to find optimal locations for adaptive observations. We implemented this technique for Green's-function-based Time Reverse Imaging (GFTRI), which has recently been used in tsunami source inversion to reconstruct the initial sea-surface displacement, known as the tsunami source model. This method has the same source representation as the traditional least-squares (LSQ) source inversion method, where a tsunami source is represented by dividing the source region into a regular grid of "point" sources. For each of these, a Green's function (GF) is computed using a basis function for initial sea-surface displacement whose amplitude is concentrated near the grid point. We applied the AS method to the 2009 Samoa earthquake tsunami that occurred on 29 September 2009 in the southwest Pacific, near the Tonga trench. Many studies show that this earthquake was a doublet associated with both normal faulting in the outer-rise region and thrust faulting on the subduction interface. To estimate the tsunami source model for this complex event, we initially considered 11 observations consisting of 5 tide gauges and 6 DART buoys. After implementing the AS method, we found an optimal set of 8 stations. Inversion with this optimal set provides better results in terms of waveform fitting and a source model that shows both sub-events associated with normal and thrust faulting.
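The inversion underlying both GFTRI and the traditional LSQ method reduces to a (damped) linear least-squares problem once the point-source Green's functions are stacked into a matrix. A minimal synthetic sketch follows; the dimensions, damping, and data are all illustrative, and dropping rows mimics inverting with a reduced, e.g. adjoint-selected, station set.

```python
import numpy as np

# Synthetic Green's-function matrix G: rows are waveform samples across
# stations, columns are unit sources on the grid of "point" sources.
rng = np.random.default_rng(0)
n_data, n_cells = 300, 25
G = rng.normal(0.0, 1.0, (n_data, n_cells))
x_true = np.maximum(rng.normal(0.0, 1.0, n_cells), 0.0)  # initial displacement
d = G @ x_true + rng.normal(0.0, 0.1, n_data)            # observed waveforms

# Damped least squares: min ||Gx - d||^2 + damp^2 ||x||^2.
damp = 0.1
A = np.vstack([G, damp * np.eye(n_cells)])
b = np.concatenate([d, np.zeros(n_cells)])
x_est, *_ = np.linalg.lstsq(A, b, rcond=None)
print("data misfit:", round(float(np.linalg.norm(G @ x_est - d)), 2))
```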
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gauntt, Randall O.; Goldmann, Andrew; Kalinich, Donald A.
2016-12-01
In this study, risk-significant pressurized-water reactor severe accident sequences are examined using MELCOR 1.8.5 to explore the range of fission product releases to the reactor containment building. Advances in the understanding of fission product release and transport behavior and severe accident progression are used to render best-estimate analyses of selected accident sequences. Particular emphasis is placed on estimating the effects of high fuel burnup, in contrast with low burnup, on fission product releases to the containment. Supporting this emphasis, recent data on fission product release from high-burnup (HBU) fuel from the French VERCOR project are used in this study. The results of these analyses are treated as samples from a population of accident sequences in order to employ approximate order-statistics characterization of the results. These trends and tendencies are then compared to the NUREG-1465 alternative source term prescription used today for regulatory applications. In general, greater differences are observed between the state-of-the-art calculations for either HBU or low-burnup (LBU) fuel and the NUREG-1465 containment release fractions than exist between HBU and LBU release fractions. Current analyses suggest that retention of fission products within the vessel and the reactor coolant system (RCS) is greater than contemplated in the NUREG-1465 prescription, and that, overall, release fractions to the containment are therefore lower across the board in the present analyses than suggested in NUREG-1465. The decreased volatility of Cs2MoO4 compared to CsI or CsOH increases the predicted RCS retention of cesium, and as a result, cesium and iodine do not follow identical behaviors with respect to distribution among vessel, RCS, and containment. With respect to the regulatory alternative source term, greater differences are observed between the NUREG-1465 prescription and both HBU and LBU predictions than exist between HBU and LBU analyses. Additionally, current analyses suggest that the NUREG-1465 release fractions are conservative by about a factor of 2 and that the release durations for the in-vessel and late in-vessel release periods are in fact longer than the NUREG-1465 durations. A subsequent report is planned to further characterize these results using more refined statistical methods, permitting a more precise reformulation of the NUREG-1465 alternative source term for both LBU and HBU fuels; the most important finding is that the NUREG-1465 formula appears to embody significant conservatism compared to current best-estimate analyses.
NASA Astrophysics Data System (ADS)
Berge-Thierry, C.; Hollender, F.; Guyonnet-Benaize, C.; Baumont, D.; Ameri, G.; Bollinger, L.
2017-09-01
Seismic analysis in the context of nuclear safety in France is currently guided by a purely deterministic approach based on Basic Safety Rule (Règle Fondamentale de Sûreté) RFS 2001-01 for seismic hazard assessment, and on the ASN/2/01 Guide, which provides design rules for nuclear civil engineering structures. After the 2011 Tohoku earthquake, nuclear operators worldwide were asked to estimate the ability of their facilities to sustain extreme seismic loads. The French licensees then defined the 'hard core seismic levels', which are higher than those considered for design or re-assessment of the safety of a facility. These were initially established on a deterministic basis, and they have finally been justified through state-of-the-art probabilistic seismic hazard assessments. The appreciation and propagation of uncertainties when assessing seismic hazard in France have changed considerably over the past 15 years. This evolution provided the motivation for the present article, whose objectives are threefold: (1) to describe the current practices in France for assessing seismic hazard in terms of nuclear safety; (2) to discuss and highlight the sources of uncertainties and their treatment; and (3) to use a specific case study to illustrate how extended source modeling can help to constrain the key assumptions or parameters that affect seismic hazard assessment. This article discusses in particular seismic source characterization, strong ground motion prediction, and maximum magnitude constraints, according to the practice of the French Atomic Energy Commission. Owing to the growth of strong-motion databases in terms of the number and quality of records, their metadata, and their uncertainty characterization, several recently published empirical ground motion prediction models are eligible for seismic hazard assessment in France. We show that propagation of epistemic and aleatory uncertainties is feasible in a deterministic approach as well as in a probabilistic one. Assessment of seismic hazard in France in the framework of the safety of nuclear facilities should consider these recent advances. In this sense, opening discussions with all of the stakeholders in France to update the reference documents (i.e., RFS 2001-01; the ASN/2/01 Guide) appears appropriate in the short term.
26 CFR 1.737-3 - Basis adjustments; Recovery rules.
Code of Federal Regulations, 2012 CFR
2012-04-01
... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...
Tiemuerbieke, Bahejiayinaer; Min, Xiao-Jun; Zang, Yong-Xin; Xing, Peng; Ma, Jian-Ying; Sun, Wei
2018-09-01
In water-limited ecosystems, spatial and temporal partitioning of water sources is an important mechanism that facilitates plant survival and lessens the competition intensity of co-existing plants. Insights into species-specific root functional plasticity and differences in the water sources of co-existing plants under changing water conditions can aid in accurate prediction of the response of desert ecosystems to future climate change. We used stable isotopes of soil water, groundwater and xylem water to determine the seasonal, interspecific, and intraspecific variations in the water sources of six C3 and C4 shrubs in the Gurbantonggut desert. We also measured stem water potentials to determine the water stress levels of each species under varying water conditions. The studied shrubs exhibited similar seasonal water uptake patterns, i.e., all shrubs extracted shallow soil water recharged by snowmelt during early spring and reverted to deeper water sources during dry summer periods, indicating that all of the studied shrubs have dimorphic root systems that enable them to obtain water sources that differ in space and time. Species in the C4 shrub community exhibited differences in seasonal water absorption and water status due to differences in topography and rooting depth, demonstrating divergent adaptations to water availability and water stress. Haloxylon ammodendron and T. ramosissima in the C3/C4 mixed community were similar in terms of seasonal water extraction but differed with respect to water potential, which indicated that plant water status is controlled by both root functioning and shoot eco-physiological traits. The two Tamarix species in the C3 shrub community were similar in terms of water uptake and water status, which suggests functional convergence of the root system and physiological performance under the same soil water conditions. Across communities, Haloxylon ammodendron differed in terms of summer water extraction, which suggests that this species exhibits plasticity in rooting depth under different soil water conditions. Shrubs in the Gurbantonggut desert thus displayed varying adaptations across species and communities through divergent root functioning and shoot eco-physiological traits.
Initiation process of earthquakes and its implications for seismic hazard reduction strategy.
Kanamori, H
1996-01-01
For the average citizen and the public, "earthquake prediction" means "short-term prediction," a prediction of a specific earthquake on a relatively short time scale. Such prediction must specify the time, place, and magnitude of the earthquake in question with sufficiently high reliability. For this type of prediction, one must rely on some short-term precursors. Examinations of strain changes just before large earthquakes suggest that consistent detection of such precursory strain changes cannot be expected. Other precursory phenomena such as foreshocks and nonseismological anomalies do not occur consistently either. Thus, reliable short-term prediction would be very difficult. Although short-term predictions with large uncertainties could be useful for some areas if their social and economic environments can tolerate false alarms, such predictions would be impractical for most modern industrialized cities. A strategy for effective seismic hazard reduction is to take full advantage of the recent technical advancements in seismology, computers, and communication. In highly industrialized communities, rapid earthquake information is critically important for emergency services agencies, utilities, communications, financial companies, and media to make quick reports and damage estimates and to determine where emergency response is most needed. Long-term forecast, or prognosis, of earthquakes is important for development of realistic building codes, retrofitting existing structures, and land-use planning, but the distinction between short-term and long-term predictions needs to be clearly communicated to the public to avoid misunderstanding. PMID:11607657
Watling, James I.; Brandt, Laura A.; Bucklin, David N.; Fujisaki, Ikuko; Mazzotti, Frank J.; Romañach, Stephanie; Speroterra, Carolina
2015-01-01
Species distribution models (SDMs) are widely used in basic and applied ecology, making it important to understand sources and magnitudes of uncertainty in SDM performance and predictions. We analyzed SDM performance and partitioned variance among prediction maps for 15 rare vertebrate species in the southeastern USA using all possible combinations of seven potential sources of uncertainty in SDMs: algorithms, climate datasets, model domain, species presences, variable collinearity, CO2 emissions scenarios, and general circulation models. The choice of modeling algorithm was the greatest source of uncertainty in SDM performance and prediction maps, with some additional variation in performance associated with the comprehensiveness of the species presences used for modeling. Other sources of uncertainty that have received attention in the SDM literature such as variable collinearity and model domain contributed little to differences in SDM performance or predictions in this study. Predictions from different algorithms tended to be more variable at northern range margins for species with more northern distributions, which may complicate conservation planning at the leading edge of species' geographic ranges. The clear message emerging from this work is that researchers should use multiple algorithms for modeling rather than relying on predictions from a single algorithm, invest resources in compiling a comprehensive set of species presences, and explicitly evaluate uncertainty in SDM predictions at leading range margins.
NASA Astrophysics Data System (ADS)
Lu, Xinhua; Mao, Bing; Dong, Bingjiang
2018-01-01
Xia et al. (2017) proposed a novel, fully implicit method for discretizing the bed friction terms when solving the shallow-water equations. The friction terms contain h^(-7/3) (h denotes water depth), which may become extremely large and introduce machine error as h approaches zero. To address this problem, Xia et al. (2017) introduce auxiliary variables (their equations (37) and (38)) so that h^(-4/3) rather than h^(-7/3) is calculated, and solve a transformed equation (their equation (39)). The introduced auxiliary variables require extra storage. We analyzed the magnitude of the friction terms and found that, taken as a whole, they do not exceed the machine floating-point range, and we therefore propose a simple-to-implement technique that splits h^(-7/3) across the different factors of the friction terms to avoid introducing machine error, as illustrated in the sketch below. This technique needs neither extra storage nor the solution of a transformed equation and is thus more efficient for simulations. We also show that the surface reconstruction method proposed by Xia et al. (2017) may lead to predictions with spurious wiggles because the reconstructed Riemann states may misrepresent the water gravitational effect.
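A minimal sketch of the splitting idea follows, using the Manning friction term for illustration (variable names and coefficients are hypothetical, not taken from the comment or from Xia et al.): the naive form isolates h^(-7/3) as one factor, while the split form distributes the powers of h so that no intermediate quantity blows up as h shrinks.

```python
import numpy as np

g, n = 9.81, 0.03        # gravity, Manning coefficient (example values)

def friction_naive(q, h):
    # S_f = -g n^2 |q| q h^(-7/3): the lone factor h**(-7/3) grows
    # without bound as h -> 0 and can swamp the rest of the product.
    return -g * n**2 * np.abs(q) * q * h**(-7.0 / 3.0)

def friction_split(q, h):
    # Mathematically identical: with u = q/h, |q| q h^(-7/3) equals
    # |u| u h^(-1/3), so each factor stays moderate for small h.
    u = q / h
    return -g * n**2 * np.abs(u) * u * h**(-1.0 / 3.0)

h = np.array([1e-1, 1e-4, 1e-8])   # water depths approaching the dry state
q = 0.5 * h                        # unit discharge with bounded velocity
print(friction_naive(q, h))
print(friction_split(q, h))        # same values, tamer intermediates
```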
Role of Subdural Electrocorticography in Prediction of Long-Term Seizure Outcome in Epilepsy Surgery
ERIC Educational Resources Information Center
Asano, Eishi; Juhasz, Csaba; Shah, Aashit; Sood, Sandeep; Chugani, Harry T.
2009-01-01
Since prediction of long-term seizure outcome using preoperative diagnostic modalities remains suboptimal in epilepsy surgery, we evaluated whether interictal spike frequency measures obtained from extraoperative subdural electrocorticography (ECoG) recording could predict long-term seizure outcome. This study included 61 young patients (age…
NASA Technical Reports Server (NTRS)
Tuttle, M. E.; Brinson, H. F.
1986-01-01
The impact of slight errors in measured viscoelastic parameters on subsequent long-term viscoelastic predictions is numerically evaluated using the Schapery nonlinear viscoelastic model. Of the seven Schapery parameters, the results indicated that long-term predictions were most sensitive to errors in the power law parameter n. Although errors in the other parameters were significant as well, errors in n dominated all other factors at long times. The process of selecting an appropriate short-term test cycle so as to ensure an accurate long-term prediction was considered, and a short-term test cycle was selected using material properties typical of T300/5208 graphite-epoxy at 149 C. The process of selection is described, and its individual steps are itemized.
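The dominance of the power-law exponent n at long times can be seen directly from a Schapery-style power law D(t) = D0 + D1*t^n, since a perturbation dn multiplies the transient term by t^dn = exp(dn ln t). The sketch below uses hypothetical parameter values, not the paper's measured ones.

```python
import numpy as np

# Hypothetical power-law creep parameters (illustrative only).
D0, D1, n = 1.0, 0.05, 0.20
t = np.logspace(0, 6, 7)            # 1 s up to ~11.6 days

for dn in (0.0, 0.02):              # 10% relative error in n
    D = D0 + D1 * t ** (n + dn)
    print(f"n = {n + dn:.2f}:", np.round(D, 3))

# The error in the transient term grows like exp(dn * ln t): negligible
# at short times, dominant in the long-term prediction.
```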
Bayesian estimation of a source term of radiation release with approximately known nuclide ratios
NASA Astrophysics Data System (ADS)
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek
2016-04-01
We are concerned with estimation of a source term in the case of an accidental release from a known location, e.g. a power plant. Usually, the source term of an accidental release of radiation comprises a mixture of nuclides. The gamma dose rate measurements do not provide direct information on the source term composition. However, the physical properties of the respective nuclides (deposition properties, decay half-life) can be used when uncertain information on nuclide ratios is available, e.g. from a known reactor inventory. The proposed method is based on a linear inverse model where the observation vector y arises as a linear combination y = Mx of a source-receptor-sensitivity (SRS) matrix M and the source term x. The task is to estimate the unknown source term x. The problem is ill-conditioned, and further regularization is needed to obtain a reasonable solution. In this contribution, we assume that the nuclide ratios of the release are known with some degree of uncertainty. This knowledge is used to form the prior covariance matrix of the source term x. Due to uncertainty in the ratios, the diagonal elements of the covariance matrix are considered unknown. Positivity of the source term estimate is guaranteed by using a multivariate truncated Gaussian distribution. Following the Bayesian approach, we estimate all parameters of the model from the data, so that y, M, and the known ratios are the only inputs of the method. Since the inference of the model is intractable, we follow the Variational Bayes method, yielding an iterative algorithm for estimation of all model parameters. Performance of the method is studied on a simulated 6-hour power plant release where 3 nuclides are released and 2 nuclide ratios are approximately known. A comparison with the method with unknown nuclide ratios is given to prove the usefulness of the proposed approach. This research is supported by EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
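The role of the approximately known ratios can be illustrated with a much simpler regularized estimator than the paper's Variational Bayes algorithm: pull the non-negative solution of y = Mx toward a prior mean x0 built from the ratios. Everything below (matrix, rates, prior strength) is synthetic.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic stand-ins: SRS matrix M, true release rates, noisy dose rates.
rng = np.random.default_rng(0)
n_obs, n_src = 40, 3
M = rng.uniform(0.0, 1.0, (n_obs, n_src))
x_true = np.array([10.0, 5.0, 1.0])
y = M @ x_true + rng.normal(0.0, 0.1, n_obs)

x0 = np.array([9.0, 4.5, 0.9])   # prior mean from approximate nuclide ratios
lam = 0.5                        # prior strength (here fixed, not estimated)

# min ||Mx - y||^2 + lam ||x - x0||^2  subject to x >= 0, posed as one
# stacked non-negative least-squares problem.
A = np.vstack([M, np.sqrt(lam) * np.eye(n_src)])
b = np.concatenate([y, np.sqrt(lam) * x0])
x_est, _ = nnls(A, b)
print("estimated source term:", np.round(x_est, 2))
```

In the paper's Bayesian treatment, the analogue of lam is a prior precision that is itself estimated from the data rather than fixed by hand.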
NASA Astrophysics Data System (ADS)
Trugman, Daniel T.; Shearer, Peter M.
2017-04-01
Earthquake source spectra contain fundamental information about the dynamics of earthquake rupture. However, the inherent tradeoffs in separating source and path effects, when combined with limitations in recorded signal bandwidth, make it challenging to obtain reliable source spectral estimates for large earthquake data sets. We present here a stable and statistically robust spectral decomposition method that iteratively partitions the observed waveform spectra into source, receiver, and path terms. Unlike previous methods of its kind, our new approach provides formal uncertainty estimates and does not assume self-similar scaling in earthquake source properties. Its computational efficiency allows us to examine large data sets (tens of thousands of earthquakes) that would be impractical to analyze using standard empirical Green's function-based approaches. We apply the spectral decomposition technique to P wave spectra from five areas of active contemporary seismicity in Southern California: the Yuha Desert, the San Jacinto Fault, and the Big Bear, Landers, and Hector Mine regions of the Mojave Desert. We show that the source spectra are generally consistent with an increase in median Brune-type stress drop with seismic moment but that this observed deviation from self-similar scaling is both model dependent and varies in strength from region to region. We also present evidence for significant variations in median stress drop and stress drop variability on regional and local length scales. These results both contribute to our current understanding of earthquake source physics and have practical implications for the next generation of ground motion prediction assessments.
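The core of such a decomposition can be sketched in a few lines: in the log-spectral domain each observation is modeled as a source term plus a receiver term, and the two are estimated by alternating averages of the residuals. The authors' method additionally handles path terms, weighting, and uncertainties; the version below is a deliberately stripped-down illustration on synthetic data.

```python
import numpy as np

# Synthetic log-amplitude spectra: obs[i, j, f] = source + receiver + noise.
rng = np.random.default_rng(1)
n_src, n_rcv, n_freq = 200, 30, 50
src_true = rng.normal(0.0, 1.0, (n_src, n_freq))
rcv_true = rng.normal(0.0, 0.5, (n_rcv, n_freq))
obs = (src_true[:, None, :] + rcv_true[None, :, :]
       + rng.normal(0.0, 0.2, (n_src, n_rcv, n_freq)))

src = np.zeros((n_src, n_freq))
rcv = np.zeros((n_rcv, n_freq))
for _ in range(20):
    src = (obs - rcv[None, :, :]).mean(axis=1)   # update source terms
    rcv = (obs - src[:, None, :]).mean(axis=0)   # update receiver terms
    # Pin the mean receiver term to zero: the additive split is only
    # determined up to a constant per frequency (the usual tradeoff).
    shift = rcv.mean(axis=0)
    rcv -= shift
    src += shift

err = rcv - (rcv_true - rcv_true.mean(axis=0))
print("receiver-term rms error:", round(float(np.sqrt((err**2).mean())), 3))
```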
Parameter estimation for groundwater models under uncertain irrigation data
Demissie, Yonas; Valocchi, Albert J.; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen
2015-01-01
The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty and possibly bias in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when the standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of generalized least-squares method with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p < 0.05) bias in estimated parameters and model predictions that persist despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.
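Under strong simplifying assumptions (a linear model and independent input errors), the IUWLS idea can be sketched as a weighted least-squares fit in which each observation is down-weighted by the variance contributed by the uncertain source/sink input. This is an illustrative reading of the approach, not the authors' implementation; all quantities are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
n_obs = 100
J = rng.normal(0.0, 1.0, (n_obs, 2))       # sensitivities to parameters
p_true = np.array([1.5, -0.7])
s = rng.uniform(0.0, 2.0, n_obs)           # sensitivity to the pumping input
sigma_obs, sigma_in = 0.1, 0.5             # measurement and input std devs

# Observations corrupted by measurement noise plus propagated input error.
y = (J @ p_true + rng.normal(0.0, sigma_obs, n_obs)
     + s * rng.normal(0.0, sigma_in, n_obs))

# OLS ignores the input uncertainty; IUWLS-style weights use the total
# variance sigma_obs^2 + (s_i * sigma_in)^2 of each residual.
p_ols, *_ = np.linalg.lstsq(J, y, rcond=None)
w = 1.0 / (sigma_obs**2 + (s * sigma_in)**2)
sw = np.sqrt(w)
p_wls, *_ = np.linalg.lstsq(J * sw[:, None], y * sw, rcond=None)
print("OLS estimate:     ", np.round(p_ols, 3))
print("weighted estimate:", np.round(p_wls, 3))
```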
Development and Implementation of Dynamic Scripts to Execute Cycled WRF/GSI Forecasts
NASA Technical Reports Server (NTRS)
Zavodsky, Bradley; Srikishen, Jayanthi; Berndt, Emily; Li, Quanli; Watson, Leela
2014-01-01
Automating the coupling of data assimilation (DA) and modeling systems is a unique challenge in the numerical weather prediction (NWP) research community. In recent years, the Development Testbed Center (DTC) has released well-documented tools such as the Weather Research and Forecasting (WRF) model and the Gridpoint Statistical Interpolation (GSI) DA system that can be easily downloaded, installed, and run by researchers on their local systems. However, developing a coupled system in which the various preprocessing, DA, model, and postprocessing capabilities are all integrated can be labor-intensive if one has little experience with any of these individual systems. Additionally, operational modeling entities generally have specific coupling methodologies that can take time to understand and develop code to implement properly. To better enable collaborating researchers to perform modeling and DA experiments with GSI, the Short-term Prediction Research and Transition (SPoRT) Center has developed a set of Perl scripts that couple GSI and WRF in a cycling methodology consistent with the use of real-time, regional observation data from the National Centers for Environmental Prediction (NCEP)/Environmental Modeling Center (EMC). Because Perl is open source, the code can be easily downloaded and executed regardless of the user's native shell environment. This paper will provide a description of this open-source code and descriptions of a number of the use cases that have been performed by SPoRT collaborators using the scripts on different computing systems.
NASA Astrophysics Data System (ADS)
Koutroulis, Aristeidis; Grillakis, Manolis; Tsanis, Ioannis
2017-04-01
Seasonal prediction has recently been at the center of forecasting research efforts, especially for regions that are projected to be severely affected by global warming. The value of skillful seasonal forecasts can be considerable for many sectors, especially the agricultural sector, in which water users and managers can benefit by better anticipating drought conditions. Here we present the first reflections from the user/stakeholder interactions and the design of a tailored drought decision support system, in an attempt to bring seasonal predictions into local practice for the Messara valley, located in central-south Crete, Greece. Findings from interactions with users and stakeholders reveal that, although long-range and seasonal predictions are not currently used, there is strong interest in this type of information. An increase in the skill of short-range weather predictions is also of great interest. The drought monitoring and prediction tool under development to support local water and agricultural management will include (a) sources of skillful short- to medium-term forecast information, (b) drought monitoring and forecasting indices tailored to the local groundwater aquifer and rain-fed agriculture, and (c) seasonal inflow forecasts for the local dam through hydrologic simulation to support management of freshwater resources and drought impacts on irrigated agriculture.
Human Splice-Site Prediction with Deep Neural Networks.
Naito, Tatsuhiko
2018-04-18
Accurate splice-site prediction is essential to delineate gene structures from sequence data. Several computational techniques have been applied to create systems that predict canonical splice sites. For classification tasks, deep neural networks (DNNs) have achieved record-breaking results and often outperformed other supervised learning techniques. In this study, a new method of splice-site prediction using DNNs is proposed. The proposed system receives an input sequence and returns an answer as to whether it is a splice site. The length of the input is 140 nucleotides, with the consensus sequence (i.e., "GT" and "AG" for the donor and acceptor sites, respectively) in the middle. Each input sequence is applied to the pretrained DNN model, which determines the probability that the input is a splice site. The model consists of convolutional layers and bidirectional long short-term memory network layers. The pretraining and validation were conducted using the data sets tested in previously reported methods. The performance evaluation results showed that the proposed method can outperform the previous methods. In addition, the patterns learned by the DNNs were visualized as position frequency matrices (PFMs), some of which were very similar to the consensus sequence. The trained DNN model and brief source code for the prediction system have been uploaded. Further improvement is expected to follow the further development of DNNs.
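A skeletal version of the described architecture is easy to write down in PyTorch; the layer widths, kernel size, and pooling choice below are assumptions for illustration, not the paper's published configuration.

```python
import torch
import torch.nn as nn

class SpliceNet(nn.Module):
    """Conv + bidirectional LSTM scorer for 140-nt one-hot sequences."""

    def __init__(self, channels=32, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(4, channels, kernel_size=9, padding=4)
        self.lstm = nn.LSTM(channels, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):                        # x: (batch, 4, 140)
        h = torch.relu(self.conv(x))             # (batch, channels, 140)
        h, _ = self.lstm(h.transpose(1, 2))      # (batch, 140, 2*hidden)
        return torch.sigmoid(self.head(h.mean(dim=1)))  # splice-site prob.

model = SpliceNet()
dummy = torch.zeros(8, 4, 140)                   # batch of one-hot sequences
print(model(dummy).shape)                        # torch.Size([8, 1])
```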
NASA Technical Reports Server (NTRS)
Zawodny, Nikolas S.; Boyd, D. Douglas, Jr.
2017-01-01
In this study, hover acoustic measurements are taken on isolated rotor-airframe configurations representative of small-scale, rotary-wing unmanned aircraft systems (UAS). Each rotor-airframe configuration consists of two fixed-pitch blades powered by a brushless motor, with a simplified airframe geometry intended to represent a generic multicopter arm. In addition to acoustic measurements, CFD-based aeroacoustic predictions are made for a subset of the experimentally tested rotor-airframe configurations in an effort to better understand the noise content of the rotor-airframe systems. Favorable agreement is obtained between acoustic measurements and predictions, based on both time- and frequency-domain post-processing techniques. Results indicate that close proximity of airframe surfaces results in the generation of considerable tonal acoustic content in the form of harmonics of the rotor blade passage frequency (BPF). Analysis of the acoustic prediction data shows that the airframe surfaces can generate noise levels either comparable to or greater than the rotor blade surfaces under certain rotor tip clearance conditions. Analysis of the on-surface Ffowcs Williams and Hawkings (FW-H) source terms provides insight into the predicted physical noise-generating mechanisms on the rotor and airframe surfaces.
Wysham, Nicholas G; Abernethy, Amy P; Cox, Christopher E
2014-10-01
Prediction models in critical illness are generally limited to short-term mortality and uncommonly include patient-centered outcomes. Current outcome prediction tools are also insensitive to individual context or evolution in healthcare practice, potentially limiting their value over time. Improved prognostication of patient-centered outcomes in critical illness could enhance decision-making quality in the ICU. Patient-reported outcomes have emerged as precise methodological measures of patient-centered variables and have been successfully employed using diverse platforms and technologies, enhancing the value of research in critical illness survivorship and in direct patient care. The learning health system is an emerging ideal characterized by integration of multiple data sources into a smart and interconnected health information technology infrastructure with the goal of rapidly optimizing patient care. We propose a vision of a smart, interconnected learning health system with integrated electronic patient-reported outcomes to optimize patient-centered care, including critical care outcome prediction. A learning health system infrastructure integrating electronic patient-reported outcomes may aid in the management of critical illness-associated conditions and yield tools to improve prognostication of patient-centered outcomes in critical illness.
10 CFR 50.67 - Accident source term.
Code of Federal Regulations, 2014 CFR
2014-01-01
Title 10 (Energy), § 50.67, "Accident source term," under Conditions of Licenses and Construction Permits, addresses occupancy of the control room under accident conditions without personnel receiving radiation exposures in excess of the specified limits. Paragraph (a) covers applicability.
Schroth, A.W.; Crusius, John; Chever, F.; Bostick, B.C.; Rouxel, O.J.
2011-01-01
Riverine iron (Fe) derived from glacial weathering is a critical micronutrient source to ecosystems of the Gulf of Alaska (GoA). Here we demonstrate that the source and chemical nature of riverine Fe input to the GoA could change dramatically due to the widespread watershed deglaciation that is underway. We examine Fe size partitioning, speciation, and isotopic composition in tributaries of the Copper River which exemplify a long-term GoA watershed evolution from one strongly influenced by glacial weathering to a boreal-forested watershed. Iron fluxes from glacierized tributaries bear high suspended sediment and colloidal Fe loads of mixed valence silicate species, with low concentrations of dissolved Fe and dissolved organic carbon (DOC). Iron isotopic composition is indicative of mechanical weathering as the Fe source. Conversely, Fe fluxes from boreal-forested systems have higher dissolved Fe concentrations corresponding to higher DOC concentrations. Iron colloids and suspended sediment consist of Fe (hydr)oxides and organic complexes. These watersheds have an iron isotopic composition indicative of an internal chemical processing source. We predict that as the GoA watershed evolves due to deglaciation, so will the source, flux, and chemical nature of riverine Fe loads, which could have significant ramifications for Alaskan marine and freshwater ecosystems.
Optimal use of EEG recordings to target active brain areas with transcranial electrical stimulation.
Dmochowski, Jacek P; Koessler, Laurent; Norcia, Anthony M; Bikson, Marom; Parra, Lucas C
2017-08-15
To demonstrate causal relationships between brain and behavior, investigators would like to guide brain stimulation using measurements of neural activity. Particularly promising in this context are electroencephalography (EEG) and transcranial electrical stimulation (TES), as they are linked by a reciprocity principle which, despite being known for decades, has not led to a formalism for relating EEG recordings to optimal stimulation parameters. Here we derive a closed-form expression for the TES configuration that optimally stimulates (i.e., targets) the sources of recorded EEG, without making assumptions about source location or distribution. We also derive a duality between TES targeting and EEG source localization, and demonstrate that in cases where source localization fails, so does the proposed targeting. Numerical simulations with multiple head models confirm these theoretical predictions and quantify the achieved stimulation in terms of focality and intensity. We show that constraining the stimulation currents automatically selects optimal montages that involve only a few (4-7) electrodes, with only incremental loss in performance when targeting focal activations. The proposed technique allows brain scientists and clinicians to rationally target the sources of observed EEG and thus overcomes a major obstacle to the realization of individualized or closed-loop brain stimulation.
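As a numerical illustration of reciprocity-based targeting (not the paper's closed-form solution), the sketch below chooses stimulation currents by ridge-regularized least squares given a hypothetical lead-field matrix and EEG topography; the zero-net-current and total-current constraints are handled crudely by projection and rescaling.

```python
# Minimal sketch: least-squares TES targeting from a lead field (assumed data).
import numpy as np

rng = np.random.default_rng(0)
n_elec, n_src = 32, 500
A = rng.standard_normal((n_elec, n_src))   # hypothetical lead field (electrodes x sources)
y = rng.standard_normal(n_elec)            # hypothetical recorded EEG topography

# By reciprocity, injecting currents s produces a field at the sources
# proportional to A.T @ s. Minimizing ||A.T @ s - x_hat||^2 + lam*||s||^2 with
# x_hat the minimum-norm source estimate of y reduces to this electrode-space solve.
lam = 1e-2
s = np.linalg.solve(A @ A.T + lam * np.eye(n_elec), y)
s -= s.mean()                              # project onto zero net current (Kirchhoff)
s *= 2e-3 / np.abs(s).sum()                # rescale to a 2 mA total current budget
```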
Application of acoustic radiosity methods to noise propagation within buildings
NASA Astrophysics Data System (ADS)
Muehleisen, Ralph T.; Beamer, C. Walter
2005-09-01
The prediction of sound pressure levels in rooms from transmitted sound is a difficult problem. The sound energy in the source room incident on the common wall must be accurately predicted. In the receiving room, the propagation of sound from the planar wall source must also be accurately predicted. The radiosity method naturally computes the spatial distribution of sound energy incident on a wall and also naturally predicts the propagation of sound from a planar area source. In this paper, the application of the radiosity method to sound transmission problems is introduced and explained.
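As a minimal sketch of the energy balance underlying the radiosity method (with made-up form factors and absorption coefficients, not values from the paper): each patch's outgoing energy equals its own emission plus the reflected portion of the energy arriving from all other patches, which yields a small linear system.

```python
# Minimal sketch of a steady-state acoustic radiosity balance (toy numbers).
import numpy as np

n = 4                                                # four hypothetical wall patches
F = np.full((n, n), 0.25); np.fill_diagonal(F, 0.0)  # toy form factors (rows sum <= 1)
alpha = np.full(n, 0.3)                              # absorption coefficients
E = np.array([1.0, 0.0, 0.0, 0.0])                   # only patch 0 emits (the source wall)

# Radiosity balance: B = E + diag(1 - alpha) @ F @ B
B = np.linalg.solve(np.eye(n) - np.diag(1 - alpha) @ F, E)
print(B)   # steady-state energy leaving each patch
```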
Bodford, Jessica E; Kwan, Virginia S Y; Sobota, David S
2017-05-01
As technology's presence grows increasingly concrete in global societies, so too do our relationships with the devices we keep close at hand from day to day. Whereas past research has framed smartphone addiction in terms of possessional attachment, the present research hypothesizes that anxious smartphone attachment stems from human attachment styles, such that anxiously attached individuals may be more likely to generalize their anxious attachment style to communication devices. In the present study, we found support for this hypothesis and showed that anxious smartphone attachment predicts (1) anthropomorphic beliefs, (2) reliance on, or "clinginess" toward, smartphones, and (3) a seemingly compulsive urge to answer one's phone, even in dangerous situations (e.g., while driving). Taken together, we seek to provide a theoretical framework and methodological tools to identify the sources of technology attachment and those most at risk of engaging in dangerous or inappropriate behaviors as a result of attachment to ever-present mobile devices.
A k-omega-multivariate beta PDF for supersonic combustion
NASA Technical Reports Server (NTRS)
Alexopoulos, G. A.; Baurle, R. A.; Hassan, H. A.
1992-01-01
In an attempt to study the interaction between combustion and turbulence in supersonic flows, an assumed PDF has been employed. This makes it possible to calculate the time average of the chemical source terms that appear in the species conservation equations. In order to determine these averages, two transport equations are required, one for the temperature (enthalpy) variance and one for Q. Model equations are formulated for such quantities. The turbulent time scale controls the evolution. An algebraic model similar to that used by Eklund et al. was employed in an attempt to predict the recent measurements of Cheng et al. Predictions were satisfactory before ignition but less satisfactory after ignition. One of the reasons for this behavior is the inadequacy of the algebraic turbulence model employed. Because of this, the objective of this work is to develop a k-omega model to remedy the situation.
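A minimal sketch of the assumed-PDF averaging step follows, under illustrative assumptions: a single-variable beta PDF in normalized temperature and a made-up Arrhenius rate (the paper uses a multivariate beta PDF, so this is only the one-dimensional idea).

```python
# Minimal sketch: mean chemical source term from an assumed beta PDF of temperature.
import numpy as np
from scipy import stats, integrate

Tmin, Tmax = 300.0, 2500.0          # assumed temperature bounds
Tmean, Tvar = 1200.0, 150.0**2      # modeled mean and variance

# Map mean/variance to beta parameters for normalized temperature x in [0, 1].
m = (Tmean - Tmin) / (Tmax - Tmin)
v = Tvar / (Tmax - Tmin) ** 2
g = m * (1 - m) / v - 1.0           # requires v < m * (1 - m)
a, b = m * g, (1 - m) * g

def omega(T, A=1e8, Ta=15000.0):
    """Illustrative Arrhenius source term (made-up constants)."""
    return A * np.exp(-Ta / T)

mean_omega, _ = integrate.quad(
    lambda x: omega(Tmin + x * (Tmax - Tmin)) * stats.beta.pdf(x, a, b), 0, 1)
print(mean_omega, "vs. omega(Tmean) =", omega(Tmean))  # averaging != evaluating at the mean
```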
Niethammer, Marc; Hart, Gabriel L.; Pace, Danielle F.; Vespa, Paul M.; Irimia, Andrei; Van Horn, John D.; Aylward, Stephen R.
2013-01-01
Standard image registration methods do not account for changes in image appearance. Hence, metamorphosis approaches have been developed which jointly estimate a space deformation and a change in image appearance to construct a spatio-temporal trajectory smoothly transforming a source to a target image. For standard metamorphosis, geometric changes are not explicitly modeled. We propose a geometric metamorphosis formulation, which explains changes in image appearance by a global deformation, a deformation of a geometric model, and an image composition model. This work is motivated by the clinical challenge of predicting the long-term effects of traumatic brain injuries based on time-series images. This work is also applicable to the quantification of tumor progression (e.g., estimating its infiltrating and displacing components) and predicting chronic blood perfusion changes after stroke. We demonstrate the utility of the method using simulated data as well as scans from a clinical traumatic brain injury patient. PMID:21995083
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neary, Vincent Sinclair; Yang, Zhaoqing; Wang, Taiping
A wave model test bed is established to benchmark, test, and evaluate spectral wave models and modeling methodologies (i.e., best practices) for predicting the wave energy resource parameters recommended by the International Electrotechnical Commission, IEC TS 62600-101 Ed. 1.0 (2015). Among other benefits, the model test bed can be used to investigate the suitability of different models, specifically what source terms should be included in spectral wave models under different wave climate conditions and for different classes of resource assessment. The overarching goal is to use these investigations to provide industry guidance for model selection and modeling best practices depending on the wave site conditions and desired class of resource assessment. Modeling best practices are reviewed, and limitations and knowledge gaps in predicting wave energy resource parameters are identified.
NASA Technical Reports Server (NTRS)
Dare, P. M.; Smith, P. J.
1983-01-01
The eddy kinetic energy budget is calculated for a 48-hour forecast of an intense occluding winter cyclone associated with a strong, well-developed jet stream. The model output consists of the initialized (1200 GMT January 9, 1975) and the 12, 24, 36, and 48 hour forecast fields from the Drexel/NCAR Limited Area Mesoscale Prediction System (LAMPS) model. The LAMPS forecast compares well with observations for the first 24 hours, but then overdevelops the low-level cyclone while inadequately developing the upper-air wave and jet. Eddy kinetic energy was found to be concentrated in the upper troposphere, with maxima flanking the primary trough. The increases in kinetic energy were found to be due to an excess of the primary source term, the horizontal flux of eddy kinetic energy, over the primary sinks, the generation and dissipation of eddy kinetic energy.
The relationship between crustal tectonics and internal evolution in the moon and Mercury
NASA Technical Reports Server (NTRS)
Solomon, S. C.
1977-01-01
The relationship between crustal tectonics and thermal evolution is discussed in terms of the moon and Mercury. Finite strain theory and depth and temperature-dependent thermal expansion are used to evaluate previous conclusions about early lunar history. Factors bringing about core differentiation in the first 0.6 b.y. of Mercurian evolution are described. The influence of concentrating radioactive heat sources located in Mercury's crust on the predicted contraction is outlined. The predicted planetary volume change is explored with regard to quantitative limits on the extent of Mercurian core solidification. Lunar and Mercurian thermal stresses involved in thermal evolution are reviewed, noting the history of surface volcanism. It is concluded that surface faulting and volcanism are closely associated with the thermal evolution of the whole planetary volume. As the planet cools or is heated, several types of tectonic and volcanic effects may be produced by thermal stress occurring in the lithosphere.
NASA Technical Reports Server (NTRS)
Seasly, Elaine
2015-01-01
To combat contamination of physical assets and provide reliable data to decision makers in the space and missile defense community, a modular open system architecture for creation of contamination models and standards is proposed. Predictive tools for quantifying the effects of contamination can be calibrated from NASA data of long-term orbiting assets. This data can then be extrapolated to missile defense predictive models. By utilizing a modular open system architecture, sensitive data can be de-coupled and protected while benefitting from open source data of calibrated models. This system architecture will include modules that will allow the designer to trade the effects of baseline performance against the lifecycle degradation due to contamination while modeling the lifecycle costs of alternative designs. In this way, each member of the supply chain becomes an informed and active participant in managing contamination risk early in the system lifecycle.
Zhu, Yanhong; Huang, Lin; Li, Jingyi; Ying, Qi; Zhang, Hongliang; Liu, Xingang; Liao, Hong; Li, Nan; Liu, Zhenxin; Mao, Yuhao; Fang, Hao; Hu, Jianlin
2018-06-01
Particulate matter (PM) in the atmosphere has adverse effects on human health, ecosystems, and visibility. It also plays an important role in meteorology and climate change. A good understanding of its sources is essential for effective emission controls to reduce PM and to protect public health. In this study, a total of 239 PM source apportionment studies in China published during 1987-2017 were reviewed. The documents studied include peer-reviewed papers in international and Chinese journals, as well as degree dissertations. The methods applied in these studies were summarized and the main sources in various regions of China were identified. The trends of source contributions at two major cities with abundant studies over long time periods were analyzed. The most frequently used methods for PM source apportionment in China are receptor models, including chemical mass balance (CMB), positive matrix factorization (PMF), and principal component analysis (PCA). Dust, fossil fuel combustion, transportation, biomass burning, industrial emissions, secondary inorganic aerosol (SIA), and secondary organic aerosol (SOA) are the main source categories of fine PM identified in China. Even though the sources of PM vary among the seven different geographical areas of China, SIA, industrial, and dust emissions are generally found to be the top three source categories in 2007-2016. A number of studies investigated the sources of SIA and SOA in China using air quality models and indicated that fossil fuel combustion and industrial emissions were the most important sources of SIA (together contributing 63.5%-88.1% of SO4^2- and 47.3%-70% of NO3^-), and agricultural emissions were the dominant source of NH4^+ (contributing 53.9%-90%). Biogenic emissions were the most important source of SOA in China in summer, while residential and industrial emissions were important in winter. Long-term changes of PM sources at the two megacities of Beijing and Nanjing indicate that the contributions of fossil fuel and industrial sources have been declining after stricter emission controls in recent years. In general, dust and industrial contributions decreased and transportation contributions increased after 2000. PM2.5 emissions are predicted to decline in most regions during 2005-2030, even though energy consumption, except for biomass burning, is predicted to continue to increase. Industrial, residential, and biomass burning sources will become more important in the future under business-as-usual scenarios. This review provides valuable information about the main sources of PM and their trends in China. A few recommendations are suggested to further improve our understanding of the sources and to develop effective PM control strategies in various regions of China.
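As a hedged illustration of the chemical mass balance (CMB) receptor model named above, the sketch below solves for nonnegative source contributions given made-up source profiles and ambient measurements; the species, sources, and numbers are placeholders, not data from any reviewed study.

```python
# Minimal CMB sketch: ambient concentrations c ~ F @ s, solved with nonnegative
# least squares so that source contributions s stay physically meaningful.
import numpy as np
from scipy.optimize import nnls

# Rows: chemical species; columns: hypothetical sources (dust, coal, traffic).
F = np.array([[0.30, 0.05, 0.02],
              [0.02, 0.25, 0.10],
              [0.01, 0.10, 0.30],
              [0.10, 0.05, 0.05]])
c = np.array([6.5, 4.2, 3.8, 2.0])      # measured ambient concentrations (ug/m3)

s, resid = nnls(F, c)                   # nonnegative source contributions
print(dict(zip(["dust", "coal", "traffic"], s.round(2))), "residual:", round(resid, 3))
```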
Fine-Tuning the Accretion Disk Clock in Hercules X-1
NASA Technical Reports Server (NTRS)
Still, M.; Boyd, P.
2004-01-01
RXTE ASM count rates from the X-ray pulsar Her X-1 began falling consistently during the late months of 2003. The source is undergoing another state transition similar to the anomalous low state of 1999. This new event has triggered observations from both space- and ground-based observatories. In order to aid data interpretation and telescope scheduling, and to facilitate the phase-connection of cycles before and after the state transition, we have re-calculated the precession ephemeris using cycles over the last 3.5 years. We report that the source has displayed a different precession period since the last anomalous event. Additional archival data from CGRO suggest that each low state is accompanied by a change in precession period and that the subsequent period is correlated with accretion flux. Consequently, our analysis reveals long-term accretion disk behaviour of the kind predicted by theoretical models of radiation-driven warping.
NASA Astrophysics Data System (ADS)
Kadem, L.; Knapp, Y.; Pibarot, P.; Bertrand, E.; Garcia, D.; Durand, L. G.; Rieu, R.
2005-12-01
The effective orifice area (EOA) is the most commonly used parameter to assess the severity of aortic valve stenosis as well as the performance of valve substitutes. Particle image velocimetry (PIV) may be used for in vitro estimation of valve EOA. In the present study, we propose a new and simple method based on Howe’s developments of Lighthill’s aero-acoustic theory. This method is based on an acoustical source term (AST) to estimate the EOA from the transvalvular flow velocity measurements obtained by PIV. The EOAs measured by the AST method downstream of three sharp-edged orifices were in excellent agreement with the EOAs predicted from the potential flow theory used as the reference method in this study. Moreover, the AST method was more accurate than other conventional PIV methods based on streamlines, inflexion point or vorticity to predict the theoretical EOAs. The superiority of the AST method is likely due to the nonlinear form of the AST. There was also an excellent agreement between the EOAs measured by the AST method downstream of the three sharp-edged orifices as well as downstream of a bioprosthetic valve with those obtained by the conventional clinical method based on Doppler-echocardiographic measurements of transvalvular velocity. The results of this study suggest that this new simple PIV method provides an accurate estimation of the aortic valve flow EOA. This new method may thus be used as a reference method to estimate the EOA in experimental investigation of the performance of valve substitutes and to validate Doppler-echocardiographic measurements under various physiologic and pathologic flow conditions.
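The paper's exact AST expression is not reproduced here. As a rough sketch of the kind of quantity involved, Howe's development of the acoustic analogy places the Lamb vector (vorticity cross velocity) at the heart of the source term, and the snippet below forms a Lamb-vector-divergence map from a 2D gridded velocity field of the sort PIV provides; the toy field and grid spacing are assumptions.

```python
# Minimal sketch: vorticity, Lamb vector, and its divergence on a 2D PIV-like grid.
import numpy as np

dx = dy = 1e-3                            # grid spacing (m), illustrative
y, x = np.mgrid[0:64, 0:64] * dx
u = np.sin(2 * np.pi * y / (64 * dy))     # toy in-plane velocity components
v = 0.1 * np.cos(2 * np.pi * x / (64 * dx))

dvdx = np.gradient(v, dx, axis=1)
dudy = np.gradient(u, dy, axis=0)
omega_z = dvdx - dudy                     # out-of-plane vorticity

Lx, Ly = -omega_z * v, omega_z * u        # Lamb vector (omega x u) in 2D
src = np.gradient(Lx, dx, axis=1) + np.gradient(Ly, dy, axis=0)  # div(omega x u)
```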
NASA Astrophysics Data System (ADS)
Gilmore, A. M.
2012-12-01
Drinking water, wastewater, and reuse plants must deal with regulations associated with bacterial contamination and halogen disinfection procedures that can generate harmful disinfection by-products (DBPs), including trihalomethanes (THMs), haloacetic acids (HAAs), and other compounds. The natural fluorescent chromophoric dissolved organic matter (CDOM) is regulated as the major DBP precursor. This study outlines the advantages and current limitations associated with optical monitoring of water treatment processes using contemporary fluorescence excitation-emission mapping (F-EEM). The F-EEM method, coupled with practical peak indexing and multivariate analyses, is potentially superior in terms of cost, speed, and sensitivity to conventional total organic carbon (TOC) meters and specific UV-absorbance (SUVA) measurements. Hence there is strong interest in developing revised environmental regulations around F-EEM instruments, which can simultaneously measure the SUVA and DOC parameters. Importantly, the F-EEM technique, unlike single-point TOC and SUVA signals, can resolve CDOM classes, distinguishing those that strongly produce DBPs. The F-EEM DBP prediction method can be applied to surface water sources to evaluate DBP potential as a function of point sources and reservoir depth profiles. It can also be applied in-line to rapidly adjust DOC removal processes, including sedimentation-flocculation, microfiltration, reverse osmosis, and ozonation. Limitations and interferences for F-EEMs are discussed, including those common to SUVA and TOC, in contrast to the advantages, which include that F-EEMs are less prone to interference from inorganic carbon and metal contamination and require little if any chemical preparation. In conclusion, the F-EEM method is discussed in terms of not only the DBP problem but also as a means of predicting (concurrently with DBP monitoring) organic membrane fouling in water-reuse and desalination plants.
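As an illustration of the peak-indexing step, the sketch below reads intensities in the classic Coble peak regions commonly used to index CDOM classes; the wavelength windows are conventional literature values and an assumption here, not bounds taken from this study.

```python
# Minimal sketch: index an excitation-emission matrix (EEM) by peak regions.
import numpy as np

ex = np.arange(240, 451, 5)              # excitation wavelengths (nm)
em = np.arange(300, 601, 2)              # emission wavelengths (nm)
eem = np.random.default_rng(3).random((ex.size, em.size))  # placeholder EEM

def peak(eem, ex_rng, em_rng):
    """Maximum intensity within an excitation/emission window."""
    i = (ex >= ex_rng[0]) & (ex <= ex_rng[1])
    j = (em >= em_rng[0]) & (em <= em_rng[1])
    return eem[np.ix_(i, j)].max()

print("Peak T (protein-like):", peak(eem, (270, 280), (320, 350)))
print("Peak C (humic-like):  ", peak(eem, (320, 360), (420, 460)))
```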
Source term identification in atmospheric modelling via sparse optimization
NASA Astrophysics Data System (ADS)
Adam, Lukas; Branda, Martin; Hamburger, Thomas
2015-04-01
Inverse modelling plays an important role in identifying the amount of harmful substances released into the atmosphere during major incidents such as power plant accidents or volcano eruptions. Another possible application of inverse modelling lies in monitoring CO2 emission limits, where only observations at certain places are available and the task is to estimate the total releases at given locations. This gives rise to minimizing the discrepancy between the observations and the model predictions. There are two standard ways of solving such problems. In the first one, this discrepancy is regularized by adding additional terms. Such terms may include Tikhonov regularization, a distance from a priori information, or a smoothing term. The resulting, usually quadratic, problem is then solved via standard optimization solvers. The second approach assumes that the error term has a (normal) distribution and makes use of Bayesian modelling to identify the source term. Instead of following the above-mentioned approaches, we utilize techniques from the field of compressive sensing. Such techniques look for the sparsest solution (the solution with the smallest number of nonzeros) of a linear system, where a maximal allowed error term may be added to this system. Even though this field is well developed, with many possible solution techniques, most of them do not consider even the simplest constraints which are naturally present in atmospheric modelling. One such example is the nonnegativity of release amounts. We believe that the concept of a sparse solution is natural in both the problem of identifying the source location and that of identifying the time process of the source release. In the first case, it is usually assumed that there are only a few release points and the task is to find them. In the second case, the time window is usually much longer than the duration of the actual release. In both cases, the optimal solution should contain a large number of zeros, giving rise to the concept of sparsity. In the paper, we summarize several optimization techniques which are used for finding sparse solutions and propose modifications to handle selected constraints such as nonnegativity constraints and simple linear constraints, for example on the minimal or maximal amount of total release. These techniques range from successive convex approximations to the solution of one nonconvex problem. On simple examples, we explain these techniques and compare them in terms of implementation simplicity, approximation capability, and convergence properties. Finally, these methods are applied to the European Tracer Experiment (ETEX) data and the results are compared with current state-of-the-art techniques such as regularized least squares or the Bayesian approach. The obtained results show the surprisingly good performance of these techniques. This research is supported by the EEA/Norwegian Financial Mechanism under project 7F14287 STRADI.
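A minimal sketch of one such technique follows, under assumptions (a made-up source-receptor matrix M and observations y): proximal-gradient (ISTA-style) iterations minimizing a least-squares misfit plus an L1 sparsity penalty, with the nonnegativity of release amounts enforced by projection.

```python
# Minimal sketch: sparse, nonnegative source-term recovery by proximal gradient.
import numpy as np

rng = np.random.default_rng(1)
M = rng.random((40, 120))                 # hypothetical source-receptor matrix
q_true = np.zeros(120); q_true[[10, 55]] = [3.0, 1.5]   # two active releases
y = M @ q_true + 0.01 * rng.standard_normal(40)         # noisy observations

lam = 0.05
L = np.linalg.norm(M, 2) ** 2             # Lipschitz constant of the gradient
q = np.zeros(120)
for _ in range(2000):
    grad = M.T @ (M @ q - y)              # gradient of 0.5*||Mq - y||^2
    q = np.maximum(q - (grad + lam) / L, 0.0)  # prox of lam*sum(q) + indicator(q >= 0)

print(np.nonzero(q > 0.1 * q.max())[0])   # dominant recovered release locations
```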
Medium- and Long-term Prediction of LOD Change with the Leap-step Autoregressive Model
NASA Astrophysics Data System (ADS)
Liu, Q. B.; Wang, Q. J.; Lei, M. F.
2015-09-01
It is known that the accuracy of medium- and long-term prediction of length-of-day (LOD) changes based on the combined least-squares and autoregressive (LS+AR) model decreases gradually. The leap-step autoregressive (LSAR) model is more accurate and stable in medium- and long-term prediction, and it is therefore used to forecast LOD changes in this work. The LOD series from EOP 08 C04, provided by the IERS (International Earth Rotation and Reference Systems Service), is then used to compare the effectiveness of the LSAR and traditional AR methods. The predicted series resulting from the two models show that the prediction accuracy of the LSAR model is better than that of the AR model in medium- and long-term prediction.
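A minimal sketch of the leap-step idea as we read it follows (the paper's exact formulation may differ): instead of regressing x(t) on the p immediately preceding values, a leap-step AR model regresses on values a fixed leap s apart, x(t - s), x(t - 2s), ..., which suits multi-step-ahead prediction.

```python
# Minimal sketch of a leap-step autoregressive (LSAR) fit and forecast.
import numpy as np

def fit_lsar(x, p=4, s=30):
    """Fit coefficients of x(t) = sum_k a_k * x(t - k*s) by least squares."""
    rows, targets = [], []
    for t in range(p * s, len(x)):
        rows.append([x[t - k * s] for k in range(1, p + 1)])
        targets.append(x[t])
    coef, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return coef

def predict_lsar(x, coef, s=30):
    """Predict the value s steps beyond the last sample."""
    return sum(c * x[-1 - k * s] for k, c in enumerate(coef))

x = np.cumsum(np.random.default_rng(4).standard_normal(2000))  # toy LOD-like series
coef = fit_lsar(x, p=4, s=30)
print(predict_lsar(x, coef, s=30))        # 30-step-ahead forecast
```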
pyJac: Analytical Jacobian generator for chemical kinetics
NASA Astrophysics Data System (ADS)
Niemeyer, Kyle E.; Curtis, Nicholas J.; Sung, Chih-Jen
2017-06-01
Accurate simulations of combustion phenomena require the use of detailed chemical kinetics in order to capture limit phenomena such as ignition and extinction as well as predict pollutant formation. However, the chemical kinetic models for hydrocarbon fuels of practical interest typically have large numbers of species and reactions and exhibit high levels of mathematical stiffness in the governing differential equations, particularly for larger fuel molecules. In order to integrate the stiff equations governing chemical kinetics, reactive-flow simulations generally rely on implicit algorithms that require frequent Jacobian matrix evaluations. Some in situ and a posteriori computational diagnostics methods also require accurate Jacobian matrices, including computational singular perturbation and chemical explosive mode analysis. Typically, finite differences numerically approximate these, but for larger chemical kinetic models this poses significant computational demands, since the number of chemical source term evaluations scales with the square of the species count. Furthermore, existing analytical Jacobian tools do not optimize evaluations or support emerging SIMD processors such as GPUs. Here we introduce pyJac, a Python-based open-source program that generates analytical Jacobian matrices for use in chemical kinetics modeling and analysis. In addition to producing the necessary customized source code for evaluating reaction rates (including all modern reaction rate formulations), the chemical source terms, and the Jacobian matrix, pyJac uses an optimized evaluation order to minimize computational and memory operations. As a demonstration, we first establish the correctness of the Jacobian matrices for kinetic models of hydrogen, methane, ethylene, and isopentanol oxidation (with species counts ranging from 13 to 360) by showing agreement within 0.001% of matrices obtained via automatic differentiation. We then demonstrate the performance achievable on CPUs and GPUs using pyJac via matrix evaluation timing comparisons; the routines produced by pyJac outperformed first-order finite differences by 3-7.5 times and the existing analytical Jacobian software TChem by 1.1-2.2 times on a single-threaded basis. It is noted that TChem is not thread-safe, while pyJac is easily parallelized, and hence can greatly outperform TChem on multicore CPUs. The Jacobian matrix generator we describe here will be useful for reducing the cost of integrating chemical source terms with implicit algorithms in particular, and for algorithms that require an accurate Jacobian matrix in general. Furthermore, the open-source release of the program and its Python-based implementation will enable wide adoption.
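As a toy illustration of the kind of verification described (with a two-species reaction invented for this sketch, not a pyJac-generated routine), the snippet below compares an analytical Jacobian of a chemical source term against its first-order finite-difference approximation.

```python
# Minimal sketch: analytical vs. finite-difference Jacobian for A <-> B kinetics.
import numpy as np

k1, k2 = 2.0, 0.5                         # illustrative rate constants

def source(y):
    """dy/dt for the reversible reaction A <-> B."""
    a, b = y
    return np.array([-k1 * a + k2 * b, k1 * a - k2 * b])

def jac_analytical(y):
    return np.array([[-k1, k2], [k1, -k2]])

def jac_fd(y, eps=1e-7):
    """First-order finite-difference Jacobian, column by column."""
    J = np.zeros((len(y), len(y)))
    f0 = source(y)
    for j in range(len(y)):
        yp = y.copy(); yp[j] += eps
        J[:, j] = (source(yp) - f0) / eps
    return J

y = np.array([1.0, 0.2])
print(np.max(np.abs(jac_analytical(y) - jac_fd(y))))   # small truncation error
```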
Nuclear Explosion Monitoring Advances and Challenges
NASA Astrophysics Data System (ADS)
Baker, G. E.
2015-12-01
We address the state of the art in areas important to monitoring, current challenges, specific efforts that illustrate approaches to addressing shortcomings in capabilities, and additional approaches that might be helpful. The exponential increase in the number of events that must be screened as magnitude thresholds decrease presents one of the greatest challenges. Ongoing efforts to exploit repeat seismic events using waveform correlation, subspace methods, and empirical matched field processing hold as much "game-changing" promise as anything being done, and further efforts to develop and apply such methods efficiently are critical. Greater accuracy of travel-time, signal-loss, and full-waveform predictions is still needed to better locate and discriminate seismic events. Important developments include methods to model velocities using multiple types of data; to model attenuation with better separation of source, path, and site effects; and to model focusing and defocusing of surface waves. Current efforts to model higher-frequency full waveforms are likely to improve source characterization, while more effective estimation of attenuation from ambient noise holds promise for filling in gaps. Censoring in attenuation modeling is a critical problem to address. Quantifying the uncertainty of discriminants is key to their operational use. Efforts to do so for moment tensor (MT) inversion are particularly important, and fundamental progress on the statistics of MT distributions is the most important advance needed in the near term in this area. Source physics is seeing great progress through theoretical, experimental, and simulation studies. The biggest need is to accurately predict the effects of source conditions on seismic generation. Uniqueness is the challenge here. Progress will depend on studies that probe what distinguishes mechanisms, rather than whether one of many possible mechanisms is consistent with some set of observations.
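A minimal sketch of the waveform-correlation screening mentioned above follows, on synthetic data: slide a master-event template along a continuous record and flag windows whose normalized cross-correlation exceeds a threshold, indicating a likely repeat event. The threshold and data here are illustrative.

```python
# Minimal sketch: matched-template detection by normalized cross-correlation.
import numpy as np

rng = np.random.default_rng(2)
template = rng.standard_normal(200)               # master-event waveform
data = rng.standard_normal(5000) * 0.5            # continuous noisy record
data[3100:3300] += template                        # buried repeat event

n = len(template)
t_norm = (template - template.mean()) / template.std()
cc = np.empty(len(data) - n + 1)
for i in range(len(cc)):
    w = data[i:i + n]
    cc[i] = np.dot(t_norm, (w - w.mean()) / w.std()) / n   # Pearson CC in [-1, 1]

print(np.nonzero(cc > 0.6)[0])                    # detections, near index 3100
```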
Bertrand, Julie Marilyne; Moulin, Chris John Anthony; Souchay, Céline
2017-05-01
Our objective was to explore metamemory in short-term memory across the lifespan. Five age groups participated in this study: 3 groups of children (4-13 years old), and younger and older adults. We used a three-phase task: prediction-span-postdiction. For the prediction and postdiction phases, participants reported with a Yes/No response whether they could recall in order a series of images. For the span task, they had to actually recall such series. From 4 years old, children have some ability to monitor their short-term memory and are able to adjust their prediction after experiencing the task. However, accuracy still improves significantly until adolescence. Although the older adults had a lower span, they were as accurate as young adults in their evaluation, suggesting that metamemory is unimpaired for short-term memory tasks in older adults.
• We investigate metamemory for short-term memory tasks across the lifespan.
• We find younger children cannot accurately predict their span length.
• Older adults are accurate in predicting their span length.
• People's metamemory accuracy was related to their short-term memory span.