Wesolowski, Edwin A.
1996-01-01
Two separate studies to simulate the effects of discharging treated wastewater to the Red River of the North at Fargo, North Dakota, and Moorhead, Minnesota, have been completed. In the first study, the Red River at Fargo Water-Quality Model was calibrated and verified for ice-free conditions. In the second study, the Red River at Fargo Ice-Cover Water-Quality Model was verified for ice-cover conditions. To better understand and apply the Red River at Fargo Water-Quality Model and the Red River at Fargo Ice-Cover Water-Quality Model, the uncertainty associated with simulated constituent concentrations and property values was analyzed and quantified using the Enhanced Stream Water Quality Model-Uncertainty Analysis. The Monte Carlo simulation and first-order error analysis methods were used to analyze the uncertainty in simulated values for six constituents and properties at sites 5, 10, and 14 (upstream to downstream order). The constituents and properties analyzed for uncertainty are specific conductance, total organic nitrogen (reported as nitrogen), total ammonia (reported as nitrogen), total nitrite plus nitrate (reported as nitrogen), 5-day carbonaceous biochemical oxygen demand for ice-cover conditions and ultimate carbonaceous biochemical oxygen demand for ice-free conditions, and dissolved oxygen. Results are given in detail for both the ice-cover and ice-free conditions for specific conductance, total ammonia, and dissolved oxygen. The sensitivity and uncertainty of the simulated constituent concentrations and property values to input variables differ substantially between ice-cover and ice-free conditions. During ice-cover conditions, simulated specific-conductance values are most sensitive to the headwater-source specific-conductance values upstream of site 10 and the point-source specific-conductance values downstream of site 10. These headwater-source and point-source specific-conductance values also are the key sources of uncertainty. Simulated total ammonia concentrations are most sensitive to the point-source total ammonia concentrations at all three sites. Other input variables that contribute substantially to the variability of simulated total ammonia concentrations are the headwater-source total ammonia and the instream reaction coefficient for biological decay of total ammonia to total nitrite. Simulated dissolved-oxygen concentrations at all three sites are most sensitive to headwater-source dissolved-oxygen concentration. This input variable is the key source of variability for simulated dissolved-oxygen concentrations at sites 5 and 10. Headwater-source and point-source dissolved-oxygen concentrations are the key sources of variability for simulated dissolved-oxygen concentrations at site 14. During ice-free conditions, simulated specific-conductance values at all three sites are most sensitive to the headwater-source specific-conductance values. Headwater-source specific-conductance values also are the key source of uncertainty. The input variables to which total ammonia and dissolved oxygen are most sensitive vary from site to site and may or may not correspond to the input variables that contribute the most to the variability. The input variables that contribute the most to the variability of simulated total ammonia concentrations are point-source total ammonia, instream reaction coefficient for biological decay of total ammonia to total nitrite, and Manning's roughness coefficient.
The input variables that contribute the most to the variability of simulated dissolved-oxygen concentrations are reaeration rate, sediment oxygen demand rate, and headwater-source algae as chlorophyll a.
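The two uncertainty methods named above can be contrasted on a toy response; in the sketch below, the dissolved-oxygen function and all input statistics are illustrative assumptions, not values from the Red River models.

```python
import numpy as np

# Minimal sketch of Monte Carlo simulation vs. first-order error analysis,
# applied to a hypothetical dissolved-oxygen response. `do_sim` and all input
# statistics are illustrative assumptions.
def do_sim(do_headwater, do_point, k_decay):
    """Toy response: simulated dissolved oxygen at a downstream site (mg/L)."""
    return 0.6 * do_headwater + 0.3 * do_point - 2.0 * k_decay

means = np.array([8.0, 6.0, 0.3])    # headwater DO, point-source DO, decay rate
sds   = np.array([0.5, 0.8, 0.05])   # assumed input standard deviations

# Monte Carlo: propagate the input distributions through the model.
rng = np.random.default_rng(1)
samples = rng.normal(means, sds, size=(10000, 3))
mc_out = do_sim(*samples.T)
print(f"Monte Carlo: mean={mc_out.mean():.2f}, sd={mc_out.std():.2f}")

# First-order error analysis: variance from finite-difference sensitivities.
grad = np.empty(3)
for i in range(3):
    d = np.zeros(3); d[i] = 1e-6
    grad[i] = (do_sim(*(means + d)) - do_sim(*(means - d))) / 2e-6
foea_sd = np.sqrt(np.sum((grad * sds) ** 2))
print(f"First-order error analysis: sd={foea_sd:.2f}")
# The per-input terms (grad*sds)**2 rank each input's contribution to the
# output variance, which is how key sources of uncertainty are identified.
```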
He, Yan-Lin; Xu, Yuan; Geng, Zhi-Qiang; Zhu, Qun-Xiong
2016-03-01
In this paper, a hybrid robust model based on an improved functional link neural network integrated with partial least squares (IFLNN-PLS) is proposed. Firstly, an improved functional link neural network with a small norm of expanded weights and high input-output correlation (SNEWHIOC-FLNN) was proposed for enhancing the generalization performance of the FLNN. Unlike the traditional FLNN, the expanded variables of the original inputs are not directly used as the inputs in the proposed SNEWHIOC-FLNN model. Instead, the original inputs are attached to expanded weights of small norm, so that the correlation coefficient between some of the expanded variables and the outputs is enhanced. The larger the correlation coefficient is, the more relevant the expanded variable tends to be. In the end, the expanded variables with larger correlation coefficients are selected as the inputs to improve the performance of the traditional FLNN. To test the proposed SNEWHIOC-FLNN model, three UCI (University of California, Irvine) regression datasets named Housing, Concrete Compressive Strength (CCS), and Yacht Hydrodynamics (YHD) were selected. Then a hybrid model based on the improved FLNN integrated with partial least squares (IFLNN-PLS) was built. In the IFLNN-PLS model, the connection weights are calculated using the partial least squares method rather than the error back-propagation algorithm. Lastly, IFLNN-PLS was developed as an intelligent measurement model for accurately predicting the key variables in the Purified Terephthalic Acid (PTA) process and the High Density Polyethylene (HDPE) process. Simulation results illustrated that IFLNN-PLS could significantly improve the prediction performance.
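The functional-expansion-plus-correlation-screening idea can be sketched compactly; the trigonometric expansion set, screening threshold, and data below are illustrative assumptions rather than the paper's exact SNEWHIOC weighting scheme.

```python
import numpy as np

def functional_expansion(X):
    """Expand each input column with fixed nonlinear basis functions, a
    common choice for functional link networks."""
    return np.hstack([X, np.sin(np.pi * X), np.cos(np.pi * X), X ** 2])

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=200)

Z = functional_expansion(X)
# Screen expanded variables by |correlation| with the output, keeping the
# most relevant ones (the 0.1 threshold is an arbitrary illustrative choice).
corr = np.array([abs(np.corrcoef(Z[:, j], y)[0, 1]) for j in range(Z.shape[1])])
keep = corr > 0.1
Zs = Z[:, keep]

# The "network" is then a linear least-squares fit on the selected features.
A = np.column_stack([Zs, np.ones(len(y))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ w
print(f"kept {keep.sum()}/{Z.shape[1]} features, "
      f"RMSE={np.sqrt(np.mean((pred - y) ** 2)):.3f}")
```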
NASA Technical Reports Server (NTRS)
Dash, S. M.; Pergament, H. S.
1978-01-01
The basic code structure is discussed, including the overall program flow and a brief description of all subroutines. Instructions on the preparation of input data, definitions of key FORTRAN variables, sample input and output, and a complete listing of the code are presented.
Stochastic analysis of multiphase flow in porous media: II. Numerical simulations
NASA Astrophysics Data System (ADS)
Abin, A.; Kalurachchi, J. J.; Kemblowski, M. W.; Chang, C.-M.
1996-08-01
The first paper (Chang et al., 1995b) of this two-part series described a stochastic analysis using the spectral/perturbation approach to analyze steady-state two-phase (water and oil) flow in a liquid-unsaturated, three-fluid-phase porous medium. In this paper, the results of numerical simulations are compared with the closed-form expressions obtained using the perturbation approach. We present the solution to the one-dimensional, steady-state oil and water flow equations. The stochastic input processes are the spatially correlated log k, where k is the intrinsic permeability, and the soil retention parameter, α. These solutions are subsequently used in the numerical simulations to estimate the statistical properties of the key output processes. The comparison between the results of the perturbation analysis and the numerical simulations showed good agreement between the two methods over a wide range of log k variability with three different combinations of the input stochastic processes of log k and the soil parameter α. The results clearly demonstrated the importance of considering the spatial variability of key subsurface properties under a variety of physical scenarios. The variability of both capillary pressure and saturation is affected by the type of input stochastic process used to represent the spatial variability. The results also demonstrated the applicability of perturbation theory in predicting the system variability and defining effective fluid properties through the ergodic assumption.
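A realization of the spatially correlated log k input can be generated directly from its covariance; the sketch below uses an exponential covariance and Cholesky factorization, where the variance, correlation length, and mean log k are illustrative assumptions rather than values from the study.

```python
import numpy as np

# Generate one realization of a spatially correlated log k field on a 1-D
# grid, the kind of stochastic input used to drive such simulations.
n, dx = 200, 0.5                 # grid points and spacing (m)
var_logk, corr_len = 1.0, 5.0    # variance of log k and correlation length (m)

x = np.arange(n) * dx
C = var_logk * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

rng = np.random.default_rng(42)
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))  # jitter for numerical stability
logk = -27.0 + L @ rng.standard_normal(n)      # mean log k is arbitrary here
k = np.exp(logk)                               # intrinsic permeability field
print(f"sample mean(log k)={logk.mean():.2f}, var(log k)={logk.var():.2f}")
```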
MODELING OF HUMAN EXPOSURE TO IN-VEHICLE PM2.5 FROM ENVIRONMENTAL TOBACCO SMOKE
Cao, Ye; Frey, H. Christopher
2012-01-01
Environmental tobacco smoke (ETS) is estimated to be a significant contributor to in-vehicle human exposure to fine particulate matter of 2.5 µm or smaller (PM2.5). A critical assessment was conducted of a mass balance model for estimating PM2.5 concentration with smoking in a motor vehicle. Recommendations for the range of inputs to the mass-balance model are given based on a literature review. Sensitivity analysis was used to determine which inputs should be prioritized for data collection. Air exchange rate (ACH) and the deposition rate have wider relative ranges of variation than other inputs, representing inter-individual variability in operations and inter-vehicle variability in performance, respectively. Cigarette smoking and emission rates, and vehicle interior volume, are also key inputs. The in-vehicle ETS mass balance model was incorporated into the Stochastic Human Exposure and Dose Simulation for Particulate Matter (SHEDS-PM) model to quantify the potential magnitude and variability of in-vehicle exposures to ETS. The in-vehicle exposure also takes into account near-road incremental PM2.5 concentration from on-road emissions. Results of the probabilistic study indicate that ETS is a key contributor to the in-vehicle average and high-end exposure. Factors that mitigate in-vehicle ambient PM2.5 exposure lead to higher in-vehicle ETS exposure, and vice versa. PMID:23060732
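The well-mixed mass balance underlying such a model can be written as dC/dt = E/V - (ACH + k_dep)*C; the sketch below integrates it for one cigarette, with all parameter values as illustrative assumptions rather than the paper's recommended inputs.

```python
import numpy as np

# Sketch of a well-mixed cabin mass balance for PM2.5 during smoking:
#   dC/dt = E/V - (ACH + k_dep) * C
E = 1.0e3        # ETS PM2.5 emission rate while smoking (ug/min), assumed
V = 3.0e3        # cabin interior volume (L), assumed
ACH = 10.0 / 60  # air exchanges per minute, assumed
k_dep = 2.0 / 60 # particle deposition rate per minute, assumed

dt, T = 0.1, 30.0                 # time step and duration (min)
t = np.arange(0, T, dt)
C = np.zeros_like(t)              # concentration (ug/L)
smoking = t < 7.0                 # cigarette burns for the first 7 minutes
for i in range(1, len(t)):
    source = E / V if smoking[i] else 0.0
    C[i] = C[i-1] + dt * (source - (ACH + k_dep) * C[i-1])

print(f"peak PM2.5 = {1000 * C.max():.0f} ug/m^3")  # 1 ug/L = 1000 ug/m^3
# The steady-state level E/(V*(ACH+k_dep)) shows directly why air exchange
# and deposition rates are key inputs for the modeled exposure.
```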
Corcoran, Jennifer M.; Knight, Joseph F.; Gallant, Alisa L.
2013-01-01
Wetland mapping at the landscape scale using remotely sensed data requires both affordable data and an efficient, accurate classification method. Random forest classification offers several advantages over traditional land cover classification techniques, including a bootstrapping technique to generate robust estimations of outliers in the training data, as well as the capability of measuring classification confidence. Though the random forest classifier can generate complex decision trees with a multitude of input data and still not run a high risk of overfitting, there is a great need to reduce computational and operational costs by including only key input data sets without sacrificing a significant level of accuracy. Our main questions for this study site in Northern Minnesota were: (1) how do the classification accuracy and confidence of mapping wetlands compare using different remote sensing platforms and sets of input data; (2) what are the key input variables for accurate differentiation of upland, water, and wetlands, including wetland type; and (3) which datasets and seasonal imagery yield the best accuracy for wetland classification. Our results show the key input variables include terrain (elevation and curvature) and soils descriptors (hydric), along with an assortment of remotely sensed data collected in the spring (satellite visible, near infrared, and thermal bands; satellite normalized vegetation index and Tasseled Cap greenness and wetness; and horizontal-horizontal (HH) and horizontal-vertical (HV) polarization using L-band satellite radar). We undertook this exploratory analysis to inform decisions by natural resource managers charged with monitoring wetland ecosystems and to aid in designing a system for consistent operational mapping of wetlands across landscapes similar to those found in Northern Minnesota.
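As a minimal illustration of the classifier choice described above, the sketch below trains a random forest on synthetic stand-ins for raster layers and ranks the input variables by importance; the layer names and data are assumptions, not the study's inputs.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))
names = ["elevation", "curvature", "hydric_soil", "spring_NIR", "radar_HV"]
# Make class membership depend mainly on the first three layers.
y = (X[:, 0] - 0.8 * X[:, 2] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=n)) > 0

rf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
rf.fit(X, y)
print(f"out-of-bag accuracy: {rf.oob_score_:.3f}")  # the bootstrap-based check
for name, imp in sorted(zip(names, rf.feature_importances_), key=lambda p: -p[1]):
    print(f"{name:12s} importance {imp:.3f}")
# Dropping low-importance layers reduces data and computation costs with
# little loss of accuracy, which is the trade-off the authors examine.
```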
Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics
Schwartz, Odelia; Sejnowski, Terrence J.; Dayan, Peter
2010-01-01
Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing. PMID:16999575
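The generative step described here is compact enough to sketch directly. In the toy sample below, the filter-to-mixer assignment, dimensions, and mixer distribution are illustrative assumptions (the paper learns a probabilistic, soft assignment rather than fixing one).

```python
import numpy as np

# Gaussian variables (local filter structure) multiplied by mixer variables,
# with each filter assigned to one mixer from a small set.
rng = np.random.default_rng(3)
n_samples, n_filters = 5000, 8
assignment = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # filters 0-3 share mixer 0

g = rng.standard_normal((n_samples, n_filters))          # gaussian components
mixers = rng.rayleigh(scale=1.0, size=(n_samples, 2))    # two mixer variables
x = mixers[:, assignment] * g                            # observed inputs

# Filters sharing a mixer show the variance dependence GSMs capture;
# filters with different mixers do not.
same = np.corrcoef(x[:, 0] ** 2, x[:, 1] ** 2)[0, 1]
diff = np.corrcoef(x[:, 0] ** 2, x[:, 4] ** 2)[0, 1]
print(f"corr of squared responses: same mixer {same:.2f}, different {diff:.2f}")
# Inference runs the other way: infer the assignment and mixer values from
# observed x, which is what the soft mixer assignment scheme learns.
```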
Data-driven process decomposition and robust online distributed modelling for large-scale processes
NASA Astrophysics Data System (ADS)
Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou
2018-02-01
With the increasing attention paid to networked control, system decomposition and distributed models are of significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned into several clusters by the affinity propagation clustering algorithm; each cluster can be regarded as a subsystem. Then the inputs of each subsystem are selected by offline canonical correlation analysis between all process variables and the subsystem's controlled variables. Process decomposition is thus realised after the screening of input and output variables. Once the system decomposition is finished, online subsystem modelling can be carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
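The two decomposition steps (clustering of controlled variables, then canonical-correlation screening of candidate inputs) can be sketched on synthetic data; the similarity measure and dimensions below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
T = 500
U = rng.normal(size=(T, 6))                        # candidate input variables
Y = np.column_stack([                              # controlled variables
    U[:, 0] + 0.9 * U[:, 1],                       # driven by inputs 0-1
    U[:, 0] - U[:, 1],
    U[:, 4] + 0.8 * U[:, 5],                       # driven by inputs 4-5
    U[:, 4] - 0.5 * U[:, 5],
]) + 0.1 * rng.normal(size=(T, 4))

# Step 1: cluster controlled variables by similarity (here |correlation|).
S = np.abs(np.corrcoef(Y.T))
labels = AffinityPropagation(affinity="precomputed", random_state=0).fit(S).labels_
print("controlled-variable clusters:", labels)

# Step 2: for each cluster (subsystem), score candidate inputs with CCA and
# keep the strongest ones as that subsystem's inputs.
for c in np.unique(labels):
    Yc = Y[:, labels == c]
    scores = []
    for j in range(U.shape[1]):
        cca = CCA(n_components=1).fit(U[:, [j]], Yc)
        a, b = cca.transform(U[:, [j]], Yc)
        scores.append(abs(np.corrcoef(a[:, 0], b[:, 0])[0, 1]))
    print(f"subsystem {c}: input scores {np.round(scores, 2)}")
```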
NASA Astrophysics Data System (ADS)
Zhuo, L.; Mekonnen, M. M.; Hoekstra, A. Y.
2014-06-01
Water Footprint Assessment is a fast-growing field of research, but as yet little attention has been paid to the uncertainties involved. This study investigates the sensitivity of and uncertainty in crop water footprint (in m3 t-1) estimates related to uncertainties in important input variables. The study focuses on the green (from rainfall) and blue (from irrigation) water footprint of producing maize, soybean, rice, and wheat at the scale of the Yellow River basin in the period 1996-2005. A grid-based daily water balance model at a 5 by 5 arcmin resolution was applied to compute green and blue water footprints of the four crops in the Yellow River basin in the period considered. The one-at-a-time method was used to analyse the sensitivity of the crop water footprint to fractional changes in seven individual input variables and parameters: precipitation (PR), reference evapotranspiration (ET0), crop coefficient (Kc), crop calendar (planting date with constant growing degree days), soil water content at field capacity (Smax), yield response factor (Ky) and maximum yield (Ym). Uncertainties in crop water footprint estimates related to uncertainties in four key input variables (PR, ET0, Kc, and crop calendar) were quantified through Monte Carlo simulations. The results show that the sensitivities and uncertainties differ across crop types. In general, the water footprint of crops is most sensitive to ET0 and Kc, followed by the crop calendar. Blue water footprints were more sensitive to input variability than green water footprints. The smaller the annual blue water footprint is, the higher its sensitivity to changes in PR, ET0, and Kc. The uncertainties in the total water footprint of a crop due to combined uncertainties in climatic inputs (PR and ET0) were about ±20% (at the 95% confidence interval). The effect of uncertainties in ET0 was dominant compared to that of PR. The uncertainties in the total water footprint of a crop as a result of combined key input uncertainties were on average ±30% (at the 95% confidence level).
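The one-at-a-time method itself is simple to illustrate; in the sketch below the toy water-footprint function is an assumed stand-in for the study's grid-based daily water balance model.

```python
import numpy as np

# One-at-a-time (OAT) sensitivity: perturb each input by a fixed fraction and
# record the fractional change in the output. All values are illustrative.
def water_footprint(pr, et0, kc, ky, ym):
    cwu = kc * et0                      # crop water use (mm)
    stress = min(1.0, pr / cwu)         # crude water-limitation factor
    yield_t = ym * (1 - ky * (1 - stress))
    return 10 * cwu / yield_t           # m3 per tonne (1 mm over 1 ha = 10 m3)

base = dict(pr=450.0, et0=600.0, kc=1.05, ky=1.25, ym=6.0)
wf0 = water_footprint(**base)
for name in base:
    for frac in (-0.1, +0.1):           # +/-10% perturbation
        p = dict(base); p[name] *= 1 + frac
        change = (water_footprint(**p) - wf0) / wf0
        print(f"{name:4s} {frac:+.0%} input -> {change:+.1%} water footprint")
```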
Summary of the key features of seven biomathematical models of human fatigue and performance.
Mallis, Melissa M; Mejdal, Sig; Nguyen, Tammy T; Dinges, David F
2004-03-01
Biomathematical models that quantify the effects of circadian and sleep/wake processes on the regulation of alertness and performance have been developed in an effort to predict the magnitude and timing of fatigue-related responses in a variety of contexts (e.g., transmeridian travel, sustained operations, shift work). This paper summarizes key features of seven biomathematical models reviewed as part of the Fatigue and Performance Modeling Workshop held in Seattle, WA, on June 13-14, 2002. The Workshop was jointly sponsored by the National Aeronautics and Space Administration, U.S. Department of Defense, U.S. Army Medical Research and Materiel Command, Office of Naval Research, Air Force Office of Scientific Research, and U.S. Department of Transportation. An invitation was sent to developers of seven biomathematical models that were commonly cited in scientific literature and/or supported by government funding. On acceptance of the invitation to attend the Workshop, developers were asked to complete a survey of the goals, capabilities, inputs, and outputs of their biomathematical models of alertness and performance. Data from the completed surveys were summarized and juxtaposed to provide a framework for comparing features of the seven models. Survey responses revealed that models varied greatly relative to their reported goals and capabilities. While all modelers reported that circadian factors were key components of their capabilities, they differed markedly with regard to the roles of sleep and work times as input factors for prediction: four of the seven models had work time as their sole input variable(s), while the other three models relied on various aspects of sleep timing for model input. Models also differed relative to outputs: five sought to predict results from laboratory experiments, field, and operational data, while two models were developed without regard to predicting laboratory experimental results. All modelers provided published papers describing their models, with three of the models being proprietary. Although all models appear to have been fundamentally influenced by the two-process model of sleep regulation by Borbély, there is considerable diversity among them in the number and type of input and output variables, and their stated goals and capabilities.
Summary of the key features of seven biomathematical models of human fatigue and performance
NASA Technical Reports Server (NTRS)
Mallis, Melissa M.; Mejdal, Sig; Nguyen, Tammy T.; Dinges, David F.
2004-01-01
BACKGROUND: Biomathematical models that quantify the effects of circadian and sleep/wake processes on the regulation of alertness and performance have been developed in an effort to predict the magnitude and timing of fatigue-related responses in a variety of contexts (e.g., transmeridian travel, sustained operations, shift work). This paper summarizes key features of seven biomathematical models reviewed as part of the Fatigue and Performance Modeling Workshop held in Seattle, WA, on June 13-14, 2002. The Workshop was jointly sponsored by the National Aeronautics and Space Administration, U.S. Department of Defense, U.S. Army Medical Research and Materiel Command, Office of Naval Research, Air Force Office of Scientific Research, and U.S. Department of Transportation. METHODS: An invitation was sent to developers of seven biomathematical models that were commonly cited in scientific literature and/or supported by government funding. On acceptance of the invitation to attend the Workshop, developers were asked to complete a survey of the goals, capabilities, inputs, and outputs of their biomathematical models of alertness and performance. Data from the completed surveys were summarized and juxtaposed to provide a framework for comparing features of the seven models. RESULTS: Survey responses revealed that models varied greatly relative to their reported goals and capabilities. While all modelers reported that circadian factors were key components of their capabilities, they differed markedly with regard to the roles of sleep and work times as input factors for prediction: four of the seven models had work time as their sole input variable(s), while the other three models relied on various aspects of sleep timing for model input. Models also differed relative to outputs: five sought to predict results from laboratory experiments, field, and operational data, while two models were developed without regard to predicting laboratory experimental results. All modelers provided published papers describing their models, with three of the models being proprietary. CONCLUSIONS: Although all models appear to have been fundamentally influenced by the two-process model of sleep regulation by Borbely, there is considerable diversity among them in the number and type of input and output variables, and their stated goals and capabilities.
NASA Technical Reports Server (NTRS)
Meyn, Larry A.
2018-01-01
One of the goals of NASA's Revolutionary Vertical Lift Technology Project (RVLT) is to provide validated tools for multidisciplinary design, analysis and optimization (MDAO) of vertical lift vehicles. As part of this effort, the software package RotorCraft Optimization Tools (RCOTOOLS) is being developed to facilitate incorporating key rotorcraft conceptual design codes into optimizations using the OpenMDAO multidisciplinary optimization framework written in Python. RCOTOOLS, also written in Python, currently supports the incorporation of the NASA Design and Analysis of RotorCraft (NDARC) vehicle sizing tool and the Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics II (CAMRAD II) analysis tool into OpenMDAO-driven optimizations. Both of these tools use detailed, file-based inputs and outputs, so RCOTOOLS provides software wrappers to update input files with new design variable values, execute these codes and then extract specific response variable values from the file outputs. These wrappers are designed to be flexible and easy to use. RCOTOOLS also provides several utilities to aid in optimization model development, including Graphical User Interface (GUI) tools for browsing input and output files in order to identify text strings that are used to identify specific variables as optimization input and response variables. This paper provides an overview of RCOTOOLS and its use.
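The file-wrapping pattern described here can be illustrated without either analysis code; in the sketch below the file names, placeholder syntax, and output pattern are hypothetical, not NDARC's or CAMRAD II's actual formats or RCOTOOLS's API.

```python
import re
import subprocess

# Illustrative file-based wrapper: rewrite a text input file with a new
# design-variable value, run the external code, then extract a response
# value from its text output.
def run_case(radius_m: float) -> float:
    template = open("rotor_input.template").read()
    # Replace a tagged placeholder such as "ROTOR_RADIUS = <radius>".
    with open("rotor_input.txt", "w") as f:
        f.write(template.replace("<radius>", f"{radius_m:.3f}"))

    subprocess.run(["./analysis_code", "rotor_input.txt"], check=True)

    # Pull the response from a line such as "GROSS_WEIGHT =  5234.1 lb".
    text = open("analysis_output.txt").read()
    match = re.search(r"GROSS_WEIGHT\s*=\s*([0-9.Ee+-]+)", text)
    if match is None:
        raise RuntimeError("response variable not found in output file")
    return float(match.group(1))

# An optimizer (e.g., an OpenMDAO driver) would call run_case repeatedly,
# treating radius_m as the design variable and the return value as a response.
```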
Application of neural networks and sensitivity analysis to improved prediction of trauma survival.
Hunter, A; Kennedy, L; Henry, J; Ferguson, I
2000-05-01
The performance of trauma departments is widely audited by applying predictive models that assess probability of survival, and examining the rate of unexpected survivals and deaths. Although the TRISS methodology, a logistic regression modelling technique, is still the de facto standard, it is known that neural network models perform better. A key issue when applying neural network models is the selection of input variables. This paper proposes a novel form of sensitivity analysis, which is simpler to apply than existing techniques, and can be used for both numeric and nominal input variables. The technique is applied to the audit survival problem, and used to analyse the TRISS variables. The conclusions discuss the implications for the design of further improved scoring schemes and predictive models.
Error characterization of microwave satellite soil moisture data sets using Fourier analysis
USDA-ARS?s Scientific Manuscript database
Soil moisture is a key geophysical variable in hydrological and meteorological processes. Accurate and current observations of soil moisture over mesoscale to global scales used as inputs to hydrological, weather and climate modelling will benefit the predictability and understanding of these processes. ...
Error characterization of microwave satellite soil moisture data sets using Fourier analysis
USDA-ARS?s Scientific Manuscript database
Soil moisture is a key geophysical variable in hydrological and meteorological processes. Accurate and current observations of soil moisture over mesoscale to global scales as inputs to hydrological, weather and climate modelling will benefit the predictability and understanding of these p...
Model-free adaptive control of supercritical circulating fluidized-bed boilers
Cheng, George Shu-Xing; Mulkey, Steven L
2014-12-16
A novel 3-Input-3-Output (3×3) Fuel-Air Ratio Model-Free Adaptive (MFA) controller is introduced, which can effectively control key process variables including Bed Temperature, Excess O2, and Furnace Negative Pressure of combustion processes of advanced boilers. A novel 7-Input-7-Output (7×7) MFA control system is also described for controlling a combined 3-Input-3-Output (3×3) process of Boiler-Turbine-Generator (BTG) units and a 5×5 CFB combustion process of advanced boilers. Those boilers include Circulating Fluidized-Bed (CFB) Boilers and Once-Through Supercritical Circulating Fluidized-Bed (OTSC CFB) Boilers.
Racial and Cultural Factors and Learning Transfer
ERIC Educational Resources Information Center
Closson, Rosemary
2013-01-01
Baldwin and Ford (1988) specifically include learner characteristics as one of three key inputs into the learning transfer process but infrequently (actually almost never) has race, ethnicity, or culture been included as a variable when describing trainee characteristics. For the most part one is left to speculate as to the potential influence…
Mu, Dongyan; Addy, Min; Anderson, Erik; Chen, Paul; Ruan, Roger
2016-03-01
This study used life cycle assessment and technical economic analysis tools to evaluate a novel Scum-to-Biodiesel technology and to compare it with scum digestion and combustion processes. The key variables that control environmental and economic performance are identified and discussed. The results show that all impacts examined for the Scum-to-Biodiesel technology are below zero, indicating that significant environmental benefits could be drawn from it. Of the three technologies examined, the Scum-to-Biodiesel technology has the best environmental performance in fossil fuel depletion, GHG emissions, and eutrophication, whereas combustion has the best performance on acidification. Of all process inputs assessed, process heat, glycerol, and methanol uses had the highest impacts, much more than any other inputs considered. The Scum-to-Biodiesel technology also generates higher revenue than the other technologies. The diesel price is a key variable for its economic performance. The research demonstrates the feasibility and benefits of developing Scum-to-Biodiesel technology in wastewater treatment facilities.
Simple Sensitivity Analysis for Orion GNC
NASA Technical Reports Server (NTRS)
Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar
2013-01-01
The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool or CFT) developed to find the input variables or pairs of variables which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. The tool found that input variables such as moments, mass, thrust dispersions, and date of launch were significant factors for success of various requirements. Examples are shown in this paper as well as a summary and physics discussion of EFT-1 driving factors that the tool found.
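A sketch of the kind of screening such a tool performs: estimate success probability conditioned on each dispersed input and flag inputs whose conditional profiles swing widely. The toy flight model, requirement threshold, and variable names are illustrative assumptions, not the CFT's actual algorithm.

```python
import numpy as np

# Toy Monte Carlo screening: estimate success probability conditioned on
# deciles of each dispersed input; large swings mark critical factors.
rng = np.random.default_rng(7)
n = 20000
inputs = {
    "mass_error":  rng.normal(0, 1, n),
    "thrust_disp": rng.normal(0, 1, n),
    "wind_bias":   rng.normal(0, 1, n),   # intentionally irrelevant
}
miss_km = (2.0 + 1.5 * inputs["mass_error"]
           - 1.0 * inputs["thrust_disp"]
           + 0.5 * rng.normal(size=n))
success = miss_km < 4.0                    # requirement: touch down within 4 km

print(f"overall success probability: {success.mean():.3f}")
for name, x in inputs.items():
    edges = np.quantile(x, np.linspace(0, 1, 11))
    bins = np.clip(np.digitize(x, edges[1:-1]), 0, 9)
    p = np.array([success[bins == b].mean() for b in range(10)])
    print(f"{name:12s} success by decile: min {p.min():.2f}, max {p.max():.2f}, "
          f"spread {p.max() - p.min():.2f}")
# A large spread (mass_error, thrust_disp) flags a significant factor; a flat
# profile (wind_bias) indicates an input the requirement is insensitive to.
```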
Effects of input uncertainty on cross-scale crop modeling
NASA Astrophysics Data System (ADS)
Waha, Katharina; Huth, Neil; Carberry, Peter
2014-05-01
The quality of data on climate, soils and agricultural management in the tropics is in general low, or data is scarce, leading to uncertainty in process-based modeling of cropping systems. Process-based crop models are common tools for simulating crop yields and crop production in climate change impact studies, studies on mitigation and adaptation options or food security studies. Crop modelers are concerned about input data accuracy as this, together with an adequate representation of plant physiology processes and choice of model parameters, are the key factors for a reliable simulation. For example, assuming an error in measurements of air temperature, radiation and precipitation of ± 0.2°C, ± 2 % and ± 3 % respectively, Fodor & Kovacs (2005) estimate that this translates into an uncertainty of 5-7 % in yield and biomass simulations. In our study we seek to answer the following questions: (1) are there important uncertainties in the spatial variability of simulated crop yields on the grid-cell level displayed on maps, (2) are there important uncertainties in the temporal variability of simulated crop yields on the aggregated, national level displayed in time-series, and (3) how does the accuracy of different soil, climate and management information influence the simulated crop yields in two crop models designed for use at different spatial scales? The study will help to determine whether more detailed information improves the simulations and to advise model users on the uncertainty related to input data. We analyse the performance of the point-scale crop model APSIM (Keating et al., 2003) and the global scale crop model LPJmL (Bondeau et al., 2007) with different climate information (monthly and daily) and soil conditions (global soil map and African soil map) under different agricultural management (uniform and variable sowing dates) for the low-input maize-growing areas in Burkina Faso/West Africa. We test the models' response to different levels of input data from very little to very detailed information, and compare the models' abilities to represent the spatial variability and temporal variability in crop yields. We display the uncertainty in crop yield simulations from different input data and crop models in Taylor diagrams, which are a graphical summary of the similarity between simulations and observations (Taylor, 2001). The observed spatial variability can be represented well by both models (R=0.6-0.8), but APSIM predicts higher spatial variability than LPJmL due to its sensitivity to soil parameters. Simulations with the same crop model, climate and sowing dates have similar statistics and therefore similar skill in reproducing the observed spatial variability. Soil data is less important for the skill of a crop model in reproducing the observed spatial variability. However, the uncertainty in simulated spatial variability from the two crop models is larger than from input data settings, and APSIM is more sensitive to input data than LPJmL. Even with a detailed, point-scale crop model and detailed input data it is difficult to capture the complexity and diversity in maize cropping systems.
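The Taylor-diagram comparison used here reduces each model/input setting to a few statistics; the sketch below computes them for two synthetic settings (the yield data are illustrative stand-ins for per-grid-cell or per-year yields).

```python
import numpy as np

# Statistics summarized in a Taylor diagram (Taylor, 2001): correlation,
# normalized standard deviation, and centered RMS difference.
def taylor_stats(sim, obs):
    r = np.corrcoef(sim, obs)[0, 1]
    sigma_ratio = sim.std() / obs.std()
    crmsd = np.sqrt(np.mean(((sim - sim.mean()) - (obs - obs.mean())) ** 2))
    return r, sigma_ratio, crmsd / obs.std()

rng = np.random.default_rng(0)
obs = rng.gamma(shape=4.0, scale=0.4, size=300)          # observed yields (t/ha)
sim_a = 0.8 * obs + 0.3 * rng.normal(size=obs.size)      # model/input setting A
sim_b = 1.3 * obs + 0.6 * rng.normal(size=obs.size)      # model/input setting B

for name, sim in [("A", sim_a), ("B", sim_b)]:
    r, sr, crmsd = taylor_stats(sim, obs)
    print(f"setting {name}: R={r:.2f}, sigma_sim/sigma_obs={sr:.2f}, "
          f"normalized centered RMSD={crmsd:.2f}")
# On a Taylor diagram each setting is one point; here setting B's larger
# sigma ratio mimics a model that over-predicts spatial variability.
```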
Properties of Zero-Free Transfer Function Matrices
NASA Astrophysics Data System (ADS)
Anderson, Brian D. O.; Deistler, Manfred
Transfer functions of linear, time-invariant finite-dimensional systems with more outputs than inputs, as arise in factor analysis (for example in econometrics), have, for state-variable descriptions with generic entries in the relevant matrices, no finite zeros. This paper gives a number of characterizations of such systems (and indeed square discrete-time systems with no zeros), using state-variable, impulse response, and matrix-fraction descriptions. Key properties include the ability to recover the input values at any time from a bounded interval of output values, without any knowledge of an initial state, and an ability to verify the no-zero property in terms of a property of the impulse response coefficient matrices. Results are particularized to cases where the transfer function matrix in question may or may not have a zero at infinity or a zero at zero.
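The rank-based definition of a zero makes the genericity claim above concrete; the display below is a standard restatement under that definition (assuming the usual notion of transmission zeros), not an additional result from the paper.

```latex
\[
  z_0 \in \mathbb{C}\ \text{is a zero of the } p \times m\ \text{transfer function } G
  \ (p > m,\ \text{normal rank } m)
  \quad\Longleftrightarrow\quad
  \operatorname{rank} G(z_0) < m .
\]
```

For a tall matrix, such a rank drop requires every m×m minor of G(z_0) to vanish simultaneously, a coincidence that does not occur at any finite z_0 for generic entries in the state-variable matrices; this is the sense in which these systems are zero-free.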
Simple Sensitivity Analysis for Orion Guidance Navigation and Control
NASA Technical Reports Server (NTRS)
Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar
2013-01-01
The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool ("Critical Factors Tool" or CFT) developed to find the input variables or pairs of variables which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. The tool found that input variables such as moments, mass, thrust dispersions, and date of launch were significant factors for success of various requirements. Examples are shown in this paper as well as a summary and physics discussion of EFT-1 driving factors that the tool found.
ERIC Educational Resources Information Center
Csizér, Kata; Tankó, Gyula
2017-01-01
Apart from L2 motivation, self-regulation is also increasingly seen as a key variable in L2 learning in many foreign language learning contexts because classroom-centered instructive language teaching might not be able to provide sufficient input for students. Therefore, taking responsibility and regulating the learning processes and positive…
Timothy G.F. Kittel; Nan. A. Rosenbloom; J.A. Royle; C. Daly; W.P. Gibson; H.H. Fisher; P. Thornton; D.N. Yates; S. Aulenbach; C. Kaufman; R. McKeown; Dominque Bachelet; David S. Schimel
2004-01-01
Analysis and simulation of biospheric responses to historical forcing require surface climate data that capture those aspects of climate that control ecological processes, including key spatial gradients and modes of temporal variability. We developed a multivariate, gridded historical climate dataset for the conterminous USA as a common input database for the...
Zhang, Hang; Xu, Qingyan; Liu, Baicheng
2014-01-01
The rapid development of numerical modeling techniques has led to more accurate results in modeling metal solidification processes. In this study, the cellular automaton-finite difference (CA-FD) method was used to simulate the directional solidification (DS) process of single crystal (SX) superalloy blade samples. Experiments were carried out to validate the simulation results. Meanwhile, an intelligent model based on fuzzy control theory was built to optimize the complicated DS process. Several key parameters, such as the mushy zone width and the temperature difference at the cast-mold interface, were selected as the input variables. The input variables were processed with a multivariable fuzzy rule to obtain the output adjustment of the withdrawal rate (v), a key technological parameter. The multivariable fuzzy rule was built based on structural features of the casting, such as the relationship between section area and the delay time of the temperature-change response to a change in v, as well as the professional experience of the operator. The fuzzy control model coupled with the CA-FD method could then be used to optimize v in real time during the manufacturing process. The optimized process was proven to be more flexible and adaptive for a steady and stray-grain-free DS process. PMID:28788535
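The fuzzy inference step can be sketched with a minimal two-input rule base; the membership functions, rule strengths, output adjustments, and weighted-average defuzzification below are illustrative assumptions, not the paper's calibrated rules.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def withdrawal_adjustment(mushy_width_mm, dT_interface_K):
    # Fuzzify the two inputs into {low, high} grades.
    w_low, w_high = tri(mushy_width_mm, 0, 2, 6), tri(mushy_width_mm, 2, 6, 10)
    t_low, t_high = tri(dT_interface_K, 0, 10, 30), tri(dT_interface_K, 10, 30, 50)

    # Illustrative rule base: narrow mushy zone and small temperature
    # difference allow speeding up; otherwise slow down.
    rules = [
        (min(w_low,  t_low),  +0.5),   # dv in mm/min
        (min(w_low,  t_high), -0.2),
        (min(w_high, t_low),  -0.3),
        (min(w_high, t_high), -0.8),
    ]
    num = sum(s * dv for s, dv in rules)
    den = sum(s for s, _ in rules)
    return num / den if den > 0 else 0.0  # weighted-average defuzzification

print(f"dv = {withdrawal_adjustment(3.0, 12.0):+.2f} mm/min")
```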
The effect of long-term changes in plant inputs on soil carbon stocks
NASA Astrophysics Data System (ADS)
Georgiou, K.; Li, Z.; Torn, M. S.
2017-12-01
Soil organic carbon (SOC) is the largest actively-cycling terrestrial reservoir of C and an integral component of thriving natural and managed ecosystems. C input interventions (e.g., litter removal or organic amendments) are common in managed landscapes and present an important decision for maintaining healthy soils in sustainable agriculture and forestry. Furthermore, climate and land-cover change can also affect the amount of plant C inputs that enter the soil through changes in plant productivity, allocation, and rooting depth. Yet, the processes that dictate the response of SOC to such changes in C inputs are poorly understood and inadequately represented in predictive models. Long-term litter manipulations are an invaluable resource for exploring key controls of SOC storage and validating model representations. Here we explore the response of SOC to long-term changes in plant C inputs across a range of biomes and soil types. We synthesize and analyze data from long-term litter manipulation field experiments, and focus our meta-analysis on changes to total SOC stocks, microbial biomass carbon, and mineral-associated ('protected') carbon pools and explore the relative contribution of above- versus below-ground C inputs. Our cross-site data comparison reveals that divergent SOC responses are observed between forest sites, particularly for treatments that increase C inputs to the soil. We explore trends among key variables (e.g., microbial biomass to SOC ratios) that inform soil C model representations. The assembled dataset is an important benchmark for evaluating process-based hypotheses and validating divergent model formulations.
Gu, Jianwei; Pitz, Mike; Breitner, Susanne; Birmili, Wolfram; von Klot, Stephanie; Schneider, Alexandra; Soentgen, Jens; Reller, Armin; Peters, Annette; Cyrys, Josef
2012-10-01
The success of epidemiological studies depends on the use of appropriate exposure variables. The purpose of this study is to extract a relatively small selection of variables characterizing ambient particulate matter from a large measurement data set. The original data set comprised a total of 96 particulate matter variables that have been continuously measured since 2004 at an urban background aerosol monitoring site in the city of Augsburg, Germany. Many of the original variables were derived from the measured particle size distribution (PSD) across the particle diameter range 3 nm to 10 μm, including size-segregated particle number concentration, particle length concentration, particle surface concentration and particle mass concentration. The data set was complemented by integral aerosol variables. These variables were measured by independent instruments, including black carbon, sulfate, particle active surface concentration and particle length concentration. It is obvious that such a large number of measured variables cannot be used in health effect analyses simultaneously. The aim of this study is a pre-screening and a selection of the key variables that will be used as input in forthcoming epidemiological studies. In this study, we present two methods of parameter selection and apply them to data from a two-year period from 2007 to 2008. We used the agglomerative hierarchical cluster method to find groups of similar variables. In total, we selected 15 key variables from 9 clusters which are recommended for epidemiological analyses. We also applied a two-dimensional visualization technique called "heatmap" analysis to the Spearman correlation matrix. Twelve key variables were selected using this method. Moreover, the positive matrix factorization (PMF) method was applied to the PSD data to characterize the possible particle sources. Correlations between the variables and PMF factors were used to interpret the meaning of the cluster and the heatmap analyses.
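The hierarchical-clustering selection step can be sketched directly on a Spearman correlation matrix; the synthetic variables and the cut height below are illustrative assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
base = rng.normal(size=(500, 3))                  # three latent aerosol "sources"
# Nine measured variables, each a noisy copy of one latent source.
X = np.hstack([base[:, [i % 3]] + 0.3 * rng.normal(size=(500, 1))
               for i in range(9)])

rho = spearmanr(X)[0]                             # 9x9 Spearman correlation matrix
dist = 1.0 - np.abs(rho)                          # dissimilarity between variables
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="average")
labels = fcluster(Z, t=0.5, criterion="distance") # cut height is illustrative
print("cluster labels:", labels)

# Keep one representative per cluster: the member most correlated on average
# with the other members of its cluster.
for c in np.unique(labels):
    members = np.where(labels == c)[0]
    mean_abs = np.abs(rho[np.ix_(members, members)]).mean(axis=1)
    print(f"cluster {c}: keep variable {members[np.argmax(mean_abs)]}")
```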
Biological control of appetite: A daunting complexity.
MacLean, Paul S; Blundell, John E; Mennella, Julie A; Batterham, Rachel L
2017-03-01
This review summarizes a portion of the discussions of an NIH Workshop (Bethesda, MD, 2015) titled "Self-Regulation of Appetite-It's Complicated," which focused on the biological aspects of appetite regulation. This review summarizes the key biological inputs of appetite regulation and their implications for body weight regulation. These discussions offer an update of the long-held, rigid perspective of an "adipocentric" biological control, taking a broader view that also includes important inputs from the digestive tract, from lean mass, and from the chemical sensory systems underlying taste and smell. It is only beginning to be understood how these biological systems are integrated and how this integrated input influences appetite and eating behaviors. The relevance of these biological inputs was discussed primarily in the context of obesity and the problem of weight regain, touching on topics related to the biological predisposition for obesity and the impact that obesity treatments (dieting, exercise, bariatric surgery, etc.) might have on appetite and weight loss maintenance. Finally, a common theme that pervaded the workshop discussions is considered: individual variability. It is this individual variability in the predisposition for obesity and in the biological response to weight loss that makes the biological component of appetite regulation so complicated. When this individual biological variability is placed in the context of the diverse environmental and behavioral pressures that also influence eating behaviors, it is easy to appreciate the daunting complexities that arise with the self-regulation of appetite.
Biological Control of Appetite: A Daunting Complexity
MacLean, Paul S.; Blundell, John E.; Mennella, Julie A.; Batterham, Rachel L.
2017-01-01
Objective This review summarizes a portion of the discussions of an NIH Workshop (Bethesda, MD, 2015) entitled, “Self-Regulation of Appetite, It's Complicated,” which focused on the biological aspects of appetite regulation. Methods Here we summarize the key biological inputs of appetite regulation and their implications for body weight regulation. Results These discussions offer an update of the long-held, rigid perspective of an “adipocentric” biological control, taking a broader view that also includes important inputs from the digestive tract, from lean mass, and from the chemical sensory systems underlying taste and smell. We are only beginning to understand how these biological systems are integrated and how this integrated input influences appetite and food eating behaviors. The relevance of these biological inputs was discussed primarily in the context of obesity and the problem of weight regain, touching on topics related to the biological predisposition for obesity and the impact that obesity treatments (dieting, exercise, bariatric surgery, etc.) might have on appetite and weight loss maintenance. Finally, we consider a common theme that pervaded the workshop discussions, which was individual variability. Conclusions It is this individual variability in the predisposition for obesity and in the biological response to weight loss that makes the biological component of appetite regulation so complicated. When this individual biological variability is placed in the context of the diverse environmental and behavioral pressures that also influence food eating behaviors, it is easy to appreciate the daunting complexities that arise with the self-regulation of appetite. PMID:28229538
Random Predictor Models for Rigorous Uncertainty Quantification: Part 2
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean, the variance, and the range of the model's parameters, thus of the output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, is bounded rigorously.
Random Predictor Models for Rigorous Uncertainty Quantification: Part 1
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean and the variance of the model's parameters, thus of the predicted output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, can be bounded tightly and rigorously.
NASA Astrophysics Data System (ADS)
Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao
2017-03-01
Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Input variable selection eliminates irrelevant or redundant variables so that a suitable subset of variables is identified as the input of a model; it also simplifies the model structure and improves computational efficiency. This paper describes the procedures of input variable selection for data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS), are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected from the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction.
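A plain mutual-information screen, a simpler relative of the PMI method named above, illustrates the selection idea; a full PMI implementation would additionally condition each step on the already-selected inputs. The data and cutoff below are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 6))                     # candidate sensor features
# Target depends nonlinearly on features 0 and 2 only.
y = np.sin(X[:, 0]) + 0.5 * X[:, 2] ** 2 + 0.1 * rng.normal(size=n)

mi = mutual_info_regression(X, y, random_state=0)
order = np.argsort(mi)[::-1]
print("candidates ranked by MI with the output:", order, np.round(mi[order], 3))
selected = order[mi[order] > 0.05]              # illustrative cutoff
print("selected input variables:", selected)
# The selected subset would then feed a downstream model (e.g., an SVM),
# whose prediction performance validates the selection.
```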
QKD Via a Quantum Wavelength Router Using Spatial Soliton
NASA Astrophysics Data System (ADS)
Kouhnavard, M.; Amiri, I. S.; Afroozeh, A.; Jalil, M. A.; Ali, J.; Yupapin, P. P.
2011-05-01
A system for continuous variable quantum key distribution via a wavelength router is proposed. The Kerr type of light in the nonlinear microring resonator (NMRR) induces the chaotic behavior. In this proposed system, chaotic signals are generated by an optical soliton or Gaussian pulse within a NMRR system. Parameters such as input power, MRR radii and coupling coefficients can be varied and play an important role in determining the results, in which continuous signals are generated spreading over the spectrum. Large-bandwidth signals of optical solitons are generated by the input pulse propagating within the MRRs, which is allowed to form the continuous wavelength or frequency with large tunable channel capacity. The continuous variable QKD is formed by using the localized spatial soliton pulses via a quantum router and networks. The selected optical spatial pulse can be used to perform the secure communication network. Here the entangled photons generated by chaotic signals have been analyzed. The continuous entangled photons are generated by using the polarization control unit incorporated into the MRRs, required to provide the continuous variable QKD. Results obtained have shown that such a system can be applied for simultaneous continuous variable quantum cryptography in mobile telephone handsets and networks. In this study, frequency bands of 500 MHz and 2.0 GHz and wavelengths of 775 nm, 2,325 nm and 1.55 μm can be obtained for QKD use with input optical solitons and Gaussian beams, respectively.
Merrill, Scott C; Peairs, Frank B
2017-02-01
Models describing the effects of climate change on arthropod pest ecology are needed to help mitigate and adapt to forthcoming changes. Challenges arise because climate data are at resolutions that do not readily synchronize with arthropod biology. Here we explain how multiple sources of climate and weather data can be synthesized to quantify the effects of climate change on pest phenology. Predictions of phenological events differ substantially between models that incorporate scale-appropriate temperature variability and models that do not. As an illustrative example, we predicted adult emergence of a pest of sunflower, the sunflower stem weevil Cylindrocopturus adspersus (LeConte). Predictions of the timing of phenological events differed by an average of 11 days between models with different temperature variability inputs. Moreover, as temperature variability increases, developmental rates accelerate. Our work details a phenological modeling approach intended to help develop tools to plan for and mitigate the effects of climate change. Results show that selection of scale-appropriate temperature data is of more importance than selecting a climate change emission scenario. Predictions derived without appropriate temperature variability inputs will likely result in substantial phenological event miscalculations. Additionally, results suggest that increased temperature instability will lead to accelerated pest development.
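The core effect, development accruing only above a temperature threshold so that within-day variability shifts predicted events, can be sketched with degree-day accumulation; the thresholds and temperature series below are illustrative assumptions for a generic insect, not the weevil's parameters.

```python
import numpy as np

base_temp = 10.0          # lower developmental threshold (deg C), assumed
gdd_required = 300.0      # degree-days to adult emergence, assumed

days = np.arange(365)
daily_mean = 12.0 + 10.0 * np.sin(2 * np.pi * (days - 110) / 365)

def emergence_day(gdd_per_day):
    """First day on which accumulated degree-days reach the requirement."""
    return int(np.argmax(np.cumsum(gdd_per_day) >= gdd_required))

# (a) Daily-mean model: no within-day temperature variability.
gdd_daily = np.maximum(daily_mean - base_temp, 0.0)

# (b) Hourly model: a +/-6 deg C diurnal cycle around the same daily means.
hours = np.arange(24)
diurnal = 6.0 * np.sin(2 * np.pi * (hours - 9) / 24)
hourly = daily_mean[:, None] + diurnal[None, :]
gdd_hourly = np.maximum(hourly - base_temp, 0.0).mean(axis=1)

print(f"emergence day (daily means):  {emergence_day(gdd_daily)}")
print(f"emergence day (hourly temps): {emergence_day(gdd_hourly)}")
# Because development accrues only above the threshold, warm hours on
# otherwise-cool days add degree-days the daily-mean model misses, shifting
# the predicted event earlier - the kind of gap the study quantifies.
```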
Infant perceptual development for faces and spoken words: An integrated approach
Watson, Tamara L; Robbins, Rachel A; Best, Catherine T
2014-01-01
There are obvious differences between recognizing faces and recognizing spoken words or phonemes that might suggest development of each capability requires different skills. Recognizing faces and perceiving spoken language, however, are in key senses extremely similar endeavors. Both perceptual processes are based on richly variable, yet highly structured input from which the perceiver needs to extract categorically meaningful information. This similarity could be reflected in the perceptual narrowing that occurs within the first year of life in both domains. We take the position that the perceptual and neurocognitive processes by which face and speech recognition develop are based on a set of common principles. One common principle is the importance of systematic variability in the input as a source of information rather than noise. Experience of this variability leads to perceptual tuning to the critical properties that define individual faces or spoken words versus their membership in larger groupings of people and their language communities. We argue that parallels can be drawn directly between the principles responsible for the development of face and spoken language perception. PMID:25132626
ERIC Educational Resources Information Center
UNESCO-UNEVOC International Centre for Technical and Vocational Education and Training, 2014
2014-01-01
Around the world technical and vocational education and training (TVET) is widely seen as having a key role in promoting both economic and socio-economic growth, increasing productivity, empowering citizens and alleviating poverty. Yet the quality of TVET in terms of learner outcomes and teaching inputs is variable. In some countries this…
Zhang, Yili; Golowasch, Jorge
2011-11-01
The pyloric network of decapod crustaceans can undergo dramatic rhythmic activity changes. Under normal conditions the network generates low-frequency rhythmic activity that depends obligatorily on the presence of neuromodulatory input from the central nervous system. When this input is removed (decentralization), the rhythmic activity ceases. In the continued absence of this input, periodic activity resumes after a few hours in the form of episodic bursting across the entire network that later turns into stable rhythmic activity that is nearly indistinguishable from control (recovery). It has been proposed that an activity-dependent modification of ionic conductance levels in the pyloric pacemaker neuron drives the process of recovery of activity. Previous modeling attempts have captured some aspects of the temporal changes observed experimentally, but key features could not be reproduced. Here we examined a model in which slow activity-dependent regulation of ionic conductances and slower neuromodulator-dependent regulation of intracellular Ca(2+) concentration reproduce all the temporal features of this recovery. Key aspects of these two regulatory mechanisms are their independence and their different kinetics. We also examined the role of variability (noise) in the activity-dependent regulation pathway and observe that it can help to reduce unrealistic constraints that were otherwise required on the neuromodulator-dependent pathway. We conclude that small variations in intracellular Ca(2+) concentration, a Ca(2+) uptake regulation mechanism that is directly targeted by neuromodulator-activated signaling pathways, and variability in the Ca(2+) concentration sensing signaling pathway can account for the observed changes in neuronal activity.
Expendable vs reusable propulsion systems cost sensitivity
NASA Technical Reports Server (NTRS)
Hamaker, Joseph W.; Dodd, Glenn R.
1989-01-01
One of the key trade studies that must be considered when studying any new space transportation hardware is whether to go reusable or expendable. An analysis is presented here for such a trade relative to a proposed Liquid Rocket Booster (LRB) being studied at MSFC. The assumptions, or inputs, to the trade were developed and integrated into a model that compares the life-cycle costs of a reusable LRB and an expendable LRB. Sensitivities were run by varying the input variables to see their effect on total cost. In addition, a Monte Carlo simulation was run to determine the amount of cost risk that may be involved in a decision to reuse or expend.
Gilsenan, M B; Lambe, J; Gibney, M J
2003-11-01
A key component of a food chemical exposure assessment using probabilistic analysis is the selection of the most appropriate input distribution to represent exposure variables. The study explored the type of parametric distribution that could be used to model variability in food consumption data likely to be included in a probabilistic exposure assessment of food additives. The goodness-of-fit of a range of continuous distributions to observed data of 22 food categories expressed as average daily intakes among consumers from the North-South Ireland Food Consumption Survey was assessed using the BestFit distribution fitting program. The lognormal distribution was most commonly accepted as a plausible parametric distribution to represent food consumption data when food intakes were expressed as absolute intakes (16/22 foods) and as intakes per kg body weight (18/22 foods). Results from goodness-of-fit tests were accompanied by lognormal probability plots for a number of food categories. The influence on food additive intake of using a lognormal distribution to model food consumption input data was assessed by comparing modelled intake estimates with observed intakes. Results from the present study suggest some caution about the use of a lognormal distribution as an input model for food consumption data in probabilistic food additive exposure assessments, and highlight the need for further research in this area.
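For readers who want to reproduce this kind of distribution-fitting step, the sketch below fits a lognormal to synthetic intake data and applies a Kolmogorov-Smirnov test, using scipy as a stand-in for the BestFit program named in the abstract; the data and parameters are invented for illustration, and note that the KS p-value is only approximate when the parameters are estimated from the same data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Stand-in for observed average daily intakes (g/day) of one food category.
intakes = rng.lognormal(mean=3.2, sigma=0.6, size=500)

# Fit a lognormal with location fixed at zero, as is usual for intake data.
shape, loc, scale = stats.lognorm.fit(intakes, floc=0)

# Kolmogorov-Smirnov goodness-of-fit test against the fitted distribution.
ks_stat, p_value = stats.kstest(intakes, 'lognorm', args=(shape, loc, scale))
print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.3f}")

# The fitted distribution can then feed a probabilistic additive-exposure
# model, e.g. by drawing simulated consumption values:
simulated = stats.lognorm.rvs(shape, loc, scale, size=10_000, random_state=2)
```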
Sensitivity analysis of navy aviation readiness based sparing model
2017-09-01
…variability. Figure 4 (research design flowchart) lays out the four steps of the methodology, starting in the upper left-hand…as a function of changes in key inputs. We develop NAVARM Experimental Designs (NED), a computational tool created by applying a state-of-the-art experimental design to the NAVARM model. Statistical analysis of the resulting data identifies the most influential cost factors. Those are, in order of…
Integrated Projectile Systems Synthesis Model (IPSSM)
1976-08-01
Keywords: lethal area effectiveness; batch mode; interior ballistics; trajectory calculations; weapon system modeling. Contents fragments: Exterior Ballistics (AR); Terminal Effectiveness Calculations (LA); 6-D Trajectory (TR); Recoil Mechanism Design (RM); Sabot Design (SD); Key Variable Input, Exterior Ballistics Program (AR); Terminal Effectiveness Program (LA).
When Can Information from Ordinal Scale Variables Be Integrated?
ERIC Educational Resources Information Center
Kemp, Simon; Grace, Randolph C.
2010-01-01
Many theoretical constructs of interest to psychologists are multidimensional and derive from the integration of several input variables. We show that input variables that are measured on ordinal scales cannot be combined to produce a stable weakly ordered output variable that allows trading off the input variables. Instead a partial order is…
Spike Triggered Covariance in Strongly Correlated Gaussian Stimuli
Aljadeff, Johnatan; Segev, Ronen; Berry, Michael J.; Sharpee, Tatyana O.
2013-01-01
Many biological systems perform computations on inputs that have very large dimensionality. Determining the relevant input combinations for a particular computation is often key to understanding its function. A common way to find the relevant input dimensions is to examine the difference in variance between the input distribution and the distribution of inputs associated with certain outputs. In systems neuroscience, the corresponding method is known as spike-triggered covariance (STC). This method has been highly successful in characterizing relevant input dimensions for neurons in a variety of sensory systems. So far, most studies used the STC method with weakly correlated Gaussian inputs. However, it is also important to use this method with inputs that have long range correlations typical of the natural sensory environment. In such cases, the stimulus covariance matrix has one (or more) outstanding eigenvalues that cannot be easily equalized because of sampling variability. Such outstanding modes interfere with analyses of statistical significance of candidate input dimensions that modulate neuronal outputs. In many cases, these modes obscure the significant dimensions. We show that the sensitivity of the STC method in the regime of strongly correlated inputs can be improved by an order of magnitude or more. This can be done by evaluating the significance of dimensions in the subspace orthogonal to the outstanding mode(s). Analyzing the responses of retinal ganglion cells probed with Gaussian noise, we find that taking into account outstanding modes is crucial for recovering relevant input dimensions for these neurons. PMID:24039563
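The core STC computation is brief enough to sketch. The toy example below builds a correlated Gaussian stimulus and a hypothetical threshold neuron, then recovers candidate dimensions from the eigenvectors of the spike-triggered covariance difference; the stimulus model and neuron are illustrative assumptions, not the retinal data of the study.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D = 50_000, 20            # time samples, stimulus dimensionality

# Correlated Gaussian stimulus: strong common mode plus white noise.
common = rng.standard_normal((T, 1))
stim = 0.9 * common + 0.45 * rng.standard_normal((T, D))

# Toy neuron: fires when the projection on a hidden feature is large.
feature = np.zeros(D)
feature[3] = 1.0
spikes = (stim @ feature) ** 2 > 2.0

# Spike-triggered average and covariance.
sta = stim[spikes].mean(axis=0)
C_prior = np.cov(stim, rowvar=False)
C_spike = np.cov(stim[spikes], rowvar=False)
dC = C_spike - C_prior

# Candidate dimensions: eigenvectors of the covariance difference. With
# strongly correlated stimuli, the outstanding stimulus mode should first
# be projected out (as the paper proposes) before testing significance.
evals, evecs = np.linalg.eigh(dC)
top = evecs[:, np.argmax(np.abs(evals))]
print("overlap of top STC dimension with hidden feature:",
      abs(top @ feature))
```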
NASA Technical Reports Server (NTRS)
Sullivan, Sylvia C.; Betancourt, Ricardo Morales; Barahona, Donifan; Nenes, Athanasios
2016-01-01
Along with minimizing parameter uncertainty, understanding the cause of temporal and spatial variability of the nucleated ice crystal number, Ni, is key to improving the representation of cirrus clouds in climate models. To this end, sensitivities of Ni to input variables like aerosol number and diameter provide valuable information about nucleation regime and efficiency for a given model formulation. Here we use the adjoint of a cirrus formation parameterization (Barahona and Nenes, 2009b) to understand Ni variability for various ice-nucleating particle (INP) spectra. Inputs are generated with the Community Atmosphere Model version 5, and simulations are done with a theoretically derived spectrum, an empirical lab-based spectrum and two field-based empirical spectra that differ in the nucleation threshold for black carbon particles and in the active site density for dust. The magnitude and sign of Ni sensitivity to insoluble aerosol number can be directly linked to nucleation regime and efficiency of various INP. The lab-based spectrum calculates much higher INP efficiencies than field-based ones, which reveals a disparity in aerosol surface properties. Ni sensitivity to temperature tends to be low, due to the compensating effects of temperature on INP spectrum parameters; this low temperature sensitivity regime has been experimentally reported before but never deconstructed as done here.
A data mining framework for time series estimation.
Hu, Xiao; Xu, Peng; Wu, Shaozhi; Asgari, Shadnaz; Bergsneider, Marvin
2010-04-01
Time series estimation techniques are usually employed in biomedical research to derive variables less accessible from a set of related and more accessible variables. These techniques are traditionally built from systems modeling approaches including simulation, blind deconvolution, and state estimation. In this work, we define target time series (TTS) and its related time series (RTS) as the output and input of a time series estimation process, respectively. We then propose a novel data mining framework for time series estimation when TTS and RTS represent different sets of observed variables from the same dynamic system. This is made possible by mining a database of instances of TTS, its simultaneously recorded RTS, and the input/output dynamic models between them. The key mining strategy is to formulate a mapping function for each TTS-RTS pair in the database that translates a feature vector extracted from RTS to the dissimilarity between true TTS and its estimate from the dynamic model associated with the same TTS-RTS pair. At run time, a feature vector is extracted from an inquiry RTS and supplied to the mapping function associated with each TTS-RTS pair to calculate a dissimilarity measure. An optimal TTS-RTS pair is then selected by analyzing these dissimilarity measures. The associated input/output model of the selected TTS-RTS pair is then used to simulate the TTS given the inquiry RTS as an input. An exemplary implementation was built to address a biomedical problem of noninvasive intracranial pressure assessment. The performance of the proposed method was superior to that of a simple training-free approach of finding the optimal TTS-RTS pair by a conventional similarity-based search on RTS features. 2009 Elsevier Inc. All rights reserved.
Emergence of context-dependent variability across a basal ganglia network.
Woolley, Sarah C; Rajan, Raghav; Joshua, Mati; Doupe, Allison J
2014-04-02
Context dependence is a key feature of cortical-basal ganglia circuit activity, and in songbirds the cortical outflow of a basal ganglia circuit specialized for song, LMAN, shows striking increases in trial-by-trial variability and bursting when birds sing alone rather than to females. To reveal where this variability and its social regulation emerge, we recorded stepwise from corticostriatal (HVC) neurons and their target spiny and pallidal neurons in Area X. We find that corticostriatal and spiny neurons both show precise singing-related firing across both social settings. Pallidal neurons, in contrast, exhibit markedly increased trial-by-trial variation when birds sing alone, created by highly variable pauses in firing. This variability persists even when recurrent inputs from LMAN are ablated. These data indicate that variability and its context sensitivity emerge within the basal ganglia network, suggest a network mechanism for this emergence, and highlight variability generation and regulation as basal ganglia functions. Copyright © 2014 Elsevier Inc. All rights reserved.
Managing fish habitat for flow and temperature extremes ...
Summer low flows and stream temperature maxima are key drivers affecting the sustainability of fish populations. Thus, it is critical to understand the natural templates of spatiotemporal variability, how these are shifting under the anthropogenic influences of development and climate change, and how these impacts can be moderated by natural and constructed green infrastructure. Low flow statistics of New England streams have been characterized using a combination of regression equations to describe long-term averages as a function of indicators of hydrologic regime (rain- versus snow-dominated), precipitation, evapotranspiration or temperature, surface water storage, baseflow recession rates, and impervious cover. Difference equations have been constructed to describe interannual variation in low flow as a function of changing air temperature, precipitation, and ocean-atmospheric teleconnection indices. Spatial statistical network models have been applied to explore fine-scale variability of thermal regimes along stream networks in New England as a function of variables describing natural and altered energy inputs, groundwater contributions, and retention time. Low flows exacerbate temperature impacts by reducing the thermal inertia of streams to energy inputs. Based on these models, we can construct scenarios of fish habitat suitability using current and projected future climate and the potential for preservation and restoration of historic habitat regimes th
Improving hospital weekend handover: a user-centered, standardised approach.
Mehra, Avi; Henein, Christin
2014-01-01
Clinical handover remains one of the most perilous procedures in medicine (1). Weekend handover has emerged as a key area of concern, with high variability in handover processes across hospitals (1,2,4,5-10). Studying weekend handover processes within medicine at an acute teaching hospital revealed wide variability in documented content and structure. A total of 12 different pro formas were in use by the medical day team to hand over to the weekend on-call team. A Likert survey of doctors revealed that 93% felt the current handover system needed improvement, with 71% stating that it did not ensure patient safety (chi-squared, p<0.001, n=32). Semi-structured interviews of doctors identified common themes including "a lack of consistency in approach", "poor standardization", and "high variability". Seeking to address concerns of standardization, a standardized handover pro forma was developed using Royal College of Physicians (RCP) guidelines (2), with direct end-user input. Results following implementation revealed a considerable improvement in documented ceiling of care, urgency of task, and team member assignment, with 100% uptake of the new pro forma at both 4-week and 6-month post-implementation analyses. 88% of doctors surveyed perceived that the new pro forma improved patient safety (p<0.01, n=25), with 62% highlighting that it allowed doctors to work more efficiently. Results also revealed that 44% felt further improvements were needed, highlighting electronic solutions and handover training as the main priorities. Handover briefing was subsequently incorporated into junior doctor induction, and education modules were delivered, with good feedback. Following collaboration with key stakeholders and with end-user input, integrated electronic handover software was designed and funding secured. The software is currently under final development. Introducing a standardized handover pro forma can be an effective initial step in improving weekend handover. Handover education and end-user involvement are key to improving the process. Electronic handover solutions have been shown to significantly increase the quality of handover and are worth considering (9,10).
Modeling and Analysis of CNC Milling Process Parameters on Al3030 based Composite
NASA Astrophysics Data System (ADS)
Gupta, Anand; Soni, P. K.; Krishna, C. M.
2018-04-01
The machining of Al3030-based composites on Computer Numerical Control (CNC) high-speed milling machines has assumed importance because of their wide application in the aerospace, marine, and automotive industries. Industries mainly focus on surface irregularities, material removal rate (MRR), and tool wear rate (TWR), which usually depend on input process parameters, namely cutting speed, feed in mm/min, depth of cut, and step-over ratio. Many researchers have carried out studies in this area, but very few have also taken step-over ratio (radial depth of cut) as one of the input variables. In this research work, the machining characteristics of Al3030 are studied on a high-speed CNC milling machine over the speed range of 3000 to 5000 r.p.m. Step-over ratio, depth of cut, and feed rate are the other input variables considered. A total of nine experiments are conducted according to a Taguchi L9 orthogonal array. The machining is carried out on a high-speed CNC milling machine using a flat end mill of diameter 10 mm. Flatness, MRR, and TWR are taken as output parameters. Flatness has been measured using a portable Coordinate Measuring Machine (CMM). Linear regression models have been developed using Minitab 18 software, and results are validated by conducting a selected additional set of experiments. Selection of input process parameters to obtain the best machining outputs is the key contribution of this research work.
Weather models as virtual sensors to data-driven rainfall predictions in urban watersheds
NASA Astrophysics Data System (ADS)
Cozzi, Lorenzo; Galelli, Stefano; Pascal, Samuel Jolivet De Marc; Castelletti, Andrea
2013-04-01
Weather and climate predictions are a key element of urban hydrology, where they are used to inform water management and assist in flood-warning delivery. Indeed, the modelling of the very fast dynamics of urbanized catchments can be substantially improved by the use of weather/rainfall predictions. For example, in Singapore's Marina Reservoir catchment, runoff processes have a very short time of concentration (roughly one hour), so observational data alone are of little use for runoff prediction and weather predictions are required. Unfortunately, radar nowcasting methods do not allow long-term weather predictions, whereas numerical models are limited by their coarse spatial scale. Moreover, numerical models are usually poorly reliable because of the fast motion and limited spatial extension of rainfall events. In this study we investigate the combined use of data-driven modelling techniques and weather variables observed/simulated with a numerical model as a way to improve rainfall prediction accuracy and lead time in the Singapore metropolitan area. To explore the feasibility of the approach, we use a Weather Research and Forecasting (WRF) model as a virtual sensor network for the input variables (the states of the WRF model) to a machine learning rainfall prediction model. More precisely, we combine an input variable selection method and a non-parametric tree-based model to characterize the empirical relation between the rainfall measured at the catchment level and all possible weather input variables provided by the WRF model. We explore different lead times to evaluate the model reliability for long-term predictions, as well as different time lags to see how past information could improve results. Results show that the proposed approach allows a significant improvement of the prediction accuracy of the WRF model over the Singapore urban area.
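As a rough illustration of the two-step approach (input variable selection followed by a non-parametric tree-based model), the Python sketch below ranks synthetic "virtual sensor" inputs by mutual information and fits a random forest to the selected subset; the variables, data generator, and selection rule are simplified stand-ins for the WRF states and the method actually used.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(3)
n, p = 2000, 12

# Stand-ins for WRF state variables sampled over the catchment (the
# "virtual sensors"): humidity, CAPE, winds, etc., at several grid points.
X = rng.standard_normal((n, p))
# Toy rainfall depends nonlinearly on two of the virtual sensors.
y = np.maximum(0, 2 * X[:, 0] + X[:, 4] ** 2 - 1
               + 0.3 * rng.standard_normal(n))

# Step 1: rank candidate inputs (a simple stand-in for the IVS step).
mi = mutual_info_regression(X, y, random_state=0)
selected = np.argsort(mi)[-4:]          # keep the most informative inputs

# Step 2: non-parametric tree-based model on the selected inputs.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:, selected], y)
print("selected inputs:", selected,
      " R^2 =", model.score(X[:, selected], y))
```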
A Variable Input-Output Model for Inflation, Growth, and Energy for the Korean Economy.
1983-12-01
…and the sales price of output as determinants of the technical coefficients were suggested by Walras [Ref. 4] and many other economists [Ref. 5]. Arrow…included in the manufacturing and construction sector. The other industries include the social and government services…electricity, government enterprise, and other social commercial industries. The rate of growth of the money supply and interest rates on loans are the key
NASA Astrophysics Data System (ADS)
Shoaib, Syed Abu; Marshall, Lucy; Sharma, Ashish
2018-06-01
Every model used to characterise a real-world process is affected by uncertainty. Selecting a suitable model is a vital aspect of engineering planning and design. Observation and input errors make the prediction of modelled responses more uncertain. Using a recently developed attribution metric, this study develops a method for analysing variability in model inputs together with model structure variability to quantify their relative contributions in typical hydrological modelling applications. The Quantile Flow Deviation (QFD) metric is used to assess these alternate sources of uncertainty. The Australian Water Availability Project (AWAP) precipitation data for four different Australian catchments are used to analyse the impact of spatial rainfall variability on simulated streamflow variability via the QFD. The QFD metric attributes the variability in flow ensembles to uncertainty associated with the selection of a model structure and input time series. For the case study catchments, the relative contribution of input uncertainty due to rainfall is higher than that due to potential evapotranspiration, and overall input uncertainty is significant compared to model structure and parameter uncertainty. Overall, this study investigates the propagation of input uncertainty in a daily streamflow modelling scenario and demonstrates how input errors manifest across different streamflow magnitudes.
Heliocentric interplanetary low thrust trajectory optimization program, supplement 1, part 2
NASA Technical Reports Server (NTRS)
Mann, F. I.; Horsewood, J. L.
1978-01-01
The improvements made to the HILTOP electric propulsion trajectory computer program are described. A more realistic propulsion system model was implemented in which various thrust subsystem efficiencies and specific impulse are modeled as variable functions of the power available to the propulsion system. The number of operating thrusters is staged, and the beam voltage is selected from a set of five (or fewer) constant voltages, based upon the application of variational calculus. The constant beam voltages may be optimized individually or collectively. The propulsion system logic is activated by a single program input key in such a manner as to preserve the HILTOP logic. An analysis describing these features, a complete description of program input quantities, and sample cases of computer output illustrating the program capabilities are presented.
Dengue: recent past and future threats
Rogers, David J.
2015-01-01
This article explores four key questions about statistical models developed to describe the recent past and future of vector-borne diseases, with special emphasis on dengue: (1) How many variables should be used to make predictions about the future of vector-borne diseases? (2) Is the spatial resolution of a climate dataset an important determinant of model accuracy? (3) Does inclusion of the future distributions of vectors affect predictions of the futures of the diseases they transmit? (4) Which are the key predictor variables involved in determining the distributions of vector-borne diseases in the present and future? Examples are given of dengue models using one, five or 10 meteorological variables and at spatial resolutions from one-sixth to two degrees. Model accuracy is improved with a greater number of descriptor variables, but is surprisingly unaffected by the spatial resolution of the data. Dengue models with a reduced set of climate variables derived from the HadCM3 global circulation model predictions for the 1980s are improved when risk maps for dengue's two main vectors (Aedes aegypti and Aedes albopictus) are also included as predictor variables; disease and vector models are projected into the future using the global circulation model predictions for the 2020s, 2040s and 2080s. The Garthwaite–Koch corr-max transformation is presented as a novel way of showing the relative contribution of each of the input predictor variables to the map predictions. PMID:25688021
Mokhtari, Amirhossein; Moore, Christina M; Yang, Hong; Jaykus, Lee-Ann; Morales, Roberta; Cates, Sheryl C; Cowen, Peter
2006-06-01
We describe a one-dimensional probabilistic model of the role of domestic food handling behaviors on salmonellosis risk associated with the consumption of eggs and egg-containing foods. Six categories of egg-containing foods were defined based on the amount of egg contained in the food, whether eggs are pooled, and the degree of cooking practiced by consumers. We used bootstrap simulation to quantify uncertainty in risk estimates due to sampling error, and sensitivity analysis to identify key sources of variability and uncertainty in the model. Because of typical model characteristics such as nonlinearity, interaction between inputs, thresholds, and saturation points, Sobol's method, a novel sensitivity analysis approach, was used to identify key sources of variability. Based on the mean probability of illness, examples of foods from the food categories ranked from most to least risk of illness were: (1) home-made salad dressings/ice cream; (2) fried eggs/boiled eggs; (3) omelettes; and (4) baked foods/breads. For food categories that may include uncooked eggs (e.g., home-made salad dressings/ice cream), consumer handling conditions such as storage time and temperature after food preparation were the key sources of variability. In contrast, for food categories associated with undercooked eggs (e.g., fried/soft-boiled eggs), the initial level of Salmonella contamination and the log10 reduction due to cooking were the key sources of variability. Important sources of uncertainty varied with both the risk percentile and the food category under consideration. This work adds to previous risk assessments focused on egg production and storage practices, and provides a science-based approach to inform consumer risk communications regarding safe egg handling practices.
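Sobol's method itself can be sketched compactly with the third-party SALib package. The toy risk function below, with a threshold and an interaction, mimics the kind of nonlinearity the abstract describes; the function, bounds, and variable names are hypothetical, not the study's model.

```python
import numpy as np
from SALib.sample import saltelli   # third-party package, assumed available
from SALib.analyze import sobol

# Hypothetical dose-response-style model with a storage-time threshold,
# standing in for the egg-handling risk model's nonlinear structure.
def risk(x):
    contamination, log_reduction, storage_time = x.T
    survivors = contamination - log_reduction            # log10 scale
    growth = np.where(storage_time > 4.0, 0.5 * storage_time, 0.0)
    return 1.0 - np.exp(-(10.0 ** np.clip(survivors + growth, -6, 6)) * 1e-4)

problem = {
    "num_vars": 3,
    "names": ["contamination", "log_reduction", "storage_time"],
    "bounds": [[0, 3], [0, 6], [0, 12]],
}

X = saltelli.sample(problem, 1024)   # Saltelli's extension of Sobol sampling
Y = risk(X)
Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:>15}: first-order {s1:.2f}, total {st:.2f}")
```

A gap between the total-order and first-order indices flags exactly the input interactions that motivated using Sobol's method here.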
Biophysical Variables Retrieval Over Russian Winter Wheat Fields Using Medium Resolution
NASA Astrophysics Data System (ADS)
d'Andrimont, Raphael; Waldner, Francois; Bartalev, Sergey; Plotnikov, Dmitry; Kleschenko, Alexander; Virchenko, Oleg; de Wit, Allard; Roerink, Gerbert; Defourny, Pierre
2013-12-01
Winter wheat production in the Russian Federation represents one of the sources of uncertainty for the international commodity market. In particular, adverse weather conditions may induce winter kill, resulting in large yield losses. Improving the monitoring of winter wheat in Russia, with a focus on winter-kill damage and its impacts on yield, is thus a key challenge. This paper presents the methods and results of retrieving biophysical variables on a daily basis as input for crop growth modelling at parcel level over a 10-year period (2003-2012) in the Russian context. The field campaigns carried out at 2 sites in the Tula region from 2010 to 2012 show that it is possible to characterize the spatial and temporal variability at pixel, field, and regional scale over Russian fields using medium-resolution sensors (MODIS).
A neural circuit mechanism for regulating vocal variability during song learning in zebra finches.
Garst-Orozco, Jonathan; Babadi, Baktash; Ölveczky, Bence P
2014-12-15
Motor skill learning is characterized by improved performance and reduced motor variability. The neural mechanisms that couple skill level and variability, however, are not known. The zebra finch, a songbird, presents a unique opportunity to address this question because production of learned song and induction of vocal variability are instantiated in distinct circuits that converge on a motor cortex analogue controlling vocal output. To probe the interplay between learning and variability, we made intracellular recordings from neurons in this area, characterizing how their inputs from the functionally distinct pathways change throughout song development. We found that inputs that drive stereotyped song-patterns are strengthened and pruned, while inputs that induce variability remain unchanged. A simple network model showed that strengthening and pruning of action-specific connections reduces the sensitivity of motor control circuits to variable input and neural 'noise'. This identifies a simple and general mechanism for learning-related regulation of motor variability.
Hewitt, Judi E; Ellis, Joanne I; Thrush, Simon F
2016-08-01
Global climate change will undoubtedly be a pressure on coastal marine ecosystems, affecting not only species distributions and physiology but also ecosystem functioning. In the coastal zone, the environmental variables that may drive ecological responses to climate change include temperature, wave energy, upwelling events and freshwater inputs, and all act and interact at a variety of spatial and temporal scales. To date, we have a poor understanding of how climate-related environmental changes may affect coastal marine ecosystems or which environmental variables are likely to produce priority effects. Here we use time series data (17 years) of coastal benthic macrofauna to investigate responses to a range of climate-influenced variables including sea-surface temperature, southern oscillation indices (SOI, Z4), wind-wave exposure, freshwater inputs and rainfall. We investigate responses from the abundances of individual species to abundances of functional traits and test whether species that are near the edge of their tolerance to another stressor (in this case sedimentation) may exhibit stronger responses. The responses we observed were all nonlinear and some exhibited thresholds. While temperature was most frequently an important predictor, wave exposure and ENSO-related variables were also frequently important and most ecological variables responded to interactions between environmental variables. There were also indications that species sensitive to another stressor responded more strongly to weaker climate-related environmental change at the stressed site than the unstressed site. The observed interactions between climate variables, effects on key species or functional traits, and synergistic effects of additional anthropogenic stressors have important implications for understanding and predicting the ecological consequences of climate change to coastal ecosystems. © 2015 John Wiley & Sons Ltd.
Probabilistic estimation of residential air exchange rates for ...
Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER measurements. An algorithm for probabilistically estimating AER was developed based on the Lawrence Berkeley National Laboratory infiltration model, utilizing housing characteristics and meteorological data with adjustment for window-opening behavior. The algorithm was evaluated by comparing modeled and measured AERs in four US cities (Los Angeles, CA; Detroit, MI; Elizabeth, NJ; and Houston, TX), inputting study-specific data. The impact on the modeled AER of using publicly available housing data representative of the region for each city was also assessed. Finally, modeled AER based on region-specific inputs was compared with those estimated using literature-based distributions. While modeled AERs were similar in magnitude to the measured AERs, they were consistently lower for all cities except Houston. AERs estimated using region-specific inputs were lower than those using study-specific inputs due to differences in window-opening probabilities. The algorithm produced more spatially and temporally variable AERs compared with literature-based distributions, reflecting within- and between-city differences and helping reduce error in estimates of air pollutant exposure. Published in the Journal of
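One commonly cited form of the Lawrence Berkeley (LBL) infiltration model can serve as a sketch of such an algorithm. In the Python below, the leakage-area formula, the one-story stack and wind coefficients, and all sampled input distributions are assumptions for illustration; the published algorithm adds the window-opening adjustment not shown here.

```python
import numpy as np

def lbl_aer(ela_cm2, volume_m3, dT_K, wind_ms,
            Cs=0.000145, Cw=0.000104):
    """Air exchange rate from the LBL infiltration model (a sketch).

    ela_cm2 : effective leakage area of the dwelling [cm^2]
    dT_K    : indoor-outdoor temperature difference [K]
    Cs, Cw  : stack/wind coefficients; the defaults are typical one-story
              values and should be replaced by house-specific ones.
    """
    q_Ls = ela_cm2 * np.sqrt(Cs * np.abs(dT_K) + Cw * wind_ms ** 2)  # L/s
    return 3.6 * q_Ls / volume_m3                                    # 1/h

# Probabilistic use: sample housing and weather inputs to get an AER
# distribution (all distributions below are invented for illustration).
rng = np.random.default_rng(7)
ela  = rng.lognormal(np.log(500), 0.4, 10_000)   # leakage area [cm^2]
dT   = rng.normal(10, 5, 10_000)                 # temperature diff [K]
wind = rng.weibull(2.0, 10_000) * 4.0            # wind speed [m/s]
aer = lbl_aer(ela, 250.0, dT, wind)
print(f"median AER = {np.median(aer):.2f} 1/h")
```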
Bottom-up and Top-down Input Augment the Variability of Cortical Neurons
Nassi, Jonathan J.; Kreiman, Gabriel; Born, Richard T.
2016-01-01
Neurons in the cerebral cortex respond inconsistently to a repeated sensory stimulus, yet they underlie our stable sensory experiences. Although the nature of this variability is unknown, its ubiquity has encouraged the general view that each cell produces random spike patterns that noisily represent its response rate. In contrast, here we show that reversibly inactivating distant sources of either bottom-up or top-down input to cortical visual areas in the alert primate reduces both the spike train irregularity and the trial-to-trial variability of single neurons. A simple model in which a fraction of the pre-synaptic input is silenced can reproduce this reduction in variability, provided that there exist temporal correlations primarily within, but not between, excitatory and inhibitory input pools. A large component of the variability of cortical neurons may therefore arise from synchronous input produced by signals arriving from multiple sources. PMID:27427459
Marli: Mars Lidar for Global Wind Profiles and Aerosol Profiles from Orbit
NASA Technical Reports Server (NTRS)
Abshire, J. B.; Guzewich, S. D.; Smith, M. D.; Riris, H.; Sun, X.; Gentry, B. M.; Yu, A.; Allan, G. R.
2016-01-01
The Mars Exploration Analysis Group's Next Orbiter Science Analysis Group (NEXSAG) has recently identified atmospheric wind measurements as one of 5 top compelling science objectives for a future Mars orbiter. To date, only isolated lander observations of martian winds exist. Winds are the key variable for understanding atmospheric transport and answering fundamental questions about the three primary cycles of the martian climate: CO2, H2O, and dust. However, the lack of direct observations, and the imprecise and indirect inferences available from temperature observations, leave many basic questions about the atmospheric circulation unanswered. In addition to addressing high-priority science questions, direct wind observations from orbit would help validate 3D general circulation models (GCMs) while also providing key input to atmospheric reanalyses. The dust and CO2 cycles on Mars are partially coupled, and their influences on the atmospheric circulation modify the global wind field. Dust absorbs solar infrared radiation, and its variable spatial distribution forces changes in the atmospheric temperature and wind fields. Thus it is important to simultaneously measure height-resolved wind and dust profiles. MARLI provides a unique capability to observe these variables continuously, day and night, from orbit.
NASA Astrophysics Data System (ADS)
Lin, Zhuosheng; Yu, Simin; Lü, Jinhu
2017-06-01
In this paper, a novel approach for constructing a one-way hash function based on an 8D hyperchaotic map is presented. First, two nominal matrices, one with constant and one with variable parameters, are adopted for designing 8D discrete-time hyperchaotic systems. Then each input plaintext message block is transformed into an 8 × 8 matrix following the order of left to right and top to bottom, which is used as a control matrix for switching between the nominal matrix elements with constant parameters and those with variable parameters. Through this switching control, a new nominal matrix mixing the constant and variable parameters is obtained for the 8D hyperchaotic map. Finally, the hash value is constructed from the multiple low 8-bit hyperchaotic iteration outputs after rounding down, and security analysis results are also given, validating the feasibility and reliability of the proposed approach. Compared with existing schemes, the main feature of the proposed method is that it has a large number of key parameters with avalanche effect, making it difficult to estimate or predict the key parameters via various attacks.
Troutman, Brent M.
1982-01-01
Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas, illustrates the problems of model input errors.
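The attenuation bias at the heart of this analysis is easy to demonstrate. The simulation below regresses synthetic runoff on rainfall measured with error; the linear model and all parameter values are invented for illustration, but the slope bias toward zero follows the classical errors-in-variables result.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 5000

true_rain = rng.gamma(shape=2.0, scale=10.0, size=n)   # true basin rainfall
runoff = 0.5 * true_rain + rng.normal(0.0, 2.0, n)     # linear "true" response

# Rainfall observed with error (e.g., a sparse gauge network); sd = 8.
obs_rain = true_rain + rng.normal(0.0, 8.0, n)

def ols_slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

print(f"slope with true input:  {ols_slope(true_rain, runoff):.3f}")
print(f"slope with noisy input: {ols_slope(obs_rain, runoff):.3f}")

# Classical attenuation: the noisy-input slope shrinks by the factor
# var(true) / (var(true) + var(error)), biasing runoff estimates.
lam = np.var(true_rain, ddof=1) / (np.var(true_rain, ddof=1) + 8.0 ** 2)
print(f"theoretical attenuated slope: {0.5 * lam:.3f}")
```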
Tritium Records to Trace Stratospheric Moisture Inputs in Antarctica
NASA Astrophysics Data System (ADS)
Fourré, E.; Landais, A.; Cauquoin, A.; Jean-Baptiste, P.; Lipenkov, V.; Petit, J.-R.
2018-03-01
Better assessing the dynamics of stratosphere-troposphere exchange is key to improving our understanding of climate dynamics on the East Antarctic Plateau, a region where stratospheric inputs are expected to be important. Although tritium (3H or T), a nuclide naturally produced mainly in the stratosphere and rapidly entering the water cycle as HTO, seems a first-rate tracer for studying these processes, tritium data are very sparse in this region. We present the first high-resolution measurements of tritium concentration over the last 50 years in three snow pits drilled at Vostok station. Natural variability of the tritium records reveals two prominent frequencies, one at about 10 years (likely related to the solar Schwabe cycle) and the other at a shorter periodicity: despite dating uncertainty at this short scale, a good correlation is observed between 3H and Na+ and an anticorrelation between 3H and δ18O measured on an individual pit. The outputs from the LMDZ Atmospheric General Circulation Model, including stable water isotopes and tritium, show the same 3H-δ18O anticorrelation and allow further investigation of the associated mechanism. At the interannual scale, the modeled 3H variability matches well with the Southern Annular Mode index. At the seasonal scale, we show that modeled stratospheric tritium inputs to the troposphere are favored in cold, dry winter conditions.
WiseView: Visualizing motion and variability of faint WISE sources
NASA Astrophysics Data System (ADS)
Caselden, Dan; Westin, Paul, III; Meisner, Aaron; Kuchner, Marc; Colin, Guillaume
2018-06-01
WiseView renders image blinks of Wide-field Infrared Survey Explorer (WISE) coadds spanning a multi-year time baseline in a browser. The software allows for easy visual identification of motion and variability for sources far beyond the single-frame detection limit, a key threshold not surmounted by many studies. WiseView transparently gathers small image cutouts drawn from many terabytes of unWISE coadds, facilitating access to this large and unique dataset. Users need only input the coordinates of interest and can interactively tune parameters including the image stretch, colormap and blink rate. WiseView was developed in the context of the Backyard Worlds: Planet 9 citizen science project, and has enabled hundreds of brown dwarf candidate discoveries by citizen scientists and professional astronomers.
NASA Astrophysics Data System (ADS)
Whitehead, James Joshua
The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo Simulation (MCS) techniques coupled with response surface-based methods for characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within design space. Illustrative case scenarios were developed and assessed using this analytic approach including fully and partially constrained operational condition sets over all of design mixture space. In addition, optimization sets were performed across an operationally representative region in operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. Ternary diagrams, including contour and surface plots, were developed and utilized to aid in visualization. The concept of Expanded-Durov diagrams was also adopted and adapted to this study to aid in visualization of uncertainty bounds. Regions of maximum regression rate and associated uncertainties were determined for each set of case scenarios. Application of response surface methodology coupled with probabilistic-based MCS allowed for flexible and comprehensive interrogation of mixture and operating design space during optimization cases. Analyses were also conducted to assess sensitivity of uncertainty to variations in key elemental uncertainty estimates. The methodology developed during this research provides an innovative optimization tool for future propulsion design efforts.
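To make the coupling of response surfaces and Monte Carlo simulation concrete, the sketch below propagates input uncertainty through a hypothetical quadratic regression-rate surface with a two-factor interaction; the coefficients, coded variables, and uncertainty levels are invented, not those fitted in this work.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical quadratic response surface for fuel regression rate as a
# function of two coded inputs (oxidizer flux G, mixture fraction m),
# with a two-factor interaction, the same form as the fitted models.
b = dict(b0=1.20, b1=0.35, b2=0.10, b11=-0.08, b22=-0.05, b12=0.12)

def regression_rate(G, m):
    return (b["b0"] + b["b1"] * G + b["b2"] * m
            + b["b11"] * G**2 + b["b22"] * m**2 + b["b12"] * G * m)

# Monte Carlo propagation: operating-condition and mixture uncertainty,
# plus a response-surface (model-form) error term.
N = 200_000
G = rng.normal(0.5, 0.15, N)          # coded oxidizer flux
m = rng.uniform(-1.0, 1.0, N)         # coded mixture composition
eps = rng.normal(0.0, 0.03, N)        # surface-fit uncertainty
r = regression_rate(G, m) + eps

print(f"mean rate {r.mean():.3f}, 95% interval "
      f"[{np.percentile(r, 2.5):.3f}, {np.percentile(r, 97.5):.3f}]")
# An OUU search would repeat this propagation inside an optimizer,
# maximizing, e.g., the mean rate subject to an uncertainty constraint.
```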
Input from Key Stakeholders in the National Security Technology Incubator
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
This report documents the input from key stakeholders of the National Security Technology Incubator (NSTI) in developing a new technology incubator and related programs for southern New Mexico. The technology incubator is being developed as part of the National Security Preparedness Project (NSPP), funded by a Department of Energy (DOE)/National Nuclear Security Administration (NNSA) grant. This report includes identification of key stakeholders as well as a description and analysis of their input for the development of an incubator.
NASA Astrophysics Data System (ADS)
Zurita-Milla, R.; Laurent, V. C. E.; van Gijsel, J. A. E.
2015-12-01
Monitoring biophysical and biochemical vegetation variables in space and time is key to understand the earth system. Operational approaches using remote sensing imagery rely on the inversion of radiative transfer models, which describe the interactions between light and vegetation canopies. The inversion required to estimate vegetation variables is, however, an ill-posed problem because of variable compensation effects that can cause different combinations of soil and canopy variables to yield extremely similar spectral responses. In this contribution, we present a novel approach to visualise the ill-posed problem using self-organizing maps (SOM), which are a type of unsupervised neural network. The approach is demonstrated with simulations for Sentinel-2 data (13 bands) made with the Soil-Leaf-Canopy (SLC) radiative transfer model. A look-up table of 100,000 entries was built by randomly sampling 14 SLC model input variables between their minimum and maximum allowed values while using both a dark and a bright soil. The Sentinel-2 spectral simulations were used to train a SOM of 200 × 125 neurons. The training projected similar spectral signatures onto either the same, or contiguous, neuron(s). Tracing back the inputs that generated each spectral signature, we created a 200 × 125 map for each of the SLC variables. The lack of spatial patterns and the variability in these maps indicate ill-posed situations, where similar spectral signatures correspond to different canopy variables. For Sentinel-2, our results showed that leaf area index, crown cover and leaf chlorophyll, water and brown pigment content are less confused in the inversion than variables with noisier maps like fraction of brown canopy area, leaf dry matter content and the PROSPECT mesophyll parameter. This study supports both educational and on-going research activities on inversion algorithms and might be useful to evaluate the uncertainties of retrieved canopy biophysical and biochemical state variables.
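A minimal version of this workflow can be assembled with the third-party minisom package. In the sketch below, the look-up table, the toy spectral model, and the (smaller) SOM size are assumptions for illustration; the per-neuron averaging at the end produces exactly the kind of variable map whose smoothness or noisiness diagnoses the ill-posed cases.

```python
import numpy as np
from minisom import MiniSom  # third-party package, assumed available

rng = np.random.default_rng(9)

# Stand-in for the RTM look-up table: each row is a simulated 13-band
# spectrum; `lai` is one of the input variables that generated it.
n = 20_000
lai = rng.uniform(0.0, 6.0, n)
base = np.exp(-0.5 * lai)[:, None] * rng.uniform(0.8, 1.2, (n, 1))
spectra = base + 0.05 * rng.standard_normal((n, 13))

som = MiniSom(40, 25, 13, sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(spectra, 50_000)

# Average, per neuron, the LAI values whose spectra land there: a smooth
# map suggests a retrievable variable, a noisy map signals confusion
# (the ill-posed cases described in the abstract).
lai_sum = np.zeros((40, 25))
counts = np.zeros((40, 25))
for x, v in zip(spectra, lai):
    i, j = som.winner(x)
    lai_sum[i, j] += v
    counts[i, j] += 1
lai_map = lai_sum / np.maximum(counts, 1)
```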
Using Natural Language to Enhance Mission Effectiveness
NASA Technical Reports Server (NTRS)
Trujillo, Anna C.; Meszaros, Erica
2016-01-01
The availability of highly capable, yet relatively cheap, unmanned aerial vehicles (UAVs) is opening up new areas of use for hobbyists and for professional-related activities. The driving function of this research is allowing a non-UAV pilot, an operator, to define and manage a mission. This paper describes the preliminary usability measures of an interface that allows an operator to define the mission using speech to make inputs. An experiment was conducted to begin to enumerate the efficacy and user acceptance of using voice commands to define a multi-UAV mission and to provide high-level vehicle control commands such as "takeoff." The primary independent variable was input type (voice or mouse). The primary dependent variables consisted of the correctness of the mission parameter inputs and the time needed to make all inputs. Other dependent variables included NASA-TLX workload ratings and subjective ratings on a final questionnaire. The experiment required each subject to fill in an online form containing the information a package dispatcher would need to deliver packages. For each run, subjects typed in a simple numeric package code. They then defined the initial starting position, the delivery location, and the return location using either pull-down menus or voice input. Voice input was accomplished using CMU Sphinx4-5prealpha for speech recognition. They then entered the length of the package; these were the optional fields. Subjects then had the system "Calculate Trajectory" and commanded "Takeoff" once the trajectory was calculated; later, they used "Land" to finish the run. After each block of voice-input or mouse-input runs, subjects completed a NASA-TLX. At the conclusion of all runs, subjects completed a questionnaire asking about their experience entering the mission parameters and starting and stopping the mission with mouse and voice input. In general, the usability of voice commands is acceptable. With a relatively well-defined and simple vocabulary, the operator can input the vast majority of the mission parameters using simple, intuitive voice commands. However, due to time and feedback constraints, voice input may be better suited to initial mission specification than to critical commands such as the need to land immediately. It would also be convenient to retrieve relevant mission information using voice input. Therefore, ongoing research is looking at inferring intent from operator utterances to provide relevant mission information to the operator. The information displayed will be inferred from the operator's utterances just before key phrases are spoken. Linguistic analysis of the context of verbal communication provides insight into the intended meaning of commonly heard phrases such as "What's it doing now?" Analyzing the semantic sphere surrounding these common phrases enables us to predict the operator's intent and supply the operator's desired information to the interface. This paper also describes preliminary investigations into the generation of the semantic space of UAV operation and the success at providing information to the interface based on the operator's utterances.
NASA Astrophysics Data System (ADS)
Powley, Helen R.; Krom, Michael D.; Van Cappellen, Philippe
2018-03-01
Human activities have significantly modified the inputs of land-derived phosphorus (P) and nitrogen (N) to the Mediterranean Sea (MS). Here, we reconstruct the external inputs of reactive P and N to the Western Mediterranean Sea (WMS) and Eastern Mediterranean Sea (EMS) over the period 1950-2030. We estimate that during this period the land derived P and N loads increased by factors of 3 and 2 to the WMS and EMS, respectively, with reactive P inputs peaking in the 1980s but reactive N inputs increasing continuously from 1950 to 2030. The temporal variations in reactive P and N inputs are imposed in a coupled P and N mass balance model of the MS to simulate the accompanying changes in water column nutrient distributions and primary production with time. The key question we address is whether these changes are large enough to be distinguishable from variations caused by confounding factors, specifically the relatively large inter-annual variability in thermohaline circulation (THC) of the MS. Our analysis indicates that for the intermediate and deep water masses of the MS the magnitudes of changes in reactive P concentrations due to changes in anthropogenic inputs are relatively small and likely difficult to diagnose because of the noise created by the natural circulation variability. Anthropogenic N enrichment should be more readily detectable in time series concentration data for dissolved organic N (DON) after the 1970s, and for nitrate (NO3) after the 1990s. The DON concentrations in the EMS are predicted to exhibit the largest anthropogenic enrichment signature. Temporal variations in annual primary production over the 1950-2030 period are dominated by variations in deep-water formation rates, followed by changes in riverine P inputs for the WMS and atmospheric P deposition for the EMS. Overall, our analysis indicates that the detection of basin-wide anthropogenic nutrient concentration trends in the MS is rendered difficult due to: (1) the Atlantic Ocean contributing the largest reactive P and N inputs to the MS, hence diluting the anthropogenic nutrient signatures, (2) the anti-estuarine circulation removing at least 45% of the anthropogenic nutrients inputs added to both basins of the MS between 1950 and 2030, and (3) variations in intermediate and deep water formation rates that add high natural noise to the P and N concentration trajectories.
Variance-based interaction index measuring heteroscedasticity
NASA Astrophysics Data System (ADS)
Ito, Keiichi; Couckuyt, Ivo; Poles, Silvia; Dhaene, Tom
2016-06-01
This work is motivated by the need to deal with models with high-dimensional input spaces of real variables. One way to tackle high-dimensional problems is to identify interaction or non-interaction among input parameters. We propose a new variance-based sensitivity interaction index that can detect and quantify interactions among the input variables of mathematical functions and computer simulations. The computation is very similar to that of Sobol' first-order sensitivity indices. The proposed interaction index can quantify the relative importance of input variables in interaction. Furthermore, detection of non-interaction for screening can be done with as few as 4n + 2 function evaluations, where n is the number of input variables. Using the interaction indices based on heteroscedasticity, the original function may be decomposed into a set of lower-dimensional functions which may then be analyzed separately.
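Although the paper's heteroscedasticity-based index is not reproduced here, the standard variance-based quantities it builds on can be sketched directly. The code below estimates first-order and total-order Sobol indices with the Saltelli/Jansen pick-and-freeze estimators on a toy function with a known interaction; the test function is an assumption for illustration, and a gap between total and first-order indices reveals the interacting variables.

```python
import numpy as np

rng = np.random.default_rng(13)

def f(X):                      # test function with an x0-x1 interaction
    return X[:, 0] + X[:, 1] + 2.0 * X[:, 0] * X[:, 1] + 0.5 * X[:, 2]

n_vars, N = 3, 100_000
A = rng.uniform(-1, 1, (N, n_vars))
B = rng.uniform(-1, 1, (N, n_vars))
fA, fB = f(A), f(B)
var_y = np.var(np.concatenate([fA, fB]), ddof=1)

for i in range(n_vars):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                           # replace column i
    fABi = f(ABi)
    S1 = np.mean(fB * (fABi - fA)) / var_y        # first-order (Saltelli 2010)
    ST = 0.5 * np.mean((fA - fABi) ** 2) / var_y  # total-order (Jansen 1999)
    print(f"x{i}: S1 = {S1:.2f}, ST = {ST:.2f}, interaction = {ST - S1:.2f}")
```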
Input-variable sensitivity assessment for sediment transport relations
NASA Astrophysics Data System (ADS)
Fernández, Roberto; Garcia, Marcelo H.
2017-09-01
A methodology to assess input-variable sensitivity for sediment transport relations is presented. The Mean Value First Order Second Moment Method (MVFOSM) is applied to two bed load transport equations showing that it may be used to rank all input variables in terms of how their specific variance affects the overall variance of the sediment transport estimation. In sites where data are scarce or nonexistent, the results obtained may be used to (i) determine what variables would have the largest impact when estimating sediment loads in the absence of field observations and (ii) design field campaigns to specifically measure those variables for which a given transport equation is most sensitive; in sites where data are readily available, the results would allow quantifying the effect that the variance associated with each input variable has on the variance of the sediment transport estimates. An application of the method to two transport relations using data from a tropical mountain river in Costa Rica is implemented to exemplify the potential of the method in places where input data are limited. Results are compared against Monte Carlo simulations to assess the reliability of the method and validate its results. For both of the sediment transport relations used in the sensitivity analysis, accurate knowledge of sediment size was found to have more impact on sediment transport predictions than precise knowledge of other input variables such as channel slope and flow discharge.
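A generic MVFOSM step is straightforward to implement. The sketch below propagates input variances through a hypothetical bed-load-style power law via central-difference gradients; the relation, means, and standard deviations are invented, and merely echo (rather than reproduce) the paper's finding that grain size can dominate the output variance.

```python
import numpy as np

def mvfosm(f, mean, std, rel_step=1e-4):
    """First-order variance propagation about the input means.

    Returns the approximate output variance and each input's share of it.
    """
    mean, std = np.asarray(mean, float), np.asarray(std, float)
    grad = np.empty_like(mean)
    for i in range(mean.size):
        h = rel_step * max(abs(mean[i]), 1.0)
        xp, xm = mean.copy(), mean.copy()
        xp[i] += h
        xm[i] -= h
        grad[i] = (f(xp) - f(xm)) / (2 * h)      # central difference
    contrib = (grad * std) ** 2                  # per-variable variance
    return contrib.sum(), contrib / contrib.sum()

# Illustrative power law: q ~ slope^1.5 * depth * d50^-0.5 (a hypothetical
# stand-in, not the exact transport relations used in the paper).
f = lambda x: 0.05 * x[0] ** 1.5 * x[1] * x[2] ** -0.5
mean = [0.02, 1.2, 0.010]       # slope [-], depth [m], grain size d50 [m]
std  = [0.004, 0.15, 0.008]

var, shares = mvfosm(f, mean, std)
for name, s in zip(["slope", "depth", "d50"], shares):
    print(f"{name:>6}: {100 * s:.0f}% of output variance")
```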
Attentional modulation of neuronal variability in circuit models of cortex
Kanashiro, Tatjana; Ocker, Gabriel Koch; Cohen, Marlene R; Doiron, Brent
2017-01-01
The circuit mechanisms behind shared neural variability (noise correlation) and its dependence on neural state are poorly understood. Visual attention is well-suited to constrain cortical models of response variability because attention both increases firing rates and their stimulus sensitivity, as well as decreases noise correlations. We provide a novel analysis of population recordings in rhesus primate visual area V4 showing that a single biophysical mechanism may underlie these diverse neural correlates of attention. We explore model cortical networks where top-down mediated increases in excitability, distributed across excitatory and inhibitory targets, capture the key neuronal correlates of attention. Our models predict that top-down signals primarily affect inhibitory neurons, whereas excitatory neurons are more sensitive to stimulus specific bottom-up inputs. Accounting for trial variability in models of state dependent modulation of neuronal activity is a critical step in building a mechanistic theory of neuronal cognition. DOI: http://dx.doi.org/10.7554/eLife.23978.001 PMID:28590902
A Multifactor Approach to Research in Instructional Technology.
ERIC Educational Resources Information Center
Ragan, Tillman J.
In a field such as instructional design, explanations of educational outcomes must necessarily consider multiple input variables. To adequately understand the contribution made by the independent variables, it is helpful to have a visual conception of how the input variables interrelate. Two-variable models are adequately represented by a two…
ENSO detection and use to inform the operation of large scale water systems
NASA Astrophysics Data System (ADS)
Pham, Vuong; Giuliani, Matteo; Castelletti, Andrea
2016-04-01
El Nino Southern Oscillation (ENSO) is a large-scale, coupled ocean-atmosphere phenomenon occurring in the tropical Pacific Ocean, and is considered one of the most significant factors causing hydro-climatic anomalies throughout the world. Water systems operations could benefit from a better understanding of this global phenomenon, which has the potential for enhancing the accuracy and lead-time of long-range streamflow predictions. In turn, these are key to designing interannual water transfers in large-scale water systems to counteract the increasingly frequent extremes induced by a changing climate. Although the ENSO teleconnection is well defined in some locations, such as the Western USA and Australia, there is no consensus on how it can be detected and used in other river basins, particularly in Europe, Africa, and Asia. In this work, we contribute a general framework relying on Input Variable Selection techniques for detecting the ENSO teleconnection and using this information to improve water reservoir operations. The core of our procedure is the Iterative Input variable Selection (IIS) algorithm, which is employed to find the most relevant determinants of streamflow variability for deriving predictive models based on the selected inputs, as well as to find the most valuable information for conditioning operating decisions. Our framework is applied to the multipurpose operations of the Hoa Binh reservoir in the Red River basin (Vietnam), taking into account hydropower production, water supply for irrigation, and flood mitigation during the monsoon season. Numerical results show that our framework is able to quantify the relationship between the ENSO fluctuations and the Red River basin hydrology. Moreover, we demonstrate that this ENSO teleconnection represents valuable information for improving the operations of the Hoa Binh reservoir.
The evolutionary and ecological roots of human social organization
Kaplan, Hillard S.; Hooper, Paul L.; Gurven, Michael
2009-01-01
Social organization among human foragers is characterized by a three-generational system of resource provisioning within families, long-term pair-bonding between men and women, high levels of cooperation between kin and non-kin, and relatively egalitarian social relationships. In this paper, we suggest that these core features of human sociality result from the learning- and skill-intensive human foraging niche, which is distinguished by a late age-peak in caloric production, high complementarity between male and female inputs to offspring viability, high gains to cooperation in production and risk-reduction, and a lack of economically defensible resources. We present an explanatory framework for understanding variation in social organization across human societies, highlighting the interactive effects of four key ecological and economic variables: (i) the role of skill in resource production; (ii) the degree of complementarity in male and female inputs into production; (iii) economies of scale in cooperative production and competition; and (iv) the economic defensibility of physical inputs into production. Finally, we apply this framework to understanding variation in social and political organization across foraging, horticulturalist, pastoralist and agriculturalist societies. PMID:19805435
Agriculture and Climate Change in Global Scenarios: Why Don't the Models Agree
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Gerald; van der Mensbrugghe, Dominique; Ahammad, Helal
Agriculture is unique among economic sectors in the nature of impacts from climate change. The production activity that transforms inputs into agricultural outputs makes direct use of weather inputs. Previous studies of the impacts of climate change on agriculture have reported substantial differences in outcomes of key variables such as prices, production, and trade. These divergent outcomes arise from differences in model inputs and model specification. The goal of this paper is to review climate change results and underlying determinants from a model comparison exercise with 10 of the leading global economic models that include significant representation of agriculture. By providing common productivity drivers that include climate change effects, differences in model outcomes are reduced. All models show higher prices in 2050 because of negative productivity shocks from climate change. The magnitude of the price increases, and the adaptation responses, differ significantly across the various models. Substantial differences exist in the structural parameters affecting demand, area, and yield, and should be a topic for future research.
Flight dynamics analysis and simulation of heavy lift airships, volume 4. User's guide: Appendices
NASA Technical Reports Server (NTRS)
Emmen, R. D.; Tischler, M. B.
1982-01-01
This table contains all of the input variables to the three programs. The variables are arranged according to the namelist groups in which they appear in the data files. The program name, subroutine name, definition and, where appropriate, a default input value and any restrictions are listed with each variable. The default input values are user supplied, not generated by the computer. These values remove a specific effect from the calculations, as explained in the table. The phrase "not used" indicates that a variable is not used in the calculations and is included for identification purposes only. The engineering symbol, where it exists, is listed to assist the user in correlating these inputs with the discussion in the Technical Manual.
Artificial neural networks for modeling ammonia emissions released from sewage sludge composting
NASA Astrophysics Data System (ADS)
Boniecki, P.; Dach, J.; Pilarski, K.; Piekarska-Boniecka, H.
2012-09-01
The project was designed to develop, test and validate an original Neural Model describing ammonia emissions generated in composting sewage sludge. The composting mix was to include the addition of such selected structural ingredients as cereal straw, sawdust and tree bark. All created neural models contain 7 input variables (chemical and physical parameters of composting) and 1 output (ammonia emission). The α data file was subdivided into three subfiles: the learning file (ZU) containing 330 cases, the validation file (ZW) containing 110 cases and the test file (ZT) containing 110 cases. The standard deviation ratios (for all 4 created networks) ranged from 0.193 to 0.218. For all of the selected models, the correlation coefficient reached the high values of 0.972-0.981. The results show that the predictive neural model describing ammonia emissions from composted sewage sludge is well suited for assessing such emissions. A sensitivity analysis of the model with respect to the input variables of the process in question showed that the key parameters describing ammonia emissions released in composting sewage sludge are pH and the carbon-to-nitrogen ratio (C:N).
Production Function Geometry with "Knightian" Total Product
ERIC Educational Resources Information Center
Truett, Dale B.; Truett, Lila J.
2007-01-01
Authors of principles and price theory textbooks generally illustrate short-run production using a total product curve that displays first increasing and then diminishing marginal returns to employment of the variable input(s). Although it seems reasonable that a temporary range of increasing returns to variable inputs will likely occur as…
Team performance in the Italian NHS: the role of reflexivity.
Urbini, Flavio; Callea, Antonino; Chirumbolo, Antonio; Talamo, Alessandra; Ingusci, Emanuela; Ciavolino, Enrico
2018-04-09
Purpose: The purpose of this paper is twofold: first, to investigate the goodness of the input-process-output (IPO) model in order to evaluate work team performance within the Italian National Health Care System (NHS); and second, to test the mediating role of reflexivity as an overarching process factor between input and output. Design/methodology/approach: The Italian version of the Aston Team Performance Inventory was administered to 351 employees working in teams in the Italian NHS. Mediation analyses with latent variables were performed via structural equation modeling (SEM); the significance of total, direct, and indirect effects was tested via bootstrapping. Findings: Underpinned by the IPO framework, the results of SEM supported the mediational hypotheses. First, the application of the IPO model in the Italian NHS showed adequate fit indices, showing that the process mediates the relationship between input and output factors. Second, reflexivity mediated the relationship between input and output, influencing some aspects of team performance. Practical implications: The results provide useful information for HRM policies, improving the process dimensions of the IPO model via the mediating role of reflexivity as a key factor in team performance. Originality/value: This study is one of a limited number of studies that have applied the IPO model in the Italian NHS. Moreover, no study has yet examined the role of reflexivity as a mediator between input and output factors in the IPO model.
NASA Technical Reports Server (NTRS)
Rayner, J. T.; Chuter, T. C.; Mclean, I. S.; Radostitz, J. V.; Nolt, I. G.
1988-01-01
A technique for establishing a stable intermediate temperature stage in liquid He/liquid N2 double vessel cryostats is described. The tertiary cold stage, which can be tuned to any temperature between 10 and 60 K, is ideal for cooling IR sensors for use in astronomy and physics applications. The device is called a variable-conductance gas switch. It is essentially a small chamber, located between the cold stage and liquid helium cold-face, whose thermal conductance may be controlled by varying the pressure of helium gas within the chamber. A key feature of this device is the large range of temperature control achieved with a very small (less than 10 mW) heat input from the cryogenic temperature control switch.
Bio-inspired online variable recruitment control of fluidic artificial muscles
NASA Astrophysics Data System (ADS)
Jenkins, Tyler E.; Chapman, Edward M.; Bryant, Matthew
2016-12-01
This paper details the creation of a hybrid variable recruitment control scheme for fluidic artificial muscle (FAM) actuators with an emphasis on maximizing system efficiency and switching control performance. Variable recruitment is the process of altering a system’s active number of actuators, allowing operation in distinct force regimes. Previously, FAM variable recruitment was only quantified with offline, manual valve switching; this study addresses the creation and characterization of novel, on-line FAM switching control algorithms. The bio-inspired algorithms are implemented in conjunction with a PID and model-based controller, and applied to a simulated plant model. Variable recruitment transition effects and chatter rejection are explored via a sensitivity analysis, allowing a system designer to weigh tradeoffs in actuator modeling, algorithm choice, and necessary hardware. Variable recruitment is further developed through simulation of a robotic arm tracking a variety of spline position inputs, requiring several levels of actuator recruitment. Switching controller performance is quantified and compared with baseline systems lacking variable recruitment. The work extends current variable recruitment knowledge by creating novel online variable recruitment control schemes, and exploring how online actuator recruitment affects system efficiency and control performance. Key topics associated with implementing a variable recruitment scheme, including the effects of modeling inaccuracies, hardware considerations, and switching transition concerns are also addressed.
Postselection technique for quantum channels with applications to quantum cryptography.
Christandl, Matthias; König, Robert; Renner, Renato
2009-01-16
We propose a general method for studying properties of quantum channels acting on an n-partite system, whose action is invariant under permutations of the subsystems. Our main result is that, in order to prove that a certain property holds for an arbitrary input, it is sufficient to consider the case where the input is a particular de Finetti-type state, i.e., a state which consists of n identical and independent copies of an (unknown) state on a single subsystem. Our technique can be applied to the analysis of information-theoretic problems. For example, in quantum cryptography, we get a simple proof for the fact that security of a discrete-variable quantum key distribution protocol against collective attacks implies security of the protocol against the most general attacks. The resulting security bounds are tighter than previously known bounds obtained with help of the exponential de Finetti theorem.
Wang, Jie-sheng; Han, Shuang; Shen, Na-na; Li, Shu-xia
2014-01-01
To meet the forecasting target for key technology indicators in the flotation process, a BP neural network soft-sensor model based on feature extraction from flotation froth images and optimized by a shuffled cuckoo search algorithm is proposed. Based on digital image processing techniques, the color features in HSI color space, the visual features based on the gray-level co-occurrence matrix, and the shape characteristics based on the geometric theory of flotation froth images are extracted, respectively, as the input variables of the proposed soft-sensor model. Then the isometric mapping method is used to reduce the input dimension, the network size, and the learning time of the BP neural network. Finally, a shuffled cuckoo search algorithm is adopted to optimize the BP neural network soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy. PMID:25133210
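A rough sketch of the pipeline shape described above, using scikit-learn stand-ins: isometric mapping (Isomap) cuts the input dimension of the image features before a small feed-forward regressor. Standard gradient training replaces the shuffled cuckoo search optimization step, and the feature matrix is a random placeholder.

    import numpy as np
    from sklearn.manifold import Isomap
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.random((500, 21))   # placeholder froth-image features
    y = rng.random(500)         # placeholder process indicator

    model = make_pipeline(
        StandardScaler(),
        Isomap(n_components=5),   # isometric-mapping dimension reduction
        MLPRegressor(hidden_layer_sizes=(12,), max_iter=2000, random_state=0),
    )
    model.fit(X, y)
    print(model.predict(X[:3]))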
Procelewska, Joanna; Galilea, Javier Llamas; Clerc, Frederic; Farrusseng, David; Schüth, Ferdi
2007-01-01
The objective of this work is the construction of a correlation between the characteristics of heterogeneous catalysts, encoded in a descriptor vector, and their experimentally measured performances in the propene oxidation reaction. In this paper the key issue in the modeling process, namely the selection of adequate input variables, is explored. Several data-driven feature selection strategies were applied in order to estimate the differences in variance and information content of various attributes, and furthermore to compare their relative importance. Quantitative property-activity relationship techniques using probabilistic neural networks were used for the creation of various semi-empirical models. Finally, a robust classification model was obtained, assigning selected attributes of solid compounds as input to an appropriate performance class in the model reaction. It became evident that mathematical support for the primary attribute set proposed by chemists can be highly desirable.
Applications of information theory, genetic algorithms, and neural models to predict oil flow
NASA Astrophysics Data System (ADS)
Ludwig, Oswaldo; Nunes, Urbano; Araújo, Rui; Schnitman, Leizer; Lepikson, Herman Augusto
2009-07-01
This work introduces a new information-theoretic methodology for choosing variables and their time lags in a prediction setting, particularly when neural networks are used in non-linear modeling. The first contribution of this work is the Cross Entropy Function (XEF) proposed to select input variables and their lags in order to compose the input vector of black-box prediction models. The proposed XEF method is more appropriate than the usually applied Cross Correlation Function (XCF) when the relationship among the input and output signals comes from a non-linear dynamic system. The second contribution is a method that minimizes the Joint Conditional Entropy (JCE) between the input and output variables by means of a Genetic Algorithm (GA). The aim is to take into account the dependence among the input variables when selecting the most appropriate set of inputs for a prediction problem. In short, these methods can be used to assist the selection of input training data that have the necessary information to predict the target data. The proposed methods are applied to a petroleum engineering problem: predicting oil production. Experimental results obtained with a real-world dataset are presented, demonstrating the feasibility and effectiveness of the method.
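The flavor of information-theoretic lag selection can be sketched with mutual information as a stand-in for the paper's XEF/JCE criteria (the GA step is omitted, and the series below are toy data):

    import numpy as np
    from sklearn.feature_selection import mutual_info_regression

    def rank_lags(x, y, max_lag=24):
        # Score lagged copies of input series x against target y by
        # mutual information; a higher score suggests a more useful lag.
        lags = np.arange(1, max_lag + 1)
        cols = np.column_stack([np.roll(x, k) for k in lags])[max_lag:]
        mi = mutual_info_regression(cols, y[max_lag:], random_state=0)
        return sorted(zip(lags, mi), key=lambda t: -t[1])

    t = np.arange(600)
    x = np.sin(0.1 * t) + 0.1 * np.random.default_rng(0).standard_normal(600)
    y = np.roll(x, 5)           # y depends on x five steps back (toy)
    print(rank_lags(x, y)[:3])  # lag 5 should rank at the top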
Improving hospital weekend handover: a user-centered, standardised approach
Mehra, Avi; Henein, Christin
2014-01-01
Clinical handover remains one of the most perilous procedures in medicine (1). Weekend handover has emerged as a key area of concern, with high variability in handover processes across hospitals (1, 2, 4, 5–10). Studying weekend handover processes within medicine at an acute teaching hospital revealed huge variability in documented content and structure. A total of 12 different pro formas were in use by the medical day team to hand over to the weekend on-call team. A Likert survey of doctors revealed that 93% felt the current handover system needed improvement, with 71% stating that it did not ensure patient safety (chi-squared, p<0.001, n=32). Semi-structured interviews of doctors identified common themes, including "a lack of consistency in approach", "poor standardization" and "high variability". To address these concerns, a standardized handover pro forma was developed using Royal College of Physicians (RCP) guidelines (2), with direct end-user input. Results following implementation revealed a considerable improvement in documentation of ceiling of care, urgency of task and team member assignment, with 100% uptake of the new pro forma at both the 4-week and 6-month post-implementation analyses. 88% of doctors surveyed perceived that the new pro forma improved patient safety (p<0.01, n=25), with 62% highlighting that it allowed doctors to work more efficiently. Results also revealed that 44% felt further improvements were needed, highlighting electronic solutions and handover training as the main priorities. Handover briefing was subsequently incorporated into junior doctor induction, and education modules were delivered, with good feedback. Following collaboration with key stakeholders and with end-user input, integrated electronic handover software was designed and funding secured. The software is currently under final development. Introducing a standardized handover pro forma can be an effective initial step in improving weekend handover. Handover education and end-user involvement are key to improving the process. Electronic handover solutions have been shown to significantly increase the quality of handover and are worth considering (9, 10). PMID:26734248
Parallel sort with a ranged, partitioned key-value store in a high performance computing environment
Bent, John M.; Faibish, Sorin; Grider, Gary; Torres, Aaron; Poole, Stephen W.
2016-01-26
Improved sorting techniques are provided that perform a parallel sort using a ranged, partitioned key-value store in a high performance computing (HPC) environment. A plurality of input data files comprising unsorted key-value data in a partitioned key-value store are sorted. The partitioned key-value store comprises a range server for each of a plurality of ranges. Each input data file has an associated reader thread. Each reader thread reads the unsorted key-value data in the corresponding input data file and performs a local sort of the unsorted key-value data to generate sorted key-value data. A plurality of sorted, ranged subsets of each of the sorted key-value data are generated based on the plurality of ranges. Each sorted, ranged subset corresponds to a given one of the ranges and is provided to one of the range servers corresponding to the range of the sorted, ranged subset. Each range server sorts the received sorted, ranged subsets and provides a sorted range. A plurality of the sorted ranges are concatenated to obtain a globally sorted result.
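In outline, the claimed flow can be mimicked in a few lines: reader threads locally sort and scatter keys into range buckets, per-range servers sort their buckets, and the ranges concatenate into a global order. This toy sketch (plain lists and threads, hypothetical range boundaries) only illustrates the data movement, not the HPC key-value store itself.

    from bisect import bisect_left
    from concurrent.futures import ThreadPoolExecutor

    def parallel_range_sort(files, boundaries):
        # One bucket per key range; bucket i holds keys of range i.
        buckets = [[] for _ in range(len(boundaries) + 1)]

        def read_and_scatter(data):
            for key in sorted(data):                 # reader-local sort
                buckets[bisect_left(boundaries, key)].append(key)

        with ThreadPoolExecutor() as pool:           # one reader per file
            list(pool.map(read_and_scatter, files))
        out = []
        for bucket in buckets:                       # per-range-server sort
            out.extend(sorted(bucket))
        return out                                   # globally sorted

    print(parallel_range_sort([[9, 2, 5], [7, 1, 8]], boundaries=[4, 8]))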
Nonequilibrium air radiation (Nequair) program: User's manual
NASA Technical Reports Server (NTRS)
Park, C.
1985-01-01
A supplement to the data relating to the calculation of nonequilibrium radiation in flight regimes of aeroassisted orbital transfer vehicles contains the listings of the computer code NEQAIR (Nonequilibrium Air Radiation), its primary input data, and explanation of the user-supplied input variables. The user-supplied input variables are the thermodynamic variables of air at a given point, i.e., number densities of various chemical species, translational temperatures of heavy particles and electrons, and vibrational temperature. These thermodynamic variables do not necessarily have to be in thermodynamic equilibrium. The code calculates emission and absorption characteristics of air under these given conditions.
Analytic uncertainty and sensitivity analysis of models with input correlations
NASA Astrophysics Data System (ADS)
Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu
2018-03-01
Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. A practical application of the method to the uncertainty and sensitivity analysis of a deterministic HIV model is also presented.
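The first-order propagation underlying such an analysis is Var(Y) ≈ g^T C g, where g is the model gradient at the input means and C the input covariance matrix; the off-diagonal terms of C carry the correlation contribution. Below is a small numerical stand-in (the paper's treatment is analytic) on a toy product model.

    import numpy as np

    def first_order_variance(f, mean, cov, rel_step=1e-5):
        # Var(Y) ~ g' C g with g estimated by forward differences.
        mean = np.asarray(mean, float)
        g = np.empty(mean.size)
        f0 = f(mean)
        for i in range(mean.size):
            x = mean.copy()
            h = rel_step * max(abs(mean[i]), 1.0)
            x[i] += h
            g[i] = (f(x) - f0) / h
        return g @ np.asarray(cov, float) @ g

    f = lambda x: x[0] * x[1]
    print(first_order_variance(f, [1.0, 2.0], [[0.04, 0.00], [0.00, 0.09]]))
    print(first_order_variance(f, [1.0, 2.0], [[0.04, 0.03], [0.03, 0.09]]))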
Harmonize input selection for sediment transport prediction
NASA Astrophysics Data System (ADS)
Afan, Haitham Abdulmohsin; Keshtegar, Behrooz; Mohtar, Wan Hanna Melini Wan; El-Shafie, Ahmed
2017-09-01
In this paper, three modeling approaches, a Neural Network (NN), the Response Surface Method (RSM) and a response surface method based on Global Harmony Search (GHS), are applied to predict the daily time series of suspended sediment load. Generally, the input variables for forecasting suspended sediment load are selected manually, based on the maximum correlations of the candidate input variables, in the NN- and RSM-based approaches. Here the RSM is improved to select the input variables using the error terms of the training data based on the GHS, termed the response surface method with global harmony search (RSM-GHS) modeling method. A second-order polynomial function with cross terms is applied to calibrate the time series of suspended sediment load with three, four and five input variables in the proposed RSM-GHS. The linear, square and cross terms of twenty input variables of antecedent values of suspended sediment load and water discharge are investigated to achieve the best predictions of the RSM based on the GHS method. The performances of the NN, RSM and proposed RSM-GHS, in terms of both accuracy and simplicity, are compared through several predicted and error statistics. The results illustrate that the proposed RSM-GHS is as uncomplicated as the RSM but performs better, with fewer errors and better correlation (R = 0.95, MAE = 18.09 (ton/day), RMSE = 25.16 (ton/day)) compared to the ANN (R = 0.91, MAE = 20.17 (ton/day), RMSE = 33.09 (ton/day)) and RSM (R = 0.91, MAE = 20.06 (ton/day), RMSE = 31.92 (ton/day)) for all types of input variables.
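The response-surface backbone, a second-order polynomial with linear, square and cross terms, is easy to sketch; the GHS-driven selection of input subsets is the paper's contribution and is not reproduced here. The data below are placeholders for antecedent load and discharge values.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(1)
    X = rng.random((300, 4))     # e.g. lagged load/discharge predictors
    y = X[:, 0] * X[:, 1] + X[:, 2] ** 2 + 0.05 * rng.standard_normal(300)

    rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    rsm.fit(X, y)                # fits all linear, square and cross terms
    print(round(rsm.score(X, y), 3))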
Akimoto, Yuki; Yugi, Katsuyuki; Uda, Shinsuke; Kudo, Takamasa; Komori, Yasunori; Kubota, Hiroyuki; Kuroda, Shinya
2013-01-01
Cells use common signaling molecules for the selective control of downstream gene expression and cell-fate decisions. The relationship between signaling molecules and downstream gene expression and cellular phenotypes is a multiple-input and multiple-output (MIMO) system and is difficult to understand due to its complexity. For example, it has been reported that, in PC12 cells, different types of growth factors activate MAP kinases (MAPKs) including ERK, JNK, and p38, and CREB, for selective protein expression of immediate early genes (IEGs) such as c-FOS, c-JUN, EGR1, JUNB, and FOSB, leading to cell differentiation, proliferation and cell death; however, how multiple-inputs such as MAPKs and CREB regulate multiple-outputs such as expression of the IEGs and cellular phenotypes remains unclear. To address this issue, we employed a statistical method called partial least squares (PLS) regression, which involves a reduction of the dimensionality of the inputs and outputs into latent variables and a linear regression between these latent variables. We measured 1,200 data points for MAPKs and CREB as the inputs and 1,900 data points for IEGs and cellular phenotypes as the outputs, and we constructed the PLS model from these data. The PLS model highlighted the complexity of the MIMO system and growth factor-specific input-output relationships of cell-fate decisions in PC12 cells. Furthermore, to reduce the complexity, we applied a backward elimination method to the PLS regression, in which 60 input variables were reduced to 5 variables, including the phosphorylation of ERK at 10 min, CREB at 5 min and 60 min, AKT at 5 min and JNK at 30 min. The simple PLS model with only 5 input variables demonstrated a predictive ability comparable to that of the full PLS model. The 5 input variables effectively extracted the growth factor-specific simple relationships within the MIMO system in cell-fate decisions in PC12 cells.
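A hedged sketch of the two ingredients named above, PLS regression plus greedy backward elimination of input variables, using scikit-learn; the stopping rule and scoring here are generic choices, not the authors' exact procedure.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    def backward_eliminate(X, y, n_keep=5, n_components=2):
        # Repeatedly drop the input whose removal hurts CV R^2 least.
        keep = list(range(X.shape[1]))
        while len(keep) > n_keep:
            trials = []
            for j in keep:
                cols = [k for k in keep if k != j]
                pls = PLSRegression(n_components=min(n_components, len(cols)))
                trials.append((cross_val_score(pls, X[:, cols], y).mean(), j))
            keep.remove(max(trials)[1])   # least informative input
        return keep

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 12))
    y = X[:, 2] - 0.5 * X[:, 7] + 0.1 * rng.standard_normal(200)
    print(backward_eliminate(X, y))   # columns 2 and 7 should survive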
NASA Astrophysics Data System (ADS)
Bönecke, Eric; Lück, Erika; Gründling, Ralf; Rühlmann, Jörg; Franko, Uwe
2016-04-01
Today, the knowledge of within-field variability is essential for numerous purposes, including practical issues, such as precision and sustainable soil management. Therefore, process-oriented soil models have been applied for a considerable time to answer question of spatial soil nutrient and water dynamics, although, they can only be as consistent as their variation and resolution of soil input data. Traditional approaches, describe distribution of soil types, soil texture or other soil properties for greater soil units through generalised point information, e.g. from classical soil survey maps. Those simplifications are known to be afflicted with large uncertainties. Varying soil, crop or yield conditions are detected even within such homogenised soil units. However, recent advances of non-invasive soil survey and on-the-go monitoring techniques, made it possible to obtain vertical and horizontal dense information (3D) about various soil properties, particularly soil texture distribution which serves as an essential soil key variable affecting various other soil properties. Thus, in this study we based our simulations on detailed 3D soil type distribution (STD) maps (4x4 m) to adjacently built-up sufficient informative soil profiles including various soil physical and chemical properties. Our estimates of spatial STD are based on high-resolution lateral and vertical changes of electrical resistivity (ER), detected by a relatively new multi-sensor on-the-go ER monitoring device. We performed an algorithm including fuzzy-c-mean (FCM) logic and traditional soil classification to estimate STD from those inverted and layer-wise available ER data. STD is then used as key input parameter for our carbon, nitrogen and water transport model. We identified Pedological horizon depths and inferred hydrological soil variables (field capacity, permanent wilting point) from pedotransferfunctions (PTF) for each horizon. Furthermore, the spatial distribution of soil organic carbon (SOC), as essential input variable, was predicted by measured soil samples and associated to STD of the upper 30 cm. The comprehensive and high-resolution (4x4 m) soil profile information (up to 2 m soil depth) were then used to initialise a soil process model (Carbon and Nitrogen Dynamics - CANDY) for soil functional modelling (daily steps of matter fluxes, soil temperature and water balances). Our study was conducted on a practical field (~32,000 m²) of an agricultural farm in Central Germany with Chernozem soils under arid conditions (average rainfall < 550 mm). This soil region is known to have differences in soil structure mainly occurring within the subsoil, since topsoil conditions are described as homogenous. The modelled soil functions considered local climate information and practical farming activities. Results show, as expected, distinguished functional variability, both on spatial and temporal resolution for subsoil evident structures, e.g. visible differences for available water capacity within 0-100 cm but homogenous conditions for the topsoil.
Model-free adaptive control of advanced power plants
Cheng, George Shu-Xing; Mulkey, Steven L.; Wang, Qiang
2015-08-18
A novel 3-Input-3-Output (3.times.3) Model-Free Adaptive (MFA) controller with a set of artificial neural networks as part of the controller is introduced. A 3.times.3 MFA control system using the inventive 3.times.3 MFA controller is described to control key process variables including Power, Steam Throttle Pressure, and Steam Temperature of boiler-turbine-generator (BTG) units in conventional and advanced power plants. Those advanced power plants may comprise Once-Through Supercritical (OTSC) Boilers, Circulating Fluidized-Bed (CFB) Boilers, and Once-Through Supercritical Circulating Fluidized-Bed (OTSC CFB) Boilers.
Jackson, B Scott
2004-10-01
Many different types of integrate-and-fire models have been designed in order to explain how it is possible for a cortical neuron to integrate over many independent inputs while still producing highly variable spike trains. Within this context, the variability of spike trains has been almost exclusively measured using the coefficient of variation of interspike intervals. However, another important statistical property that has been found in cortical spike trains and is closely associated with their high firing variability is long-range dependence. We investigate the conditions, if any, under which such models produce output spike trains with both interspike-interval variability and long-range dependence similar to those that have previously been measured from actual cortical neurons. We first show analytically that a large class of high-variability integrate-and-fire models is incapable of producing such outputs based on the fact that their output spike trains are always mathematically equivalent to renewal processes. This class of models subsumes a majority of previously published models, including those that use excitation-inhibition balance, correlated inputs, partial reset, or nonlinear leakage to produce outputs with high variability. Next, we study integrate-and-fire models that have (nonPoissonian) renewal point process inputs instead of the Poisson point process inputs used in the preceding class of models. The confluence of our analytical and simulation results implies that the renewal-input model is capable of producing high variability and long-range dependence comparable to that seen in spike trains recorded from cortical neurons, but only if the interspike intervals of the inputs have infinite variance, a physiologically unrealistic condition. Finally, we suggest a new integrate-and-fire model that does not suffer any of the previously mentioned shortcomings. By analyzing simulation results for this model, we show that it is capable of producing output spike trains with interspike-interval variability and long-range dependence that match empirical data from cortical spike trains. This model is similar to the other models in this study, except that its inputs are fractional-gaussian-noise-driven Poisson processes rather than renewal point processes. In addition to this model's success in producing realistic output spike trains, its inputs have long-range dependence similar to that found in most subcortical neurons in sensory pathways, including the inputs to cortex. Analysis of output spike trains from simulations of this model also shows that a tight balance between the amounts of excitation and inhibition at the inputs to cortical neurons is not necessary for high interspike-interval variability at their outputs. Furthermore, in our analysis of this model, we show that the superposition of many fractional-gaussian-noise-driven Poisson processes does not approximate a Poisson process, which challenges the common assumption that the total effect of a large number of inputs on a neuron is well represented by a Poisson process.
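A minimal leaky integrate-and-fire simulation shows the kind of statistic at issue; with plain Poisson input drive it gives the renewal-type baseline the authors argue is insufficient, not their fractional-Gaussian-noise model. Parameters are arbitrary.

    import numpy as np

    def lif_isi_cv(rate_in=2000.0, w=0.022, tau=0.02, vth=1.0,
                   T=50.0, dt=1e-4, seed=0):
        # Leaky integrate-and-fire driven by Poisson input counts;
        # returns the coefficient of variation of the output ISIs.
        rng = np.random.default_rng(seed)
        counts = rng.poisson(rate_in * dt, int(T / dt))
        v, last, isis = 0.0, None, []
        for k, c in enumerate(counts):
            v += dt * (-v / tau) + w * c
            if v >= vth:                  # spike and reset
                v = 0.0
                if last is not None:
                    isis.append((k - last) * dt)
                last = k
        isis = np.asarray(isis)
        return isis.std() / isis.mean()

    print(lif_isi_cv())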
Verification of models for ballistic movement time and endpoint variability.
Lin, Ray F; Drury, Colin G
2013-01-01
A hand control movement is composed of several ballistic movements. The time required to perform a ballistic movement and its endpoint variability are two important properties in developing movement models. The purpose of this study was to test potential models for predicting these two properties. Twelve participants conducted ballistic movements of specific amplitudes using a drawing tablet. The measured data of movement time and endpoint variability were then used to verify the models. Hoffmann and Gan's movement time model (Hoffmann, 1981; Gan and Hoffmann, 1988) successfully predicted more than 90.7% of the data variance for 84 individual measurements. A new, theoretically developed ballistic movement variability model proved better than Howarth, Beggs, and Bowden's (1971) model, predicting on average 84.8% of stopping-variable error and 88.3% of aiming-variable error. These two validated models will help build solid theoretical movement models and evaluate input devices. This article provides better models for predicting end accuracy and movement time of ballistic movements, which are desirable in rapid aiming tasks such as keying in numbers on a smart phone. The models allow better design of aiming tasks, for example button sizes on mobile phones for different user populations.
Performance investigation of optical multicast overlay system using orthogonal modulation format
NASA Astrophysics Data System (ADS)
Singh, Simranjit; Singh, Sukhbir; Kaur, Ramandeep; Kaler, R. S.
2015-03-01
We propose a bandwidth-efficient wavelength division multiplexed passive optical network (WDM-PON) to simultaneously transmit 60 Gb/s unicast and 10 Gb/s multicast services with 10 Gb/s upstream. The differential phase shift keying (DPSK) multicast signal is superimposed onto multiplexed non-return-to-zero/polarization shift keying (NRZ/PolSK) orthogonally modulated data signals. Upstream amplitude shift keying (ASK) signals are formed without the use of any additional light source and are superimposed onto the received unicast NRZ/PolSK signal before being transmitted back to the optical line terminal (OLT). We also investigated the proposed WDM-PON system for variable optical input power and single-mode fiber transmission distance in multicast-enabled and multicast-disabled modes. The measured quality factor for all unicast and multicast signals is in the acceptable range (>6). The original contribution of this paper is to propose a bandwidth-efficient WDM-PON system that can be extended to high-speed scenarios at reduced channel spacing and is expected to be more technically viable due to the use of optical orthogonal modulation formats.
Influential input classification in probabilistic multimedia models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.
1999-05-01
Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.
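A compact version of the screening step described, ranking stochastic inputs by the strength of the rank correlation between their Monte Carlo draws and the model outcome, might look as follows (the scenario model is a stand-in):

    import numpy as np
    from scipy.stats import spearmanr

    def influence_ranking(X, y):
        # |Spearman rho| between each sampled input column and the
        # outcome; larger magnitude suggests a more influential input.
        rho = np.array([abs(spearmanr(X[:, i], y)[0])
                        for i in range(X.shape[1])])
        order = np.argsort(rho)[::-1]
        return order, rho[order]

    rng = np.random.default_rng(0)
    X = rng.lognormal(size=(5000, 4))              # uncertain inputs
    y = X[:, 0] * X[:, 2] ** 2 + 0.01 * X[:, 1]    # toy fate model
    print(influence_ranking(X, y))                 # column 2 should lead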
Peer Educators and Close Friends as Predictors of Male College Students' Willingness to Prevent Rape
ERIC Educational Resources Information Center
Stein, Jerrold L.
2007-01-01
Astin's (1977, 1991, 1993) input-environment-outcome (I-E-O) model provided a conceptual framework for this study which measured 156 male college students' willingness to prevent rape (outcome variable). Predictor variables included personal attitudes (input variable), perceptions of close friends' attitudes toward rape and rape prevention…
Neural networks: further insights into error function, generalized weights and others
2016-01-01
The article is a continuation of a previous one, providing further insights into the structure of neural networks (NN). Key concepts of NN, including the activation function, error function, learning rate and generalized weights, are introduced. NN topology can be visualized with the generic plot() function by passing a “nn” class object. Generalized weights assist in the interpretation of an NN model with respect to the independent effect of individual input variables. A large variance of the generalized weights for a covariate indicates non-linearity of its independent effect. If the generalized weights of a covariate are approximately zero, the covariate is considered to have no effect on the outcome. Finally, prediction of new observations can be performed using the compute() function. Make sure that the feature variables passed to the compute() function are in the same order as in the training NN. PMID:27668220
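The abstract concerns R's neuralnet package; as a language-neutral illustration, the generalized weight of input i for case x is the derivative of the log-odds of the fitted probability with respect to x_i. A finite-difference version over any fitted classifier exposing predict_proba (scikit-learn here, used purely as a stand-in) is:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def generalized_weights(clf, X, eps=1e-4):
        # d/dx_i of log(p/(1-p)) per case; a wide spread across cases
        # in one column signals a non-linear effect of that covariate.
        logit = lambda Z: np.log(clf.predict_proba(Z)[:, 1]
                                 / clf.predict_proba(Z)[:, 0])
        base = logit(X)
        GW = np.empty_like(X, dtype=float)
        for i in range(X.shape[1]):
            Xp = X.copy()
            Xp[:, i] += eps
            GW[:, i] = (logit(Xp) - base) / eps
        return GW

    rng = np.random.default_rng(0)
    X = rng.standard_normal((300, 3))
    y = (X[:, 0] + X[:, 1] ** 2 > 0.5).astype(int)
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                        random_state=0).fit(X, y)
    print(generalized_weights(clf, X).var(axis=0))  # column 1 varies most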
NASA Astrophysics Data System (ADS)
Wang, Guocheng; Zhang, Wen; Sun, Wenjuan; Li, Tingting; Han, Pengfei
2017-10-01
Changes in the soil organic carbon (SOC) stock are determined by the balance between the carbon input from organic materials and the output from the decomposition of soil C. The fate of SOC in cropland soils plays a significant role in both sustainable agricultural production and climate change mitigation. The spatiotemporal changes of soil organic carbon in croplands in response to different carbon (C) input management and environmental conditions across the main global cereal systems were studied using a modeling approach. We also identified the key variables that drive SOC changes at a high spatial resolution (0.1° × 0.1°) and over a long timescale (54 years from 1961 to 2014). A widely used soil C turnover model (RothC) and state-of-the-art databases of soil and climate variables were used in the present study. The model simulations suggested that, on a global average, the cropland SOC density increased at annual rates of 0.22, 0.45 and 0.69 Mg C ha-1 yr-1 under crop residue retention rates of 30, 60 and 90 %, respectively. Increasing the quantity of C input could enhance soil C sequestration or reduce the rate of soil C loss, depending largely on the local soil and climate conditions. Spatially, under a specific crop residue retention rate, relatively higher soil C sinks were found across the central parts of the USA, western Europe, and the northern regions of China. Relatively smaller soil C sinks occurred in the high-latitude regions of both the Northern and Southern hemispheres, and SOC decreased across the equatorial zones of Asia, Africa and America. We found that SOC change was significantly influenced by the crop residue retention rate (linearly positive) and the edaphic variable of initial SOC content (linearly negative). Temperature had weak negative effects, and precipitation had significantly negative impacts on SOC changes. The results can help guide carbon input management practices to effectively mitigate climate change through soil C sequestration in croplands on a global scale.
The Effects of a Change in the Variability of Irrigation Water
NASA Astrophysics Data System (ADS)
Lyon, Kenneth S.
1983-10-01
This paper examines the short-run effects upon several variables of an increase in the variability of an input. The measure of an increase in the variability is the "mean preserving spread" suggested by Rothschild and Stiglitz (1970). The variables examined are real income (utility), expected profits, expected output, the quantity used of the controllable input, and the shadow price of the stochastic input. Four striking features of the results follow: (1) The concepts that have been useful in summarizing deterministic comparative static results are nearly absent when an input is stochastic. (2) Most of the signs of the partial derivatives depend upon more than concavity of the utility and production functions. (3) If the utility function is not "too" risk averse, then the risk-neutral results hold for the risk-aversion case. (4) If the production function is Cobb-Douglas, then definite results are achieved if the utility function is linear or if the "degree of risk-aversion" is "small."
Optimal allocation of testing resources for statistical simulations
NASA Astrophysics Data System (ADS)
Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick
2015-07-01
Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data of the input variables to better characterize their probability distributions can reduce the variance of statistical estimates. The methodology proposed determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses multivariate t-distribution and Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. This method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable in the output function and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.
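The resampling step described, drawing plausible population parameters given limited data, can be sketched with SciPy's wishart and multivariate_t distributions. The scalings below follow one common Bayesian-style parameterization and may differ from the paper's exact choice.

    import numpy as np
    from scipy.stats import multivariate_t, wishart

    def draw_population_params(xbar, S, n, seed=None):
        # One realization of (mean, covariance) consistent with n
        # observations of sample mean xbar and sample covariance S.
        xbar, S = np.asarray(xbar, float), np.asarray(S, float)
        p = xbar.size
        cov = wishart.rvs(df=n - 1, scale=S / (n - 1), random_state=seed)
        mean = multivariate_t.rvs(loc=xbar, shape=S / n, df=n - p,
                                  random_state=seed)
        return mean, cov

    m, C = draw_population_params([1.0, 2.0],
                                  [[0.5, 0.1], [0.1, 0.8]], n=15, seed=42)
    print(m, C, sep="\n")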
Input variable selection and calibration data selection for storm water quality regression models.
Sun, Siao; Bertrand-Krajewski, Jean-Luc
2013-01-01
Storm water quality models are useful tools in storm water management. Interest has been growing in analyzing existing data for developing models for urban storm water quality evaluations. It is important to select appropriate model inputs when many candidate explanatory variables are available. Model calibration and verification are essential steps in any storm water quality modeling. This study investigates input variable selection and calibration data selection in storm water quality regression models. The two selection problems interact with each other. A procedure is developed to carry out the two selection tasks in sequence. The procedure first selects model input variables using a cross-validation method. An appropriate number of variables are identified as model inputs to ensure that a model is neither overfitted nor underfitted. Based on the model input selection results, calibration data selection is studied. The uncertainty of model performance due to calibration data selection is investigated with a random selection method. A cluster-based approach is applied to enhance model calibration practice, based on the principle of selecting representative data for calibration. The comparison between results from the cluster selection method and random selection shows that the former can significantly improve the performance of calibrated models. It is found that the information content in calibration data is important in addition to its size.
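A generic version of the cross-validation-driven input selection step (the regression and scoring choices here are stand-ins, not the paper's exact setup):

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    def forward_select(X, y, max_vars=5):
        # Greedily add the candidate input that most improves CV R^2;
        # stop when nothing improves it (guards against overfitting).
        chosen, best = [], -np.inf
        while len(chosen) < max_vars:
            trials = [(cross_val_score(LinearRegression(),
                                       X[:, chosen + [j]], y).mean(), j)
                      for j in range(X.shape[1]) if j not in chosen]
            score, j = max(trials)
            if score <= best:
                break
            best, chosen = score, chosen + [j]
        return chosen

    rng = np.random.default_rng(3)
    X = rng.standard_normal((150, 10))
    y = 2 * X[:, 4] + X[:, 8] + 0.1 * rng.standard_normal(150)
    print(forward_select(X, y))   # expect columns 4 and 8 first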
The art of spacecraft design: A multidisciplinary challenge
NASA Technical Reports Server (NTRS)
Abdi, F.; Ide, H.; Levine, M.; Austel, L.
1989-01-01
Actual design turn-around time has become shorter due to the use of optimization techniques introduced into the design process. It seems that what, how and when to use these optimization techniques may be the key factor for future aircraft engineering operations. Another important aspect of this technique is that complex physical phenomena can be modeled by a simple mathematical equation. The new powerful multilevel methodology reduces time-consuming analysis significantly while maintaining the coupling effects. This simultaneous analysis method stems from the implicit function theorem and system sensitivity derivatives of input variables. Use of Taylor series expansion and finite-difference techniques for the sensitivity derivatives in each discipline makes this approach unique in separating dominant variables from nondominant variables. In this study, current Computational Fluid Dynamics (CFD) aerodynamic and sensitivity derivative/optimization techniques are applied to a simple cone-type forebody of a high-speed vehicle configuration to understand basic aerodynamic/structure interaction in a hypersonic flight condition.
Uncertainty Analysis for a Jet Flap Airfoil
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Cruz, Josue
2006-01-01
An analysis of variance (ANOVA) study was performed to quantify the potential uncertainties of lift and pitching moment coefficient calculations from a computational fluid dynamics code, relative to an experiment, for a jet flap airfoil configuration. Uncertainties due to a number of factors, including grid density, angle of attack and jet flap blowing coefficient, were examined. The ANOVA software produced a numerical model of the input coefficient data, as functions of the selected factors, to a user-specified order (linear, 2-factor interaction, quadratic, or cubic). Residuals between the model and actual data were also produced at each of the input conditions, and uncertainty confidence intervals (in the form of Least Significant Differences or LSD) for experimental, computational, and combined experimental/computational data sets were computed. The LSD bars indicate the smallest resolvable differences in the functional values (lift or pitching moment coefficient) attributable solely to changes in an independent variable, given just the input data points from the selected data sets. The software also provided a collection of diagnostics which evaluate the suitability of the input data set for use within the ANOVA process, and which examine the behavior of the resultant data, possibly suggesting transformations which should be applied to the data to reduce the LSD. The results illustrate some of the key features of, and results from, the uncertainty analysis studies, including the use of both numerical (continuous) and categorical (discrete) factors, the effects of the number and range of the input data points, and the effects of the number of factors considered simultaneously.
Multiple response optimization for higher dimensions in factors and responses
Lu, Lu; Chapman, Jessica L.; Anderson-Cook, Christine M.
2016-07-19
When optimizing a product or process with multiple responses, a two-stage Pareto front approach is a useful strategy to evaluate and balance trade-offs between different estimated responses to seek optimum input locations for achieving the best outcomes. After objectively eliminating non-contenders in the first stage by looking for a Pareto front of superior solutions, graphical tools can be used to identify a final solution in the second, subjective stage to compare options and match with user priorities. Until now, there have been limitations on the number of response variables and input factors that could effectively be visualized with existing graphical summaries. We present novel graphical tools that can be more easily scaled to higher dimensions, in both the input and response spaces, to facilitate informed decision making when simultaneously optimizing multiple responses. A key aspect of these graphics is that the potential solutions can be flexibly sorted to investigate specific queries, and that multiple aspects of the solutions can be simultaneously considered. As a result, recommendations are made about how to evaluate the impact of the uncertainty associated with the estimated response surfaces on decision making with higher dimensions.
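Stage one of such an approach, objectively discarding dominated candidates, reduces to a non-domination filter over the matrix of estimated responses (rows are candidate input locations, columns are objectives, all minimized in this sketch):

    import numpy as np

    def pareto_front(Y):
        # Indices of rows not dominated by any other row: no other row
        # is <= in every objective and strictly < in at least one.
        front = []
        for i, yi in enumerate(Y):
            dominated = any(np.all(yj <= yi) and np.any(yj < yi)
                            for j, yj in enumerate(Y) if j != i)
            if not dominated:
                front.append(i)
        return front

    Y = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 3.0], [4.0, 1.0]])
    print(pareto_front(Y))   # [0, 1, 3]; candidate [3, 3] is dominated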
Fabric filter model sensitivity analysis. Final report Jun 1978-Feb 1979
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dennis, R.; Klemm, H.A.; Battye, W.
1979-04-01
The report gives results of a series of sensitivity tests of a GCA fabric filter model, as a precursor to further laboratory and/or field tests. Preliminary tests had shown good agreement with field data. However, the apparent agreement between predicted and actual values was based on limited comparisons: validation was carried out without regard to optimization of the data inputs selected by the filter users or manufacturers. The sensitivity tests involved introducing into the model several hypothetical data inputs that reflect the expected ranges in the principal filter system variables. Such factors as air/cloth ratio, cleaning frequency, amount of cleaning, specific resistance coefficient K2, the number of compartments, and inlet concentration were examined in various permutations. A key objective of the tests was to determine the variables that require the greatest accuracy in estimation, based on their overall impact on model output. For K2 variations, the system resistance and emission properties showed little change, but the cleaning requirement changed drastically. On the other hand, a considerable difference in outlet dust concentration was indicated when the degree of fabric cleaning was varied. To make the findings more useful to persons assessing the probable success of proposed or existing filter systems, much of the data output is presented in graphs or charts.
A statistical survey of heat input parameters into the cusp thermosphere
NASA Astrophysics Data System (ADS)
Moen, J. I.; Skjaeveland, A.; Carlson, H. C.
2017-12-01
Based on three winters of observational data, we present those ionosphere parameters deemed most critical to realistic space weather ionosphere and thermosphere representation and prediction in regions impacted by variability in the cusp. The CHAMP spacecraft revealed large variability in cusp thermosphere densities, measuring frequent satellite drag enhancements, up to doublings. The community recognizes a clear need for more realistic representation of plasma flows and electron densities near the cusp. Existing average-value models produce order-of-magnitude errors in these parameters, resulting in large underestimations of predicted drag. We fill this knowledge gap with statistics-based specification of these key parameters over their range of observed values. The EISCAT Svalbard Radar (ESR) tracks plasma flow Vi, electron density Ne, and electron and ion temperatures Te and Ti, with consecutive 2-3 minute windshield-wipe scans of 1000 x 500 km areas. This allows mapping the maximum Ti of a large area within or near the cusp with high temporal resolution. In magnetic field-aligned mode the radar can measure high-resolution profiles of these plasma parameters. By deriving statistics for Ne and Ti, we enable derivation of thermosphere heating deposition under background and frictional-drag-dominated magnetic reconnection conditions. We separate our Ne and Ti profiles into quiescent and enhanced states, which are not closely correlated due to the spatial structure of the reconnection foot point. Use of our data-based parameter inputs can make order-of-magnitude corrections to input data driving thermosphere models, enabling removal of previous twofold drag errors.
Artificial neural network model for ozone concentration estimation and Monte Carlo analysis
NASA Astrophysics Data System (ADS)
Gao, Meng; Yin, Liting; Ning, Jicai
2018-07-01
Air pollution in the urban atmosphere directly affects public health; therefore, it is essential to predict air pollutant concentrations. Air quality is a complex function of emissions, meteorology and topography, and artificial neural networks (ANNs) provide a sound framework for relating these variables. In this study, we investigated the feasibility of using an ANN model with meteorological parameters as input variables to predict ozone concentration in the urban area of Jinan, a metropolis in Northern China. We first found that the architecture of the network of neurons had little effect on the predictive capability of the ANN model. A parsimonious ANN model with 6 routinely monitored meteorological parameters and one temporal covariate (the category of day, i.e. working day, legal holiday or regular weekend) as input variables was identified, where the 7 input variables were selected following a forward selection procedure. Compared with the benchmark ANN model with 9 meteorological and photochemical parameters as input variables, the predictive capability of the parsimonious ANN model was acceptable. Its predictive capability was also verified in terms of the warning success ratio during pollution episodes. Finally, uncertainty and sensitivity analyses were performed based on Monte Carlo simulations (MCS). It was concluded that the ANN could properly predict the ambient ozone level. Maximum temperature, atmospheric pressure, sunshine duration and maximum wind speed were identified as the predominant input variables significantly influencing the prediction of ambient ozone concentrations.
Correction of I/Q channel errors without calibration
Doerry, Armin W.; Tise, Bertice L.
2002-01-01
A method of providing a balanced demodulator output for a signal, such as a Doppler radar signal having an analog pulsed input, includes adding a variable phase shift as a function of time to the input signal, applying the phase-shifted input signal to a demodulator, and generating a baseband signal from the input signal. The baseband signal is low-pass filtered and converted to a digital output signal. By removing the variable phase shift from the digital output signal, a complex data output is formed that is representative of the output of a balanced demodulator.
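The signal flow of the claim can be mimicked in an idealized way. The real benefit appears when the demodulator has analog I/Q gain and phase imbalance, which the rotation averages out; this sketch with made-up parameters only traces the add-shift / demodulate / filter / remove-shift sequence.

    import numpy as np

    def balanced_demod(rf, phase, lo_freq, fs, taps=129):
        # Demodulate with an extra time-varying phase, low-pass to
        # baseband, then digitally back the same phase out again.
        t = np.arange(rf.size) / fs
        mixed = rf * np.exp(-1j * (2 * np.pi * lo_freq * t + phase))
        h = np.hamming(taps)
        h /= h.sum()                          # crude low-pass FIR
        base = np.convolve(mixed, h, mode="same")
        return base * np.exp(1j * phase)      # remove the phase shift

    fs, f0 = 1e6, 100e3
    t = np.arange(4096) / fs
    rf = np.cos(2 * np.pi * f0 * t + 0.3)          # toy radar return
    phase = 2 * np.pi * np.linspace(0, 1, t.size)  # variable phase ramp
    iq = balanced_demod(rf, phase, f0, fs)
    print(iq[2048])                                # approx 0.5*exp(0.3j)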
Coding stimulus amplitude by correlated neural activity
NASA Astrophysics Data System (ADS)
Metzen, Michael G.; Ávila-Åkerberg, Oscar; Chacron, Maurice J.
2015-04-01
While correlated activity is observed ubiquitously in the brain, its role in neural coding has remained controversial. Recent experimental results have demonstrated that correlated but not single-neuron activity can encode the detailed time course of the instantaneous amplitude (i.e., envelope) of a stimulus. These have furthermore demonstrated that such coding required and was optimal for a nonzero level of neural variability. However, a theoretical understanding of these results is still lacking. Here we provide a comprehensive theoretical framework explaining these experimental findings. Specifically, we use linear response theory to derive an expression relating the correlation coefficient to the instantaneous stimulus amplitude, which takes into account key single-neuron properties such as firing rate and variability as quantified by the coefficient of variation. The theoretical prediction was in excellent agreement with numerical simulations of various integrate-and-fire type neuron models for various parameter values. Further, we demonstrate a form of stochastic resonance as optimal coding of stimulus variance by correlated activity occurs for a nonzero value of noise intensity. Thus, our results provide a theoretical explanation of the phenomenon by which correlated but not single-neuron activity can code for stimulus amplitude and how key single-neuron properties such as firing rate and variability influence such coding. Correlation coding by correlated but not single-neuron activity is thus predicted to be a ubiquitous feature of sensory processing for neurons responding to weak input.
Segmentation and learning in the quantitative analysis of microscopy images
NASA Astrophysics Data System (ADS)
Ruggiero, Christy; Ross, Amy; Porter, Reid
2015-02-01
In material science and bio-medical domains the quantity and quality of microscopy images is rapidly increasing, and there is a great need to automatically detect, delineate, and quantify particles, grains, cells, neurons, and other functional "objects" within these images. These are challenging problems for image processing because of the variability in object appearance that inevitably arises in real world image acquisition and analysis. One of the most promising (and practical) ways to address these challenges is interactive image segmentation. These algorithms are designed to incorporate input from a human operator to tailor the segmentation method to the image at hand. Interactive image segmentation is now a key tool in a wide range of applications in microscopy and elsewhere. Historically, interactive image segmentation algorithms have tailored segmentation on an image-by-image basis, and information derived from operator input is not transferred between images. But recently there has been increasing interest in using machine learning in segmentation to provide interactive tools that accumulate and learn from the operator input over longer periods of time. These new learning algorithms reduce the need for operator input over time and can potentially provide a more dynamic balance between customization and automation for different applications. This paper reviews the state of the art in this area, provides a unified view of these algorithms, and compares the segmentation performance of various design choices.
Estimating pesticide runoff in small streams.
Schriever, Carola A; von der Ohe, Peter C; Liess, Matthias
2007-08-01
Surface runoff is one of the most important pathways for pesticides to enter surface waters. Mathematical models are employed to characterize its spatio-temporal variability within landscapes, but they must be simple owing to the limited availability and low resolution of data at this scale. This study aimed to validate a simplified, spatially explicit model, developed for the regional scale, that calculates the runoff potential (RP). The RP is a generic indicator of the magnitude of pesticide inputs into streams via runoff. The underlying runoff model considers key environmental factors affecting runoff (precipitation, topography, land use, and soil characteristics) but predicts losses of a generic substance instead of any one pesticide. We predicted and evaluated RP for 20 small streams. RP input data were extracted from governmental databases. Pesticide measurements from a three-year study were used for validation. Measured pesticide concentrations were standardized by the applied mass per catchment and the water solubility of the relevant compounds. The maximum standardized concentration per site and year (runoff loss, R(Loss)) provided a generalized measure of observed pesticide inputs into the streams. Average RP explained 75% (p<0.001) of the variance in R(Loss). Our results imply that the generic indicator can give an adequate estimate of runoff inputs into small streams wherever data of similar resolution are available. Therefore, we suggest RP for a first, quick, and cost-effective location of potential runoff hot spots at the landscape level.
Delpierre, Nicolas; Berveiller, Daniel; Granda, Elena; Dufrêne, Eric
2016-04-01
Although the analysis of flux data has increased our understanding of the interannual variability of carbon inputs into forest ecosystems, we still know little about the determinants of wood growth. Here, we aimed to identify which drivers control the interannual variability of wood growth in a mesic temperate deciduous forest. We analysed a 9-yr time series of carbon fluxes and aboveground wood growth (AWG), reconstructed at a weekly time-scale through the combination of dendrometer and wood density data. Carbon inputs and AWG anomalies appeared to be uncorrelated from the seasonal to interannual scales. More than 90% of the interannual variability of AWG was explained by a combination of the growth intensity during a first 'critical period' of the wood growing season, occurring close to the seasonal maximum, and the timing of the first summer growth halt. Both atmospheric and soil water stress exerted a strong control on the interannual variability of AWG at the study site, despite its mesic conditions, whilst not affecting carbon inputs. Carbon sink activity, not carbon inputs, determined the interannual variations in wood growth at the study site. Our results provide a functional understanding of the dependence of radial growth on precipitation observed in dendrological studies. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
NASA Technical Reports Server (NTRS)
Glassman, Arthur J.; Jones, Scott M.
1991-01-01
This analysis and this computer code apply to full, split, and dual expander cycles. Heat regeneration from the turbine exhaust to the pump exhaust is allowed. The combustion process is modeled as one of chemical equilibrium in an infinite-area or a finite-area combustor. Gas composition in the nozzle may be either equilibrium or frozen during expansion. This report, which serves as a user's guide for the computer code, describes the system, the analysis methodology, and the program input and output. Sample calculations are included to show effects of key variables such as nozzle area ratio and oxidizer-to-fuel mass ratio.
Crew behavior and performance in space analog environments
NASA Technical Reports Server (NTRS)
Kanki, Barbara G.
1992-01-01
The objectives and the current status of the Crew Factors research program conducted at NASA-Ames Research Center are reviewed. The principal objectives of the program are to determine the effects of a broad class of input variables on crew performance and to provide guidance with respect to the design and management of crews assigned to future space missions. A wide range of research environments are utilized, including controlled experimental settings, high fidelity full mission simulator facilities, and fully operational field environments. Key group processes are identified, and preliminary data are presented on the effect of crew size, type, and structure on team performance.
Small Interactive Image Processing System (SMIPS) system description
NASA Technical Reports Server (NTRS)
Moik, J. G.
1973-01-01
The Small Interactive Image Processing System (SMIPS) operates under control of the IBM-OS/MVT operating system and uses an IBM-2250 model 1 display unit as an interactive graphic device. The input language, in the form of character strings or attentions from keys and the light pen, is interpreted and causes processing of built-in image processing functions as well as execution of a variable number of application programs kept on a private disk file. A description of design considerations is given, and the characteristics, structure, and logic flow of SMIPS are summarized. Data management and graphic programming techniques used for the interactive manipulation and display of digital pictures are also discussed.
Micro Computer Feedback Report for the Strategic Leader Development Inventory
1993-05-01
POS or NEG variables
            CALL CREATE_MEM_DIR     ; make a memory directory
            JC   SEL5               ; exit if error
            CALL SELECT_SCREEN      ; display select screen
            JC   SEL4               ; no files in ...
            ...                     ; get keyboard input
            CMP  AL,1Bh             ; is it an Esc key?
            JNZ  SEL2               ; if not, go to next test
            JMP  SEL4               ; exit
    SEL2:   CMP  AL,00h             ; is it a pick?
            JZ   SEL                ; if yes, exit loop ... position
            CALL READ_DATE          ; get DOS date ...
            CALL FIND_ZERO          ; locate ... in data ...
            JC   SEL5
    SEL4:   CALL RELEASE_MEM_DIR    ; release memory block
            CLC                     ; clear carry flag
NASA Technical Reports Server (NTRS)
Vanderploeg, J. M.; Stewart, D. F.; Davis, J. R.
1986-01-01
Space motion sickness clinical characteristics, time course, prediction of susceptibility, and effectiveness of countermeasures were evaluated. Although there is wide individual variability, there appear to be typical patterns of symptom development. The duration of symptoms ranges from several hours to four days, with the majority of individuals being symptom free by the end of the third day. The etiology of this malady remains uncertain, but evidence points to reinterpretation of otolith inputs as a key factor in the response of the neurovestibular system. Prediction of susceptibility and severity remains unsatisfactory. Countermeasures tried include medications, preflight adaptation, and autogenic feedback training. No countermeasure is entirely successful in eliminating or alleviating symptoms.
Input Variability Facilitates Unguided Subcategory Learning in Adults
Eidsvåg, Sunniva Sørhus; Austad, Margit; Asbjørnsen, Arve E.
2015-01-01
Purpose: This experiment investigated whether input variability would affect initial learning of noun gender subcategories in an unfamiliar, natural language (Russian), as it is known to assist learning of other grammatical forms. Method: Forty adults (20 men, 20 women) were familiarized with examples of masculine and feminine Russian words. Half of the participants were familiarized with 32 different root words in a high-variability condition. The other half were familiarized with 16 different root words, each repeated twice for a total of 32 presentations in a high-repetition condition. Participants were tested on untrained members of the category to assess generalization. Familiarization and testing were completed 2 additional times. Results: Only participants in the high-variability group showed evidence of learning after an initial period of familiarization. Participants in the high-repetition group were able to learn after additional input. Both groups benefited when words included 2 cues to gender compared to a single cue. Conclusions: The results demonstrate that the degree of input variability can influence learners' ability to generalize a grammatical subcategory (noun gender) from a natural language. In addition, the presence of multiple cues to linguistic subcategory facilitated learning independent of variability condition. PMID:25680081
Input Variability Facilitates Unguided Subcategory Learning in Adults.
Eidsvåg, Sunniva Sørhus; Austad, Margit; Plante, Elena; Asbjørnsen, Arve E
2015-06-01
This experiment investigated whether input variability would affect initial learning of noun gender subcategories in an unfamiliar, natural language (Russian), as it is known to assist learning of other grammatical forms. Forty adults (20 men, 20 women) were familiarized with examples of masculine and feminine Russian words. Half of the participants were familiarized with 32 different root words in a high-variability condition. The other half were familiarized with 16 different root words, each repeated twice for a total of 32 presentations in a high-repetition condition. Participants were tested on untrained members of the category to assess generalization. Familiarization and testing were completed 2 additional times. Only participants in the high-variability group showed evidence of learning after an initial period of familiarization. Participants in the high-repetition group were able to learn after additional input. Both groups benefited when words included 2 cues to gender compared to a single cue. The results demonstrate that the degree of input variability can influence learners' ability to generalize a grammatical subcategory (noun gender) from a natural language. In addition, the presence of multiple cues to linguistic subcategory facilitated learning independent of variability condition.
Latin Hypercube Sampling (LHS) UNIX Library/Standalone
DOE Office of Scientific and Technical Information (OSTI.GOV)
2004-05-13
The LHS UNIX Library/Standalone software provides the capability to draw random samples from over 30 distribution types. It performs the sampling by a stratified sampling method called Latin Hypercube Sampling (LHS). Multiple distributions can be sampled simultaneously, with user-specified correlations amongst the input distributions; LHS UNIX Library/Standalone thus provides a way to generate multivariate samples. The LHS samples can be generated either via a callable library (e.g., from within the DAKOTA software framework) or as a standalone capability. LHS is a constrained Monte Carlo sampling scheme. In LHS, the range of each variable is divided into non-overlapping intervals on the basis of equal probability. A sample is selected at random with respect to the probability density in each interval. If multiple variables are sampled simultaneously, the values obtained for each are paired in a random manner with the n values of the other variables. In some cases, the pairing is restricted to obtain specified correlations amongst the input variables. Many simulation codes have input parameters that are uncertain and can be specified by a distribution. To perform uncertainty analysis and sensitivity analysis, random values are drawn from the input parameter distributions, and the simulation is run with these values to obtain output values. If this is done repeatedly, with many input samples drawn, one can build up a distribution of the output as well as examine correlations between input and output variables.
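A minimal sketch of the basic LHS construction the abstract describes (equal-probability stratification per variable, one sample per stratum, random pairing across variables); this is not the LHS UNIX Library itself, and the inverse-CDF mapping shown assumes SciPy is available.

    import numpy as np
    from scipy import stats

    def latin_hypercube(n_samples, n_vars, rng=None):
        """Return an (n_samples, n_vars) LHS design on the unit hypercube."""
        rng = np.random.default_rng(rng)
        # one random point inside each of n equal-probability strata, per column
        u = (np.arange(n_samples)[:, None]
             + rng.random((n_samples, n_vars))) / n_samples
        for j in range(n_vars):            # random pairing between variables
            rng.shuffle(u[:, j])
        return u

    # Map the uniform LHS samples through inverse CDFs of target distributions:
    u = latin_hypercube(100, 2, rng=0)
    x_normal = stats.norm.ppf(u[:, 0], loc=0.0, scale=1.0)
    x_uniform = stats.uniform.ppf(u[:, 1], loc=2.0, scale=3.0)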
Deterministic quantum teleportation of photonic quantum bits by a hybrid technique.
Takeda, Shuntaro; Mizuta, Takahiro; Fuwa, Maria; van Loock, Peter; Furusawa, Akira
2013-08-15
Quantum teleportation allows for the transfer of arbitrary unknown quantum states from a sender to a spatially distant receiver, provided that the two parties share an entangled state and can communicate classically. It is the essence of many sophisticated protocols for quantum communication and computation. Photons are an optimal choice for carrying information in the form of 'flying qubits', but the teleportation of photonic quantum bits (qubits) has been limited by experimental inefficiencies and restrictions. Main disadvantages include the fundamentally probabilistic nature of linear-optics Bell measurements, as well as the need either to destroy the teleported qubit or attenuate the input qubit when the detectors do not resolve photon numbers. Here we experimentally realize fully deterministic quantum teleportation of photonic qubits without post-selection. The key step is to make use of a hybrid technique involving continuous-variable teleportation of a discrete-variable, photonic qubit. When the receiver's feedforward gain is optimally tuned, the continuous-variable teleporter acts as a pure loss channel, and the input dual-rail-encoded qubit, based on a single photon, represents a quantum error detection code against photon loss and hence remains completely intact for most teleportation events. This allows for a faithful qubit transfer even with imperfect continuous-variable entangled states: for four qubits the overall transfer fidelities range from 0.79 to 0.82 and all of them exceed the classical limit of teleportation. Furthermore, even for a relatively low level of the entanglement, qubits are teleported much more efficiently than in previous experiments, albeit post-selectively (taking into account only the qubit subspaces), and with a fidelity comparable to the previously reported values.
NASA Astrophysics Data System (ADS)
Bauerle, William L.; Daniels, Alex B.; Barnard, David M.
2014-05-01
Sensitivity of carbon uptake and water use estimates to changes in physiology was determined with a coupled photosynthesis and stomatal conductance (gs) model, linked to canopy microclimate with a spatially explicit scheme (MAESTRA). The sensitivity analyses were conducted over the range of intraspecific physiology parameter variation observed for Acer rubrum L. and temperate hardwood C3 (C3) vegetation across the following climate conditions: carbon dioxide concentration 200-700 ppm, photosynthetically active radiation 50-2,000 μmol m-2 s-1, air temperature 5-40 °C, relative humidity 5-95 %, and wind speed at the top of the canopy 1-10 m s-1. Five key physiological inputs [quantum yield of electron transport (α), minimum stomatal conductance (g0), stomatal sensitivity to the marginal water cost of carbon gain (g1), maximum rate of electron transport (Jmax), and maximum carboxylation rate of Rubisco (Vcmax)] changed carbon and water flux estimates ≥15 % in response to climate gradients; variation in α, Jmax, and Vcmax input resulted in up to ~50 and 82 % intraspecific and C3 photosynthesis estimate output differences, respectively. Transpiration estimates were affected up to ~46 and 147 % by differences in intraspecific and C3 g1 and g0 values, two parameters previously overlooked in modeling land-atmosphere carbon and water exchange. We show that a variable environment, within a canopy or along a climate gradient, changes the spatial parameter effects of g0, g1, α, Jmax, and Vcmax in photosynthesis-gs models. Since the effects of variation in physiology parameter input are dependent on climate, this approach can be used to assess the geographical importance of key physiology model inputs when estimating large-scale carbon and water exchange.
Speed control system for an access gate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bzorgi, Fariborz M
2012-03-20
An access control apparatus for an access gate. The access gate typically has a rotator that is configured to rotate around a rotator axis at a first variable speed in a forward direction. The access control apparatus may include a transmission that typically has an input element that is operatively connected to the rotator. The input element is generally configured to rotate at an input speed that is proportional to the first variable speed. The transmission typically also has an output element that has an output speed that is higher than the input speed. The input element and the output element may rotate around a common transmission axis. A retardation mechanism may be employed. The retardation mechanism is typically configured to rotate around a retardation mechanism axis. Generally the retardation mechanism is operatively connected to the output element of the transmission and is configured to retard motion of the access gate in the forward direction when the first variable speed is above a control-limit speed. In many embodiments the transmission axis and the retardation mechanism axis are substantially co-axial. Some embodiments include a freewheel/catch mechanism that has an input connection that is operatively connected to the rotator. The input connection may be configured to engage an output connection when the rotator is rotated at the first variable speed in a forward direction and configured for substantially unrestricted rotation when the rotator is rotated in a reverse direction opposite the forward direction. The input element of the transmission is typically operatively connected to the output connection of the freewheel/catch mechanism.
NASA Astrophysics Data System (ADS)
Zimoń, Małgorzata; Sawko, Robert; Emerson, David; Thompson, Christopher
2017-11-01
Uncertainty quantification (UQ) is increasingly becoming an indispensable tool for assessing the reliability of computational modelling. Efficient handling of stochastic inputs, such as boundary conditions, physical properties or geometry, increases the utility of model results significantly. We discuss the application of non-intrusive generalised polynomial chaos techniques in the context of fluid engineering simulations. Deterministic and Monte Carlo integration rules are applied to a set of problems, including ordinary differential equations and the computation of aerodynamic parameters subject to random perturbations. In particular, we analyse acoustic wave propagation in a heterogeneous medium to study the effects of mesh resolution, transients, number and variability of stochastic inputs. We consider variants of multi-level Monte Carlo and perform a novel comparison of the methods with respect to numerical and parametric errors, as well as computational cost. The results provide a comprehensive view of the necessary steps in UQ analysis and demonstrate some key features of stochastic fluid flow systems.
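A toy sketch of non-intrusive uncertainty propagation in the spirit described above: a deterministic Gauss-Hermite quadrature rule and crude Monte Carlo both estimate the mean and variance of the solution of dy/dt = -k*y with an uncertain decay rate k. The problem and all numbers are invented for illustration.

    import numpy as np

    mu_k, sigma_k, y0, T = 1.0, 0.2, 1.0, 1.0   # k ~ N(mu_k, sigma_k^2)

    def solve(k):                   # deterministic "solver"; analytic here
        return y0 * np.exp(-k * T)

    # Gauss-Hermite quadrature: E[f(k)] with k = mu + sigma*sqrt(2)*x
    x, w = np.polynomial.hermite.hermgauss(8)
    k_nodes = mu_k + sigma_k * np.sqrt(2.0) * x
    y_nodes = solve(k_nodes)
    mean = np.sum(w * y_nodes) / np.sqrt(np.pi)
    var = np.sum(w * y_nodes**2) / np.sqrt(np.pi) - mean**2
    print("quadrature:", mean, var)

    # Crude Monte Carlo for comparison
    rng = np.random.default_rng(0)
    samples = solve(rng.normal(mu_k, sigma_k, 100_000))
    print("Monte Carlo:", samples.mean(), samples.var())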
Security of BB84 with weak randomness and imperfect qubit encoding
NASA Astrophysics Data System (ADS)
Zhao, Liang-Yuan; Yin, Zhen-Qiang; Li, Hong-Wei; Chen, Wei; Fang, Xi; Han, Zheng-Fu; Huang, Wei
2018-03-01
The main threats to the well-known Bennett-Brassard 1984 (BB84) practical quantum key distribution (QKD) systems are that their encoding is inaccurate and their measurement devices may be vulnerable to particular attacks. Thus, a general physical model or security proof that tackles these loopholes simultaneously and quantitatively is highly desired. Here we give a framework for the security of BB84 when imperfect qubit encoding and vulnerability of the measurement device are both considered. In our analysis, the potential attacks on the measurement device are generalized by the recently proposed weak randomness model, which assumes the input random numbers are partially biased depending on a hidden variable planted by an eavesdropper. The inevitable encoding inaccuracy is also introduced here. From a fundamental view, our work reveals the potential information leakage due to encoding inaccuracy and weak randomness input. For applications, our result can be viewed as a useful tool to quantitatively evaluate the security of a practical QKD system.
Haas, Jessica R.; Thompson, Matthew P.; Tillery, Anne C.; Scott, Joe H.
2017-01-01
Wildfires can increase the frequency and magnitude of catastrophic debris flows. Integrated, proactive natural hazard assessment would therefore characterize landscapes based on the potential for the occurrence and interactions of wildfires and postwildfire debris flows. This chapter presents a new modeling effort that can quantify the variability surrounding a key input to postwildfire debris-flow modeling, the amount of watershed burned at moderate to high severity, in a prewildfire context. The use of stochastic wildfire simulation captures variability surrounding the timing and location of ignitions, fire weather patterns, and ultimately the spatial patterns of watershed area burned. Model results provide for enhanced estimates of postwildfire debris-flow hazard in a prewildfire context, and multiple hazard metrics are generated to characterize and contrast hazards across watersheds. Results can guide mitigation efforts by allowing planners to identify which factors may be contributing the most to the hazard rankings of watersheds.
Gupta, Himanshu; Schiros, Chun G; Sharifov, Oleg F; Jain, Apurva; Denney, Thomas S
2016-08-31
The recently released American College of Cardiology/American Heart Association (ACC/AHA) guideline recommends the Pooled Cohort equations for evaluating the atherosclerotic cardiovascular risk of individuals. The impact of uncertainties in the clinical input variables on the estimates of ten-year cardiovascular risk based on ACC/AHA guidelines is not known. Using the publicly available National Health and Nutrition Examination Survey dataset (2005-2010), we computed maximum and minimum ten-year cardiovascular risks by assuming clinically relevant variations/uncertainties in input of age (0-1 year) and ±10 % variation in total cholesterol, high-density lipoprotein cholesterol, and systolic blood pressure, and by assuming uniform distribution of the variance of each variable. We analyzed the changes in risk category compared to the actual inputs at the 5 % and 7.5 % risk limits, as these limits define the thresholds for consideration of drug therapy in the new guidelines. The Pooled Cohort equations for risk estimation were implemented in a custom software package. Based on our input variances, changes in risk category were possible in up to 24 % of the population cohort at both the 5 % and 7.5 % risk boundary limits. This trend was consistently noted across all subgroups except in African American males, where most of the cohort had ≥7.5 % baseline risk regardless of the variation in the variables. The uncertainties in the input variables can alter the risk categorization. The impact of these variances on the ten-year risk needs to be incorporated into the patient/clinician discussion and clinical decision making. Incorporating good clinical practices for the measurement of critical clinical variables and robust standardization of laboratory parameters to more stringent reference standards is extremely important for successful implementation of the new guidelines. Furthermore, the ability to customize the risk calculator inputs to better represent unique clinical circumstances specific to individual needs would be highly desirable in future versions of the risk calculator.
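A hedged sketch of the bracketing idea: evaluate a risk function at the extremes of each input's plausible range and check whether the risk category can flip at the 5 % and 7.5 % thresholds. The risk function below is a hypothetical placeholder with qualitatively sensible behavior only, not the Pooled Cohort equations.

    import itertools
    import numpy as np

    def risk_fn(age, total_chol, hdl, sbp):
        # Hypothetical placeholder; NOT the Pooled Cohort equations.
        z = 0.05 * age + 0.004 * total_chol - 0.01 * hdl + 0.008 * sbp - 6.0
        return 1.0 / (1.0 + np.exp(-z))

    def risk_bounds(age, tc, hdl, sbp, rel=0.10):
        """Min/max risk over +0-1 year of age and +/-10 % on the lab inputs."""
        lo, hi = np.inf, -np.inf
        for da, ftc, fhdl, fsbp in itertools.product(
                (0.0, 1.0), (1 - rel, 1 + rel), (1 - rel, 1 + rel),
                (1 - rel, 1 + rel)):
            r = risk_fn(age + da, tc * ftc, hdl * fhdl, sbp * fsbp)
            lo, hi = min(lo, r), max(hi, r)
        return lo, hi

    lo, hi = risk_bounds(55, 200, 50, 130)
    print(lo, hi, "category flip at 7.5%:", lo < 0.075 <= hi)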
NASA Astrophysics Data System (ADS)
Hale, R. L.; Grimm, N. B.; Vorosmarty, C. J.
2014-12-01
An ongoing challenge for society is to harness the benefits of phosphorus (P) while minimizing negative effects on downstream ecosystems. To meet this challenge we must understand the controls on the delivery of anthropogenic P from landscapes to downstream ecosystems. We used a model that incorporates P inputs to watersheds, hydrology, and infrastructure (sewers, waste-water treatment plants, and reservoirs) to reconstruct historic P yields for the northeastern U.S. from 1930 to 2002. At the regional scale, increases in P inputs were paralleled by increased fractional retention, thus P loading to the coast did not increase significantly. We found that temporal variation in regional P yield was correlated with P inputs. Spatial patterns of watershed P yields were best predicted by inputs, but the correlation between inputs and yields in space weakened over time, due to infrastructure development. Although the magnitude of infrastructure effect was small, its role changed over time and was important in creating spatial and temporal heterogeneity in input-yield relationships. We then conducted a hierarchical cluster analysis to identify a typology of anthropogenic P cycling, using data on P inputs (fertilizer, livestock feed, and human food), infrastructure (dams, wastewater treatment plants, sewers), and hydrology (runoff coefficient). We identified 6 key types of watersheds that varied significantly in climate, infrastructure, and the types and amounts of P inputs. Annual watershed P yields and retention varied significantly across watershed types. Although land cover varied significantly across typologies, clusters based on land cover alone did not explain P budget patterns, suggesting that this variable is insufficient to understand patterns of P cycling across large spatial scales. Furthermore, clusters varied over time as patterns of climate, P use, and infrastructure changed. Our results demonstrate that the drivers of P cycles are spatially and temporally heterogeneous, yet they also suggest that a relatively simple typology of watersheds can be useful for understanding regional P cycles and may help inform P management approaches.
Aitkenhead, Matt J; Black, Helaina I J
2018-02-01
Using the International Centre for Research in Agroforestry-International Soil Reference and Information Centre (ICRAF-ISRIC) global soil spectroscopy database, models were developed to estimate a number of soil variables using different input data types. These input types included: (1) site data only; (2) visible-near-infrared (Vis-NIR) diffuse reflectance spectroscopy only; (3) combined site and Vis-NIR data; (4) red-green-blue (RGB) color data only; and (5) combined site and RGB color data. The models produced variable estimation accuracy, with RGB only being generally worst and spectroscopy plus site being best. However, we showed that for certain variables, estimation accuracy levels achieved with the "site plus RGB input data" were sufficiently good to provide useful estimates (r2 > 0.7). These included major elements (Ca, Si, Al, Fe), organic carbon, and cation exchange capacity. Estimates for bulk density, carbon-to-nitrogen ratio (C/N), and P were moderately good, but K was not well estimated using this model type. For the "spectra plus site" model, many more variables were well estimated, including many that are important indicators of agricultural productivity and soil health. Sum of cations, electrical conductivity, Si, Ca, and Al oxides, and C/N ratio were estimated using this approach with r2 values > 0.9. This work provides a mechanism for identifying the cost-effectiveness of using different model input data, with associated costs, for estimating soil variables to required levels of accuracy.
NASA Astrophysics Data System (ADS)
Nossent, Jiri; Pereira, Fernando; Bauwens, Willy
2015-04-01
Precipitation is one of the key inputs for hydrological models. As long as the values of the hydrological model parameters are fixed, a variation of the rainfall input is expected to induce a change in the model output. Given the increased awareness of uncertainty in rainfall records, it becomes more important to understand the impact of this input-output dynamic. Yet modellers often still intend to mimic the observed flow, whatever the deviation of the employed records from the actual rainfall might be, by recklessly adapting the model parameter values. But is it actually possible to vary the model parameter values in such a way that a certain (observed) model output can be generated based on inaccurate rainfall inputs? Thus, how important is the rainfall uncertainty for the model output with respect to the model parameter importance? To address this question, we apply the Sobol' sensitivity analysis method to assess and compare the importance of the rainfall uncertainty and the model parameters for the output of the hydrological model. In order to treat the regular model parameters and the input uncertainty in the same way, and to allow a comparison of their influence, a possible approach is to represent the rainfall uncertainty by a parameter. To this end, we apply so-called rainfall multipliers to hydrologically independent storm events, as a probabilistic parameter representation of the possible rainfall variation. As available rainfall records are very often point measurements at a discrete time step (hourly, daily, monthly, ...), they contain uncertainty due to a latent lack of spatial and temporal variability. The influence of the latter variability can also differ between hydrological models with different spatial and temporal scales. Therefore, we perform the sensitivity analyses on a semi-distributed model (SWAT) and a lumped model (NAM). The assessment and comparison of the importance of the rainfall uncertainty and the model parameters is achieved by considering different scenarios for the included parameters and the state of the models.
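A minimal sketch of treating a rainfall multiplier as just another parameter in a Sobol' analysis, assuming the SALib package is available; the parameter names, bounds, and the stand-in hydrological model below are all illustrative, not SWAT or NAM.

    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {
        "num_vars": 3,
        "names": ["storage_coeff", "recession_const", "rain_multiplier"],
        "bounds": [[5.0, 30.0], [0.01, 0.5], [0.7, 1.3]],
    }

    def simulate(storage, recession, rain_mult, rain=10.0):
        # Toy stand-in for a hydrological model: peak flow from one scaled storm.
        return (rain * rain_mult) / storage * np.exp(-recession)

    X = saltelli.sample(problem, 1024)          # Sobol'/Saltelli design
    Y = np.array([simulate(*row) for row in X])
    Si = sobol.analyze(problem, Y)
    print(dict(zip(problem["names"], Si["S1"])))  # first-order indices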
Howey, Meghan C L; Palace, Michael W; McMichael, Crystal H
2016-07-05
Building monuments was one way that past societies reconfigured their landscapes in response to shifting social and ecological factors. Understanding the connections between those factors and monument construction is critical, especially when multiple types of monuments were constructed across the same landscape. Geospatial technologies enable past cultural activities and environmental variables to be examined together at large scales. Many geospatial modeling approaches, however, are not designed for presence-only (occurrence) data, which can be limiting given that many archaeological site records are presence only. We use maximum entropy modeling (MaxEnt), which works with presence-only data, to predict the distribution of monuments across large landscapes, and we analyze MaxEnt output to quantify the contributions of spatioenvironmental variables to predicted distributions. We apply our approach to co-occurring Late Precontact (ca. A.D. 1000-1600) monuments in Michigan: (i) mounds and (ii) earthwork enclosures. Many of these features have been destroyed by modern development, and therefore, we conducted archival research to develop our monument occurrence database. We modeled each monument type separately using the same input variables. Analyzing variable contribution to MaxEnt output, we show that mound and enclosure landscape suitability was driven by contrasting variables. Proximity to inland lakes was key to mound placement, and proximity to rivers was key to sacred enclosures. This juxtaposition suggests that mounds met local needs for resource procurement success, whereas enclosures filled broader regional needs for intergroup exchange and shared ritual. Our study shows how MaxEnt can be used to develop sophisticated models of past cultural processes, including monument building, with imperfect, limited, presence-only data.
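MaxEnt itself is not reproduced here; the sketch below uses a common approximation for presence-only data, a penalized logistic regression of presence records against random background locations, with invented feature values standing in for the spatioenvironmental variables.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # columns: distance to inland lake, distance to river, elevation (scaled);
    # presence records cluster near lakes in this fabricated example
    presence = rng.normal([0.2, 1.0, 0.0], 0.5, size=(120, 3))
    background = rng.normal([1.0, 1.0, 0.0], 1.0, size=(5000, 3))

    X = np.vstack([presence, background])
    y = np.r_[np.ones(len(presence)), np.zeros(len(background))]
    model = LogisticRegression(C=0.5).fit(X, y)

    # Coefficient magnitudes indicate each variable's contribution to suitability.
    print(dict(zip(["d_lake", "d_river", "elev"], model.coef_[0])))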
NLEdit: A generic graphical user interface for Fortran programs
NASA Technical Reports Server (NTRS)
Curlett, Brian P.
1994-01-01
NLEdit is a generic graphical user interface for the preprocessing of Fortran namelist input files. The interface consists of a menu system, a message window, a help system, and data entry forms. A form is generated for each namelist. The form has an input field for each namelist variable along with a one-line description of that variable. Detailed help information, default values, and minimum and maximum allowable values can all be displayed via menu picks. Inputs are processed through a scientific calculator program that allows complex equations to be used instead of simple numeric inputs. A custom user interface is generated simply by entering information about the namelist input variables into an ASCII file. There is no need to learn a new graphics system or programming language. NLEdit can be used as a stand-alone program or as part of a larger graphical user interface. Although NLEdit is intended for files using namelist format, it can be easily modified to handle other file formats.
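A small illustration of the namelist round trip that NLEdit automates through its GUI, assuming the third-party f90nml package (an assumption; NLEdit itself is not a Python tool) and an invented description of one variable with help text and bounds.

    from pathlib import Path
    import f90nml

    # A sample Fortran namelist input file
    Path("input.nml").write_text("&flow\n  mach = 0.8\n  alpha = 2.0\n/\n")

    # Hypothetical per-variable description, like NLEdit's help and limits
    spec = {"mach": {"help": "freestream Mach number", "min": 0.0, "max": 5.0}}

    nml = f90nml.read("input.nml")              # parse the namelist file
    for name, value in nml["flow"].items():
        rule = spec.get(name)
        if rule and not (rule["min"] <= value <= rule["max"]):
            print(name, "=", value, "outside allowed range")

    nml["flow"]["mach"] = 0.85                  # edit a value and write back
    nml.write("updated.nml", force=True)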
Computing Shapes Of Cascade Diffuser Blades
NASA Technical Reports Server (NTRS)
Tran, Ken; Prueger, George H.
1993-01-01
Computer program generates sizes and shapes of cascade-type blades for use in axial or radial turbomachine diffusers. Generates shapes of blades rapidly, incorporating extensive cascade data to determine optimum incidence and deviation angle for blade design based on 65-series data base of National Advisory Commission for Aeronautics and Astronautics (NACA). Allows great variability in blade profile through input variables. Also provides for design of three-dimensional blades by allowing variable blade stacking. Enables designer to obtain computed blade-geometry data in various forms: as input for blade-loading analysis; as input for quasi-three-dimensional analysis of flow; or as points for transfer to computer-aided design.
Influence of the UV Environment on the Synthesis of Prebiotic Molecules.
Ranjan, Sukrit; Sasselov, Dimitar D
2016-01-01
Ultraviolet radiation is common to most planetary environments and could play a key role in the chemistry of molecules relevant to abiogenesis (prebiotic chemistry). In this work, we explore the impact of UV light on prebiotic chemistry that might occur in liquid water on the surface of a planet with an atmosphere. We consider effects including atmospheric absorption, attenuation by water, and stellar variability to constrain the UV input as a function of wavelength. We conclude that the UV environment would be characterized by broadband input, and wavelengths below 204 nm and 168 nm would be shielded out by atmospheric CO2 and water, respectively. We compare this broadband prebiotic UV input to the narrowband UV sources (e.g., mercury lamps) often used in laboratory studies of prebiotic chemistry and explore the implications for the conclusions drawn from these experiments. We consider as case studies the ribonucleotide synthesis pathway of Powner et al. (2009) and the sugar synthesis pathway of Ritson and Sutherland (2012). Irradiation by narrowband UV light from a mercury lamp formed an integral component of these studies; we quantitatively explore the impact of more realistic UV input on the conclusions that can be drawn from these experiments. Finally, we explore the constraints solar UV input places on the buildup of prebiotically important feedstock gases such as CH4 and HCN. Our results demonstrate the importance of characterizing the wavelength dependence (action spectra) of prebiotic synthesis pathways to determine how pathways derived under laboratory irradiation conditions will function under planetary prebiotic conditions.
Integrative Data Analysis of Multi-Platform Cancer Data with a Multimodal Deep Learning Approach.
Liang, Muxuan; Li, Zhizhong; Chen, Ting; Zeng, Jianyang
2015-01-01
Identification of cancer subtypes plays an important role in revealing useful insights into disease pathogenesis and advancing personalized therapy. The recent development of high-throughput sequencing technologies has enabled the rapid collection of multi-platform genomic data (e.g., gene expression, miRNA expression, and DNA methylation) for the same set of tumor samples. Although numerous integrative clustering approaches have been developed to analyze cancer data, few of them are particularly designed to exploit both deep intrinsic statistical properties of each input modality and complex cross-modality correlations among multi-platform input data. In this paper, we propose a new machine learning model, called multimodal deep belief network (DBN), to cluster cancer patients from multi-platform observation data. In our integrative clustering framework, relationships among inherent features of each single modality are first encoded into multiple layers of hidden variables, and then a joint latent model is employed to fuse common features derived from multiple input modalities. A practical learning algorithm, called contrastive divergence (CD), is applied to infer the parameters of our multimodal DBN model in an unsupervised manner. Tests on two available cancer datasets show that our integrative data analysis approach can effectively extract a unified representation of latent features to capture both intra- and cross-modality correlations, and identify meaningful disease subtypes from multi-platform cancer data. In addition, our approach can identify key genes and miRNAs that may play distinct roles in the pathogenesis of different cancer subtypes. Among those key miRNAs, we found that the expression level of miR-29a is highly correlated with survival time in ovarian cancer patients. These results indicate that our multimodal DBN based data analysis approach may have practical applications in cancer pathogenesis studies and provide useful guidelines for personalized cancer therapy.
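A hedged sketch of one contrastive-divergence (CD-1) update for a Bernoulli restricted Boltzmann machine, the building block of the DBNs discussed above; dimensions, learning rate, and data are illustrative, and the multimodal fusion layer is not shown.

    import numpy as np

    rng = np.random.default_rng(0)
    n_visible, n_hidden, lr = 20, 8, 0.05
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v = np.zeros(n_visible)
    b_h = np.zeros(n_hidden)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_update(v0):
        """One CD-1 step: positive phase, one Gibbs step, gradient update."""
        global W, b_v, b_h
        ph0 = sigmoid(v0 @ W + b_h)                       # positive phase
        h0 = (rng.random(n_hidden) < ph0).astype(float)
        pv1 = sigmoid(h0 @ W.T + b_v)                     # Gibbs step back
        v1 = (rng.random(n_visible) < pv1).astype(float)
        ph1 = sigmoid(v1 @ W + b_h)
        W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1)) # CD-1 gradient
        b_v += lr * (v0 - v1)
        b_h += lr * (ph0 - ph1)

    data = (rng.random((500, n_visible)) < 0.3).astype(float)  # fake binary data
    for v0 in data:
        cd1_update(v0)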
Round-Robin approach to data flow optimization
NASA Technical Reports Server (NTRS)
Witt, J.
1978-01-01
A large data base, circular in structure, was required (for the Voyager Mission to Jupiter/Saturn) with the capability to completely update the data every four hours during high activity periods. The data is stored in key-ordered format for retrieval but is not input in key order. Existing access methods for large data bases with rapid data replacement by keys become inefficient as the volume of data being replaced grows. The Round-Robin method was developed to alleviate this problem. The Round-Robin access method allows rapid updating of the data with continuous self-cleaning, where the oldest data (by key) is deleted and the newest data (by key) is kept regardless of the order of input.
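A minimal sketch of the Round-Robin idea as described: records are kept in key order, and inserting past a fixed capacity evicts the smallest (oldest) key, so the newest keys survive regardless of input order. The data structure below is an illustration, not the original implementation.

    import bisect

    class RoundRobinStore:
        def __init__(self, capacity):
            self.capacity = capacity
            self.keys, self.values = [], []

        def insert(self, key, value):
            i = bisect.bisect_left(self.keys, key)   # keep key order on insert
            self.keys.insert(i, key)
            self.values.insert(i, value)
            if len(self.keys) > self.capacity:       # self-cleaning: drop oldest key
                self.keys.pop(0)
                self.values.pop(0)

        def lookup(self, key):
            i = bisect.bisect_left(self.keys, key)
            if i < len(self.keys) and self.keys[i] == key:
                return self.values[i]
            return None

    store = RoundRobinStore(capacity=3)
    for k in (5, 1, 9, 7):                           # out-of-key-order input
        store.insert(k, f"record-{k}")
    print(store.keys)                                # [5, 7, 9]; oldest key 1 evicted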
Chen, Mingshi; Senay, Gabriel B.; Singh, Ramesh K.; Verdin, James P.
2016-01-01
Evapotranspiration (ET) is an important component of the water cycle – ET from the land surface returns approximately 60% of the global precipitation back to the atmosphere. ET also plays an important role in energy transport among the biosphere, atmosphere, and hydrosphere. Current regional to global and daily to annual ET estimation relies mainly on surface energy balance (SEB) ET models or statistical and empirical methods driven by remote sensing data and various climatological databases. These models have uncertainties due to inevitable input errors, poorly defined parameters, and inadequate model structures. The eddy covariance measurements on water, energy, and carbon fluxes at the AmeriFlux tower sites provide an opportunity to assess the ET modeling uncertainties. In this study, we focused on uncertainty analysis of the Operational Simplified Surface Energy Balance (SSEBop) model for ET estimation at multiple AmeriFlux tower sites with diverse land cover characteristics and climatic conditions. The 8-day composite 1-km MODerate resolution Imaging Spectroradiometer (MODIS) land surface temperature (LST) was used as input land surface temperature for the SSEBop algorithms. The other input data were taken from the AmeriFlux database. Results of statistical analysis indicated that the SSEBop model performed well in estimating ET with an R2 of 0.86 between estimated ET and eddy covariance measurements at 42 AmeriFlux tower sites during 2001–2007. It was encouraging to see that the best performance was observed for croplands, where R2 was 0.92 with a root mean square error of 13 mm/month. The uncertainties or random errors from input variables and parameters of the SSEBop model led to monthly ET estimates with relative errors less than 20% across multiple flux tower sites distributed across different biomes. This uncertainty of the SSEBop model lies within the error range of other SEB models, suggesting systematic error or bias of the SSEBop model is within the normal range. This finding implies that the simplified parameterization of the SSEBop model did not significantly affect the accuracy of the ET estimate while increasing the ease of model setup for operational applications. The sensitivity analysis indicated that the SSEBop model is most sensitive to input variables, land surface temperature (LST) and reference ET (ETo); and parameters, differential temperature (dT), and maximum ET scalar (Kmax), particularly during the non-growing season and in dry areas. In summary, the uncertainty assessment verifies that the SSEBop model is a reliable and robust method for large-area ET estimation. The SSEBop model estimates can be further improved by reducing errors in two input variables (ETo and LST) and two key parameters (Kmax and dT).
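A hedged paraphrase of the simplified parameterization behind SSEBop as summarized above: an ET fraction interpolates between a cold (wet) and hot (dry) boundary separated by a predefined temperature difference dT, then scales reference ET. The values are illustrative and this is not the operational code.

    import numpy as np

    def ssebop_eta(lst, t_cold, dt, eto, k_max=1.0):
        """lst: land surface temperature (K); t_cold: cold boundary (K);
        dt: predefined hot/cold temperature difference (K); eto: reference ET;
        k_max: maximum ET scalar. Returns actual ET in eto's units."""
        t_hot = t_cold + dt
        et_frac = np.clip((t_hot - lst) / dt, 0.0, 1.0)
        return et_frac * k_max * eto

    # Illustrative numbers only (mm/day)
    print(ssebop_eta(lst=305.0, t_cold=298.0, dt=10.0, eto=6.0))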
Neural Network Machine Learning and Dimension Reduction for Data Visualization
NASA Technical Reports Server (NTRS)
Liles, Charles A.
2014-01-01
Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed which can accurately predict a numeric value or nominal classification, a general purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. Understanding which input parameters have the greatest impact on the prediction of the model is often difficult to surmise, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the highest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only input variables which appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensional reduction for visualizing and understanding complex datasets.
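A minimal sketch of the dimension-reduction step described above, assuming scikit-learn and matplotlib: a multi-dimensional dataset is projected to two dimensions (here with PCA, one of several possible techniques) and plotted so the relation between inputs and an output class can be inspected visually.

    import matplotlib.pyplot as plt
    from sklearn.datasets import load_iris
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    data = load_iris()                        # stand-in multi-dimensional dataset
    X2 = PCA(n_components=2).fit_transform(
        StandardScaler().fit_transform(data.data))

    plt.scatter(X2[:, 0], X2[:, 1], c=data.target, cmap="viridis", s=20)
    plt.xlabel("component 1")
    plt.ylabel("component 2")
    plt.title("2-D projection of a multi-dimensional dataset")
    plt.show()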
Post-cracking characteristics of high performance fiber reinforced cementitious composites
NASA Astrophysics Data System (ADS)
Suwannakarn, Supat W.
The application of high performance fiber reinforced cement composites (HPFRCC) in structural systems depends primarily on the material's tensile response, which is a direct function of fiber and matrix characteristics, the bond between them, and the fiber content or volume fraction. The objective of this dissertation is to evaluate and model the post-cracking behavior of HPFRCC. In particular, it focuses on the influential parameters controlling tensile behavior and the variability associated with them. The key parameters considered include: the stress and strain at first cracking, the stress and strain at maximum post-cracking, the shape of the stress-strain or stress-elongation response, the multiple cracking process, the shape of the resistance curve after crack localization, the energy associated with the multiple cracking process, and the stress versus crack opening response of a single crack. Both steel fibers and polymeric fibers, perceived to have the greatest potential for current commercial applications, are considered. The main variables covered include fiber type (Torex, Hooked, PVA, and Spectra) and fiber volume fraction (ranging from 0.75% to 2.0%). An extensive experimental program was carried out using direct tensile tests and stress-versus-crack-opening-displacement tests on notched tensile prisms. The key experimental results were analyzed and modeled using simple prediction equations which, combined with a composite mechanics approach, allowed for predicting schematic simplified stress-strain and stress-displacement response curves for use in structural modeling. The experimental data show that specimens reinforced with Torex fibers performed best, followed by Hooked and Spectra fibers, then PVA fibers. Significant variability in key parameters was observed throughout, suggesting that variability must be studied further. The new information obtained can be used as input for material models for finite element analysis and can provide greater confidence in using HPFRC composites in structural applications. It also provides a good foundation for integrating these composites into conventional structural analysis and design.
Guzmán-Bárcenas, José; Hernández, José Alfredo; Arias-Martínez, Joel; Baptista-González, Héctor; Ceballos-Reyes, Guillermo; Irles, Claudine
2016-07-21
Leptin and insulin levels are key factors regulating fetal and neonatal energy homeostasis, development, and growth. Both biomarkers are used as predictors of weight gain and obesity during infancy. There are currently no prediction algorithms for umbilical cord blood (UCB) hormone levels using Artificial Neural Networks (ANN) that have been directly trained with anthropometric maternal and neonatal data from neonates exposed to distinct metabolic environments during pregnancy (obese women with or without gestational diabetes mellitus, or lean women). The aims were: (1) to develop ANN models that simulate leptin and insulin concentrations in UCB based on maternal and neonatal data (ANN perinatal model) or from only maternal data during early gestation (ANN prenatal model); and (2) to evaluate the biological relevance of each parameter (maternal and neonatal anthropometric variables). We collected maternal and neonatal anthropometric data (n = 49) from normoglycemic healthy lean women, obese women, and obese women with gestational diabetes mellitus, and determined UCB leptin and insulin concentrations by ELISA. The ANN perinatal model consisted of an input layer of 12 variables (maternal and neonatal anthropometric and biochemical data from early gestation and at term), while the ANN prenatal model used only 6 variables (maternal anthropometrics from early gestation) in the input layer. For both networks, the output layer contained 1 variable, the UCB leptin or the UCB insulin concentration. The best architectures for the ANN perinatal models estimating leptin and insulin were 12-5-1, while for the ANN prenatal models, 6-5-1 and 6-4-1 were found for leptin and insulin, respectively. The ANN models presented an excellent agreement between experimental and simulated values. Interestingly, the use of only prenatal maternal anthropometric data was sufficient to estimate UCB leptin and insulin values. Maternal BMI, weight, and age, as well as neonatal birth weight, were the most influential parameters for leptin, while maternal morbidity was the most significant factor for insulin prediction. Low error percentages and short computing times make these ANN models interesting in a translational research setting, to be applied for the prediction of neonatal leptin and insulin values from maternal anthropometric data, and possibly for on-line estimation during pregnancy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaFarge, R.A.
1990-05-01
MCPRAM (Monte Carlo PReprocessor for AMEER), a computer program that uses Monte Carlo techniques to create an input file for the AMEER trajectory code, has been developed for the Sandia National Laboratories VAX and Cray computers. Users can select the number of trajectories to compute, which AMEER variables to investigate, and the type of probability distribution for each variable. Any legal AMEER input variable can be investigated anywhere in the input run stream with either a normal, uniform, or Rayleigh distribution. Users also have the option to use covariance matrices for the investigation of certain correlated variables such as booster pre-reentry errors and wind, axial force, and atmospheric models. In conjunction with MCPRAM, AMEER was modified to include the variables introduced by the covariance matrices and to include provisions for six types of fuze models. The new fuze models and the new AMEER variables are described in this report.
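A hedged sketch of what such a Monte Carlo preprocessor does: draw each selected input variable from a user-chosen normal, uniform, or Rayleigh distribution and emit one input deck per trajectory. The variable names and the file format below are invented, not AMEER's.

    import numpy as np

    rng = np.random.default_rng(0)
    spec = {  # variable name -> (distribution, parameters); illustrative values
        "launch_az": ("normal", {"loc": 90.0, "scale": 0.5}),
        "axial_force": ("uniform", {"low": 0.95, "high": 1.05}),
        "wind_speed": ("rayleigh", {"scale": 4.0}),
    }

    def draw(dist, params):
        """Sample one value from the named numpy Generator distribution."""
        return getattr(rng, dist)(**params)

    n_trajectories = 5
    for i in range(n_trajectories):
        lines = [f"{name} = {draw(dist, p):.6g}"
                 for name, (dist, p) in spec.items()]
        with open(f"traj_{i:03d}.inp", "w") as f:   # one input deck per run
            f.write("\n".join(lines) + "\n")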
Input Variability Facilitates Unguided Subcategory Learning in Adults
ERIC Educational Resources Information Center
Eidsvåg, Sunniva Sørhus; Austad, Margit; Plante, Elena; Asbjørnsen, Arve E.
2015-01-01
Purpose: This experiment investigated whether input variability would affect initial learning of noun gender subcategories in an unfamiliar, natural language (Russian), as it is known to assist learning of other grammatical forms. Method: Forty adults (20 men, 20 women) were familiarized with examples of masculine and feminine Russian words. Half…
NASA Astrophysics Data System (ADS)
Harrison, Benjamin; Sandiford, Mike; McLaren, Sandra
2016-04-01
Supervised machine learning algorithms attempt to build a predictive model using empirical data. Their aim is to take a known set of input data along with known responses to the data, and adaptively train a model to generate predictions for new data inputs. A key attraction to their use is the ability to perform as function approximators where the definition of an explicit relationship between variables is infeasible. We present a novel means of estimating thermal conductivity using a supervised self-organising map algorithm, trained on about 150 thermal conductivity measurements, and using a suite of five electric logs common to 14 boreholes. A key motivation of the study was to supplement the small number of direct measurements of thermal conductivity with the decades of borehole data acquired in the Gippsland Basin to produce more confident calculations of surface heat flow. A previous attempt to generate estimates from well-log data in the Gippsland Basin using classic petrophysical log interpretation methods was able to produce reasonable synthetic thermal conductivity logs for only four boreholes. The current study has extended this to a further ten boreholes. Interesting outcomes from the study are: the method appears stable at very low sample sizes (< ~100); the SOM permits quantitative analysis of essentially qualitative uncalibrated well-log data; and the method's moderate success at prediction with minimal effort tuning the algorithm's parameters.
Impact of inherent meteorology uncertainty on air quality ...
It is well established that there are a number of different classifications and sources of uncertainties in environmental modeling systems. Air quality models rely on two key inputs, namely, meteorology and emissions. When using air quality models for decision making, it is important to understand how uncertainties in these inputs affect the simulated concentrations. Ensembles are one method to explore how uncertainty in meteorology affects air pollution concentrations. Most studies explore this uncertainty by running different meteorological models or the same model with different physics options and in some cases combinations of different meteorological and air quality models. While these have been shown to be useful techniques in some cases, we present a technique that leverages the initial condition perturbations of a weather forecast ensemble, namely, the Short-Range Ensemble Forecast system to drive the four-dimensional data assimilation in the Weather Research and Forecasting (WRF)-Community Multiscale Air Quality (CMAQ) model with a key focus being the response of ozone chemistry and transport. Results confirm that a sizable spread in WRF solutions, including common weather variables of temperature, wind, boundary layer depth, clouds, and radiation, can cause a relatively large range of ozone-mixing ratios. Pollutant transport can be altered by hundreds of kilometers over several days. Ozone-mixing ratios of the ensemble can vary as much as 10–20 ppb
Next-Generation Lightweight Mirror Modeling Software
NASA Technical Reports Server (NTRS)
Arnold, William R., Sr.; Fitzgerald, Mathew; Rosa, Rubin Jaca; Stahl, Phil
2013-01-01
The advances in manufacturing techniques for lightweight mirrors, such as EXELSIS deep core low-temperature fusion, Corning's continued improvements in the Frit bonding process, and the ability to cast large complex designs, combined with water-jet and conventional diamond machining of glasses and ceramics, have created the need for more efficient means of generating finite element models of these structures. Traditional methods of assembling 400,000+ element models can take weeks of effort, severely limiting the range of possible optimization variables. This paper will introduce model generation software developed under NASA sponsorship for the design of both terrestrial and space-based mirrors. The software deals with any current mirror manufacturing technique, single substrates, and multiple arrays of substrates, and it has the ability to merge submodels into a single large model. The modeler generates both mirror and suspension system elements; suspensions can be created either for each individual petal or for the whole mirror. A typical model generation of 250,000 nodes and 450,000 elements takes only 5-10 minutes, much of that time being variable input time. The program can create input decks for ANSYS, ABAQUS, and NASTRAN. An archive/retrieval system permits creation of complete trade studies, varying cell size, depth, petal size, and suspension geometry, with the ability to recall a particular set of parameters and make small or large changes with ease. The input decks created by the modeler are text files which can be modified by any editor; all the key shell thickness parameters are accessible, and comments in the deck identify which groups of elements are associated with these parameters. This again makes optimization easier. With ANSYS decks, the nodes representing support attachments are grouped into components; in ABAQUS these are SETS, and in NASTRAN, GRIDPOINT SETS; this makes integration of these models into large telescope or satellite models possible.
Next Generation Lightweight Mirror Modeling Software
NASA Technical Reports Server (NTRS)
Arnold, William; Fitzgerald, Matthew; Stahl, Philip
2013-01-01
The advances in manufacturing techniques for lightweight mirrors, such as EXELSIS deep core low-temperature fusion, Corning's continued improvements in the Frit bonding process, and the ability to cast large complex designs, combined with water-jet and conventional diamond machining of glasses and ceramics, have created the need for more efficient means of generating finite element models of these structures. Traditional methods of assembling 400,000+ element models can take weeks of effort, severely limiting the range of possible optimization variables. This paper will introduce model generation software developed under NASA sponsorship for the design of both terrestrial and space-based mirrors. The software deals with any current mirror manufacturing technique, single substrates, and multiple arrays of substrates, and it has the ability to merge submodels into a single large model. The modeler generates both mirror and suspension system elements; suspensions can be created either for each individual petal or for the whole mirror. A typical model generation of 250,000 nodes and 450,000 elements takes only 5-10 minutes, much of that time being variable input time. The program can create input decks for ANSYS, ABAQUS, and NASTRAN. An archive/retrieval system permits creation of complete trade studies, varying cell size, depth, petal size, and suspension geometry, with the ability to recall a particular set of parameters and make small or large changes with ease. The input decks created by the modeler are text files which can be modified by any editor; all the key shell thickness parameters are accessible, and comments in the deck identify which groups of elements are associated with these parameters. This again makes optimization easier. With ANSYS decks, the nodes representing support attachments are grouped into components; in ABAQUS these are SETS, and in NASTRAN, GRIDPOINT SETS; this makes integration of these models into large telescope or satellite models possible.
Next Generation Lightweight Mirror Modeling Software
NASA Technical Reports Server (NTRS)
Arnold, William R., Sr.; Fitzgerald, Mathew; Rosa, Rubin Jaca; Stahl, H. Philip
2013-01-01
Advances in manufacturing techniques for lightweight mirrors, such as EXELSIS deep-core low-temperature fusion, Corning's continued improvements in the frit bonding process, the ability to cast large complex designs, and water-jet and conventional diamond machining of glasses and ceramics, have created the need for more efficient means of generating finite element models of these structures. Traditional methods of assembling models of 400,000+ elements can take weeks of effort, severely limiting the range of possible optimization variables. This paper introduces model generation software developed under NASA sponsorship for the design of both terrestrial and space-based mirrors. The software accommodates any current mirror manufacturing technique, single substrates, and multiple arrays of substrates, and can merge submodels into a single large model. The modeler generates both mirror and suspension-system elements; suspensions can be created either for each individual petal or for the whole mirror. A typical model generation of 250,000 nodes and 450,000 elements takes only 5-10 minutes, much of that being variable input time. The program can create input decks for ANSYS, ABAQUS, and NASTRAN. An archive/retrieval system permits complete trade studies, varying cell size, depth, petal size, and suspension geometry, with the ability to recall a particular set of parameters and make small or large changes with ease. The input decks created by the modeler are text files that can be modified by any editor; all key shell-thickness parameters are accessible, and comments in the deck identify which groups of elements are associated with these parameters. This again makes optimization easier. With ANSYS decks, the nodes representing support attachments are grouped into components; in ABAQUS these are SETS, and in NASTRAN, GRIDPOINT SETS. This makes integration of these models into large telescope or satellite models easier.
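To make the deck-format remark concrete, here is a minimal, hypothetical sketch (not the NASA modeler's actual code) of how a group of support-attachment node IDs could be emitted in each solver's native grouping idiom: an ANSYS component, an ABAQUS node set, and a NASTRAN case-control SET. The function and group names are invented for illustration.

```python
# Hypothetical writer for solver-specific node groups; IDs and names are examples.
def write_node_group(solver: str, name: str, nodes: list) -> str:
    ids = ",".join(str(n) for n in sorted(nodes))
    if solver == "ANSYS":    # APDL: select the nodes, then name them as a component
        sel = "\n".join(f"NSEL,A,NODE,,{n}" for n in sorted(nodes))
        return f"NSEL,NONE\n{sel}\nCM,{name},NODE\nALLSEL\n"
    if solver == "ABAQUS":   # keyword input: *Nset card followed by the ID list
        return f"*Nset, nset={name}\n{ids}\n"
    if solver == "NASTRAN":  # case-control SET card (set number arbitrary here)
        return f"SET 101 = {ids}  $ {name}\n"
    raise ValueError(f"unknown solver: {solver}")

print(write_node_group("ABAQUS", "SUPPORT_PADS", [103, 101, 102]))
```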
Unified Least Squares Methods for the Evaluation of Diagnostic Tests With the Gold Standard
Tang, Liansheng Larry; Yuan, Ao; Collins, John; Che, Xuan; Chan, Leighton
2017-01-01
The article proposes a unified least squares method to estimate the receiver operating characteristic (ROC) parameters for continuous and ordinal diagnostic tests, such as cancer biomarkers. The method is based on a linear model framework using the empirically estimated sensitivities and specificities as input “data.” It gives consistent estimates for regression and accuracy parameters when the underlying continuous test results are normally distributed after some monotonic transformation. The key difference between the proposed method and the method of Tang and Zhou lies in the response variable. The response variable in the latter is the transformed empirical ROC curve at different thresholds. It takes on many values for continuous test results, but few values for ordinal test results. The limited number of values for the response variable makes it impractical for ordinal data. However, the response variable in the proposed method takes on many more distinct values, so the method yields valid estimates for ordinal data. Extensive simulation studies are conducted to investigate and compare the finite sample performance of the proposed method with an existing method, and the method is then used to analyze two real cancer diagnostic examples as an illustration. PMID:28469385
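As a rough illustration of the linear-model idea (a sketch of the general approach, not the authors' exact estimator): under a binormal ROC model, the probit of the empirical sensitivities is linear in the probit of the empirical false-positive rates, so ordinary least squares recovers the ROC intercept and slope. The operating points below are invented.

```python
import numpy as np
from scipy.stats import norm

fpr = np.array([0.05, 0.10, 0.20, 0.40, 0.60])   # 1 - specificity at thresholds
tpr = np.array([0.35, 0.55, 0.70, 0.88, 0.95])   # empirical sensitivities

# Binormal model: probit(TPR) = a + b * probit(FPR); fit (a, b) by least squares.
X = np.column_stack([np.ones_like(fpr), norm.ppf(fpr)])
a, b = np.linalg.lstsq(X, norm.ppf(tpr), rcond=None)[0]

auc = norm.cdf(a / np.sqrt(1.0 + b**2))          # binormal AUC formula
print(f"a={a:.3f}, b={b:.3f}, AUC={auc:.3f}")
```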
NASA Astrophysics Data System (ADS)
Li, Xianye; Meng, Xiangfeng; Wang, Yurong; Yang, Xiulun; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi
2017-09-01
A multiple-image encryption method is proposed that is based on row-scanning compressive ghost imaging, (t, n) threshold secret sharing, and phase retrieval in the Fresnel domain. In the encryption process, after wavelet transform and Arnold transform of the target image, the ciphertext matrix is first detected using a bucket detector. Based on a (t, n) threshold secret sharing algorithm, the measurement key used in the row-scanning compressive ghost imaging is decomposed and shared into two pairs of sub-keys, which are then reconstructed using two phase-only mask (POM) keys with fixed pixel values, placed in the input plane and transform plane 2 of the phase retrieval scheme, respectively; the other POM key, in transform plane 1, is generated and updated by the iterative encoding of each plaintext image. In each iteration, the target image acts as the input amplitude constraint in the input plane. During decryption, each plaintext image can be successfully decrypted, provided all the correct keys are available, by measurement key regeneration, compression algorithm reconstruction, inverse wavelet transformation, and Fresnel transformation. Theoretical analysis and numerical simulations both verify the feasibility of the proposed method.
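The (t, n) threshold step can be illustrated independently of the optics. Below is a minimal Shamir-style secret-sharing sketch over a prime field, standing in for the decomposition of a scalar measurement key into n shares of which any t reconstruct it; the modulus and key value are illustrative, and the paper's actual sharing algorithm may differ.

```python
import random

P = 2**127 - 1  # a Mersenne prime used as the field modulus

def split(secret: int, t: int, n: int):
    # Random degree-(t-1) polynomial with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    poly = lambda x: sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of 5 shares suffice
```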
ERIC Educational Resources Information Center
Neuman, Susan B.; Koskinen, Patricia
1992-01-01
Analyzes whether comprehensible input via captioned television influences acquisition of science vocabulary and concepts using 129 bilingual seventh and eighth graders. Finds that comprehensible input is a key ingredient in language acquisition and reading development. (MG)
Statistics of optimal information flow in ensembles of regulatory motifs
NASA Astrophysics Data System (ADS)
Crisanti, Andrea; De Martino, Andrea; Fiorentino, Jonathan
2018-02-01
Genetic regulatory circuits universally cope with different sources of noise that limit their ability to coordinate input and output signals. In many cases, optimal regulatory performance can be thought to correspond to configurations of variables and parameters that maximize the mutual information between inputs and outputs. Since the mid-2000s, such optima have been well characterized in several biologically relevant cases. Here we use methods of statistical field theory to calculate the statistics of the maximal mutual information (the "capacity") achievable by tuning the input variable only in an ensemble of regulatory motifs, such that a single controller regulates N targets. Assuming (i) sufficiently large N , (ii) quenched random kinetic parameters, and (iii) small noise affecting the input-output channels, we can accurately reproduce numerical simulations both for the mean capacity and for the whole distribution. Our results provide insight into the inherent variability in effectiveness occurring in regulatory systems with heterogeneous kinetic parameters.
Missing pulse detector for a variable frequency source
Ingram, Charles B.; Lawhorn, John H.
1979-01-01
A missing pulse detector is provided which has the capability of monitoring a varying frequency pulse source to detect the loss of a single pulse or total loss of signal from the source. A frequency-to-current converter is used to program the output pulse width of a variable period retriggerable one-shot to maintain a pulse width slightly longer than one-half the present monitored pulse period. The retriggerable one-shot is triggered at twice the input pulse rate by employing a frequency doubler circuit connected between the one-shot input and the variable frequency source being monitored. The one-shot remains in the triggered or unstable state under normal conditions even though the source period is varying. A loss of an input pulse or single period of a fluctuating signal input will cause the one-shot to revert to its stable state, changing the output signal level to indicate a missing pulse or signal.
Input and language development in bilingually developing children.
Hoff, Erika; Core, Cynthia
2013-11-01
Language skills in young bilingual children are highly varied as a result of the variability in their language experiences, making it difficult for speech-language pathologists to differentiate language disorder from language difference in bilingual children. Understanding the sources of variability in bilingual contexts and the resulting variability in children's skills will help improve language assessment practices by speech-language pathologists. In this article, we review literature on bilingual first language development for children under 5 years of age. We describe the rate of development in single and total language growth, we describe effects of quantity of input and quality of input on growth, and we describe effects of family composition on language input and language growth in bilingual children. We provide recommendations for language assessment of young bilingual children and consider implications for optimizing children's dual language development.
NASA Astrophysics Data System (ADS)
Kondapalli, S. P.
2017-12-01
In the present work, pulsed-current microplasma arc welding is carried out on AISI 321 austenitic stainless steel of 0.3 mm thickness. Peak current, base current, pulse rate, and pulse width are chosen as the input variables, whereas grain size and hardness are considered as output responses. The response surface method is adopted using a Box-Behnken design, and in total 27 experiments are performed. Empirical relations between the input variables and the output responses are developed using statistical software, and analysis of variance (ANOVA) at the 95% confidence level is used to check their adequacy. The main effects and interaction effects of the input variables on the output responses are also studied.
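For reference, the 27-run count follows directly from the Box-Behnken construction for four factors: 6 factor pairs x 4 corner combinations = 24 edge-midpoint runs, plus 3 center points. A small sketch (factor names taken from the abstract, coded levels -1/0/+1):

```python
from itertools import combinations, product
import numpy as np

factors = ["peak_current", "base_current", "pulse_rate", "pulse_width"]
runs = []
for i, j in combinations(range(4), 2):        # every pair of factors
    for a, b in product((-1, 1), repeat=2):   # +/-1 on the pair, 0 elsewhere
        row = [0, 0, 0, 0]
        row[i], row[j] = a, b
        runs.append(row)
runs += [[0, 0, 0, 0]] * 3                    # center points
design = np.array(runs)
print(design.shape)                           # (27, 4)
```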
Statistical properties of superimposed stationary spike trains.
Deger, Moritz; Helias, Moritz; Boucsein, Clemens; Rotter, Stefan
2012-06-01
The Poisson process is an often employed model for the activity of neuronal populations. It is known, though, that superpositions of realistic, non-Poisson spike trains are not in general Poisson processes, not even for large numbers of superimposed processes. Here we construct superimposed spike trains from intracellular in vivo recordings from rat neocortex neurons and compare their statistics to specific point process models. The constructed superimposed spike trains reveal strong deviations from the Poisson model. We find that superpositions of model spike trains that take the effective refractoriness of the neurons into account yield a much better description. A minimal model of this kind is the Poisson process with dead-time (PPD). For this process, and for superpositions thereof, we obtain analytical expressions for some second-order statistical quantities, such as the count variability, inter-spike interval (ISI) variability, and ISI correlations, and demonstrate the match with the in vivo data. We conclude that effective refractoriness is the key property that shapes the statistical properties of the superposition spike trains. We present new, efficient algorithms to generate superpositions of PPDs and of gamma processes that can be used to provide more realistic background input in simulations of networks of spiking neurons. Using these generators, we show in simulations that neurons which receive superimposed spike trains as input are highly sensitive to the statistical effects induced by neuronal refractoriness.
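A minimal sketch of the PPD construction described above: each train's inter-spike intervals are a dead time plus an exponential draw, and the pooled superposition's ISI statistics can then be compared against the Poisson value of 1. Rates, dead time, and train count are invented.

```python
import numpy as np
rng = np.random.default_rng(0)

def ppd_train(rate, dead_time, t_max):
    # Overall rate r implies the exponential part has intensity r / (1 - r*d).
    lam = rate / (1.0 - rate * dead_time)
    isis = dead_time + rng.exponential(1.0 / lam, size=int(2 * rate * t_max))
    spikes = np.cumsum(isis)
    return spikes[spikes < t_max]

trains = [ppd_train(rate=10.0, dead_time=0.005, t_max=100.0) for _ in range(50)]
pooled = np.sort(np.concatenate(trains))       # superimposed spike train
isi = np.diff(pooled)
print("CV of pooled ISIs:", isi.std() / isi.mean())  # compare with 1.0 (Poisson)
```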
NASA Astrophysics Data System (ADS)
Olgin, J. G.; Pennington, D. D.; Webb, N.
2017-12-01
A variety of models have been developed to better understand dust emissions, from the initial erosive event to entrainment to transport through the atmosphere. Many of these models have been used to analyze dust emissions by representing atmospheric and surface variables, such as wind and soil moisture respectively, in a numerical model to determine the resulting dust emissions. Three factors are particularly important in modeling these variables: 1) the friction velocity threshold based on wind interactions with dust, 2) the horizontal flux (saltation) of dust, and 3) the vertical dust flux. While all of the existing models incorporate these processes, additional improvements are needed to yield results reflective of recorded data from dust events. Our investigation focuses on explicitly identifying specific land cover (LC) elements unique to the Chihuahuan Desert that contribute to aerodynamic roughness length (Z0), a main control on dust emission and key to producing results more representative of known dust events in semi-arid regions. These elements will be formulated into computer model inputs by conducting analyses (e.g., geostatistics) on field and satellite data to ascertain the core LC characteristics responsible for affecting wind velocities (e.g., wind shadowing effects) that are conducive to dust emissions. These inputs will be used in a modified program built on the Weather Research and Forecasting model (WRF) to replicate previously recorded dust events. Results from this study will be presented here.
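As background to the three factors listed above, a common way to couple the first two is a threshold-gated saltation flux; the sketch below uses a simplified Owen/White-type form with placeholder coefficients, not the specific parameterization of this study.

```python
import numpy as np

RHO_AIR, G = 1.2, 9.81              # kg m^-3, m s^-2

def saltation_flux(u_star, u_star_t, c=2.6):
    """Horizontal (saltation) flux Q in kg m^-1 s^-1; zero below threshold."""
    u_star = np.asarray(u_star, dtype=float)
    active = u_star > u_star_t      # friction velocity threshold gate
    q = c * RHO_AIR / G * u_star**3 * (1.0 - u_star_t**2 / u_star**2)
    return np.where(active, q, 0.0)

print(saltation_flux([0.3, 0.4, 0.6], u_star_t=0.25))
```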
A low power low noise analog front end for portable healthcare system
NASA Astrophysics Data System (ADS)
Yanchao, Wang; Keren, Ke; Wenhui, Qin; Yajie, Qin; Ting, Yi; Zhiliang, Hong
2015-10-01
The presented analog front end (AFE), used to process human bio-signals, consists of a chopping instrumentation amplifier (IA), a chopping-spike filter, and a programmable gain and bandwidth amplifier. The capacitor-coupled input of the AFE rejects the DC electrode offset. The power consumption of the current-feedback IA is reduced by adopting a capacitor divider in the input and feedback networks. In addition, the IA's input thermal noise is decreased by utilizing complementary CMOS input pairs, which offer higher transconductance. Fabricated in Global Foundry 0.35 μm CMOS technology, the chip consumes 3.96 μA from a 3.3 V supply. The measured input noise is 0.85 μVrms (0.5-100 Hz), and the achieved noise efficiency factor is 6.48. Project supported by the Science and Technology Commission of Shanghai Municipality (No. 13511501100), the State Key Laboratory Project of China (No. 11MS002), and the State Key Laboratory of ASIC & System, Fudan University.
Distinct developmental profiles in typical speech acquisition
Campbell, Thomas F.; Shriberg, Lawrence D.; Green, Jordan R.; Abdi, Hervé; Rusiewicz, Heather Leavy; Venkatesh, Lakshmi; Moore, Christopher A.
2012-01-01
Three- to five-year-old children produce speech that is characterized by a high level of variability within and across individuals. This variability, which is manifest in speech movements, acoustics, and overt behaviors, can be input to subgroup discovery methods to identify cohesive subgroups of speakers or to reveal distinct developmental pathways or profiles. This investigation characterized three distinct groups of typically developing children and provided normative benchmarks for speech development. These speech development profiles, identified among 63 typically developing preschool-aged speakers (ages 36-59 mo), were derived from the children's performance on multiple measures. The profiles were obtained by submitting to a k-means cluster analysis 72 measures spanning three levels of speech analysis: behavioral (e.g., task accuracy, percentage of consonants correct), acoustic (e.g., syllable duration, syllable stress), and kinematic (e.g., variability of movements of the upper lip, lower lip, and jaw). Two of the discovered group profiles were distinguished by measures of variability but not by phonemic accuracy; the third group of children was characterized by relatively low phonemic accuracy but not by an increase in measures of variability. Analyses revealed that of the original 72 measures, 8 key measures were sufficient to best distinguish the 3 profile groups. PMID:22357794
NASA Astrophysics Data System (ADS)
Bondareva, A. P.; Cheremkhin, P. A.; Evtikhiev, N. N.; Krasnov, V. V.; Starikov, S. N.
A scheme for optical image encryption with digital information input and a dynamic encryption key, based on two liquid crystal spatial light modulators and operating with spatially incoherent monochromatic illumination, is experimentally implemented. Results of experiments on optical encryption and numerical decryption of images are presented. A satisfactory decryption error of 0.20-0.27 is achieved.
Ventricular repolarization variability for hypoglycemia detection.
Ling, Steve; Nguyen, H T
2011-01-01
Hypoglycemia is the most acute and common complication of Type 1 diabetes and is a limiting factor in the glycemic management of diabetes. In this paper, two main contributions are presented: first, ventricular repolarization variabilities are introduced for hypoglycemia detection; second, a swarm-based support vector machine (SVM) algorithm with the repolarization variabilities as inputs is developed to detect hypoglycemia. Using the algorithm with several repolarization variabilities as inputs, the best hypoglycemia detection performance achieves sensitivity and specificity of 82.14% and 60.19%, respectively.
The Role of Learner and Input Variables in Learning Inflectional Morphology
ERIC Educational Resources Information Center
Brooks, Patricia J.; Kempe, Vera; Sionov, Ariel
2006-01-01
To examine effects of input and learner characteristics on morphology acquisition, 60 adult English speakers learned to inflect masculine and feminine Russian nouns in nominative, dative, and genitive cases. By varying training vocabulary size (i.e., type variability), holding constant the number of learning trials, we tested whether learners…
Wideband low-noise variable-gain BiCMOS transimpedance amplifier
NASA Astrophysics Data System (ADS)
Meyer, Robert G.; Mack, William D.
1994-06-01
A new monolithic variable-gain transimpedance amplifier is described. The circuit is realized in BiCMOS technology and has a measured gain of 98 kΩ, a bandwidth of 128 MHz, an input noise current spectral density of 1.17 pA/√Hz, and an input signal-current handling capability of 3 mA.
Integrated controls design optimization
Lou, Xinsheng; Neuschaefer, Carl H.
2015-09-01
A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225), and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant; others are related to the cost of plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.
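Functionally, the patent's arrangement reduces to maximizing income minus cost over the operating parameters. A toy sketch with invented income and cost curves (scipy's bounded scalar optimizer stands in for the patented optimizer):

```python
from scipy.optimize import minimize_scalar

def income(load):                  # income model from plant input parameters
    return 50.0 * load             # $/h, illustrative tariff

def cost(load):                    # cost model from plant output parameters
    return 8.0 * load**2 + 5.0     # $/h, illustrative fuel/O&M curve

# Maximize profit = income - cost by minimizing its negative over the load range.
res = minimize_scalar(lambda u: -(income(u) - cost(u)),
                      bounds=(0.0, 10.0), method="bounded")
print(f"optimized operating parameter: {res.x:.3f}")
```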
NASA Astrophysics Data System (ADS)
Creaco, E.; Berardi, L.; Sun, Siao; Giustolisi, O.; Savic, D.
2016-04-01
The growing availability of field data, from information and communication technologies (ICTs) in "smart" urban infrastructures, allows data modeling to understand complex phenomena and to support management decisions. Among the analyzed phenomena, those related to storm water quality modeling have recently been gaining interest in the scientific literature. Nonetheless, the large amount of available data poses the problem of selecting relevant variables to describe a phenomenon and enable robust data modeling. This paper presents a procedure for the selection of relevant input variables using the multiobjective evolutionary polynomial regression (EPR-MOGA) paradigm. The procedure is based on scrutinizing the explanatory variables that appear inside the set of EPR-MOGA symbolic model expressions of increasing complexity and goodness of fit to target output. The strategy also enables the selection to be validated by engineering judgement. In such context, the multiple case study extension of EPR-MOGA, called MCS-EPR-MOGA, is adopted. The application of the proposed procedure to modeling storm water quality parameters in two French catchments shows that it was able to significantly reduce the number of explanatory variables for successive analyses. Finally, the EPR-MOGA models obtained after the input selection are compared with those obtained by using the same technique without benefitting from input selection and with those obtained in previous works where other data-modeling techniques were used on the same data. The comparison highlights the effectiveness of both EPR-MOGA and the input selection procedure.
Estimating Ecosystem Carbon Stock Change in the Conterminous United States from 1971 to 2010
NASA Astrophysics Data System (ADS)
Liu, J.; Sleeter, B. M.; Zhu, Z.; Loveland, T. R.; Sohl, T.; Howard, S. M.; Hawbaker, T. J.; Liu, S.; Heath, L. S.; Cochrane, M. A.; Key, C. H.; Jiang, H.; Price, D. T.; Chen, J. M.
2015-12-01
There is significant geographic variability in U.S. ecosystem carbon sequestration due to natural and human environmental conditions. Climate change, natural disturbance and human land use are the major driving forces that can alter local and regional carbon sequestration rates. In this study, a comprehensive environmental input dataset (1-km resolution) was developed and used in the process-based Integrated Biosphere Simulator (IBIS) to quantify the U.S. carbon stock changes from 1971-2010, which potentially forms a baseline for future U.S. carbon scenarios. The key environmental data sources include land cover change information from more than 2,600 sample blocks across U.S. (10-km by 10-km in size, 60-m resolution, 1973-2000), wildland fire scar and burn severity information (30-m resolution, 1984-2010), vegetation canopy percentage and live biomass level (30-m resolution, ~2000), spatially heterogeneous atmospheric carbon dioxide and nitrogen deposition (~50-km resolution, 2003-2009), and newly available climate (4-km resolution, 1895-2010) and soil variables (1-km resolution, ~2000). The IBIS simulated the effects of atmospheric CO2 fertilization, nitrogen deposition, climate change, fire, logging, and deforestation/devegetation on ecosystem carbon changes. Multiple comparable simulations were implemented to quantify the contributions of key environmental drivers.
Singh, Kavita; Haney, Erica; Olorunsaiye, Comfort
2013-07-01
Globally, 2.5 million children under five die from vaccine-preventable diseases, and in Nigeria only 23% of children ages 12-23 months are fully immunized. The international community is promoting gender equality as a means to improve the health and well-being of women and their children. This paper examines whether measures of gender equality (autonomy and individual attitudes towards gender norms) are associated with a child being fully immunized in Nigeria. Data from currently married women with a child aged 12-23 months from the 2008 Nigeria Demographic and Health Survey were used to study the influence of autonomy and gender attitudes on whether or not a child is fully immunized. Multivariate logistic regression was used, and several key socioeconomic variables were controlled for, including wealth and education, which are considered key inputs into gender equality. Findings indicated that household decision-making and attitudes towards wife beating were significantly associated with a child being fully immunized after controlling for socioeconomic variables. Ethnicity, wealth and education were also significant factors. The programmatic and policy implications indicate the potential for the promotion of gender equality as a means to improve child health. Gender equality can be seen as a means to enable women to access life-saving services for their children.
Hartmann, Christoph; Lazar, Andreea; Nessler, Bernhard; Triesch, Jochen
2015-01-01
Even in the absence of sensory stimulation the brain is spontaneously active. This background “noise” seems to be the dominant cause of the notoriously high trial-to-trial variability of neural recordings. Recent experimental observations have extended our knowledge of trial-to-trial variability and spontaneous activity in several directions: 1. Trial-to-trial variability systematically decreases following the onset of a sensory stimulus or the start of a motor act. 2. Spontaneous activity states in sensory cortex outline the region of evoked sensory responses. 3. Across development, spontaneous activity aligns itself with typical evoked activity patterns. 4. The spontaneous brain activity prior to the presentation of an ambiguous stimulus predicts how the stimulus will be interpreted. At present it is unclear how these observations relate to each other and how they arise in cortical circuits. Here we demonstrate that all of these phenomena can be accounted for by a deterministic self-organizing recurrent neural network model (SORN), which learns a predictive model of its sensory environment. The SORN comprises recurrently coupled populations of excitatory and inhibitory threshold units and learns via a combination of spike-timing dependent plasticity (STDP) and homeostatic plasticity mechanisms. Similar to balanced network architectures, units in the network show irregular activity and variable responses to inputs. Additionally, however, the SORN exhibits sequence learning abilities matching recent findings from visual cortex and the network’s spontaneous activity reproduces the experimental findings mentioned above. Intriguingly, the network’s behaviour is reminiscent of sampling-based probabilistic inference, suggesting that correlates of sampling-based inference can develop from the interaction of STDP and homeostasis in deterministic networks. We conclude that key observations on spontaneous brain activity and the variability of neural responses can be accounted for by a simple deterministic recurrent neural network which learns a predictive model of its sensory environment via a combination of generic neural plasticity mechanisms. PMID:26714277
Partial Granger causality: eliminating exogenous inputs and latent variables.
Guo, Shuixia; Seth, Anil K; Kendrick, Keith M; Zhou, Cong; Feng, Jianfeng
2008-07-15
Attempts to identify causal interactions in multivariable biological time series (e.g., gene data, protein data, physiological data) can be undermined by the confounding influence of environmental (exogenous) inputs. Compounding this problem, we are commonly able to record only a subset of all related variables in a system. These recorded variables are likely to be influenced by unrecorded (latent) variables. To address this problem, we introduce a novel variant of a widely used statistical measure of causality, Granger causality, that is inspired by the definition of partial correlation. Our 'partial Granger causality' measure is extensively tested with toy models, both linear and nonlinear, and is applied to experimental data: in vivo multielectrode array (MEA) local field potentials (LFPs) recorded from the inferotemporal cortex of sheep. Our results demonstrate that partial Granger causality can reveal the underlying interactions among elements in a network in the presence of exogenous inputs and latent variables in many cases where the existing conditional Granger causality fails.
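In outline, the partial Granger causality statistic compares the conditional variance of the prediction residuals, with the common exogenous/latent influence (carried by the conditioning block) partialled out, between the restricted and full models. A numpy sketch of that final ratio, assuming the 2x2 residual covariance blocks have already been estimated from fitted VAR models (the numbers below are invented):

```python
import numpy as np

def partial_gc(S, Sigma):
    """S, Sigma: 2x2 residual covariance blocks [[xx, xz], [zx, zz]] from the
    restricted (candidate cause excluded) and full (included) models."""
    cond = lambda M: M[0, 0] - M[0, 1] * M[1, 0] / M[1, 1]  # partial out z
    return np.log(cond(S) / cond(Sigma))

S     = np.array([[1.00, 0.30], [0.30, 0.80]])   # restricted-model residuals
Sigma = np.array([[0.60, 0.25], [0.25, 0.80]])   # full-model residuals
print(f"partial Granger causality: {partial_gc(S, Sigma):.3f}")  # > 0 => influence
```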
NASA Technical Reports Server (NTRS)
Briggs, Maxwell; Schifer, Nicholas
2011-01-01
Test hardware was used to validate net heat input prediction models. The problem: net heat input cannot be measured directly during operation, yet it is a key parameter needed to predict convertor performance. Efficiency = electrical power output (measured) divided by net heat input (calculated). Efficiency is used to compare convertor designs and trade technology advantages for mission planning.
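A trivial worked instance of the stated relation, with invented numbers:

```python
electrical_power_out_W = 80.0        # measured at the convertor terminals
net_heat_input_W = 250.0             # calculated from the validated model
efficiency = electrical_power_out_W / net_heat_input_W
print(f"convertor efficiency: {efficiency:.1%}")   # 32.0%
```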
Decina, Stephen M; Templer, Pamela H; Hutyra, Lucy R; Gately, Conor K; Rao, Preeti
2017-12-31
Atmospheric deposition of nitrogen (N) is a major input of N to the biosphere and is elevated beyond preindustrial levels throughout many ecosystems. Deposition monitoring networks in the United States generally avoid urban areas in order to capture regional patterns of N deposition, and studies measuring N deposition in cities usually include only one or two urban sites in an urban-rural comparison or as an anchor along an urban-to-rural gradient. Describing patterns and drivers of atmospheric N inputs is crucial for understanding the effects of N deposition; however, little is known about the variability and drivers of atmospheric N inputs or their effects on soil biogeochemistry within urban ecosystems. We measured rates of canopy throughfall N as a measure of atmospheric N inputs, as well as soil net N mineralization and nitrification, soil solution N, and soil respiration at 15 sites across the greater Boston, Massachusetts area. Rates of throughfall N are 8.70 ± 0.68 kg N ha^-1 yr^-1, vary 3.5-fold across sites, and are positively correlated with rates of local vehicle N emissions. Ammonium (NH4+) composes 69.9 ± 2.2% of inorganic throughfall N inputs and is highest in late spring, suggesting a contribution from local fertilizer inputs. Soil solution NO3- is positively correlated with throughfall NO3- inputs. In contrast, soil solution NH4+, net N mineralization, nitrification, and soil respiration are not correlated with rates of throughfall N inputs. Rather, these processes are correlated with soil properties such as soil organic matter. Our results demonstrate high variability in rates of urban throughfall N inputs, correlation of throughfall N inputs with local vehicle N emissions, and a decoupling of urban soil biogeochemistry and throughfall N inputs.
Dynamic modal estimation using instrumental variables
NASA Technical Reports Server (NTRS)
Salzwedel, H.
1980-01-01
A method to determine the modes of dynamical systems is described. The inputs and outputs of a system are Fourier transformed and averaged to reduce the error level. An instrumental variable method that estimates modal parameters from multiple correlations between responses of single input, multiple output systems is applied to estimate aircraft, spacecraft, and off-shore platform modal parameters.
Urban vs. Rural CLIL: An Analysis of Input-Related Variables, Motivation and Language Attainment
ERIC Educational Resources Information Center
Alejo, Rafael; Piquer-Píriz, Ana
2016-01-01
The present article carries out an in-depth analysis of the differences in motivation, input-related variables and linguistic attainment of the students at two content and language integrated learning (CLIL) schools operating within the same institutional and educational context, the Spanish region of Extremadura, and differing only in terms of…
Variable Input and the Acquisition of Plural Morphology
ERIC Educational Resources Information Center
Miller, Karen L.; Schmitt, Cristina
2012-01-01
The present article examines the effect of variable input on the acquisition of plural morphology in two varieties of Spanish: Chilean Spanish, where the plural marker is sometimes omitted due to a phonological process of syllable final /s/ lenition, and Mexican Spanish (of Mexico City), with no such lenition process. The goal of the study is to…
Precision digital pulse phase generator
McEwan, T.E.
1996-10-08
A timing generator comprises a crystal oscillator connected to provide an output reference pulse. A resistor-capacitor combination is connected to provide a variable-delay output pulse from an input connected to the crystal oscillator. A phase monitor is connected to provide duty-cycle representations of the reference and variable-delay output pulse phase. An operational amplifier drives a control voltage to the resistor-capacitor combination according to currents integrated from the phase monitor and injected into summing junctions. A digital-to-analog converter injects a control current into the summing junctions according to an input digital control code. A servo equilibrium results that provides a phase delay of the variable-delay output pulse to the output reference pulse that linearly depends on the input digital control code. 2 figs.
Precision digital pulse phase generator
McEwan, Thomas E.
1996-01-01
A timing generator comprises a crystal oscillator connected to provide an output reference pulse. A resistor-capacitor combination is connected to provide a variable-delay output pulse from an input connected to the crystal oscillator. A phase monitor is connected to provide duty-cycle representations of the reference and variable-delay output pulse phase. An operational amplifier drives a control voltage to the resistor-capacitor combination according to currents integrated from the phase monitor and injected into summing junctions. A digital-to-analog converter injects a control current into the summing junctions according to an input digital control code. A servo equilibrium results that provides a phase delay of the variable-delay output pulse to the output reference pulse that linearly depends on the input digital control code.
Key drivers of precipitation isotopes in Windhoek, Namibia (2012-2016)
NASA Astrophysics Data System (ADS)
Kaseke, K. F.; Wang, L.; Wanke, H.
2017-12-01
Southern African climate is characterized by large variability, with precipitation model estimates varying by as much as 70% during summer. This difference between model estimates is partly because most models associate precipitation over Southern Africa with moisture inputs from the Indian Ocean while excluding inputs from the Atlantic Ocean. However, growing evidence suggests that the Atlantic Ocean may also contribute significant amounts of moisture to the region. This four-year (2012-2016) study investigates the isotopic composition (δ18O, δ2H and δ17O) of event-scale precipitation, the key drivers of isotope variations, and the origins of precipitation experienced in Windhoek, Namibia. Results indicate large storm-to-storm isotopic variability in δ18O (25‰), δ2H (180‰) and δ17O (13‰) over the study period. Univariate analysis showed significant correlations between event precipitation isotopes and local meteorological parameters: lifted condensation level, relative humidity (RH), precipitation amount, average wind speed, and surface and air temperature (p < 0.05). The number of significant correlations between local meteorological parameters and monthly isotopes was much lower, suggesting loss of information through data aggregation. Nonetheless, the most significant isotope driver at both event and monthly scales was RH, consistent with the semi-arid classification of the site. Multiple linear regression analysis suggested RH, precipitation amount and air temperature were the most significant local drivers of precipitation isotopes, accounting for about 50% of the variation, implying that about 50% could be attributed to source origins. HYSPLIT trajectories indicated that 78% of precipitation originated from the Indian Ocean while 21% originated from the Atlantic Ocean. Given that three of the four study years were droughts while two of the three drought years were El Niño related, our data also suggest that δ'17O-δ'18O could be a useful tool to differentiate local vs. synoptic (El Niño) droughts.
Regenerative braking device with rotationally mounted energy storage means
Hoppie, Lyle O.
1982-03-16
A regenerative braking device for an automotive vehicle includes an energy storage assembly (12) having a plurality of rubber rollers (26, 28) mounted for rotation between an input shaft (30) and an output shaft (32), clutches (50, 56) and brakes (52, 58) associated with each shaft, and a continuously variable transmission (22) connectable to a vehicle drivetrain and to the input and output shafts by the respective clutches. In a second embodiment the clutches and brakes are dispensed with and the variable ratio transmission is connected directly across the input and output shafts. In both embodiments the rubber rollers are torsionally stressed to accumulate energy from the vehicle when the input shaft rotates faster or relative to the output shaft and are torsionally relaxed to deliver energy to the vehicle when the output shaft rotates faster or relative to the input shaft.
NASA Technical Reports Server (NTRS)
Carlson, C. R.
1981-01-01
The user documentation of the SYSGEN model and its links with other simulations is described. SYSGEN is a production-costing and reliability model of electric utility systems. Hydroelectric, storage, and time-dependent generating units are modeled in addition to conventional generating plants. Input variables, modeling options, output variables, and report formats are explained. SYSGEN can also be run interactively by using a program called FEPS (Front End Program for SYSGEN). A format for SYSGEN input variables designed for use with FEPS is presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodriguez, Sal
2017-08-24
The code (a computer program written as a Matlab script) uses a unique set of n independent equations to solve for n turbulence variables. The code requires the input of a characteristic dimension, a characteristic fluid velocity, the fluid dynamic viscosity, and the fluid density. Most importantly, the code estimates the size of three key turbulent eddies: Kolmogorov, Taylor, and integral. Based on the eddy sizes, dimple dimensions are prescribed such that the key eddies (principally Taylor, and sometimes Kolmogorov) can be generated by the dimple rim and flow unimpeded through the dimple's concave cavity. It is hypothesized that turbulent eddies are generated by the dimple rim at the dimple-surface interface. The newly generated eddies in turn entrain the movement of surrounding regions of fluid, creating more mixing. The eddies also generate lift near the wall surrounding the dimple as they accelerate and reduce pressure in the regions near and at the dimple cavity, thereby minimizing the fluid drag.
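A rough sketch of the kind of eddy-scale estimates described, using classical isotropic-turbulence scalings and a common duct-flow estimate for the integral scale; the OSTI script's exact correlations are not reproduced here, so treat these as order-of-magnitude only.

```python
def eddy_scales(L, U, mu, rho):
    """Estimate integral, Taylor, and Kolmogorov eddy sizes from the four
    inputs named in the abstract (dimension, velocity, viscosity, density)."""
    nu = mu / rho                                # kinematic viscosity, m^2/s
    Re = U * L / nu                              # characteristic Reynolds number
    ell = 0.07 * L                               # integral scale (duct-flow rule of thumb)
    taylor = ell * (10.0 ** 0.5) * Re ** -0.5    # Taylor microscale scaling
    kolmogorov = ell * Re ** -0.75               # Kolmogorov scale scaling
    return Re, ell, taylor, kolmogorov

Re, l0, lt, lk = eddy_scales(L=0.05, U=2.0, mu=1.0e-3, rho=998.0)  # water in a 5 cm duct
print(f"Re={Re:.0f}  integral={l0:.2e} m  Taylor={lt:.2e} m  Kolmogorov={lk:.2e} m")
```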
12 CFR Appendix A to Subpart B of... - Risk-Based Capital Test Methodology and Specifications
Code of Federal Regulations, 2013 CFR
2013-01-01
[Extraction-damaged index fragment; recoverable headings include: Mortgage Amortization Schedule Inputs; Loan Group Inputs for Mortgage Amortization; Multifamily Default and Prepayment Inputs; Default and Prepayment Explanatory Variables; Loan Group Inputs for Gross Loss Severity; Interest Rates Outputs.]
Intelligent Fault Diagnosis of HVCB with Feature Space Optimization-Based Random Forest
Ma, Suliang; Wu, Jianwen; Wang, Yuhao; Jia, Bowen; Jiang, Yuan
2018-01-01
Mechanical faults of high-voltage circuit breakers (HVCBs) inevitably occur over long-term operation, so extracting fault features and identifying the fault type have become key issues for ensuring the security and reliability of the power supply. Based on wavelet packet decomposition technology and the random forest algorithm, an effective identification system was developed in this paper. First, in the feature selection procedure, the wavelet packet time-frequency energy rate (WTFER) was adopted as the input vector for the classifier model, in contrast to the incomplete description given by Shannon entropy. Then, a random forest classifier was used to diagnose the HVCB fault, assess the importance of the feature variables, and optimize the feature space. Finally, the approach was verified on actual HVCB vibration signals covering six typical fault classes. The comparative experimental results show that the classification accuracy of the proposed method reached 93.33% with the original feature space and up to 95.56% with the optimized input feature vector of the classifier. This indicates that the feature optimization procedure is successful and that the proposed diagnosis algorithm has higher efficiency and robustness than traditional methods. PMID:29659548
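A compact sketch of the described pipeline: wavelet-packet band energies normalized to an energy rate (our reading of WTFER) feed a random forest whose feature importances can then guide feature-space optimization. The vibration signals and fault labels below are synthetic stand-ins.

```python
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier

def wtfer(signal, wavelet="db4", level=3):
    # Wavelet-packet decomposition; energy per band / total energy.
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    energies = np.array([np.sum(node.data ** 2)
                         for node in wp.get_level(level, order="natural")])
    return energies / energies.sum()

rng = np.random.default_rng(1)
t = np.arange(1024)
def mock_signal(fault):
    # Faulty class carries an extra low-frequency component.
    return rng.standard_normal(1024) + (3.0 * np.sin(0.3 * t) if fault else 0.0)

X = np.vstack([wtfer(mock_signal(k % 2)) for k in range(60)])
y = np.arange(60) % 2                          # two mock fault classes
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("band importances:", clf.feature_importances_.round(3))
```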
Entangling the Whole by Beam Splitting a Part.
Croal, Callum; Peuntinger, Christian; Chille, Vanessa; Marquardt, Christoph; Leuchs, Gerd; Korolkova, Natalia; Mišta, Ladislav
2015-11-06
A beam splitter is a basic linear optical element appearing in many optics experiments and is frequently used as a continuous-variable entangler transforming a pair of input modes from a separable Gaussian state into an entangled state. However, a beam splitter is a passive operation that can create entanglement from Gaussian states only under certain conditions. One such condition is that the input light is suitably squeezed. We demonstrate, experimentally, that a beam splitter can create entanglement even from modes which do not possess such a squeezing provided that they are correlated to, but not entangled with, a third mode. Specifically, we show that a beam splitter can create three-mode entanglement by acting on two modes of a three-mode fully separable Gaussian state without entangling the two modes themselves. This beam splitter property is a key mechanism behind the performance of the protocol for entanglement distribution by separable states. Moreover, the property also finds application in collaborative quantum dense coding in which decoding of transmitted information is assisted by interference with a mode of the collaborating party.
Uncertainty in Ecohydrological Modeling in an Arid Region Determined with Bayesian Methods
Yang, Junjun; He, Zhibin; Du, Jun; Chen, Longfei; Zhu, Xi
2016-01-01
In arid regions, water resources are a key forcing factor in ecosystem circulation, and soil moisture is the critical link that constrains plant and animal life on the soil surface and underground. Simulation of soil moisture in arid ecosystems is inherently difficult due to high variability. We assessed the applicability of the process-oriented CoupModel for forecasting of soil water relations in arid regions. We used vertical soil moisture profiling for model calibration. We determined that model-structural uncertainty constituted the largest error; the model did not capture the extremes of low soil moisture in the desert-oasis ecotone (DOE), particularly below 40 cm soil depth. Our results showed that total uncertainty in soil moisture prediction was improved when input and output data, parameter value array, and structure errors were characterized explicitly. Bayesian analysis was applied with prior information to reduce uncertainty. The need to provide independent descriptions of uncertainty analysis (UA) in the input and output data was demonstrated. Application of soil moisture simulation in arid regions will be useful for dune-stabilization and revegetation efforts in the DOE. PMID:26963523
Fast implementation of length-adaptive privacy amplification in quantum key distribution
NASA Astrophysics Data System (ADS)
Zhang, Chun-Mei; Li, Mo; Huang, Jing-Zheng; Patcharapong, Treeviriyanupab; Li, Hong-Wei; Li, Fang-Yi; Wang, Chuan; Yin, Zhen-Qiang; Chen, Wei; Keattisak, Sripimanwat; Han, Zhen-Fu
2014-09-01
Post-processing is indispensable in quantum key distribution (QKD), which is aimed at sharing secret keys between two distant parties. It mainly consists of key reconciliation and privacy amplification, which is used for sharing the same keys and for distilling unconditional secret keys. In this paper, we focus on speeding up the privacy amplification process by choosing a simple multiplicative universal class of hash functions. By constructing an optimal multiplication algorithm based on four basic multiplication algorithms, we give a fast software implementation of length-adaptive privacy amplification. “Length-adaptive” indicates that the implementation of privacy amplification automatically adapts to different lengths of input blocks. When the lengths of the input blocks are 1 Mbit and 10 Mbit, the speed of privacy amplification can be as fast as 14.86 Mbps and 10.88 Mbps, respectively. Thus, it is practical for GHz or even higher repetition frequency QKD systems.
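For flavor, here is the textbook multiply-shift member of the multiplicative universal hash families applied to privacy amplification, compressing an n-bit reconciled key to l bits; the paper's optimized modular-multiplication algorithm is not reproduced, and the sizes are illustrative.

```python
import secrets

def privacy_amplify(key_bits: int, n: int, l: int, seed: int) -> int:
    """Hash an n-bit integer key to l bits: h_a(x) = ((a*x) mod 2^n) >> (n-l)."""
    assert 0 < l <= n and seed % 2 == 1    # the multiplier 'a' must be odd
    return ((seed * key_bits) % (1 << n)) >> (n - l)

n, l = 1 << 20, 1 << 19                    # 1 Mbit in, 0.5 Mbit out
a = secrets.randbits(n) | 1                # random odd multiplier (public)
x = secrets.randbits(n)                    # reconciled raw key (secret)
final_key = privacy_amplify(x, n, l, a)
print(final_key.bit_length() <= l)         # True
```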
The Effect of Visual Variability on the Learning of Academic Concepts.
Bourgoyne, Ashley; Alt, Mary
2017-06-10
The purpose of this study was to identify effects of variability of visual input on development of conceptual representations of academic concepts for college-age students with normal language (NL) and those with language-learning disabilities (LLD). Students with NL (n = 11) and LLD (n = 11) participated in a computer-based training for introductory biology course concepts. Participants were trained on half the concepts under a low-variability condition and half under a high-variability condition. Participants completed a posttest in which they were asked to identify and rate the accuracy of novel and trained visual representations of the concepts. We performed separate repeated measures analyses of variance to examine the accuracy of identification and ratings. Participants were equally accurate on trained and novel items in the high-variability condition, but were less accurate on novel items only in the low-variability condition. The LLD group showed the same pattern as the NL group; they were just less accurate. Results indicated that high-variability visual input may facilitate the acquisition of academic concepts in college students with NL and LLD. High-variability visual input may be especially beneficial for generalization to novel representations of concepts. Implicit learning methods may be harnessed by college courses to provide students with basic conceptual knowledge when they are entering courses or beginning new units.
Whiteway, Matthew R; Butts, Daniel A
2017-03-01
The activity of sensory cortical neurons is not only driven by external stimuli but also shaped by other sources of input to the cortex. Unlike external stimuli, these other sources of input are challenging to experimentally control, or even observe, and as a result contribute to variability of neural responses to sensory stimuli. However, such sources of input are likely not "noise" and may play an integral role in sensory cortex function. Here we introduce the rectified latent variable model (RLVM) in order to identify these sources of input using simultaneously recorded cortical neuron populations. The RLVM is novel in that it employs nonnegative (rectified) latent variables and is much less restrictive in the mathematical constraints on solutions because of the use of an autoencoder neural network to initialize model parameters. We show that the RLVM outperforms principal component analysis, factor analysis, and independent component analysis, using simulated data across a range of conditions. We then apply this model to two-photon imaging of hundreds of simultaneously recorded neurons in mouse primary somatosensory cortex during a tactile discrimination task. Across many experiments, the RLVM identifies latent variables related to both the tactile stimulation as well as nonstimulus aspects of the behavioral task, with a majority of activity explained by the latter. These results suggest that properly identifying such latent variables is necessary for a full understanding of sensory cortical function and demonstrate novel methods for leveraging large population recordings to this end. NEW & NOTEWORTHY The rapid development of neural recording technologies presents new opportunities for understanding patterns of activity across neural populations. Here we show how a latent variable model with appropriate nonlinear form can be used to identify sources of input to a neural population and infer their time courses. Furthermore, we demonstrate how these sources are related to behavioral contexts outside of direct experimental control.
Not All Children Agree: Acquisition of Agreement when the Input Is Variable
ERIC Educational Resources Information Center
Miller, Karen
2012-01-01
In this paper we investigate the effect of variable input on the acquisition of grammar. More specifically, we examine the acquisition of the third person singular marker -s on the auxiliary "do" in comprehension and production in two groups of children who are exposed to similar varieties of English but that differ with respect to adult…
Inthachot, Montri; Boonjing, Veera; Intakosum, Sarun
2016-01-01
This study investigated the use of Artificial Neural Network (ANN) and Genetic Algorithm (GA) for prediction of Thailand's SET50 index trend. ANN is a widely accepted machine learning method that uses past data to predict future trend, while GA is an algorithm that can find better subsets of input variables for importing into ANN, hence enabling more accurate prediction by its efficient feature selection. The imported data were chosen technical indicators highly regarded by stock analysts, each represented by 4 input variables that were based on past time spans of 4 different lengths: 3-, 5-, 10-, and 15-day spans before the day of prediction. This import undertaking generated a big set of diverse input variables with an exponentially higher number of possible subsets that GA culled down to a manageable number of more effective ones. SET50 index data of the past 6 years, from 2009 to 2014, were used to evaluate this hybrid intelligence prediction accuracy, and the hybrid's prediction results were found to be more accurate than those made by a method using only one input variable for one fixed length of past time span.
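A compact, simplified sketch of the hybrid idea (truncation selection plus bit-flip mutation rather than a full GA with crossover): bitmask individuals select input columns, and fitness is the cross-validated accuracy of a small neural network. The indicator matrix below is a synthetic stand-in for the SET50 features.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 16))          # 4 indicators x 4 time spans (mock)
y = (X[:, 0] + 0.5 * X[:, 5] + 0.1 * rng.standard_normal(300) > 0).astype(int)

def fitness(mask):
    if not mask.any():
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, 16)).astype(bool)   # bitmask individuals
for _ in range(10):                                    # a few GA generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]            # truncation selection
    children = parents[rng.permutation(10)]
    children = children ^ (rng.random(children.shape) < 0.05)  # bit-flip mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected inputs:", np.flatnonzero(best))
```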
A Predictor Analysis Framework for Surface Radiation Budget Reprocessing Using Design of Experiments
NASA Astrophysics Data System (ADS)
Quigley, Patricia Allison
Earth's Radiation Budget (ERB) is an accounting of all incoming energy from the sun and outgoing energy reflected and radiated to space by earth's surface and atmosphere. The National Aeronautics and Space Administration (NASA)/Global Energy and Water Cycle Experiment (GEWEX) Surface Radiation Budget (SRB) project produces and archives long-term datasets representative of this energy exchange system on a global scale. The data comprise the longwave and shortwave radiative components of the system, are algorithmically derived from satellite and atmospheric assimilation products and acquired atmospheric data, and are stored as 3-hourly, daily, monthly/3-hourly, and monthly averages of 1° x 1° grid cells. Input parameters used by the algorithms are a key source of variability in the resulting output data sets. Sensitivity studies have been conducted to estimate, using linear techniques, the effects this variability has on the output data sets. This entails varying one input parameter at a time while keeping all others constant, or increasing all input parameters by equal random percentages, in effect changing input values for every cell, for every three-hour period, and for every day in each month. This equates to almost 11 million independent changes without ever taking into consideration the interactions or dependencies among the input parameters. A more comprehensive method is proposed here for evaluating the shortwave algorithm to identify both the input parameters and the parameter interactions that most significantly affect the output data. This research utilized designed experiments that systematically and simultaneously varied all of the input parameters of the shortwave algorithm. A D-optimal design of experiments (DOE) was chosen to accommodate the 14 types of atmospheric properties computed by the algorithm and to reduce the number of trials required by a full factorial study from millions to 128. A modified version of the algorithm was made available for testing, in which the algorithm's global calculations were tuned to accept information for a single temporal and spatial point and for one month of averaged data. The points were drawn from four atmospherically distinct regions: the Amazon Rainforest, the Sahara Desert, the Indian Ocean, and Mt. Everest. The same design was used for all of the regions. Least squares multiple regression analysis of the results of the modified algorithm identified the parameters and parameter interactions that most significantly affected the output products. The cosine of the solar zenith angle was the strongest single influence on the output data in all four regions. The interaction of the cosine of the solar zenith angle and cloud fraction had the strongest interaction effect in the Amazon, Sahara Desert, and Mt. Everest regions, while the interaction of cloud fraction and cloudy shortwave radiance most significantly affected output data in the Indian Ocean region. Second-order response models were built using the resulting regression coefficients. A Monte Carlo simulation of each model extended the probability distribution beyond the initial design trials to quantify variability in the modeled output data.
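The final analysis chain, fitting a second-order response model with main effects and two-way interactions to the designed runs and then Monte Carlo sampling the fitted model, can be sketched as follows (synthetic design and response; 5 factors stand in for the study's 14):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n_runs, k = 128, 5
X = rng.choice([-1.0, 1.0], size=(n_runs, k))        # coded factor settings

def expand(X):
    """Columns for intercept, main effects x_i, and two-way interactions x_i*x_j."""
    cols = [np.ones(len(X))] + [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    return np.column_stack(cols)

# Mock response: one strong main effect plus one interaction, plus noise.
y = 2.0 + 1.5 * X[:, 0] + 0.8 * X[:, 0] * X[:, 1] + 0.1 * rng.standard_normal(n_runs)
beta, *_ = np.linalg.lstsq(expand(X), y, rcond=None)  # least squares fit

Xmc = rng.uniform(-1, 1, size=(100_000, k))           # Monte Carlo over the model
ymc = expand(Xmc) @ beta
print(f"output mean={ymc.mean():.3f}, sd={ymc.std():.3f}")
```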
Paleodust variability since the Last Glacial Maximum and implications for iron inputs to the ocean
NASA Astrophysics Data System (ADS)
Albani, S.; Mahowald, N. M.; Murphy, L. N.; Raiswell, R.; Moore, J. K.; Anderson, R. F.; McGee, D.; Bradtmiller, L. I.; Delmonte, B.; Hesse, P. P.; Mayewski, P. A.
2016-04-01
Changing climate conditions affect dust emissions and the global dust cycle, which in turn affects climate and biogeochemistry. In this study we use observationally constrained model reconstructions of the global dust cycle since the Last Glacial Maximum, combined with different simplified assumptions of atmospheric and sea ice processing of dust-borne iron, to provide estimates of soluble iron deposition to the oceans. For different climate conditions, we discuss uncertainties in model-based estimates of atmospheric processing and dust deposition to key oceanic regions, highlighting the large degree of uncertainty of this important variable for ocean biogeochemistry and the global carbon cycle. We also show the role of sea ice acting as a time buffer and processing agent, which results in a delayed and pulse-like soluble iron release into the ocean during the melting season, with monthly peaks of up to ~17 Gg/month released into the Southern Ocean during the Last Glacial Maximum (LGM).
Signa, Geraldina; Mazzola, Antonio; Tramati, Cecilia Doriana; Vizzini, Salvatrice
2013-09-15
The role of a small colony of yellow-legged gulls (Larus michahellis) in conveying trace elements (As, Cd, Cr, Cu, Ni, Pb, THg, V, Zn) was assessed in a Mediterranean nature reserve (Marinello ponds) at various spatial and temporal scales. Trace element concentrations in guano were high and seasonally variable. In contrast, contamination in the ponds was not influenced by season but showed strong spatial variability among ponds, according to the different guano inputs. The biogenic enrichment factor B confirmed the role of gulls in the release of trace elements through guano subsidies. In addition, when trace element pond concentrations were compared to the US NOAA's sediment quality guidelines (SQGs), As, Cu and Ni showed contamination levels associated with possible negative biological effects. This study thus reflects the need to take seabirds into account as key factors influencing ecological processes and contamination levels even in remote areas, especially around the Mediterranean, where these birds are abundant but overlooked.
Optimal Control of Thermo-Fluid Phenomena in Variable Domains
NASA Astrophysics Data System (ADS)
Volkov, Oleg; Protas, Bartosz
2008-11-01
This presentation concerns our continued research on adjoint-based optimization of viscous incompressible flows (the Navier-Stokes problem) coupled with heat conduction involving change of phase (the Stefan problem), and occurring in domains with variable boundaries. This problem is motivated by optimization of advanced welding techniques used in automotive manufacturing, where the goal is to determine an optimal heat input, so as to obtain a desired shape of the weld pool surface upon solidification. We argue that computation of sensitivities (gradients) in such free-boundary problems requires the use of the shape-differential calculus as a key ingredient. We also show that, with such tools available, the computational solution of the direct and inverse (optimization) problems can in fact be achieved in a similar manner and in a comparable computational time. Our presentation will address certain mathematical and computational aspects of the method. As an illustration we will consider the two-phase Stefan problem with contact point singularities where our approach allows us to obtain a thermodynamically consistent solution.
An ASM/ADM model interface for dynamic plant-wide simulation.
Nopens, Ingmar; Batstone, Damien J; Copp, John B; Jeppsson, Ulf; Volcke, Eveline; Alex, Jens; Vanrolleghem, Peter A
2009-04-01
Mathematical modelling has proven to be very useful in process design, operation and optimisation. A recent trend in WWTP modelling is to include the different subunits in so-called plant-wide models rather than focusing on parts of the entire process. One example of a typical plant-wide model is the coupling of an upstream activated sludge plant (including primary settler and secondary clarifier) to an anaerobic digester for sludge digestion. One of the key challenges when coupling these processes has been the definition of an interface between the well-accepted activated sludge model (ASM1) and the anaerobic digestion model (ADM1). Current characterisation and interface models have key limitations, the most critical of which is the over-use of the X(c) (lumped complex) variable as a main input to the ADM1. Over-use of X(c) does not allow for variation of degradability, carbon oxidation state or nitrogen content. In addition, achieving a target influent pH through the proper definition of the ionic system can be difficult. In this paper, we define an interface and characterisation model that maps degradable components directly to carbohydrates, proteins and lipids (and their soluble analogues), as well as organic acids, rather than using X(c). While this interface has been designed for use with the Benchmark Simulation Model No. 2 (BSM2), it is widely applicable to ADM1 input characterisation in general. We have demonstrated the model both hypothetically (BSM2) and practically on a full-scale anaerobic digester treating sewage sludge.
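To make the mapping idea concrete, here is a minimal sketch (not the published interface model) of routing a substrate's degradable COD directly onto ADM1 carbohydrate/protein/lipid fractions instead of the lumped X(c); the function name and fraction values are hypothetical.

```python
# A minimal sketch of the interface idea (not the published interface):
# map degradable COD directly onto ADM1 influent fractions instead of
# lumping it into the composite X_c. All fractions are hypothetical.
def asm_to_adm1(deg_cod_particulate, deg_cod_soluble,
                f_ch=0.40, f_pr=0.35, f_li=0.25):
    assert abs(f_ch + f_pr + f_li - 1.0) < 1e-9
    return {
        "X_ch": f_ch * deg_cod_particulate,  # carbohydrates (kg COD/m3)
        "X_pr": f_pr * deg_cod_particulate,  # proteins
        "X_li": f_li * deg_cod_particulate,  # lipids
        "S_su": f_ch * deg_cod_soluble,      # sugars (soluble analogue)
        "S_aa": f_pr * deg_cod_soluble,      # amino acids
        "S_fa": f_li * deg_cod_soluble,      # long-chain fatty acids
    }

print(asm_to_adm1(10.0, 2.0))
```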
INFANT HEALTH PRODUCTION FUNCTIONS: WHAT A DIFFERENCE THE DATA MAKE
Reichman, Nancy E.; Corman, Hope; Noonan, Kelly; Dave, Dhaval
2008-01-01
We examine the extent to which infant health production functions are sensitive to model specification and measurement error. We focus on the importance of typically unobserved but theoretically important variables (typically unobserved variables, TUVs), other non-standard covariates (NSCs), input reporting, and characterization of infant health. The TUVs represent wantedness, taste for risky behavior, and maternal health endowment. The NSCs include father characteristics. We estimate the effects of prenatal drug use, prenatal cigarette smoking, and first-trimester prenatal care on birth weight, low birth weight, and a measure of abnormal infant health conditions. We compare estimates using self-reported inputs versus input measures that combine information from medical records and self-reports. We find that TUVs and NSCs are significantly associated with both inputs and outcomes, but that excluding them from infant health production functions does not appreciably affect the input estimates. However, using self-reported inputs leads to overestimated effects of inputs, particularly prenatal care, on outcomes, and using a direct measure of infant health does not always yield input estimates similar to those when using birth weight outcomes. The findings have implications for research, data collection, and public health policy. PMID:18792077
Thomas, R.E.
1959-01-20
An electronic circuit is presented for automatically computing the product of two selected variables by multiplying the voltage pulses proportional to the variables. The multiplier circuit has a plurality of parallel resistors of predetermined values connected through separate gate circuits between a first input and the output terminal. One voltage pulse is applied to the first input while the second voltage pulse is applied to control circuitry for the respective gate circuits. The magnitude of the second voltage pulse selects the resistors upon which the first voltage pulse is impressed, whereby the resultant output voltage is proportional to the product of the input voltage pulses.
Embracing chaos and complexity: a quantum change for public health.
Resnicow, Kenneth; Page, Scott E
2008-08-01
Public health research and practice have been guided by a cognitive, rational paradigm where inputs produce linear, predictable changes in outputs. However, the conceptual and statistical assumptions underlying this paradigm may be flawed. In particular, this perspective does not adequately account for nonlinear and quantum influences on human behavior. We propose that health behavior change is better understood through the lens of chaos theory and complex adaptive systems. Key relevant principles include that behavior change (1) is often a quantum event; (2) can resemble a chaotic process that is sensitive to initial conditions, highly variable, and difficult to predict; and (3) occurs within a complex adaptive system with multiple components, where results are often greater than the sum of their parts.
Encryption and decryption using FPGA
NASA Astrophysics Data System (ADS)
Nayak, Nikhilesh; Chandak, Akshay; Shah, Nisarg; Karthikeyan, B.
2017-11-01
In this paper, we perform multiple cryptography methods on a set of data and compare their outputs. Here the AES algorithm and the RSA algorithm are used. Using the AES algorithm, an 8-bit input (plain text) is encrypted using a cipher key and the result is displayed on Tera Term (serially). For simulation, a 128-bit input is used and operated on with a 128-bit cipher key to generate the encrypted text. The reverse operations are then performed to get the decrypted text. In the RSA algorithm, file handling is used to input the plain text. This text is then operated on to get the encrypted and decrypted data, which are then stored in a file. Finally, the results of both algorithms are compared.
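For illustration, a toy RSA round trip in pure Python shows the encrypt/decrypt symmetry the paper exercises. Real RSA uses large primes and padding; the numbers below are the classic textbook values, not the paper's.

```python
# Toy RSA with tiny primes, purely illustrative.
p, q = 61, 53
n = p * q                # modulus
phi = (p - 1) * (q - 1)  # Euler's totient
e = 17                   # public exponent, coprime with phi
d = pow(e, -1, phi)      # private exponent (modular inverse, Python 3.8+)

plain = 65               # "plain text" encoded as an integer < n
cipher = pow(plain, e, n)
decrypted = pow(cipher, d, n)
print(cipher, decrypted)  # decrypted equals plain
assert decrypted == plain
```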
Rispoli, Matthew; Holt, Janet K.
2017-01-01
Purpose: This follow-up study examined whether a parent intervention that increased the diversity of lexical noun phrase subjects in parent input and accelerated children's sentence diversity (Hadley et al., 2017) had indirect benefits on tense/agreement (T/A) morphemes in parent input and children's spontaneous speech. Method: Differences in input variables related to T/A marking were compared for parents who received toy talk instruction and a quasi-control group: input informativeness and full "is" declaratives. Language growth in tense/agreement productivity (TAP) was modeled for 38 children from language samples obtained at 21, 24, 27, and 30 months. Parent input properties following instruction and children's growth in lexical diversity and sentence diversity were examined as predictors of TAP growth. Results: Instruction increased parent use of full "is" declaratives (ηp² ≥ .25) but not input informativeness. Children's sentence diversity was also a significant time-varying predictor of TAP growth. Two input variables, lexical noun phrase subject diversity and full "is" declaratives, were also significant predictors, even after controlling for children's sentence diversity. Conclusions: These findings establish a link between children's sentence diversity and the development of T/A morphemes and provide evidence about characteristics of input that facilitate growth in this grammatical system. PMID:28892819
Kernel-PCA data integration with enhanced interpretability
2014-01-01
Background: Nowadays, combining the different sources of information to improve the biological knowledge available is a challenge in bioinformatics. Among the most powerful methods for integrating heterogeneous data types are kernel-based methods. Kernel-based data integration approaches consist of two basic steps: first, the right kernel is chosen for each data set; second, the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task. Results: We analyze the integration of data from several sources of information using kernel PCA, from the point of view of reducing dimensionality. Moreover, we improve the interpretability of kernel PCA by adding to the plot the representation of the input variables that belong to any dataset. In particular, for each input variable or linear combination of input variables, we can represent the direction of maximum growth locally, which allows us to identify those samples with higher/lower values of the variables analyzed. Conclusions: The integration of different datasets and the simultaneous representation of samples and variables together give us a better understanding of biological knowledge. PMID:25032747
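A small sketch of the two-step recipe under stated assumptions: one kernel per (synthetic) data source, an unweighted kernel sum, and scikit-learn's KernelPCA run on the precomputed combined kernel.

```python
# Kernel-based integration sketch: build one kernel per data source,
# sum them, then run kernel PCA on the combined (precomputed) kernel.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel

rng = np.random.default_rng(0)
X1 = rng.normal(size=(100, 20))   # e.g. expression data (hypothetical)
X2 = rng.normal(size=(100, 5))    # e.g. clinical covariates (hypothetical)

K = rbf_kernel(X1, gamma=0.05) + linear_kernel(X2)  # combined kernel
embedding = KernelPCA(n_components=2, kernel="precomputed").fit_transform(K)
print(embedding.shape)  # (100, 2): sample coordinates for plotting
```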
NASA Astrophysics Data System (ADS)
Forsythe, N.; Blenkinsop, S.; Fowler, H. J.
2015-05-01
A three-step climate classification was applied to a spatial domain covering the Himalayan arc and adjacent plains regions using input data from four global meteorological reanalyses. Input variables were selected based on an understanding of the climatic drivers of regional water resource variability and crop yields. Principal component analysis (PCA) of those variables and k-means clustering on the PCA outputs revealed a reanalysis ensemble consensus for eight macro-climate zones. Spatial statistics of input variables for each zone revealed consistent, distinct climatologies. This climate classification approach has potential for enhancing assessment of climatic influences on water resources and food security as well as for characterising the skill and bias of gridded data sets, both meteorological reanalyses and climate models, for reproducing subregional climatologies. Through their spatial descriptors (area, geographic centroid, elevation mean and range), climate classifications also provide metrics, beyond simple changes in individual variables, with which to assess the magnitude of projected climate change. Such sophisticated metrics are of particular interest for regions, including mountainous areas, where natural and anthropogenic systems are expected to be sensitive to incremental climate shifts.
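A compact sketch of the three-step classification on synthetic grid-cell data, assuming scikit-learn; the variable count and retained components are illustrative, not the study's.

```python
# Standardize climate variables, reduce with PCA, then cluster the PCA
# scores with k-means into eight macro-climate zones.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
cells = rng.normal(size=(5000, 6))  # grid cells x climate variables (synthetic)

scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(cells))
zones = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(scores)
print(np.bincount(zones))  # cell count per macro-climate zone
```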
NASA Astrophysics Data System (ADS)
Hao, Wenrui; Lu, Zhenzhou; Li, Luyi
2013-05-01
In order to explore the contributions of correlated input variables to the variance of the output, a novel interpretation framework of importance measure indices is proposed for a model with correlated inputs, which includes the indices of the total correlated contribution and the total uncorrelated contribution. The proposed indices accurately describe the contributions of the correlated inputs to the variance of the output, and they can be viewed as a complement and correction to the interpretation of the contributions by correlated inputs presented in "Estimation of global sensitivity indices for models with dependent variables, Computer Physics Communications, 183 (2012) 937-946". Both indices contain the independent contribution by an individual input. Taking the general form of a quadratic polynomial as an illustration, the total correlated contribution and the independent contribution by an individual input are derived analytically, from which the components and origins of both contributions of a correlated input can be clarified without ambiguity. In the special case that no square term is included in the quadratic polynomial model, the total correlated contribution by an input can be further decomposed into the variance contribution related to the correlation of the input with other inputs and the independent contribution by the input itself, and the total uncorrelated contribution can be further decomposed into the independent part due to interaction between the input and others and the independent part due to the input itself. Numerical examples demonstrate that the derived analytical expressions of the variance-based importance measure are correct, and that the analytical clarification of the correlated input contributions to the model output is an important step in extending the theory and solutions for uncorrelated inputs to correlated ones.
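The distinction these indices formalize can be seen numerically. The sketch below contrasts, for a simple linear model with correlated Gaussian inputs, the full first-order contribution of one input with its uncorrelated (independent) contribution; the model and coefficients are illustrative, not from the paper.

```python
# For y = 2*x1 + x2 with corr(x1, x2) = rho, the "full" first-order
# share of x1 includes variance it carries via correlation with x2,
# while the independent share uses only the part of x1 orthogonal to x2.
import numpy as np

rng = np.random.default_rng(0)
rho = 0.7
cov = [[1.0, rho], [rho, 1.0]]
x = rng.multivariate_normal([0, 0], cov, size=200_000)
y = 2.0 * x[:, 0] + 1.0 * x[:, 1]

var_y = y.var()  # Monte Carlo estimate of Var(y) = 5 + 4*rho
# Full contribution of x1: Var(E[y|x1]); here E[y|x1] = (2 + rho) * x1.
full_x1 = (2.0 + rho) ** 2 / var_y
# Independent contribution: coefficient^2 times residual variance of x1 on x2.
indep_x1 = (2.0 ** 2) * (1 - rho ** 2) / var_y
print(f"Var(y)={var_y:.2f}, full S1={full_x1:.3f}, independent S1={indep_x1:.3f}")
```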
Blade loss transient dynamics analysis. Volume 3: User's manual for TETRA program
NASA Technical Reports Server (NTRS)
Black, G. R.; Gallardo, V. C.; Storace, A. S.; Sagendorph, F.
1981-01-01
The user's manual for TETRA contains program logic, flow charts, error messages, input sheets, modeling instructions, option descriptions, input variable descriptions, and demonstration problems. The process of obtaining a NASTRAN 17.5-generated modal input file for TETRA is also described with a worked sample.
The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model resp...
Multiplexer and time duration measuring circuit
Gray, Jr., James
1980-01-01
A multiplexer device is provided for multiplexing data in the form of randomly developed, variable width pulses from a plurality of pulse sources to a master storage. The device includes a first multiplexer unit which includes a plurality of input circuits each coupled to one of the pulse sources, with all input circuits being disabled when one input circuit receives an input pulse so that only one input pulse is multiplexed by the multiplexer unit at any one time.
Breaking the trade-off between efficiency and service.
Frei, Frances X
2006-11-01
For manufacturers, customers are the open wallets at the end of the supply chain. But for most service businesses, they are key inputs to the production process. Customers introduce tremendous variability to that process, but they also complain about any lack of consistency and don't care about the company's profit agenda. Managing customer-introduced variability, the author argues, is a central challenge for service companies. The first step is to diagnose which type of variability is causing mischief: Customers may arrive at different times, request different kinds of service, possess different capabilities, make varying degrees of effort, and have different personal preferences. Should companies accommodate variability or reduce it? Accommodation often involves asking employees to compensate for the variations among customers--a potentially costly solution. Reduction often means offering a limited menu of options, which may drive customers away. Some companies have learned to deal with customer-introduced variability without damaging either their operating environments or customers' service experiences. Starbucks, for example, handles capability variability among its customers by teaching them the correct ordering protocol. Dell deals with arrival and request variability in its high-end server business by outsourcing customer service while staying in close touch with customers to discuss their needs and assess their experiences with third-party providers. The effective management of variability often requires a company to influence customers' behavior. Managers attempting that kind of intervention can follow a three-step process: diagnosing the behavioral problem, designing an operating role for customers that creates new value for both parties, and testing and refining approaches for influencing behavior.
The impact of 14-nm photomask uncertainties on computational lithography solutions
NASA Astrophysics Data System (ADS)
Sturtevant, John; Tejnil, Edita; Lin, Tim; Schultze, Steffen; Buck, Peter; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian
2013-04-01
Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models, which must balance accuracy demands with simulation runtime boundary conditions, rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. While certain system input variables, such as scanner numerical aperture, can be empirically tuned to wafer CD data over a small range around the presumed set point, it can be dangerous to do so since CD errors can alias across multiple input variables. Therefore, many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine with a simulation sensitivity study the impact of errors in the representation of photomask properties, including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to represent accurately in the model are cataloged. CD bias values are based on state-of-the-art mask manufacturing data, and changes in the other variables are speculative, highlighting the need for improved metrology and awareness.
Prediction of problematic wine fermentations using artificial neural networks.
Román, R César; Hernández, O Gonzalo; Urtubia, U Alejandra
2011-11-01
Artificial neural networks (ANNs) have been used for the recognition of non-linear patterns, a characteristic of bioprocesses like wine production. In this work, ANNs were tested for predicting problematic wine fermentations. A database of about 20,000 data points and 33 variables from industrial fermentations of Cabernet Sauvignon was used. Two different ways of inputting data into the model were studied: by points and by fermentation. Additionally, different sub-cases were studied by varying the predictor variables (total sugar, alcohol, glycerol, density, organic acids and nitrogen compounds) and the time of fermentation (72, 96 and 256 h). The input of data by fermentation gave better results than the input of data by points. In fact, it was possible to predict 100% of normal and problematic fermentations using three predictor variables: sugars, density and alcohol at 72 h (3 days). Overall, ANNs were capable of 80% correct prediction using only one predictor variable at 72 h; however, it is recommended to add more fermentations to confirm this promising result.
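A minimal sketch of the by-fermentation setup with scikit-learn, using synthetic stand-ins for the three 72-h predictors (sugars, density, alcohol) and the normal/problematic label; none of the data or hyperparameters are the study's.

```python
# One row per fermentation; three predictors measured at 72 h; a binary
# normal/problematic label. All values here are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))                  # sugars, density, alcohol @ 72 h
y = (X[:, 0] - 0.5 * X[:, 2] > 0).astype(int)  # synthetic "problematic" label

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                                  random_state=0))
clf.fit(Xtr, ytr)
print(f"held-out accuracy: {clf.score(Xte, yte):.2f}")
```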
Amiryousefi, Mohammad Reza; Mohebbi, Mohebbat; Khodaiyan, Faramarz
2014-01-01
The objectives of this study were to use image analysis and artificial neural networks (ANNs) to predict mass transfer kinetics as well as color changes and shrinkage of deep-fat fried ostrich meat cubes. Two generalized feedforward networks were developed separately, using the operating conditions as inputs. The high correlation coefficients between experimental and predicted values showed proper fitting. Sensitivity analysis of the selected ANNs showed that, among the input variables, moisture content (MC) and fat content (FC) were most sensitive to frying temperature. Similarly, for the second ANN architecture, microwave power density was the most influential variable, having the maximum effect on both shrinkage percentage and color changes.
Metamodeling and mapping of nitrate flux in the unsaturated zone and groundwater, Wisconsin, USA
NASA Astrophysics Data System (ADS)
Nolan, Bernard T.; Green, Christopher T.; Juckem, Paul F.; Liao, Lixia; Reddy, James E.
2018-04-01
Nitrate contamination of groundwater in agricultural areas poses a major challenge to the sustainability of water resources. Aquifer vulnerability models are useful tools that can help resource managers identify areas of concern, but quantifying nitrogen (N) inputs in such models is challenging, especially at large spatial scales. We sought to improve regional nitrate (NO3-) input functions by characterizing unsaturated zone NO3- transport to groundwater through use of surrogate, machine-learning metamodels of a process-based N flux model. The metamodels used boosted regression trees (BRTs) to relate mappable landscape variables to parameters and outputs of a previous "vertical flux method" (VFM) applied at sampled wells in the Fox, Wolf, and Peshtigo (FWP) river basins in northeastern Wisconsin. In this context, the metamodels upscaled the VFM results throughout the region, and the VFM parameters and outputs are the metamodel response variables. The study area encompassed the domain of a detailed numerical model that provided additional predictor variables, including groundwater recharge, to the metamodels. We used a statistical learning framework to test a range of model complexities to identify suitable hyperparameters of the six BRT metamodels corresponding to each response variable of interest: NO3- source concentration factor (which determines the local NO3- input concentration); unsaturated zone travel time; NO3- concentration at the water table in 1980, 2000, and 2020 (three separate metamodels); and NO3- "extinction depth", the eventual steady-state depth of the NO3- front. The final metamodels were trained on 129 wells within the active numerical flow model area and considered 58 mappable predictor variables compiled in a geographic information system (GIS). These metamodels had training and cross-validation testing R2 values of 0.52-0.86 and 0.22-0.38, respectively, and predictions were compiled as maps of the above response variables. Testing performance was reasonable considering that we limited the metamodel predictor variables to mappable factors, as opposed to using all available VFM input variables. Relationships between metamodel predictor variables and mapped outputs were generally consistent with expectations, e.g. greater source concentrations and NO3- at the water table in areas of intensive crop use and well-drained soils. Shorter unsaturated zone travel times in poorly drained areas likely indicated preferential flow through clay soils, and a tendency for fine-grained deposits to collocate with areas of a shallower water table. Numerical estimates of groundwater recharge were important in the metamodels and may have been a proxy for N input and redox conditions in the northern FWP, which had a shallow predicted NO3- extinction depth. The metamodel results provide proof-of-concept for regional characterization of unsaturated zone NO3- transport processes in a statistical framework based on readily mappable GIS input variables.
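In the same spirit, a minimal boosted-regression-tree metamodel with cross-validated skill, assuming scikit-learn's GradientBoostingRegressor as a stand-in for the BRT implementation; predictors and response are synthetic placeholders.

```python
# Boosted regression trees relating mappable predictors to a
# process-model output, with cross-validated R2 as a skill check.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(129, 10))                              # GIS predictors
y = X[:, 0] - 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.3, 129)  # "VFM" output

brt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                max_depth=3, subsample=0.8, random_state=0)
print(f"training R2: {brt.fit(X, y).score(X, y):.2f}")
print(f"5-fold CV R2: {cross_val_score(brt, X, y, cv=5).mean():.2f}")
```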
Two-Stage Variable Sample-Rate Conversion System
NASA Technical Reports Server (NTRS)
Tkacenko, Andre
2009-01-01
A two-stage variable sample-rate conversion (SRC) system has been proposed as part of a digital signal-processing system in a digital communication radio receiver that utilizes a variety of data rates. The proposed system would be used as an interface between (1) an analog-to-digital converter used in the front end of the receiver to sample an intermediate-frequency signal at a fixed input rate and (2) digitally implemented tracking loops in subsequent stages that operate at various sample rates that are generally lower than the input sample rate. This two-stage system would be capable of converting from an input sample rate to a desired lower output sample rate that could be variable and not necessarily a rational fraction of the input rate.
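One common way to realize such a converter, sketched under stated assumptions: a fixed polyphase stage for the rational part of the ratio (scipy's resample_poly) followed by interpolation for the variable residual ratio. This is a generic illustration, not the proposed system's architecture.

```python
# Two-stage rate conversion: fixed rational stage, then a variable
# (possibly irrational) residual ratio via linear interpolation.
import numpy as np
from scipy.signal import resample_poly

fs_in = 100_000.0
t = np.arange(4096) / fs_in
x = np.sin(2 * np.pi * 1000 * t)         # test tone at the fixed input rate

stage1 = resample_poly(x, up=2, down=5)  # fixed rational stage: fs -> 0.4*fs
fs1 = fs_in * 2 / 5

ratio = 0.9137                           # variable residual ratio (example)
n_out = int(len(stage1) * ratio)
t1 = np.arange(len(stage1)) / fs1
t_out = np.arange(n_out) / (fs1 * ratio)
y = np.interp(t_out, t1, stage1)         # second, variable stage
print(len(x), len(stage1), len(y))
```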
Luo, Zhongkui; Feng, Wenting; Luo, Yiqi; Baldock, Jeff; Wang, Enli
2017-10-01
Soil organic carbon (SOC) dynamics are regulated by the complex interplay of climatic, edaphic and biotic conditions. However, the interrelation of SOC and these drivers and their potential connection networks are rarely assessed quantitatively. Using observations of SOC dynamics with detailed soil properties from 90 field trials at 28 sites under different agroecosystems across the Australian cropping regions, we investigated the direct and indirect effects of climate, soil properties, carbon (C) inputs and soil C pools (a total of 17 variables) on the SOC change rate (r_C, Mg C ha⁻¹ yr⁻¹). Among these variables, we found that the most influential variables on r_C were the average C input amount, annual precipitation, and the total SOC stock at the beginning of the trials. Overall, C inputs (including C input amount and pasture frequency in the crop rotation system) accounted for 27% of the relative influence on r_C, followed by climate at 25% (including precipitation and temperature), soil C pools at 24% (including pool size and composition) and soil properties (such as cation exchange capacity, clay content and bulk density) at 24%. Path analysis identified a network of intercorrelations of climate, soil properties, C inputs and soil C pools in determining r_C. The direct correlation of r_C with climate was significantly weakened when the effects of soil properties and C pools were removed, and vice versa. These results reveal the relative importance of climate, soil properties, C inputs and C pools and their complex interconnections in regulating SOC dynamics. Ignoring the impact of changes in soil properties, C pool composition and C input (quantity and quality) on SOC dynamics is likely one of the main sources of uncertainty in SOC predictions from process-based SOC models.
Voit, Eberhard O
2014-01-01
Probably the most prominent expectation associated with systems biology is the computational support of personalized medicine and predictive health. At least some of this anticipated support is envisioned in the form of disease simulators that will take hundreds of personalized biomarker data as input and allow the physician to explore and optimize possible treatment regimens on a computer before the best treatment is applied to the actual patient in a custom-tailored manner. The key prerequisites for such simulators are mathematical and computational models that not only manage the input data and implement the general physiological and pathological principles of organ systems but also integrate the myriad details that affect their functionality to a significant degree. Obviously, the construction of such models is an overwhelming task that suggests the long-term development of hierarchical or telescopic approaches representing the physiology of organs and their diseases, first coarsely and over time with increased granularity. This article illustrates the rudiments of such a strategy in the context of cystic fibrosis (CF) of the lung. The starting point is a very simplistic, generic model of inflammation, which has been shown to capture the principles of infection, trauma, and sepsis surprisingly well. The adaptation of this model to CF contains as variables healthy and damaged cells, as well as different classes of interacting cytokines and infectious microbes that are affected by mucus formation, which is the hallmark symptom of the disease (Perez-Vilar and Boucher, 2004) [1]. The simple model represents the overall dynamics of the disease progression, including so-called acute pulmonary exacerbations, quite well, but of course does not provide much detail regarding the specific processes underlying the disease. In order to launch the next level of modeling with finer granularity, it is desirable to determine which components of the coarse model contribute most to the disease dynamics. The article introduces for this purpose the concept of module gains, or ModGains, which quantify the sensitivity of key disease variables in the higher-level system. In reality, these variables represent complex modules at the next level of granularity, and the computation of ModGains therefore allows an importance ranking of variables that should be replaced with more detailed models. The "hot-swapping" of such detailed modules for former variables is greatly facilitated by the architecture and implementation of the overarching, coarse model structure, which is here formulated with methods of biochemical systems theory (BST).
A high speed sequential decoder
NASA Technical Reports Server (NTRS)
Lum, H., Jr.
1972-01-01
The performance and theory of operation for the High Speed Hard Decision Sequential Decoder are delineated. The decoder is a forward error correction system which is capable of accepting data from binary-phase-shift-keyed and quadriphase-shift-keyed modems at input data rates up to 30 megabits per second. Test results show that the decoder is capable of maintaining a composite error rate of 0.00001 at an input E_b/N_0 of 5.6 dB. This performance has been obtained with minimum circuit complexity.
Stable isotopes and Digital Elevation Models to study nutrient inputs in high-Arctic lakes
NASA Astrophysics Data System (ADS)
Calizza, Edoardo; Rossi, David; Costantini, Maria Letizia; Careddu, Giulio; Rossi, Loreto
2016-04-01
Ice cover, run-off from the watershed, aquatic and terrestrial primary productivity, and guano deposition from birds are key factors controlling nutrient and organic matter inputs in high-Arctic lakes. All these factors are expected to be significantly affected by climate change. Quantifying these controls is a key baseline step towards understanding what combination of factors subtends the biological productivity of Arctic lakes and will drive their ecological response to environmental change. Based on Digital Elevation Models, drainage maps, and C and N elemental content and stable isotope analysis in sediments, aquatic vegetation and a dominant macroinvertebrate species (Lepidurus arcticus Pallas, 1793) from Tvillingvatnet, Storvatnet and Kolhamna, three lakes located in North Spitsbergen (Svalbard), we propose an integrated approach for the analysis of (i) nutrient and organic matter inputs in lakes; (ii) the role of catchment hydro-geomorphology in determining inter-lake differences in the isotopic composition of sediments; and (iii) effects of diverse nutrient inputs on the isotopic niche of Lepidurus arcticus. Given its high run-off and large catchment, organic deposits in Tvillingvatnet were dominated by terrestrial inputs, whereas inputs were mainly of aquatic origin in Storvatnet, a lowland lake with low potential run-off. In Kolhamna, organic deposits seem to be dominated by inputs from birds, which actually colonise the area. Isotopic signatures were similar between samples within each lake, representing precise tracers for studies on the effect of climate change on biogeochemical cycles in lakes. The isotopic niche of L. arcticus reflected differences in sediments between lakes, suggesting a bottom-up effect of the hydro-geomorphology characterizing each lake on the nutrients assimilated by this species. The presented approach proved to be an effective research pathway for identifying the factors subtending nutrient and organic matter inputs and transfer within each water body, as well as for modelling expected changes in nutrient content associated with changes in the isotopic composition of sediments. Key words: nitrogen; carbon; sediment; biogeochemical cycle; climate change; hydro-ecology; isotopic niche; Svalbard
Retrospective Mining of Toxicology Data to Discover ...
In vivo toxicology data are subject to multiple sources of uncertainty: observer severity bias (a pathologist may record only more severe effects and ignore less severe ones); dose spacing issues (which can lead to missing data, e.g. if a severe effect has a less severe precursor, but both occur at the same tested dose); imperfect control of key independent variables (in databases, one can rarely control key input variables such as animal strain or dosing schedules); effect description heterogeneity (terminology changes over time, which can lead to information loss); and statistical issues (too few chemicals with a given phenotype, or too few animals in dose groups). These issues directly contribute to uncertainties in models built from the data. We are investigating the use of collections of endpoints (toxicity syndromes) to address these issues. These are identical in concept to medical syndromes, which allow a physician to diagnose an underlying disease more accurately than can be done when relying on examination of one symptom at a time. Our test case is anemia, for several reasons: most of the phenotypes (e.g. cell counts) are quantitative; related effects are measured in an automated way; anemia is relatively common, at least at high doses (~30% of chemicals in our database show significant drops in red cell count); the causes of anemia are well understood; and there is a standard clinical decision tree to classify anemia. Using a database of 658 chemicals, we ha
Singh, Kavita; Haney, Erica; Olorunsaiye, Comfort
2014-01-01
Objectives: Globally, 2.5 million children under five die from vaccine-preventable diseases, and in Nigeria only 23% of children ages 12-23 months are fully immunized. The international community is promoting gender equality as a means to improve the health and well-being of women and their children. This paper looks at whether two measures of gender equality, autonomy and individual attitudes towards gender norms, are associated with a child being fully immunized in Nigeria. Methods: Data from currently married women with a child 12-23 months from the 2008 Nigeria Demographic and Health Survey (DHS) were used to study the influence of autonomy and gender attitudes on whether or not a child is fully immunized. Multivariate logistic regression was used, and several key socioeconomic variables were controlled for, including wealth and education, which are considered key inputs into gender equality. Results: Findings indicated that household decision-making and attitudes towards wife beating were significantly associated with a child being fully immunized after controlling for socioeconomic variables. Ethnicity, wealth and education were also significant factors. Conclusions: Programmatic and policy implications indicate the potential for the promotion of gender equality as a means to improve child health. Gender equality can be seen as a means to enable women to access life-saving services for their children. PMID:22696106
Economics of movable interior blankets for greenhouses
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, G.B.; Fohner, G.R.; Albright, L.D.
1981-01-01
A model for evaluating the economic impact of investment in a movable interior blanket was formulated. The method of analysis was net present value (NPV), in which the discounted, after-tax cash flow of costs and benefits was computed for the useful life of the system. An added feature was a random number component which permitted any or all of the input parameters to be varied within a specified range. Results from 100 computer runs indicated that all of the NPV estimates generated were positive, showing that the investment was profitable. However, there was a wide range of NPV estimates, from $16.00/m² to $86.40/m², with a median value of $49.34/m². Key variables allowed to range in the analysis were: (1) the cost of fuel before the blanket is installed; (2) the percent fuel savings resulting from use of the blanket; (3) the annual real increase in the cost of fuel; and (4) the change in the annual value of the crop. The wide range in NPV estimates indicates the difficulty in making general recommendations regarding the economic feasibility of the investment when uncertainty exists as to the correct values for key variables in commercial settings. The results also point out needed research into the effect of the blanket on the crop, and on performance characteristics of the blanket.
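The randomized NPV evaluation generalizes readily; the sketch below draws the ranging variables from assumed ranges over 100 runs and reports the NPV spread. All dollar values, ranges, and the discount rate are hypothetical, not the study's.

```python
# Monte Carlo NPV: draw uncertain inputs in ranges, discount the cash
# flows, and look at the spread of NPV across runs.
import numpy as np

rng = np.random.default_rng(0)

def npv(cash_flows, rate):
    years = np.arange(len(cash_flows))
    return np.sum(cash_flows / (1 + rate) ** years)

results = []
for _ in range(100):
    fuel_cost = rng.uniform(8.0, 14.0)      # $/m2/yr fuel cost before blanket
    savings_pct = rng.uniform(0.3, 0.5)     # fraction of fuel saved
    real_increase = rng.uniform(0.0, 0.05)  # annual real fuel escalation
    life = 10                               # system life (yr)
    flows = np.array([-30.0] + [fuel_cost * savings_pct * (1 + real_increase) ** t
                                for t in range(1, life + 1)])
    results.append(npv(flows, rate=0.04))

results = np.array(results)
print(f"NPV $/m2: min={results.min():.2f}, "
      f"median={np.median(results):.2f}, max={results.max():.2f}")
```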
Innovations in Basic Flight Training for the Indonesian Air Force
1990-12-01
microeconomic theory that could approximate the optimum mix of training hours between an aircraft and simulator, and therefore improve cost effectiveness... The microeconomic theory being used is normally employed when showing production with two variable inputs. An example of variable inputs would be labor...
A Bayesian approach to model structural error and input variability in groundwater modeling
NASA Astrophysics Data System (ADS)
Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.
2015-12-01
Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative priors for the error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface water-groundwater interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.
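For flavor, a minimal Metropolis sampler that jointly infers a model parameter and an uncertain input (a pumping rate) for a toy linear "groundwater" model. The real approach uses DREAM-ZS, kernel-based error models, and a far richer forward model, so everything below is an illustrative stand-in.

```python
# Toy Bayesian calibration: observations depend on parameter k and an
# uncertain input q; a prior on q encodes the input uncertainty.
import numpy as np

rng = np.random.default_rng(0)
k_true, q_true = 0.8, 1.2
obs = k_true * q_true + rng.normal(0, 0.05, size=50)  # toy drawdown data

def log_post(theta):
    k, q = theta
    if k <= 0 or q <= 0:
        return -np.inf
    ll = -0.5 * np.sum((obs - k * q) ** 2) / 0.05 ** 2  # Gaussian likelihood
    lp = -0.5 * ((q - 1.0) / 0.3) ** 2                  # prior on input q
    return ll + lp

theta = np.array([1.0, 1.0])
chain = []
for _ in range(20_000):
    prop = theta + rng.normal(0, 0.02, size=2)          # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)
print(np.mean(chain[5000:], axis=0))  # posterior means of (k, q)
```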
NASA Astrophysics Data System (ADS)
Zounemat-Kermani, Mohammad
2012-08-01
In this study, the ability of two models, multiple linear regression (MLR) and a Levenberg-Marquardt (LM) feed-forward neural network, to estimate the hourly dew point temperature was examined. Dew point temperature is the temperature at which water vapor in the air condenses into liquid. This temperature can be useful in estimating meteorological variables such as fog, rain, snow, dew, and evapotranspiration and in investigating agronomical issues such as stomatal closure in plants. The availability of hourly records of climatic data (air temperature, relative humidity and pressure) which could be used to predict dew point temperature initiated the practice of modeling. Additionally, the wind vector (wind speed magnitude and direction) and a conceptual input of weather condition were employed as other input variables. Three quantitative standard statistical performance evaluation measures, i.e. the root mean squared error, the mean absolute error, and the absolute logarithmic Nash-Sutcliffe efficiency coefficient (|Log(NS)|), were employed to evaluate the performances of the developed models. The results showed that applying the wind vector and weather condition as input vectors along with the meteorological variables could slightly increase the ANN and MLR predictive accuracy. The results also revealed that LM-NN was superior to the MLR model and that the best performance was obtained by considering all potential input variables, in terms of the different evaluation criteria.
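A side-by-side of the two model families on synthetic hourly-style records, assuming scikit-learn; LinearRegression stands in for MLR and MLPRegressor for the LM network (scikit-learn does not train by Levenberg-Marquardt, so this is an analogy, not a reproduction).

```python
# Synthetic (T, RH, P) -> dew point comparison of a linear model and a
# small feed-forward network on a held-out split.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))   # air temperature, RH, pressure (standardized)
y = X[:, 0] - 0.8 * np.tanh(X[:, 1]) + rng.normal(0, 0.1, 2000)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
for model in (LinearRegression(),
              MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000,
                           random_state=0)):
    model.fit(Xtr, ytr)
    rmse = mean_squared_error(yte, model.predict(Xte)) ** 0.5
    print(type(model).__name__, f"RMSE={rmse:.3f}")
```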
Rowe, Meredith L; Levine, Susan C; Fisher, Joan A; Goldin-Meadow, Susan
2009-01-01
Children with unilateral pre- or perinatal brain injury (BI) show remarkable plasticity for language learning. Previous work highlights the important role that lesion characteristics play in explaining individual variation in plasticity in the language development of children with BI. The current study examines whether the linguistic input that children with BI receive from their caregivers also contributes to this early plasticity, and whether linguistic input plays a similar role in children with BI as it does in typically developing (TD) children. Growth in vocabulary and syntactic production is modeled for 80 children (53 TD, 27 BI) between 14 and 46 months. Findings indicate that caregiver input is an equally potent predictor of vocabulary growth in children with BI and in TD children. In contrast, input is a more potent predictor of syntactic growth for children with BI than for TD children. Controlling for input, lesion characteristics (lesion size, type, seizure history) also affect the language trajectories of children with BI. Thus, findings illustrate how both variability in the environment (linguistic input) and variability in the organism (lesion characteristics) work together to contribute to plasticity in language learning.
NASA Astrophysics Data System (ADS)
Handley, Heather K.; Turner, Simon; Afonso, Juan C.; Dosseto, Anthony; Cohen, Tim
2013-02-01
Quantifying the rates of landscape evolution in response to climate change is inhibited by the difficulty of dating the formation of continental detrital sediments. We present uranium isotope data for Cooper Creek palaeochannel sediments from the Lake Eyre Basin in semi-arid South Australia in order to attempt to determine the formation ages, and hence residence times, of the sediments. To calculate the amount of recoil loss of 234U, a key input parameter in the comminution approach, we use two suggested methods (weighted geometric, and surface area measurement with an incorporated fractal correction) and typical assumed input parameter values found in the literature. The calculated recoil loss factors and comminution ages are highly dependent on the method of recoil loss factor determination and the chosen assumptions. To appraise the ramifications of the assumptions inherent in the comminution age approach and to determine the individual and combined comminution age uncertainties associated with each variable, Monte Carlo simulations were conducted for a synthetic sediment sample. Using a reasonable associated uncertainty for each input factor and including variations in the source rock and measured (234U/238U) ratios, the total combined uncertainty on comminution age in our simulation (for both methods of recoil loss factor estimation) can amount to ±220-280 ka. The modelling shows that small changes in assumed input values translate into large effects on absolute comminution age. To improve the accuracy of the technique and provide meaningful absolute comminution ages, much tighter constraints are required on assumptions such as the fraction of 234Th lost by α-recoil and the initial (234U/238U) ratio of the source material. In order for calculated comminution ages produced by different research groups to be directly comparable, standardisation of pre-treatment procedures, recoil loss factor estimation and assumed input parameter values is required. We suggest a set of input parameter values for this purpose. Additional considerations for calculating comminution ages of sediments deposited within large, semi-arid drainage basins are discussed.
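The Monte Carlo idea can be sketched generically: perturb the assumed inputs and watch the spread in the implied age. The single-exponential evolution toward the (1 - f) steady state below is a simplified placeholder for the full comminution equations, and all distributions are invented.

```python
# Propagate uncertainty in the recoil loss factor and in the initial
# and measured (234U/238U) activity ratios to the implied age.
import numpy as np

rng = np.random.default_rng(0)
lam234 = np.log(2) / 245_250.0        # 234U decay constant (1/yr)

n = 100_000
f = rng.normal(0.10, 0.02, n)         # recoil loss factor (assumed)
r0 = rng.normal(1.00, 0.005, n)       # initial (234U/238U) (assumed)
rm = rng.normal(0.96, 0.005, n)       # measured (234U/238U) (assumed)

# Simplified evolution toward the steady state (1 - f):
# r(t) = (1 - f) + (r0 - (1 - f)) * exp(-lam234 * t)  =>  solve for t.
arg = (rm - (1 - f)) / (r0 - (1 - f))
t = -np.log(arg[arg > 0]) / lam234
print(f"age spread (ka): 16th={np.percentile(t, 16) / 1e3:.0f}, "
      f"median={np.median(t) / 1e3:.0f}, 84th={np.percentile(t, 84) / 1e3:.0f}")
```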
NASA Astrophysics Data System (ADS)
Peckham, S. D.
2013-12-01
Model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that allow heterogeneous sets of process models to be assembled in a plug-and-play manner to create composite "system models". These mechanisms facilitate code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers, e.g. by requiring them to provide their output in specific forms that meet the input requirements of other models. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully-controllable by a caller (e.g. a model framework) and (2) self-describing. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its timestepping scheme. If the caller is a modeling framework, it can compare the answers to these queries with similar answers from other process models in a collection and then automatically call framework service components as necessary to mediate the differences between the coupled models. This talk will first review two key products of the CSDMS project, namely a standardized model interface called the Basic Model Interface (BMI) and the CSDMS Standard Names. The standard names are used in conjunction with BMI to provide a semantic matching mechanism that allows output variables from one process model to be reliably used as input variables to other process models in a collection. They include not just a standardized naming scheme for model variables, but also a standardized set of terms for describing the attributes and assumptions of a given model. To illustrate the power of standardized model interfaces and metadata, a smart, light-weight modeling framework written in Python will be introduced that can automatically (without user intervention) couple a set of BMI-enabled hydrologic process components together to create a spatial hydrologic model. The same mechanisms could also be used to provide seamless integration (import/export) of data and models.
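To make the interface idea concrete, here is a minimal BMI-flavored component in Python. The real Basic Model Interface specifies many more functions (grid, time, and variable queries, getters/setters); the toy model and its output variable name are illustrative, while the precipitation name follows the CSDMS Standard Names pattern.

```python
# A toy hydrologic component exposing the control/description split
# that BMI standardizes: initialize/update/finalize plus var queries.
import numpy as np

class LeakyBucket:
    """Toy storage model with a BMI-style interface (illustrative only)."""

    def initialize(self, config=None):
        self.storage = np.array([10.0])   # state variable (mm)
        self.k = 0.1                      # outflow coefficient (1/day)
        self.time = 0.0

    def update(self):
        # Advance the state one time step (simple exponential drainage).
        self.storage -= self.k * self.storage
        self.time += 1.0

    def finalize(self):
        self.storage = None

    def get_input_var_names(self):
        return ("atmosphere_water__precipitation_leq-volume_flux",)

    def get_output_var_names(self):
        return ("soil_water__depth",)     # hypothetical standard name

    def get_value(self, name):
        return self.storage.copy()

m = LeakyBucket()
m.initialize()
for _ in range(5):
    m.update()                            # a framework would drive this loop
print(m.time, m.get_value("soil_water__depth"))
m.finalize()
```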
Variable input observer for state estimation of high-rate dynamics
NASA Astrophysics Data System (ADS)
Hong, Jonathan; Cao, Liang; Laflamme, Simon; Dodson, Jacob
2017-04-01
High-rate systems operating on the 10 μs to 10 ms timescale are likely to experience damaging effects due to rapid environmental changes (e.g., turbulence, ballistic impact). Some of these systems could benefit from real-time state estimation to enable their full potential. Examples of such systems include blast mitigation strategies, automotive airbag technologies, and hypersonic vehicles. Particular challenges in high-rate state estimation include: (1) complex time-varying nonlinearities of the system (e.g., noise, uncertainty, and disturbance); (2) rapid environmental changes; and (3) the requirement of a high convergence rate. Here, we propose using a Variable Input Observer (VIO) concept to vary the input space as the event unfolds. When systems experience high-rate dynamics, rapid changes in the system occur. To investigate the VIO's potential, a VIO-based neuro-observer was constructed and studied using experimental data collected from a laboratory impact test. Results demonstrate that the input space is unique to different impact conditions, and that adjusting the input space throughout the dynamic event produces better estimates than a traditional fixed input space strategy.
Nursing acceptance of a speech-input interface: a preliminary investigation.
Dillon, T W; McDowell, D; Norcio, A F; DeHaemer, M J
1994-01-01
Many new technologies are being developed to improve the efficiency and productivity of nursing staffs. User acceptance is a key to the success of these technologies. In this article, the authors present a discussion of nursing acceptance of computer systems, review the basic design issues for creating a speech-input interface, and report preliminary findings of a study of nursing acceptance of a prototype speech-input interface. Results of the study showed that the 19 nursing subjects expressed acceptance of the prototype speech-input interface.
Hu, Qinglei
2007-10-01
This paper presents a dual-stage control system design method for flexible spacecraft attitude maneuvering control using on-off thrusters and active vibration control using an input shaper. In this design approach, the attitude control system and the vibration suppression were designed separately using lower-order models. As a stepping stone, an integral variable structure controller, with the assumption of knowing the upper bounds of the mismatched lumped perturbation, has been designed which ensures exponential convergence of attitude angle and angular velocity in the presence of bounded uncertainty/disturbances. To reconstruct estimates of the system states for use in a full-information variable structure control law, an asymptotic variable structure observer is also employed. In addition, the thruster output is modulated with pulse-width pulse-frequency modulation so that the output profile is similar to continuous control histories. For actively suppressing the induced vibration, the input shaping technique is used to modify the existing command so that less vibration will be caused by the command itself; this only requires information about the vibration frequency and damping of the closed-loop system. The rationale behind this hybrid control scheme is that the integral variable structure controller can achieve good precision pointing, even in the presence of uncertainties/disturbances, whereas the shaped input attenuator is applied to actively suppress the undesirable vibrations excited by the rapid maneuvers. Simulation results for the spacecraft model show precise attitude control and vibration suppression.
A waste characterisation procedure for ADM1 implementation based on degradation kinetics.
Girault, R; Bridoux, G; Nauleau, F; Poullain, C; Buffet, J; Steyer, J-P; Sadowski, A G; Béline, F
2012-09-01
In this study, a procedure accounting for degradation kinetics was developed to split the total COD of a substrate into each input state variable required for Anaerobic Digestion Model No. 1 (ADM1). The procedure is based on the combination of batch experimental degradation tests ("anaerobic respirometry") and numerical interpretation of the results obtained (optimisation of the ADM1 input state variable set). The effects of the main operating parameters, such as the substrate-to-inoculum ratio in the batch experiments and the origin of the inoculum, were investigated. Combined with biochemical fractionation of the total COD of the substrates, this method enabled determination of an ADM1-consistent input state variable set for each substrate with affordable identifiability. The substrate-to-inoculum ratio in the batch experiments and the origin of the inoculum influenced the input state variables. However, based on results modelled for a CSTR fed with the substrate concerned, these effects were not significant. Indeed, if the optimal ranges of these operational parameters are respected, uncertainty in COD fractionation is mainly limited to temporal variability in the properties of the substrates. As the method is based on kinetics and is easy to implement for a wide range of substrates, it is a very promising way to numerically predict the effect of design parameters on the efficiency of an anaerobic CSTR. This method thus promotes the use of modelling for the design and optimisation of anaerobic processes.
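The numerical-interpretation step can be sketched with a first-order fit to batch production data; the model form, units, and numbers below are simplified placeholders for the full ADM1 state-variable optimisation.

```python
# Fit a first-order degradation model to (synthetic) cumulative batch
# methane-production data; the fitted parameters then inform how COD is
# split by apparent kinetics.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 30, 16)          # days

def bmp(t, b0, k):
    """Cumulative production for first-order degradation kinetics."""
    return b0 * (1 - np.exp(-k * t))

rng = np.random.default_rng(0)
data = bmp(t, 350.0, 0.25) + rng.normal(0, 5, t.size)  # synthetic measurements

(b0, k), _ = curve_fit(bmp, t, data, p0=(300.0, 0.1))
print(f"ultimate potential = {b0:.0f} mL/gVS, first-order rate k = {k:.2f} 1/d")
```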
Probabilistic dose-response modeling: case study using dichloromethane PBPK model results.
Marino, Dale J; Starr, Thomas B
2007-12-01
A revised assessment of dichloromethane (DCM) has recently been reported that examines the influence of human genetic polymorphisms on cancer risks using deterministic PBPK and dose-response modeling in the mouse combined with probabilistic PBPK modeling in humans. This assessment utilized Bayesian techniques to optimize kinetic variables in mice and humans, with mean values from posterior distributions used in the deterministic modeling in the mouse. To supplement this research, a case study was undertaken to examine the potential impact of probabilistic rather than deterministic PBPK and dose-response modeling in mice on subsequent unit risk factor (URF) determinations. Four separate PBPK cases were examined based on the exposure regimen of the NTP DCM bioassay. These were (a) Same Mouse (single draw of all PBPK inputs for both treatment groups); (b) Correlated BW-Same Inputs (single draw of all PBPK inputs for both treatment groups except for bodyweights (BWs), which were entered as correlated variables); (c) Correlated BW-Different Inputs (separate draws of all PBPK inputs for both treatment groups except that BWs were entered as correlated variables); and (d) Different Mouse (separate draws of all PBPK inputs for both treatment groups). Monte Carlo PBPK inputs reflect posterior distributions from Bayesian calibration in the mouse that had been previously reported. A minimum of 12,500 PBPK iterations were undertaken, in which dose metrics, i.e., mg DCM metabolized by the GST pathway/L tissue/day for lung and liver, were determined. For dose-response modeling, these metrics were combined with NTP tumor incidence data that were randomly selected from binomial distributions. Resultant potency factors (0.1/ED(10)) were coupled with probabilistic PBPK modeling in humans that incorporated genetic polymorphisms to derive URFs. Results show that there was relatively little difference, i.e., <10% in central tendency and upper percentile URFs, regardless of the case evaluated. Independent draws of PBPK inputs resulted in slightly higher URFs. Results were also comparable to corresponding values from the previously reported deterministic mouse PBPK and dose-response modeling approach that used LED(10)s to derive potency factors. This finding indicated that the adjustment from ED(10) to LED(10) in the deterministic approach for DCM compensated for variability resulting from probabilistic PBPK and dose-response modeling in the mouse. Finally, results show a similar degree of variability in DCM risk estimates from a number of different sources, including the current effort, even though these estimates were developed using very different techniques. Given the variety of approaches involved, 95th percentile-to-mean risk estimate ratios of 2.1-4.1 represent reasonable bounds on variability estimates regarding probabilistic assessments of DCM.
Spatio-temporal variation of coarse woody debris input in woodland key habitats in central Sweden
Mari Jonsson; Shawn Fraver; Bengt Gunnar. Jonsson
2011-01-01
The persistence of many saproxylic (wood-living) species depends on a readily available supply of coarse woody debris (CWD). Most studies of CWD inputs address stand-level patterns, despite the fact that many saproxylic species depend on landscape-level supplies of CWD. In the present study we used dated CWD inputs (tree mortality events) at each of 14 Norway spruce (...
Variable Delay Element For Jitter Control In High Speed Data Links
Livolsi, Robert R.
2002-06-11
A circuit and method for decreasing the amount of jitter present at the receiver input of high speed data links, which uses a driver circuit for input from a high speed data link comprising a logic circuit having a first section (1) which provides data latches; a second section (2) which provides a circuit that generates a pre-distorted output to compensate for level-dependent jitter, having an OR function element and a NOR function element, each of which is coupled to two inputs and to a variable delay element as an input that provides a bi-modal delay for pulse width pre-distortion; a third section (3) which provides a muxing circuit; and a fourth section (4) for clock distribution in the driver circuit. A fifth section is used for logic testing the driver circuit.
A liquid lens switching-based motionless variable fiber-optic delay line
NASA Astrophysics Data System (ADS)
Khwaja, Tariq Shamim; Reza, Syed Azer; Sheikh, Mumtaz
2018-05-01
We present a Variable Fiber-Optic Delay Line (VFODL) module capable of imparting long variable delays by switching an input optical/RF signal between Single Mode Fiber (SMF) patch cords of different lengths through a pair of Electronically Controlled Tunable Lenses (ECTLs), resulting in a polarization-independent operation. Depending on the intended application, the lengths of the SMFs can be chosen to achieve the desired VFODL operation dynamic range. If so desired, the state of the input signal polarization can be preserved with the use of commercially available polarization-independent ECTLs along with polarization-maintaining SMFs (PM-SMFs), resulting in an output polarization that is identical to the input. An ECTL-based design also improves power consumption and repeatability. The delay switching mechanism is electronically controlled, involves no bulk moving parts, and can be fully automated. The VFODL module is compact due to the use of small optical components and SMFs that can be packaged compactly.
Sensitivity analysis and nonlinearity assessment of steam cracking furnace process
NASA Astrophysics Data System (ADS)
Rosli, M. N.; Sudibyo, Aziz, N.
2017-11-01
In this paper, sensitivity analysis and nonlinearity assessment of a steam cracking furnace process are presented. For the sensitivity analysis, the fractional factorial design method is employed to analyze the effect of the input parameters, which consist of four manipulated variables and two disturbance variables, on the output variables and to identify the interactions between parameters. The result of the factorial design is used as a screening step to reduce the number of parameters and, subsequently, the complexity of the model; it shows that four of the six input parameters are significant. After the screening is completed, a step test is performed on the significant input parameters to assess the degree of nonlinearity of the system. The result shows that the system is highly nonlinear with respect to changes in the air-to-fuel ratio (AFR) and feed composition.
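As a sketch of this kind of screening, the fragment below builds a two-level 2^(6-2) fractional factorial for six factors and estimates main effects; the generator choice and the stand-in process function are assumptions for illustration, not the paper's actual design.

```python
import itertools
import numpy as np

# 2^(6-2) design: four base factors A-D, aliased generators E=ABC, F=BCD,
# giving 16 runs for six input parameters (4 manipulated + 2 disturbance).
base = np.array(list(itertools.product([-1, 1], repeat=4)))
A, B, C, D = base.T
design = np.column_stack([A, B, C, D, A * B * C, B * C * D])

def run_process(levels):
    """Stand-in for the furnace model; returns a scalar output."""
    coeffs = np.array([2.0, 0.1, 1.5, 0.05, 0.8, 1.2])  # synthetic truth
    return float(levels @ coeffs)

y = np.array([run_process(row) for row in design])

# Main effect of factor j = mean(y | +1) - mean(y | -1); large magnitudes
# flag the significant parameters to keep for the step tests.
effects = {name: y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
           for j, name in enumerate("ABCDEF")}
print(effects)
```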
NASA Technical Reports Server (NTRS)
Wilson, Emily L.; DiGregorio, A. J.; Riot, Vincent J.; Ammons, Mark S.; Bruner, WIlliam W.; Carter, Darrell; Mao, Jianping; Ramanathan, Anand; Strahan, Susan E.; Oman, Luke D.;
2017-01-01
We present a design for a 4U (20 cm × 20 cm × 10 cm) occultation-viewing laser heterodyne radiometer (LHR) that measures methane (CH4), carbon dioxide (CO2), and water vapor (H2O) in the limb and is designed for deployment on a 6U CubeSat. The LHR design collects sunlight that has undergone absorption by the trace gases and mixes it with a distributed feedback (DFB) laser centered at 1640 nm that scans across CO2, CH4, and H2O absorption features. Upper troposphere-lower stratosphere measurements of these gases provide key inputs to stratospheric circulation models: measuring stratospheric circulation and its variability is essential for projecting how climate change will affect stratospheric ozone.
Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant
Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa
2013-09-17
System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
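A bare-bones sketch of the constraining idea, using generic EKF equations with bound-clipping on the states; the patented system also constrains the covariance matrix, which is omitted here, and all names below are assumptions.

```python
import numpy as np

def constrained_ekf_step(x, P, u, z, f, h, F, H, Q, R, lo, hi):
    """One EKF predict/update with state estimates clipped to [lo, hi]."""
    # Predict with the nonlinear plant model f and state Jacobian F
    x_pred = np.clip(f(x, u), lo, hi)      # preemptive constraint on states
    P_pred = F @ P @ F.T + Q
    # Measurement correction from sensed plant outputs z
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = np.clip(x_pred + K @ (z - h(x_pred)), lo, hi)
    P_new = (np.eye(x.size) - K @ H) @ P_pred
    return x_new, P_new

# Tiny demo: scalar plant x' = 0.9x + u with direct measurement z = x
f = lambda x, u: 0.9 * x + u
h = lambda x: x
F, H = np.array([[0.9]]), np.array([[1.0]])
Q = R = np.array([[0.01]])
x, P = np.array([0.5]), np.array([[1.0]])
x, P = constrained_ekf_step(x, P, np.array([0.1]), np.array([0.62]),
                            f, h, F, H, Q, R, lo=0.0, hi=1.0)
print(x, P)
```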
Transported Geothermal Energy Technoeconomic Screening Tool - Calculation Engine
Liu, Xiaobing
2016-09-21
This calculation engine estimates the technoeconomic feasibility of transported geothermal energy projects. The TGE screening tool (geotool.exe) takes input from an input file (input.txt) and lists results in an output file (output.txt). Both the input and output files are in the same folder as geotool.exe. To use the tool, the input file containing adequate information on the case should be prepared in the format explained below and placed in the same folder as geotool.exe. Then geotool.exe can be executed, which will generate an output.txt file in the same folder containing all key calculation results. The format and content of the output file are explained below as well.
Metamodeling and mapping of nitrate flux in the unsaturated zone and groundwater, Wisconsin, USA
Nolan, Bernard T.; Green, Christopher T.; Juckem, Paul F.; Liao, Lixia; Reddy, James E.
2018-01-01
Nitrate contamination of groundwater in agricultural areas poses a major challenge to the sustainability of water resources. Aquifer vulnerability models are useful tools that can help resource managers identify areas of concern, but quantifying nitrogen (N) inputs in such models is challenging, especially at large spatial scales. We sought to improve regional nitrate (NO3−) input functions by characterizing unsaturated zone NO3− transport to groundwater through use of surrogate, machine-learning metamodels of a process-based N flux model. The metamodels used boosted regression trees (BRTs) to relate mappable landscape variables to parameters and outputs of a previous “vertical flux method” (VFM) applied at sampled wells in the Fox, Wolf, and Peshtigo (FWP) river basins in northeastern Wisconsin. In this context, the metamodels upscaled the VFM results throughout the region, and the VFM parameters and outputs are the metamodel response variables. The study area encompassed the domain of a detailed numerical model that provided additional predictor variables, including groundwater recharge, to the metamodels. We used a statistical learning framework to test a range of model complexities to identify suitable hyperparameters of the six BRT metamodels corresponding to each response variable of interest: NO3− source concentration factor (which determines the local NO3− input concentration); unsaturated zone travel time; NO3− concentration at the water table in 1980, 2000, and 2020 (three separate metamodels); and NO3− “extinction depth”, the eventual steady-state depth of the NO3− front. The final metamodels were trained on 129 wells within the active numerical flow model area and considered 58 mappable predictor variables compiled in a geographic information system (GIS). These metamodels had training and cross-validation testing R2 values of 0.52–0.86 and 0.22–0.38, respectively, and predictions were compiled as maps of the above response variables. Testing performance was reasonable, considering that we limited the metamodel predictor variables to mappable factors as opposed to using all available VFM input variables. Relationships between metamodel predictor variables and mapped outputs were generally consistent with expectations, e.g., with greater source concentrations and NO3− at the groundwater table in areas of intensive crop use and well-drained soils. Shorter unsaturated zone travel times in poorly drained areas likely indicated preferential flow through clay soils and a tendency for fine-grained deposits to collocate with areas of shallower water table. Numerical estimates of groundwater recharge were important in the metamodels and may have been a proxy for N input and redox conditions in the northern FWP, which had shallow predicted NO3− extinction depth. The metamodel results provide proof-of-concept for regional characterization of unsaturated zone NO3− transport processes in a statistical framework based on readily mappable GIS input variables.
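A condensed sketch of the BRT metamodel workflow (boosted trees plus a hyperparameter search) is shown below; the synthetic data and grid are placeholders, with only the shapes (129 wells, 58 predictors) taken from the abstract.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

# X: mappable GIS predictors at sampled wells; y: one VFM response
# (e.g. unsaturated-zone travel time). Values here are synthetic.
rng = np.random.default_rng(7)
X = rng.random((129, 58))            # 129 training wells, 58 predictors
y = X[:, 0] * 3 + np.log1p(X[:, 1]) + rng.normal(0, 0.3, 129)

# Cross-validated search over model complexity, one metamodel per response
search = GridSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_grid={"n_estimators": [200, 500],
                "learning_rate": [0.01, 0.05],
                "max_depth": [2, 3]},
    cv=5, scoring="r2")
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 2))
```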
Fichez, R; Chifflet, S; Douillet, P; Gérard, P; Gutierrez, F; Jouon, A; Ouillon, S; Grenz, C
2010-01-01
Considering the growing concern about the impact of anthropogenic inputs on coral reefs and coral reef lagoons, surprisingly little attention has been given to the relationship between those inputs and the trophic status of lagoon waters. The present paper describes the distribution of biogeochemical parameters in the coral reef lagoon of New Caledonia, where environmental conditions allegedly range from pristine oligotrophic to anthropogenically influenced. The study objectives were to: (i) identify terrigenous and anthropogenic inputs and propose a typology of lagoon waters, and (ii) determine the temporal variability of water biogeochemical parameters at time-scales ranging from hours to seasons. Combined ACP-cluster analyses revealed that over the 2000 km(2) lagoon area around the city of Nouméa, "natural" terrigenous versus oceanic influences affecting all stations only accounted for less than 20% of the spatial variability, whereas 60% of that spatial variability could be attributed to significant eutrophication of a limited number of inshore stations. ACP analysis allowed unambiguous discrimination between the natural trophic enrichment along the offshore-inshore gradient and anthropogenically induced eutrophication. High temporal variability in dissolved inorganic nutrient concentrations strongly hindered their use as indicators of environmental status. Due to their longer turnover time, particulate organic material, and more specifically chlorophyll a, appeared to be a more reliable nonconservative tracer of trophic status. Results further provided evidence that ENSO occurrences might temporarily lower the trophic status of the New Caledonia lagoon. It is concluded that, due to such high frequency temporal variability, the use of biogeochemical parameters in environmental surveys requires adapted sampling strategies, data management and environmental alert methods. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
A Methodological Review of US Budget-Impact Models for New Drugs.
Mauskopf, Josephine; Earnshaw, Stephanie
2016-11-01
A budget-impact analysis is required by many jurisdictions when adding a new drug to the formulary. However, previous reviews have indicated that adherence to methodological guidelines is variable. In this methodological review, we assess the extent to which US budget-impact analyses for new drugs use recommended practices. We describe recommended practice for seven key elements in the design of a budget-impact analysis. Targeted literature searches for US studies reporting estimates of the budget impact of a new drug were performed, and we prepared a summary of how each study addressed the seven key elements. The primary finding from this review is that recommended practice is not followed in many budget-impact analyses. For example, we found that growth in the treated population size and/or changes in disease-related costs expected during the model time horizon for more effective treatments were not included in several analyses for chronic conditions. In addition, not all drug-related costs were captured in the majority of the models. Finally, for most studies, one-way sensitivity and scenario analyses were very limited, and the ranges used in one-way sensitivity analyses were frequently arbitrary percentages rather than being data driven. The conclusions from our review are that changes in population size, disease severity mix, and/or disease-related costs should be properly accounted for to avoid over- or underestimating the budget impact. Since each budget holder might have different perspectives and different values for many of the input parameters, it is also critical for published budget-impact analyses to include extensive sensitivity and scenario analyses based on realistic input values.
Prediction of surface distress using neural networks
NASA Astrophysics Data System (ADS)
Hamdi, Hadiwardoyo, Sigit P.; Correia, A. Gomes; Pereira, Paulo; Cortez, Paulo
2017-06-01
Road infrastructures contribute to a healthy economy through the sustainable distribution of goods and services. A road network requires appropriately programmed maintenance treatments in order to keep road assets in good condition, providing maximum safety for road users under a cost-effective approach. Surface distress is the key element for identifying road condition and may be generated by many different factors. In this paper, a new approach is proposed to predict Surface Distress Index (SDI) values following a data-driven approach. This model will later be applied using data obtained from the Integrated Road Management System (IRMS) database. Artificial Neural Networks (ANNs) are used to predict the SDI index using input variables related to surface distress, i.e., crack area and width, pothole, rutting, patching and depression. The achieved results show that the ANN is able to predict SDI with a high correlation factor (R2 = 0.996). Moreover, a sensitivity analysis was applied to the ANN model, revealing the influence of the most relevant input parameters for SDI prediction, namely rutting (59.8%), crack width (29.9%), crack area (5.0%), patching (3.0%), pothole (1.7%) and depression (0.3%).
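An illustrative reconstruction of this setup with a small feed-forward network; the synthetic data and the weights in the stand-in target are assumptions, not the IRMS data or the paper's trained model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Columns: crack area, crack width, pothole, rutting, patching, depression
rng = np.random.default_rng(3)
X = rng.random((500, 6))
# Synthetic SDI dominated by rutting and crack width, echoing the abstract
sdi = 0.6 * X[:, 3] + 0.3 * X[:, 1] + 0.1 * X[:, 0] + rng.normal(0, 0.02, 500)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                   random_state=0))
model.fit(X[:400], sdi[:400])
print(round(model.score(X[400:], sdi[400:]), 3))   # R^2 on held-out rows
```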
Implementation of a Trailing-Edge Flap Analysis Model in the NASA Langley CAMRAD.MOD1/Hires Program
NASA Technical Reports Server (NTRS)
Charles, Bruce
1999-01-01
Continual advances in rotorcraft performance, vibration and acoustic characteristics are being sought by rotary-wing vehicle manufacturers to improve efficiency, handling qualities and community noise acceptance of their products. The rotor system's aerodynamic and dynamic behavior is among the key factors that must be addressed to meet the desired goals. Rotor aerodynamicists study how airload redistribution impacts performance and noise, and seek ways to achieve better airload distribution through changes in local aerodynamic response characteristics. One method currently receiving attention is the use of trailing-edge flaps mounted on the rotor blades to provide direct control of a portion of the spanwise lift characteristics. The following work describes the incorporation of a trailing-edge flap model in the CAMRAD.Mod1/HIRES comprehensive rotorcraft analysis code. The CAMRAD.Mod1/HIRES analysis consists of three separate executable codes: the comprehensive trim analysis, CAMRAD.Mod1; the Indicial Post-Processor, IPP, for high resolution airloads; and AIRFOIL, which produces the rotor airfoil tables from input airfoil section characteristics. The modifications made to these components permitting analysis of flapped rotor configurations are documented herein, along with user instructions detailing the new input variables and operational notes.
Measurement of characteristic parameters of 10 Gb/s bidirectional optical amplifier for XG-PON
NASA Astrophysics Data System (ADS)
Rakkammee, Suchaj; Boriboon, Budsara; Worasucheep, Duang-rudee; Wada, Naoya
2018-03-01
This research experimentally measured the characteristic parameters of a 10 Gb/s bidirectional optical amplifier: (1) operating wavelength range, (2) small signal gain, (3) Polarization Dependent Loss (PDL), and (4) power consumption. Bidirectional amplifiers are a key component for extending the coverage area and increasing the number of users in Passive Optical Networks (PON). According to the 10-Gigabit-capable PON (XG-PON) standard, the downstream and upstream wavelengths are 1577 nm and 1270 nm respectively. Thus, our bidirectional amplifier consists of an Erbium Doped Fiber Amplifier (EDFA) and a Semiconductor Optical Amplifier (SOA) for downstream and upstream wavelength transmissions respectively. The operating wavelengths of the EDFA and SOA are measured to be from 1570 nm to 1588 nm and 1263 nm to 1280 nm respectively. To measure gain, the input wavelengths of the EDFA and SOA were fixed at 1577 nm and 1271 nm respectively, while their input powers were reduced by a variable optical attenuator. The small signal gain of the EDFA is 22.5 dB at 0.15 A pump current, whereas the small signal gain of the SOA is 7.06 dB at 0.325 A pump current. To measure PDL, which is the difference in output powers at various States of Polarization (SoP) of the input signal, a polarization controller was inserted before the amplifier to alter the input SoP. The measured PDL of the EDFA is insignificant at less than 0.1 dB. In contrast, the measured PDL of the SOA is as large as 33 dB, indicating its strong polarization dependence. The total power consumption was measured to be 1.5675 W.
Binary full adder, made of fusion gates, in a subexcitable Belousov-Zhabotinsky system
NASA Astrophysics Data System (ADS)
Adamatzky, Andrew
2015-09-01
In an excitable thin-layer Belousov-Zhabotinsky (BZ) medium a localized perturbation leads to the formation of omnidirectional target or spiral waves of excitation. A subexcitable BZ medium responds to asymmetric local perturbation by producing traveling localized excitation wave-fragments, distant relatives of dissipative solitons. The size and life span of an excitation wave-fragment depend on the illumination level of the medium. Under the right conditions the wave-fragments conserve their shape and velocity vectors for extended time periods. I interpret the wave-fragments as values of Boolean variables. When two or more wave-fragments collide they annihilate or merge into a new wave-fragment. States of the logic variables, represented by the wave-fragments, are changed as a result of collisions between the wave-fragments. Thus, a logical gate is implemented. Several theoretical designs and experimental laboratory implementations of Boolean logic gates have been proposed in the past, but little has been done to cascade the gates into binary arithmetical circuits. I propose a unique design of a binary one-bit full adder based on a fusion gate. A fusion gate is a two-input three-output logical device which calculates the conjunction of the input variables and the conjunction of one input variable with the negation of the other. The gate is made of three channels: two channels cross each other at an angle, and a third channel starts at the junction. The channels contain a BZ medium. When two excitation wave-fragments, traveling towards each other along the input channels, collide at the junction, they merge into a single wave-front traveling along the third channel. If there is just one wave-front in an input channel, the front continues its propagation undisturbed. I make a one-bit full adder by cascading two fusion gates. I show how to cascade the adder blocks into a many-bit full adder. I evaluate the feasibility of my designs by simulating the evolution of excitation in the gates and adders using numerical integration of the Oregonator equations.
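The fusion-gate primitive described above reduces to a small truth table; the sketch below encodes it directly, while the cascade into the one-bit adder follows the paper's specific channel geometry and is not reproduced here.

```python
# Fusion gate: wave-fragments on the two crossing input channels merge at
# the junction (x AND y) or pass through undisturbed when alone
# (x AND NOT y, y AND NOT x).
def fusion_gate(x: bool, y: bool) -> tuple[bool, bool, bool]:
    merged = x and y            # junction (third) channel
    out_x = x and not y         # fragment x continues alone
    out_y = y and not x         # fragment y continues alone
    return merged, out_x, out_y

for x in (False, True):
    for y in (False, True):
        print(x, y, fusion_gate(x, y))
```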
Practical somewhat-secure quantum somewhat-homomorphic encryption with coherent states
NASA Astrophysics Data System (ADS)
Tan, Si-Hui; Ouyang, Yingkai; Rohde, Peter P.
2018-04-01
We present a scheme for implementing homomorphic encryption on coherent states encoded using phase-shift keys. The encryption operations require only rotations in phase space, which commute with computations in the code space performed via passive linear optics, and with generalized nonlinear phase operations that are polynomials of the photon-number operator in the code space. This encoding scheme can thus be applied to any computation with coherent-state inputs, and the computation proceeds via a combination of passive linear optics and generalized nonlinear phase operations. An example of such a computation is matrix multiplication, whereby a vector representing coherent-state amplitudes is multiplied by a matrix representing a linear optics network, yielding a new vector of coherent-state amplitudes. By finding an orthogonal partitioning of the support of our encoded states, we quantify the security of our scheme via the indistinguishability of the encrypted code words. While we focus on coherent-state encodings, we expect that this phase-key encoding technique could apply to any continuous-variable computation scheme where the phase-shift operator commutes with the computation.
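A toy numerical check of the commutation property underlying the scheme: a single shared phase-space rotation on the coherent-state amplitudes passes through any passive linear-optics matrix. The amplitudes and unitary are arbitrary examples, not the paper's protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = rng.normal(size=3) + 1j * rng.normal(size=3)   # coherent amplitudes

# Random unitary (QR of a complex Gaussian matrix) as the linear network
q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))

theta = 1.234                                   # secret phase key
encrypted = np.exp(1j * theta) * alpha          # encrypt: rotate in phase space
result = np.exp(-1j * theta) * (q @ encrypted)  # compute, then decrypt
assert np.allclose(result, q @ alpha)           # matches plaintext computation
```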
Panda, Bhuputra; Thakur, Harshad P
2016-10-31
One of the principal goals of any health care system is to improve health through the provision of clinical and public health services. Decentralization as a reform measure aims to improve inputs, management processes and health outcomes, and has political, administrative and financial connotations. It is argued that the robustness of a health system in achieving desirable outcomes is contingent upon the width and depth of 'decision space' at the local level. Studies have used different approaches to examine one or more facets of decentralization and its effect on health system functioning; however, lack of consensus on an acceptable framework is a critical gap in determining its quantum and quality. Theorists have resorted to concepts of 'trust', 'convenience' and 'mutual benefits' to explain, define and measure components of governance in health. In the emerging 'continuum of health services' model, the challenge lies in identifying variables of performance (fiscal allocation, autonomy at local level, perception of key stakeholders, service delivery outputs, etc.) through the prism of decentralization in the first place, and in establishing directed relationships among them. For this focused review, we conducted an extensive web-based literature search using the PubMed and Google Scholar search engines. After screening of key words and study objectives, we retrieved 180 articles for the next round of screening. One hundred and four full articles (three working papers and 101 published papers) were reviewed in totality. We attempted to summarize the existing literature on decentralization and health systems performance, explain key concepts and essential variables, and develop a framework for further scientific scrutiny. Themes are presented in three separate segments of dimensions, difficulties and derivatives. Evaluation of local decision making and its effect on health system performance has been studied in a compartmentalized manner. There is sparse evidence about innovations attributable to decentralization, and we observed that for India there is very little evaluative research on the subject. We did not come across a single study examining the perceptions and experiences of local decision makers about the opportunities and challenges they faced. The existing body of evidence may be inadequate to feed into sound policy making. The principles of management hinge on measurement of inputs, processes and outputs. In the conceptual framework we propose three levels of functions (health systems functions, management functions and measurement functions) that are intricately related to inputs, processes and outputs. Each level of function encompasses essential elements derived from the synthesis of information gathered through literature review and non-participant observation. We observed that it is difficult to quantify characteristics of governance at institutional, system and individual levels except through proxy means. There is an urgent need to sensitize governments and academia about how more objective evaluation of 'shared governance' can best be undertaken to benefit policy making. The future direction of enquiry should focus on context-specific evidence of its effect on the entire spectrum of the health system, with special emphasis on efficiency, community participation, human resource management and quality of services.
UWB delay and multiply receiver
Dallum, Gregory E.; Pratt, Garth C.; Haugen, Peter C.; Romero, Carlos E.
2013-09-10
An ultra-wideband (UWB) delay and multiply receiver is formed of a receive antenna; a variable gain attenuator connected to the receive antenna; a signal splitter connected to the variable gain attenuator; a multiplier having one input connected to an undelayed signal from the signal splitter and another input connected to a delayed signal from the signal splitter, the delay between the splitter signals being equal to the spacing between pulses from a transmitter whose pulses are being received by the receive antenna; a peak detection circuit connected to the output of the multiplier and connected to the variable gain attenuator to control the variable gain attenuator to maintain a constant amplitude output from the multiplier; and a digital output circuit connected to the output of the multiplier.
NASA Astrophysics Data System (ADS)
Yoon, J.; Zeng, N.; Mariotti, A.; Swenson, S.
2007-12-01
In an approach termed the P-E-R (or simply PER) method, we apply the basin water budget equation to diagnose the long-term variability of total terrestrial water storage (TWS). The key input variables are observed precipitation (P) and runoff (R), and estimated evaporation (E). Unlike typical offline land-surface model estimates, where only atmospheric variables are used as input, the direct use of observed runoff in the PER method imposes an important constraint on the diagnosed TWS. Although basin-scale observations of evaporation are lacking, the tendency of E to have significantly less variability than the difference between precipitation and runoff (P-R) minimizes the uncertainties originating from estimated evaporation. Compared to the more traditional method using atmospheric moisture convergence (MC) minus R (MCR method), the use of observed precipitation in the PER method is expected to lead to general improvement, especially in regions where atmospheric radiosonde data are too sparse to constrain the model-analyzed MC, such as in the remote tropics. TWS was diagnosed using the PER method for the Amazon (1970-2006) and the Mississippi Basin (1928-2006), and compared with the MCR method, land-surface models and reanalyses, and NASA's GRACE satellite gravity data. The seasonal cycle of diagnosed TWS over the Amazon is about 300 mm. The interannual TWS variability in these two basins is 100-200 mm, but multi-decadal changes can be as large as 600-800 mm. Major droughts such as the Dust Bowl period had a large impact, with water storage depleted by 500 mm over a decade. Within the short period 2003-2006 when GRACE data were available, PER and GRACE show good agreement both for the seasonal cycle and interannual variability, providing the potential to cross-validate each other. In contrast, land-surface model results are significantly smaller than PER and GRACE, especially towards longer timescales. While we currently lack independent means to verify these long-term changes, a simple error analysis using 3 precipitation datasets and 3 evaporation estimates suggests that the multi-decadal amplitude can be uncertain up to a factor of 2, while the agreement is high on interannual timescales. The large TWS variability implies a remarkable capacity of the land surface for storing and taking up water that may be under-represented in models. The results also suggest the existence of water storage memories on multi-year time scales, significantly longer than the typically assumed seasonal timescales associated with surface soil moisture.
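The PER bookkeeping amounts to integrating the basin water budget; a schematic sketch with synthetic monthly series follows (all numbers are illustrative, and the mean-removal detrending is a simplifying assumption).

```python
import numpy as np

rng = np.random.default_rng(1)
months = np.arange(12 * 30)                  # 30 years of monthly data

# Synthetic basin series in mm/month: P and R observed, E estimated with a
# seasonal cycle that varies much less than P - R, as argued above.
P = 80 + 30 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 10, months.size)
R = 0.3 * P
E = 45 + 8 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 2, months.size)

dS = P - E - R                               # water-budget residual
TWS = np.cumsum(dS - dS.mean())              # storage anomaly (mean removed
                                             # to suppress long-term drift)
print(round(TWS.min(), 1), round(TWS.max(), 1))
```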
DOT National Transportation Integrated Search
2000-03-01
Seven Key ITS Application Goals emerged from the document review, key contact interviews and input from attendees at ITS committee meetings. They were to use the ITS applications to improve the overall safety of the transportation network, to improve...
Alpha1 LASSO data bundles Lamont, OK
Gustafson, William Jr; Vogelmann, Andrew; Endo, Satoshi; Toto, Tami; Xiao, Heng; Li, Zhijin; Cheng, Xiaoping; Krishna, Bhargavi (ORCID:000000018828528X)
2016-08-03
A data bundle is a unified package consisting of LASSO LES input and output, observations, evaluation diagnostics, and model skill scores. LES input includes model configuration information and forcing data. LES output includes profile statistics and full domain fields of cloud and environmental variables. Model evaluation data consists of LES output and ARM observations co-registered on the same grid and sampling frequency. Model performance is quantified by skill scores and diagnostics in terms of cloud and environmental variables.
A Multivariate Analysis of the Early Dropout Process
ERIC Educational Resources Information Center
Fiester, Alan R.; Rudestam, Kjell E.
1975-01-01
Principal-component factor analyses were performed on patient input (demographic and pretherapy expectations), therapist input (demographic), and patient perspective therapy process variables that significantly differentiated early dropout from nondropout outpatients at two community mental health centers. (Author)
Scenario Development for the Southwestern United States
NASA Astrophysics Data System (ADS)
Mahmoud, M.; Gupta, H.; Stewart, S.; Liu, Y.; Hartmann, H.; Wagener, T.
2006-12-01
The primary goal of employing a scenario development approach for the U.S. southwest is to inform regional policy by examining future possibilities related to regional vegetation change, water-leasing, and riparian restoration. This approach is necessary due to a lack of existing explicit water resources application of scenarios to the entire southwest region. A formal approach for scenario development is adopted and applied towards water resources issues within the arid and semi-arid regions of the U.S. southwest following five progressive and reiterative phases: scenario definition, scenario construction, scenario analysis, scenario assessment, and risk management. In the scenario definition phase, the inputs of scientists, modelers, and stakeholders were collected in order to define and construct relevant scenarios to the southwest and its water sustainability needs. From stakeholder-driven scenario workshops and breakout sessions, the three main axes of principal change were identified to be climate change, population development patterns, and quality of information monitoring technology. Based on the extreme and varying conditions of these three main axes, eight scenario narratives were drafted to describe the state of each scenario's respective future and the events which led to it. Events and situations are described within each scenario narrative with respect to key variables; variables that are both important to regional water resources (as distinguished by scientists and modelers), and are good tracking and monitoring indicators of change. The current phase consists of scenario construction, where the drafted scenarios are re-presented to regional scientists and modelers to verify that proper key variables are included (or excluded) from the eight narratives. The next step is to construct the data sets necessary to implement the eight scenarios on the respective computational models of modelers investigating vegetation change, water-leasing, and riparian restoration in the southwest
First Clinical Experience with Retrospective Flash Glucose Monitoring (FGM) Analysis in South Africa
Distiller, Larry A.; Cranston, Iain; Mazze, Roger
2016-01-01
Background: In 2014, an innovative blinded continuous glucose monitoring system was introduced with automated ambulatory glucose profile (AGP) reporting. The clinical use and interpretation of this new technology has not previously been described. Therefore we wanted to understand its use in characterizing key factors related to glycemic control: glucose exposure, variability, and stability, and risk of hypoglycemia in clinical practice. Methods: Clinicians representing affiliated diabetes centers throughout South Africa were trained and subsequently were given flash glucose monitoring readers and 2-week glucose sensors to use at their discretion. After patient use, sensor data were collected and uploaded for AGP reporting. Results: Complete data (sensor AGP with corresponding clinical information) were obtained for 50 patients with type 1 (70%) and type 2 diabetes (30%), irrespective of therapy. Aggregated analysis of AGP data comparing patients with type 1 versus type 2 diabetes, revealed that despite similar HbA1c values between both groups (8.4 ± 2 vs 8.6 ± 1.7%, respectively), those with type 2 diabetes had lower mean glucose levels (9.2 ± 3 vs 10.3 mmol/l [166 ± 54 vs 185 mg/dl]) and lower indices of glucose variability (3.0 ± 1.5 vs 5.0 ± 1.9 mmol/l [54 ± 27 vs 90 ± 34.2 mg/dl]). This highlights key areas for future focus. Conclusions: Using AGP, the characteristics of glucose exposure, variability, stability, and hypoglycemia risk and occurrence were obtained within a short time and with minimal provider and patient input. In a survey at the time of the follow-up visit, clinicians indicated that aggregated AGP data analysis provided important new clinical information and insights. PMID:27154973
Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.
2002-01-01
An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
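The first-order moment approximation can be checked against Monte Carlo in a few lines; the function f below is an arbitrary smooth stand-in for the CFD output, not the Euler code.

```python
import numpy as np

# First-order moment propagation: with independent normal inputs,
# var(f) ~ sum_i (df/dx_i)^2 var(x_i).
def f(x):
    return x[0] ** 2 + np.sin(x[1]) + 0.5 * x[0] * x[2]

mu = np.array([1.0, 0.5, 2.0])
sig = np.array([0.05, 0.02, 0.10])

# Sensitivity derivatives by central differences (a CFD code would supply
# these via differentiated or adjoint solvers)
h = 1e-6
grad = np.array([(f(mu + h * e) - f(mu - h * e)) / (2 * h) for e in np.eye(3)])
var_fo = np.sum(grad ** 2 * sig ** 2)

# Monte Carlo cross-check of the approximation
rng = np.random.default_rng(0)
s = rng.normal(mu, sig, size=(100_000, 3)).T   # shape (3, N) for f
var_mc = np.var(f(s))
print(var_fo, var_mc)
```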
Variable ratio regenerative braking device
Hoppie, Lyle O.
1981-12-15
Disclosed is a regenerative braking device (10) for an automotive vehicle. The device includes an energy storage assembly (12) having a plurality of rubber rollers (26, 28) mounted for rotation between an input shaft (36) and an output shaft (42), clutches (38, 46) and brakes (40, 48) associated with each shaft, and a continuously variable transmission (22) connectable to a vehicle drivetrain and to the input and output shafts by the respective clutches. The rubber rollers are torsionally stressed to accumulate energy from the vehicle when the input shaft is clutched to the transmission while the brake on the output shaft is applied, and are torsionally relaxed to deliver energy to the vehicle when the output shaft is clutched to the transmission while the brake on the input shaft is applied. The transmission ratio is varied to control the rate of energy accumulation and delivery for a given rotational speed of the vehicle drivetrain.
NASA Technical Reports Server (NTRS)
Chen, B. M.; Saber, A.
1993-01-01
A simple and noniterative procedure for the computation of the exact value of the infimum in the singular H(infinity)-optimization problem is presented, as a continuation of our earlier work. Our problem formulation is general: we place no restrictions on the finite and infinite zero structures of the system, nor on the direct feedthrough terms between the control input and the controlled output variables or between the disturbance input and the measurement output variables. Our method is applicable to a class of singular H(infinity)-optimization problems for which the transfer functions from the control input to the controlled output and from the disturbance input to the measurement output satisfy certain geometric conditions. In particular, the paper extends the result of earlier work by allowing these two transfer functions to have invariant zeros on the j(omega) axis.
NASA Astrophysics Data System (ADS)
Yadav, D.; Upadhyay, H. C.
1992-07-01
Vehicles obtain track-induced input through the wheels, which commonly number more than one. The available analysis of vehicle response during a variable-velocity run on a non-homogeneously profiled flexible track supported by a compliant inertial foundation covers a linear heave model with a single ground input. This analysis is extended here to two-point input models with heave-pitch and heave-roll degrees of freedom. Closed-form expressions have been developed for the system response statistics. Results are presented for a railway coach and track/foundation problem, and the performances of the heave, heave-pitch and heave-roll models have been compared. The three models are found to agree in describing the track response. However, the vehicle sprung mass behaviour is predicted differently by these models, indicating the strong effect of coupling on the vehicle vibration.
Solis-Paredes, Mario; Estrada-Gutierrez, Guadalupe; Perichart-Perera, Otilia; Montoya-Estrada, Araceli; Guzmán-Huerta, Mario; Borboa-Olivares, Héctor; Bravo-Flores, Eyerahi; Cardona-Pérez, Arturo; Zaga-Clavellina, Veronica; Garcia-Latorre, Ethel; Gonzalez-Perez, Gabriela; Hernández-Pérez, José Alfredo; Irles, Claudine
2017-12-28
Maternal obesity has been related to adverse neonatal outcomes and fetal programming. Oxidative stress and adipokines are potential biomarkers in such pregnancies; thus, the measurement of these molecules has been considered critical. Therefore, we developed artificial neural network (ANN) models based on maternal weight status and clinical data to predict reliable maternal blood concentrations of these biomarkers at the end of pregnancy. Adipokines (adiponectin, leptin, and resistin), and DNA, lipid and protein oxidative markers (8-oxo-2'-deoxyguanosine, malondialdehyde and carbonylated proteins, respectively) were assessed in blood of normal weight, overweight and obese women in the third trimester of pregnancy. A Back-propagation algorithm was used to train ANN models with four input variables (age, pre-gestational body mass index (p-BMI), weight status and gestational age). ANN models were able to accurately predict all biomarkers with regression coefficients greater than R² = 0.945. P-BMI was the most significant variable for estimating adiponectin and carbonylated proteins concentrations (37%), while gestational age was the most relevant variable to predict resistin and malondialdehyde (34%). Age, gestational age and p-BMI had the same significance for leptin values. Finally, for 8-oxo-2'-deoxyguanosine prediction, the most significant variable was age (37%). These models become relevant to improve clinical and nutrition interventions in prenatal care.
Improved assessment of gross and net primary productivity of Canada's landmass
NASA Astrophysics Data System (ADS)
Gonsamo, Alemu; Chen, Jing M.; Price, David T.; Kurz, Werner A.; Liu, Jane; Boisvenue, Céline; Hember, Robbie A.; Wu, Chaoyang; Chang, Kuo-Hsien
2013-12-01
We assess Canada's gross primary productivity (GPP) and net primary productivity (NPP) using the boreal ecosystem productivity simulator (BEPS) at 250 m spatial resolution with improved input parameter and driver fields and improved phenology and nutrient-release parameterization schemes. BEPS is a process-based two-leaf enzyme kinetic terrestrial ecosystem model designed to simulate energy, water, and carbon (C) fluxes using spatial data sets of meteorology, remotely sensed land surface variables, soil properties, and photosynthesis and respiration rate parameters. Two improved key land surface variables, leaf area index (LAI) and land cover type, are derived at 250 m from the Moderate Resolution Imaging Spectroradiometer sensor. For diagnostic error assessment, we use nine forest flux tower sites where all measured C flux, meteorology, and ancillary data sets are available. The errors due to input drivers and parameters are then independently corrected for Canada-wide GPP and NPP simulations. Use of the optimized LAI, for example, reduced the absolute bias in GPP from 20.7% to 1.1% for hourly BEPS simulations. Following the error diagnostics and corrections, daily GPP and NPP are simulated over Canada at 250 m spatial resolution, the highest resolution simulation yet for the country or any other comparable region. Total NPP (GPP) for Canada's land area was 1.27 (2.68) Pg C for 2008, with forests contributing 1.02 (2.2) Pg C. The annual comparisons between measured and simulated GPP show that the mean differences are not statistically significant (p > 0.05, paired t test). The main BEPS simulation error sources are from the driver fields.
User Guidelines and Best Practices for CASL VUQ Analysis Using Dakota
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Coleman, Kayla; Gilkey, Lindsay N.
Sandia’s Dakota software (available at http://dakota.sandia.gov) supports science and engineering transformation through advanced exploration of simulations. Specifically, it manages and analyzes ensembles of simulations to provide broader and deeper perspective for analysts and decision makers. This enables them to enhance understanding of risk, improve products, and assess simulation credibility. In its simplest mode, Dakota can automate typical parameter variation studies through a generic interface to a physics-based computational model. This can lend efficiency and rigor to manual parameter perturbation studies already being conducted by analysts. However, Dakota also delivers advanced parametric analysis techniques enabling design exploration, optimization, model calibration, risk analysis, and quantification of margins and uncertainty with such models. It directly supports verification and validation activities. Dakota algorithms enrich complex science and engineering models, enabling an analyst to answer crucial questions of: Sensitivity: Which are the most important input factors or parameters entering the simulation, and how do they influence key outputs? Uncertainty: What is the uncertainty or variability in simulation output, given uncertainties in input parameters? How safe, reliable, robust, or variable is my system? (Quantification of margins and uncertainty, QMU.) Optimization: What parameter values yield the best performing design or operating condition, given constraints? Calibration: What models and/or parameters best match experimental data? In general, Dakota is the Consortium for Advanced Simulation of Light Water Reactors (CASL) delivery vehicle for verification, validation, and uncertainty quantification (VUQ) algorithms. It permits ready application of the VUQ methods described above to simulation codes by CASL researchers, code developers, and application engineers.
Real-time plasma control in a dual-frequency, confined plasma etcher
NASA Astrophysics Data System (ADS)
Milosavljević, V.; Ellingboe, A. R.; Gaman, C.; Ringwood, J. V.
2008-04-01
The physics issues of developing model-based control of plasma etching are presented. A novel methodology for incorporating real-time model-based control of plasma processing systems is developed. The methodology is developed for control of two dependent variables (ion flux and chemical densities) by two independent controls (27 MHz power and O2 flow). A phenomenological physics model of the nonlinear coupling between the independent controls and the dependent variables of the plasma is presented. By using a design of experiments, the functional dependencies of the response surface are determined. In conjunction with the physical model, the dependencies are used to deconvolve the sensor signals onto the control inputs, allowing compensation of the interaction between control paths. The compensated sensor signals and compensated set-points are then used as inputs to proportional-integral-derivative controllers to adjust radio frequency power and oxygen flow to yield the desired ion flux and chemical density. To illustrate the methodology, model-based real-time control is realized in a commercial semiconductor dielectric etch chamber. The dual-frequency symmetric diode operates with typical commercial fluorocarbon feed-gas mixtures (Ar/O2/C4F8). Key parameters for dielectric etching are known to include ion flux to the surface and surface flux of oxygen-containing species. Control is demonstrated using diagnostics of electrode-surface ion current, and chemical densities of O, O2, and CO measured by optical emission spectrometry and/or mass spectrometry. Using our model-based real-time control, the set-point tracking accuracy for changes in chemical species density and ion flux is enhanced.
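The deconvolution-plus-PID structure can be sketched with a static decoupling matrix inverted ahead of independent PI loops; all gains below are invented for illustration, and the real system uses a richer response-surface model.

```python
import numpy as np

# Static interaction gains G[i][j] = d(dependent_i)/d(control_j), e.g. from
# the design-of-experiments response surface (numbers invented):
# rows: ion flux, O density; columns: 27 MHz power, O2 flow.
G = np.array([[0.8, 0.1],
              [0.3, 0.9]])
G_inv = np.linalg.inv(G)

def pi(err, state, kp=0.5, ki=0.1, dt=0.1):
    """One proportional-integral update; state carries the integral term."""
    state += err * dt
    return kp * err + ki * state, state

# One control step: plasma-variable errors are mapped through G_inv so each
# PI loop sees a compensated, (ideally) decoupled channel.
err_plasma = np.array([0.2, -0.1])       # setpoint - measurement
err_ctl = G_inv @ err_plasma
integ = np.zeros(2)
moves = np.empty(2)
for i in range(2):
    moves[i], integ[i] = pi(err_ctl[i], integ[i])
print(moves)                             # delta(27 MHz power), delta(O2 flow)
```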
Electrical Advantages of Dendritic Spines
Gulledge, Allan T.; Carnevale, Nicholas T.; Stuart, Greg J.
2012-01-01
Many neurons receive excitatory glutamatergic input almost exclusively onto dendritic spines. In the absence of spines, the amplitudes and kinetics of excitatory postsynaptic potentials (EPSPs) at the site of synaptic input are highly variable and depend on dendritic location. We hypothesized that dendritic spines standardize the local geometry at the site of synaptic input, thereby reducing location-dependent variability of local EPSP properties. We tested this hypothesis using computational models of simplified and morphologically realistic spiny neurons that allow direct comparison of EPSPs generated on spine heads with EPSPs generated on dendritic shafts at the same dendritic locations. In all morphologies tested, spines greatly reduced location-dependent variability of local EPSP amplitude and kinetics, while having minimal impact on EPSPs measured at the soma. Spine-dependent standardization of local EPSP properties persisted across a range of physiologically relevant spine neck resistances, and in models with variable neck resistances. By reducing the variability of local EPSPs, spines standardized synaptic activation of NMDA receptors and voltage-gated calcium channels. Furthermore, spines enhanced activation of NMDA receptors and facilitated the generation of NMDA spikes and axonal action potentials in response to synaptic input. Finally, we show that dynamic regulation of spine neck geometry can preserve local EPSP properties following plasticity-driven changes in synaptic strength, but is inefficient in modifying the amplitude of EPSPs in other cellular compartments. These observations suggest that one function of dendritic spines is to standardize local EPSP properties throughout the dendritic tree, thereby allowing neurons to use similar voltage-sensitive postsynaptic mechanisms at all dendritic locations. PMID:22532875
Nonlinear Dynamic Models in Advanced Life Support
NASA Technical Reports Server (NTRS)
Jones, Harry
2002-01-01
To facilitate analysis, ALS systems are often assumed to be linear and time invariant, but they usually have important nonlinear and dynamic aspects. Nonlinear dynamic behavior can be caused by time varying inputs, changes in system parameters, nonlinear system functions, closed loop feedback delays, and limits on buffer storage or processing rates. Dynamic models are usually cataloged according to the number of state variables. The simplest dynamic models are linear, using only integration, multiplication, addition, and subtraction of the state variables. A general linear model with only two state variables can produce all the possible dynamic behavior of linear systems with many state variables, including stability, oscillation, or exponential growth and decay. Linear systems can be described using mathematical analysis. Nonlinear dynamics can be fully explored only by computer simulations of models. Unexpected behavior is produced by simple models having only two or three state variables with simple mathematical relations between them. Closed loop feedback delays are a major source of system instability. Exceeding limits on buffer storage or processing rates forces systems to change operating mode. Different equilibrium points may be reached from different initial conditions. Instead of one stable equilibrium point, the system may have several equilibrium points, oscillate at different frequencies, or even behave chaotically, depending on the system inputs and initial conditions. The frequency spectrum of an output oscillation may contain harmonics and the sums and differences of input frequencies, but it may also contain a stable limit cycle oscillation not related to input frequencies. We must investigate the nonlinear dynamic aspects of advanced life support systems to understand and counter undesirable behavior.
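As a minimal illustration of why such systems must be explored by simulation, the two-state Van der Pol oscillator below (a generic example, not an ALS model) reaches the same stable limit cycle from very different initial conditions.

```python
import numpy as np

def step(x, v, mu=2.0, dt=0.005):
    """Forward-Euler step of the Van der Pol oscillator."""
    return x + dt * v, v + dt * (mu * (1 - x ** 2) * v - x)

for x0 in (0.1, 4.0):
    x, v = x0, 0.0
    peaks = []
    for i in range(60_000):
        x, v = step(x, v)
        if i > 50_000:
            peaks.append(abs(x))
    # Both trajectories settle near the same limit-cycle amplitude (~2)
    print(x0, "->", round(max(peaks), 2))
```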
Predicting language outcomes for children learning AAC: Child and environmental factors
Brady, Nancy C.; Thiemann-Bourque, Kathy; Fleming, Kandace; Matthews, Kris
2014-01-01
Purpose To investigate a model of language development for nonverbal preschool age children learning to communicate with AAC. Method Ninety-three preschool children with intellectual disabilities were assessed at Time 1, and 82 of these children were assessed one year later at Time 2. The outcome variable was the number of different words the children produced (with speech, sign or SGD). Children’s intrinsic predictor for language was modeled as a latent variable consisting of cognitive development, comprehension, play, and nonverbal communication complexity. Adult input at school and home, and amount of AAC instruction were proposed mediators of vocabulary acquisition. Results A confirmatory factor analysis revealed that measures converged as a coherent construct and an SEM model indicated that the intrinsic child predictor construct predicted different words children produced. The amount of input received at home but not at school was a significant mediator. Conclusions Our hypothesized model accurately reflected a latent construct of Intrinsic Symbolic Factor (ISF). Children who evidenced higher initial levels of ISF and more adult input at home produced more words one year later. Findings support the need to assess multiple child variables, and suggest interventions directed to the indicators of ISF and input. PMID:23785187
Zuhtuogullari, Kursat; Allahverdi, Novruz; Arikan, Nihat
2013-01-01
Systems with high-dimensional input spaces require long processing times and large memory. Most attribute selection algorithms suffer from limits on input dimensionality and from information storage problems. These problems are eliminated by the developed feature reduction software, which uses a new modified selection mechanism that adds middle-region solution candidates. The hybrid system software is constructed for reducing the input attributes of systems with a large number of input variables. The designed software also supports the roulette wheel selection mechanism. Linear order crossover is used as the recombination operator. In genetic algorithm based soft computing methods, locking onto local solutions is also a problem, which is eliminated by the developed software. Faster and more effective results are obtained in the test procedures. Twelve input variables of the urological system have been reduced to reducts (reduced input attribute sets) with seven, six, and five elements. The obtained results show that the developed software with modified selection has advantages in memory allocation, execution time, classification accuracy, sensitivity, and specificity when compared with other reduction algorithms on the urological test data. PMID:23573172
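A small sketch of roulette-wheel (fitness-proportionate) selection over attribute-subset bit-strings, one of the operators named above; the placeholder fitness and the "middle region" interpretation are assumptions, not the authors' exact mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)
N_ATTR = 12                               # e.g. the urological system's inputs

def fitness(bits):
    """Placeholder objective standing in for classification performance."""
    return 1.0 + bits.sum() * 0.01 + rng.random() * 0.1

pop = rng.integers(0, 2, size=(20, N_ATTR))   # population of attribute subsets
fit = np.array([fitness(ind) for ind in pop])

def roulette(pop, fit):
    """Fitness-proportionate selection of one individual."""
    return pop[rng.choice(len(pop), p=fit / fit.sum())]

a, b = roulette(pop, fit), roulette(pop, fit)
# Assumed "middle region" candidate: keep genes the two parents agree on,
# re-sample the rest to explore between them.
middle = np.where(a == b, a, rng.integers(0, 2, N_ATTR))
print(a, b, middle, sep="\n")
```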
To twist, roll, stroke or poke? A study of input devices for menu navigation in the cockpit.
Stanton, Neville A; Harvey, Catherine; Plant, Katherine L; Bolton, Luke
2013-01-01
Modern interfaces within the aircraft cockpit integrate many flight management system (FMS) functions into a single system. The success of a user's interaction with an interface depends upon the optimisation between the input device, tasks and environment within which the system is used. In this study, four input devices were evaluated using a range of Human Factors methods, in order to assess aspects of usability including task interaction times, error rates, workload, subjective usability and physical discomfort. The performance of the four input devices was compared using a holistic approach and the findings showed that no single input device produced consistently high performance scores across all of the variables evaluated. The touch screen produced the highest number of 'best' scores; however, discomfort ratings for this device were high, suggesting that it is not an ideal solution as both physical and cognitive aspects of performance must be accounted for in design. This study evaluated four input devices for control of a screen-based flight management system. A holistic approach was used to evaluate both cognitive and physical performance. Performance varied across the dependent variables and between the devices; however, the touch screen produced the largest number of 'best' scores.
Interacting with notebook input devices: an analysis of motor performance and users' expertise.
Sutter, Christine; Ziefle, Martina
2005-01-01
In the present study the usability of two different types of notebook input devices was examined. The independent variables were input device (touchpad vs. mini-joystick) and user expertise (expert vs. novice state). There were 30 participants, of whom 15 were touchpad experts and the other 15 were mini-joystick experts. The experimental tasks were a point-click task (Experiment 1) and a point-drag-drop task (Experiment 2). Dependent variables were the time and accuracy of cursor control. To assess carryover effects, we had the participants complete both experiments, using not only the input device for which they were experts but also the device for which they were novices. Results showed the touchpad performance to be clearly superior to mini-joystick performance. Overall, experts showed better performance than did novices. The significant interaction of input device and expertise showed that the use of an unknown device is difficult, but only for touchpad experts, who were remarkably slower and less accurate when using a mini-joystick. Actual and potential applications of this research include an evaluation of current notebook input devices. The outcomes allow ergonomic guidelines to be derived for optimized usage and design of the mini-joystick and touchpad devices.
Socio-economic exposure to natural disasters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marin, Giovanni
Even though the correct assessment of risks is a key aspect of risk management analysis, we argue that limited effort has been devoted to the assessment of comprehensive measures of economic exposure at very small scales. For this reason, we aim to provide a set of suitable methodologies for producing a complete and detailed picture of the exposure of economic activities to natural disasters. We use Input-Output models to provide information on several socio-economic variables, such as population density, employment density, firms' turnover, and capital stock, that can be seen as direct and indirect socio-economic exposure to natural disasters. We then provide an application to the Italian context. These measures can easily be incorporated into risk assessment models to provide a clear picture of the disaster risk for local areas. - Highlights: • Ex ante assessment of economic exposure to disasters at very low geographical scale • Assessment of the cost of natural disasters in an ex-post perspective • IO model and spatial autocorrelation to get information on socio-economic variables • Indicators supporting risk assessment and risk management models
NASA Astrophysics Data System (ADS)
Milovančević, Miloš; Nikolić, Vlastimir; Anđelković, Boban
2017-01-01
Vibration-based structural health monitoring is widely recognized as an attractive strategy for early damage detection in civil structures. Vibration monitoring and prediction are important for any system because they can prevent many unpredictable behaviors of the system. If vibration monitoring is properly managed, it can ensure economic and safe operations. Potential for further improvement of vibration monitoring lies in the improvement of current control strategies. One option is the introduction of model predictive control. Multistep-ahead predictive models of vibration are a starting point for creating a successful model predictive strategy. For the purposes of this article, predictive models are created for vibration monitoring of planetary power transmissions in pellet mills. The models were developed using a novel method based on ANFIS (adaptive neuro-fuzzy inference system). The aim of this study is to investigate the potential of ANFIS for selecting the most relevant variables for predictive models of vibration monitoring of pellet mill power transmissions. The vibration data are collected by PIC (Programmable Interface Controller) microcontrollers. The goal of predictive vibration monitoring of planetary power transmissions in pellet mills is to indicate deterioration in the vibration of the power transmissions before actual failure occurs. The ANFIS process for variable selection was implemented to detect the predominant variables affecting the prediction of vibration monitoring. It was also used to select the minimal input subset of variables from the initial set of input variables, namely current and lagged variables (up to 11 steps) of vibration. The obtained results could be used to simplify predictive methods so as to avoid multiple input variables. Models with fewer inputs were preferable because of overfitting between training and testing data. While the obtained results are promising, further work is required to obtain results that could be directly applied in practice.
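The input-selection step described above can be illustrated independently of ANFIS, which has no standard-library implementation. The sketch below builds the current-and-lagged vibration variables (up to 11 steps) and ranks them for a multistep-ahead target; the synthetic series, the correlation-based ranking, and the subset size are assumptions standing in for the ANFIS selection pass.

```python
# Build lagged inputs for a multistep-ahead predictor and rank them.
import numpy as np

rng = np.random.default_rng(0)
vib = rng.normal(size=500).cumsum()      # stand-in vibration record

MAX_LAG, HORIZON = 11, 3                 # lags 0..11, predict 3 steps ahead
rows = len(vib) - MAX_LAG - HORIZON
X = np.column_stack([vib[MAX_LAG - k : MAX_LAG - k + rows]
                     for k in range(MAX_LAG + 1)])
y = vib[MAX_LAG + HORIZON : MAX_LAG + HORIZON + rows]

# Rank lagged inputs by absolute correlation with the target and keep a
# minimal subset, mirroring the goal of avoiding overfitting from many inputs.
scores = [abs(np.corrcoef(X[:, k], y)[0, 1]) for k in range(X.shape[1])]
keep = np.argsort(scores)[::-1][:3]
print("selected lags:", sorted(int(k) for k in keep))
```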
Using a Bayesian network to predict barrier island geomorphologic characteristics
Gutierrez, Ben; Plant, Nathaniel G.; Thieler, E. Robert; Turecek, Aaron
2015-01-01
Quantifying geomorphic variability of coastal environments is important for understanding and describing the vulnerability of coastal topography, infrastructure, and ecosystems to future storms and sea level rise. Here we use a Bayesian network (BN) to test the importance of multiple interactions between barrier island geomorphic variables. This approach models complex interactions and handles uncertainty, which is intrinsic to future sea level rise, storminess, or anthropogenic processes (e.g., beach nourishment and other forms of coastal management). The BN was developed and tested at Assateague Island, Maryland/Virginia, USA, a barrier island with sufficient geomorphic and temporal variability to evaluate our approach. We tested the ability to predict dune height, beach width, and beach height variables using inputs that included longer-term, larger-scale, or external variables (historical shoreline change rates, distances to inlets, barrier width, mean barrier elevation, and anthropogenic modification). Data sets from three different years spanning nearly a decade sampled substantial temporal variability and serve as a proxy for analysis of future conditions. We show that distinct geomorphic conditions are associated with different long-term shoreline change rates and that the most skillful predictions of dune height, beach width, and beach height depend on including multiple input variables simultaneously. The predictive relationships are robust to variations in the amount of input data and to variations in model complexity. The resulting model can be used to evaluate scenarios related to coastal management plans and/or future scenarios where shoreline change rates may differ from those observed historically.
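A Bayesian network of this kind reduces, in its simplest discrete form, to conditional probability tables and Bayes' rule. The toy sketch below, with invented probabilities and only two nodes (long-term shoreline change conditioning dune-height class), illustrates the mechanics of marginal prediction and updating on an observation; it is far simpler than the multi-variable network used in the study.

```python
# Toy two-node discrete Bayesian network with invented probabilities.
import numpy as np

# P(shoreline change class): eroding, stable, accreting
p_sc = np.array([0.5, 0.3, 0.2])

# P(dune height class | shoreline change): rows = shoreline-change class,
# columns = dune height class (low, medium, high)
p_dh_given_sc = np.array([[0.6, 0.3, 0.1],
                          [0.3, 0.4, 0.3],
                          [0.1, 0.3, 0.6]])

# Marginal prediction of dune height, then the update after observing that
# a transect is eroding (Bayes' rule reduces here to selecting a row).
print("P(dune height):", p_sc @ p_dh_given_sc)
print("P(dune height | eroding):", p_dh_given_sc[0])
```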
NASA Astrophysics Data System (ADS)
Pozzi, W.; Fekete, B.; Piasecki, M.; McGuinness, D.; Fox, P.; Lawford, R.; Vorosmarty, C.; Houser, P.; Imam, B.
2008-12-01
The inadequacies of water cycle observations for monitoring long-term changes in the global water system, as well as their feedback into the climate system, pose a major constraint on sustainable development of water resources and improvement of water management practices. Hence, the Group on Earth Observations (GEO) has established Task WA-08-01, "Integration of in situ and satellite data for water cycle monitoring," an integrative initiative combining different types of satellite and in situ observations related to key variables of the water cycle with model outputs for improved accuracy and global coverage. This presentation proposes development of the Rapid, Integrated Monitoring System for the Water Cycle (Global-RIMS), already employed by the GEO Global Terrestrial Network for Hydrology (GTN-H), as either one of the main components or linked with the Asian system to constitute the modeling system of GEOSS for water cycle monitoring. We further propose expanded, augmented capability to run multiple grids to embrace some of the heterogeneous methods and formats of the Earth Science, Hydrology, and Hydraulic Engineering communities. Different methodologies are employed by the Earth Science (land surface modeling), Hydrological (GIS), and Hydraulic Engineering communities, with each community employing models that require different input data. Data will be routed as input variables to the models through web services, allowing satellite and in situ data to be integrated within the modeling framework. Semantic data integration will provide the automation to enable this system to operate in near-real-time. Multiple data collections for ground water, precipitation, soil moisture satellite data, such as SMAP, and lake data will require multiple low-level ontologies, and an upper-level ontology will permit user-friendly water management knowledge to be synthesized. These ontologies will have to have overlapping terms mapped and linked together so that they can cover an even wider net of data sources. The goal is to develop the means to link the upper-level and lower-level ontologies and to have these registered within the GEOSS Registry. Actual operational ontologies that would link to models or to data collections containing input variables required by models would have to be nested underneath this top-level ontology, analogous to the mapping that has been carried out among ontologies within GEON.
Symbolic PathFinder: Symbolic Execution of Java Bytecode
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.; Rungta, Neha
2010-01-01
Symbolic Pathfinder (SPF) combines symbolic execution with model checking and constraint solving for automated test case generation and error detection in Java programs with unspecified inputs. In this tool, programs are executed on symbolic inputs representing multiple concrete inputs. Values of variables are represented as constraints generated from the analysis of Java bytecode. The constraints are solved using off-the-shelf solvers to generate test inputs guaranteed to achieve complex coverage criteria. SPF has been used successfully at NASA, in academia, and in industry.
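The core idea, symbolic inputs plus constraint solving, can be illustrated with a toy example. The sketch below is not SPF itself (which analyzes Java bytecode); it uses the z3-solver Python package to write down one path condition by hand and solve it for a concrete test input.

```python
# Toy illustration of symbolic execution: treat the input as a symbol,
# collect the branch constraints of one path, and solve for a test input.
# Requires the z3-solver package.
from z3 import Int, Solver, sat

x = Int("x")                 # symbolic input instead of a concrete value

# Suppose the program under test is:
#   if (x > 10) { if (x % 2 == 0) { /* target branch */ } }
s = Solver()
s.add(x > 10, x % 2 == 0)    # path condition for reaching the target

if s.check() == sat:
    print("test input reaching the branch:", s.model()[x])
```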
Role of Updraft Velocity in Temporal Variability of Global Cloud Hydrometeor Number
NASA Technical Reports Server (NTRS)
Sullivan, Sylvia C.; Lee, Dong Min; Oreopoulos, Lazaros; Nenes, Athanasios
2016-01-01
Understanding how dynamical and aerosol inputs affect the temporal variability of hydrometeor formation in climate models will help to explain sources of model diversity in cloud forcing, to provide robust comparisons with data, and, ultimately, to reduce the uncertainty in estimates of the aerosol indirect effect. This variability attribution can be done at various spatial and temporal resolutions with metrics derived from online adjoint sensitivities of droplet and crystal number to relevant inputs. Such metrics are defined and calculated from simulations using the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5) and the National Center for Atmospheric Research Community Atmosphere Model Version 5.1 (CAM5.1). Input updraft velocity fluctuations can explain as much as 48% of temporal variability in output ice crystal number and 61% in droplet number in GEOS-5 and up to 89% of temporal variability in output ice crystal number in CAM5.1. In both models, this vertical velocity attribution depends strongly on altitude. Despite its importance for hydrometeor formation, simulated vertical velocity distributions are rarely evaluated against observations due to the sparsity of relevant data. Coordinated effort by the atmospheric community to develop more consistent, observationally based updraft treatments will help to close this knowledge gap.
Role of updraft velocity in temporal variability of global cloud hydrometeor number
Sullivan, Sylvia C.; Lee, Dongmin; Oreopoulos, Lazaros; ...
2016-05-16
Understanding how dynamical and aerosol inputs affect the temporal variability of hydrometeor formation in climate models will help to explain sources of model diversity in cloud forcing, to provide robust comparisons with data, and, ultimately, to reduce the uncertainty in estimates of the aerosol indirect effect. This variability attribution can be done at various spatial and temporal resolutions with metrics derived from online adjoint sensitivities of droplet and crystal number to relevant inputs. Such metrics are defined and calculated from simulations using the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5) and the National Center for Atmospheric Research Community Atmosphere Model Version 5.1 (CAM5.1). Input updraft velocity fluctuations can explain as much as 48% of temporal variability in output ice crystal number and 61% in droplet number in GEOS-5 and up to 89% of temporal variability in output ice crystal number in CAM5.1. In both models, this vertical velocity attribution depends strongly on altitude. Despite its importance for hydrometeor formation, simulated vertical velocity distributions are rarely evaluated against observations due to the sparsity of relevant data. Finally, coordinated effort by the atmospheric community to develop more consistent, observationally based updraft treatments will help to close this knowledge gap.
Role of updraft velocity in temporal variability of global cloud hydrometeor number
NASA Astrophysics Data System (ADS)
Sullivan, Sylvia C.; Lee, Dongmin; Oreopoulos, Lazaros; Nenes, Athanasios
2016-05-01
Understanding how dynamical and aerosol inputs affect the temporal variability of hydrometeor formation in climate models will help to explain sources of model diversity in cloud forcing, to provide robust comparisons with data, and, ultimately, to reduce the uncertainty in estimates of the aerosol indirect effect. This variability attribution can be done at various spatial and temporal resolutions with metrics derived from online adjoint sensitivities of droplet and crystal number to relevant inputs. Such metrics are defined and calculated from simulations using the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5) and the National Center for Atmospheric Research Community Atmosphere Model Version 5.1 (CAM5.1). Input updraft velocity fluctuations can explain as much as 48% of temporal variability in output ice crystal number and 61% in droplet number in GEOS-5 and up to 89% of temporal variability in output ice crystal number in CAM5.1. In both models, this vertical velocity attribution depends strongly on altitude. Despite its importance for hydrometeor formation, simulated vertical velocity distributions are rarely evaluated against observations due to the sparsity of relevant data. Coordinated effort by the atmospheric community to develop more consistent, observationally based updraft treatments will help to close this knowledge gap.
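The attribution metric described in these abstracts can be illustrated with a first-order calculation: the share of output variance explained by an input is approximately the squared sensitivity times the input variance, divided by the output variance. The sketch below uses a synthetic power-law stand-in for the droplet-number response; the functional form and all numbers are assumptions, not the GEOS-5 or CAM5.1 adjoints.

```python
# First-order variance attribution from a sensitivity, on toy data.
import numpy as np

rng = np.random.default_rng(1)
w = rng.lognormal(mean=-1.0, sigma=0.5, size=10_000)    # updraft velocity
aer = rng.lognormal(mean=4.0, sigma=0.3, size=10_000)   # aerosol number proxy

N = 50.0 * w**0.8 * aer**0.3            # toy droplet-number response

# Sensitivity dN/dw evaluated at the mean state (adjoints provide this
# quantity online in the models discussed above).
dN_dw = 50.0 * 0.8 * w.mean() ** -0.2 * aer.mean() ** 0.3
frac_w = dN_dw**2 * w.var() / N.var()
print(f"share of N variance attributed to updraft fluctuations: {frac_w:.0%}")
```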
WRF/CMAQ AQMEII3 Simulations of U.S. Regional-Scale Ozone: Sensitivity to Processes and Inputs
Chemical boundary conditions are a key input to regional-scale photochemical models. In this study, performed during the third phase of the Air Quality Model Evaluation International Initiative (AQMEII3), we perform annual simulations over North America with chemical boundary con...
A systems thinking approach to analysis of the Patient Protection and Affordable Care Act.
Williams, John C
2015-01-01
The public health community is challenged with understanding the many complexities presented by systems thinking and its applications in systems modeling. The model presented encompasses the multiple variables needed (eg, model building) for the construction of a conceptual system model of the Patient Protection and Affordable Care Act (ACA). The model tracks the ACA from inception, through passage in March 2010, to its current state. Justification for the need to reform the current health care system rests, in part, on social justice. Proponents of the ACA have long argued that health reform was needed by the millions of uninsured persons who suffered from health disparities, took little advantage of preventive health advice, and faced issues of access to providers as well as insurers. In addition, the ACA seeks to address uncontrollable spending on health care delivery. This article highlights the ACA from a systems perspective. The conceptual model presented encompasses both health reform variables (eg, health care provisions, key legislative components, system environment) and system variables (eg, inputs, outputs, feedback, and throughput) needed to understand current health care reform efforts from a systems perspective. The model shows how the interrelationships and interconnections of the elements of a system come together to achieve its purpose or goal.
Carrillo, Presentación; Medina-Sánchez, Juan M; Herrera, Guillermo; Durán, Cristina; Segovia, María; Cortés, Dolores; Salles, Soluna; Korbee, Nathalie; Figueroa, Félix L; Mercado, Jesús M
2015-01-01
Some of the most important effects of global change on coastal marine systems include increasing nutrient inputs and higher levels of ultraviolet radiation (UVR, 280-400 nm), which could affect primary producers, a key trophic link to the functioning of marine food webs. However, interactive effects of both factors on the phytoplankton community have not been assessed for the Mediterranean Sea. An in situ factorial experiment, with two levels of ultraviolet solar radiation (UVR+PAR vs. PAR) and nutrients (control vs. P-enriched), was performed to evaluate single and UVR×P effects on metabolic, enzymatic, stoichiometric and structural phytoplanktonic variables. While most phytoplankton variables were not affected by UVR, dissolved phosphatase (APAEX) and algal P content increased in the presence of UVR, which was interpreted as an acclimation mechanism of algae to oligotrophic marine waters. Synergistic UVR×P interactive effects were positive on photosynthetic variables (i.e., maximal electron transport rate, ETRmax), but negative on primary production and phytoplankton biomass because the pulse of P unmasked the inhibitory effect of UVR. This unmasking effect might be related to greater photodamage caused by an excess of electron flux after a P pulse (higher ETRmax) without an efficient release of carbon as the mechanism to dissipate the reducing power of photosynthetic electron transport.
Folguera-Blasco, Núria; Cuyàs, Elisabet; Menéndez, Javier A; Alarcón, Tomás
2018-03-01
Understanding the control of epigenetic regulation is key to explain and modify the aging process. Because histone-modifying enzymes are sensitive to shifts in availability of cofactors (e.g. metabolites), cellular epigenetic states may be tied to changing conditions associated with cofactor variability. The aim of this study is to analyse the relationships between cofactor fluctuations, epigenetic landscapes, and cell state transitions. Using Approximate Bayesian Computation, we generate an ensemble of epigenetic regulation (ER) systems whose heterogeneity reflects variability in cofactor pools used by histone modifiers. The heterogeneity of epigenetic metabolites, which operates as regulator of the kinetic parameters promoting/preventing histone modifications, stochastically drives phenotypic variability. The ensemble of ER configurations reveals the occurrence of distinct epi-states within the ensemble. Whereas resilient states maintain large epigenetic barriers refractory to reprogramming cellular identity, plastic states lower these barriers, and increase the sensitivity to reprogramming. Moreover, fine-tuning of cofactor levels redirects plastic epigenetic states to re-enter epigenetic resilience, and vice versa. Our ensemble model agrees with a model of metabolism-responsive loss of epigenetic resilience as a cellular aging mechanism. Our findings support the notion that cellular aging, and its reversal, might result from stochastic translation of metabolic inputs into resilient/plastic cell states via ER systems.
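The Approximate Bayesian Computation step can be illustrated with a minimal rejection sampler. In the sketch below, the epigenetic-regulation model is replaced by a one-parameter stand-in function, and the prior, summary statistic, and tolerance are all assumptions; the accepted parameter draws play the role of the ensemble of ER configurations.

```python
# Minimal ABC rejection sampler with a stand-in simulator.
import numpy as np

rng = np.random.default_rng(2)

def simulate_summary(k):
    # Stand-in for the ER model: a summary statistic as a noisy function
    # of a histone-modification rate parameter k.
    return k / (1.0 + k) + rng.normal(scale=0.02)

observed = 0.6       # "observed" summary statistic
tol = 0.03           # acceptance tolerance

accepted = []
for _ in range(20_000):
    k = rng.uniform(0.1, 10.0)          # prior over the kinetic parameter
    if abs(simulate_summary(k) - observed) < tol:
        accepted.append(k)

# The accepted draws approximate the posterior; their spread mirrors the
# cofactor-driven heterogeneity of viable ER configurations.
print(len(accepted), np.mean(accepted), np.std(accepted))
```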
Soft sensor modeling based on variable partition ensemble method for nonlinear batch processes
NASA Astrophysics Data System (ADS)
Wang, Li; Chen, Xiangguang; Yang, Kai; Jin, Huaiping
2017-01-01
Batch processes are typically characterized by nonlinear and uncertain system properties; therefore, a conventional single model may be ill-suited. A local-learning soft sensor based on a variable partition ensemble method is developed for quality prediction in nonlinear and non-Gaussian batch processes. A set of input variable subsets is obtained by bootstrapping and the PMI criterion. Multiple local GPR models are then developed, one for each local input variable subset. When a new test sample arrives, the posterior probability of each local model is estimated by Bayesian inference and used to combine the local GPR models into a final prediction. The proposed soft sensor is demonstrated on an industrial fed-batch chlortetracycline fermentation process.
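The ensemble idea can be sketched with off-the-shelf Gaussian process regression. In the illustration below the input-variable subsets are hard-coded stand-ins for the bootstrapping/PMI selection, and predictive precision serves as a proxy for the Bayesian posterior weight of each local model; none of this is the authors' code.

```python
# Local GPR models on variable subsets, combined by precision weights.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(80, 4))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=80)

subsets = [(0, 1), (0, 2), (1, 3)]     # stand-ins for PMI/bootstrap subsets
models = [GaussianProcessRegressor().fit(X[:, s], y) for s in subsets]

x_new = rng.normal(size=(1, 4))
preds = [m.predict(x_new[:, s], return_std=True)
         for m, s in zip(models, subsets)]
mus = np.array([p[0][0] for p in preds])
sigmas = np.array([p[1][0] for p in preds])

# Weight of each local model: predictive precision as a proxy for the
# likelihood term in Bayes' rule, under equal priors.
w = 1.0 / sigmas**2
w /= w.sum()
print("combined prediction:", float(w @ mus))
```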
Quantification of uncertainties in global grazing systems assessment
NASA Astrophysics Data System (ADS)
Fetzel, T.; Havlik, P.; Herrero, M.; Kaplan, J. O.; Kastner, T.; Kroisleitner, C.; Rolinski, S.; Searchinger, T.; Van Bodegom, P. M.; Wirsenius, S.; Erb, K.-H.
2017-07-01
Livestock systems play a key role in global sustainability challenges like food security and climate change, yet many unknowns and large uncertainties prevail. We present a systematic, spatially explicit assessment of uncertainties related to grazing intensity (GI), a key metric for assessing ecological impacts of grazing, by combining existing data sets on (a) grazing feed intake, (b) the spatial distribution of livestock, (c) the extent of grazing land, and (d) its net primary productivity (NPP). An analysis of the resulting 96 maps implies that on average 15% of the grazing land NPP is consumed by livestock. GI is low in most of the world's grazing lands, but hotspots of very high GI prevail in 1% of the total grazing area. The agreement between GI maps is good on one fifth of the world's grazing area, while on the remainder, it is low to very low. Largest uncertainties are found in global drylands and where grazing land bears trees (e.g., the Amazon basin or the Taiga belt). In some regions like India or Western Europe, massive uncertainties even result in GI > 100% estimates. Our sensitivity analysis indicates that the input data for NPP, animal distribution, and grazing area contribute about equally to the total variability in GI maps, while grazing feed intake is a less critical variable. We argue that a general improvement in quality of the available global level data sets is a precondition for improving the understanding of the role of livestock systems in the context of global environmental change or food security.
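The GI metric itself is simple map algebra: feed intake from grazing land divided by that land's NPP, cell by cell. A toy 2-by-2 example (invented numbers, nominal units of tC km⁻² yr⁻¹) shows the computation and how a GI above 100% can arise:

```python
# Grazing intensity as cell-wise map algebra on toy rasters.
import numpy as np

intake = np.array([[20.0,  5.0],     # grazed biomass per cell
                   [60.0,  1.0]])
npp    = np.array([[200.0, 90.0],    # net primary productivity per cell
                   [50.0, 400.0]])

gi = intake / npp                    # fraction of NPP consumed by livestock
print(np.round(100 * gi, 1))         # percent; note the >100% cell, as the
                                     # abstract reports for some regions
```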
Kantún-Manzano, C A; Herrera-Silveira, J A; Arcega-Cabrera, F
2018-01-01
The influence of coastal submarine groundwater discharges (SGD) on the distribution and abundance of seagrass meadows was investigated. In 2012, hydrological variability, nutrient variability in sediments, and the biotic characteristics of two seagrass beds, one with SGD present and one without, were studied. Findings showed that SGD inputs were related to one dominant seagrass species. To further understand this, a generalized additive model (GAM) was used to explore the relationship between seagrass biomass and environmental conditions (water and sediment variables). Salinity range (21-35.5 PSU) was the most influential variable (85%), explaining why H. wrightii was the sole plant species present at the SGD site. At the site without SGD, a GAM could not be fitted since the environmental variables could not explain more than 60% of the total variance. This research shows the relevance of monitoring SGD inputs in coastal karstic areas since they significantly affect the biotic characteristics of seagrass beds.
Nutrient inputs from the watershed and coastal eutrophication in the Florida Keys
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaPointe, B.E.; Clark, M.W.
1992-12-01
Widespread use of septic tanks in the Florida Keys increases the nutrient concentrations of limestone ground waters that discharge into shallow nearshore waters, resulting in coastal eutrophication. This study characterizes watershed nutrient inputs, transformations, and effects along a land-sea gradient stratified into four ecosystems that occur with increasing distance from land: manmade canal systems, seagrass meadows, patch reefs, and offshore bank reefs. Soluble reactive phosphorus (SRP), the primary limiting nutrient, was significantly elevated in canal systems, while dissolved inorganic nitrogen (DIN; NH₄⁺ and NO₃⁻...
Multiple-input multiple-output causal strategies for gene selection.
Bontempi, Gianluca; Haibe-Kains, Benjamin; Desmedt, Christine; Sotiriou, Christos; Quackenbush, John
2011-11-25
Traditional strategies for selecting variables in high-dimensional classification problems aim to find sets of maximally relevant variables able to explain the target variations. While these techniques may be effective in terms of generalization accuracy, they often do not reveal direct causes. The latter is essentially related to the fact that high correlation (or relevance) does not imply causation. In this study, we show how to efficiently incorporate causal information into gene selection by moving from a single-input single-output to a multiple-input multiple-output setting. We show in a synthetic case study that a better prioritization of causal variables can be obtained by considering a relevance score which incorporates a causal term. In addition we show, in a meta-analysis of six publicly available breast cancer microarray datasets, that the improvement also occurs in terms of accuracy. The biological interpretation of the results confirms the potential of a causal approach to gene selection. Integrating causal information into gene selection algorithms is effective both in terms of prediction accuracy and biological interpretation.
Hammerstrom, Donald J.
2013-10-15
A method for managing the charging and discharging of batteries wherein at least one battery is connected to a battery charger, and the battery charger is connected to a power supply. A plurality of controllers in communication with one another is provided, each controller monitoring a subset of input variables. A set of charging constraints may then be generated for each controller as a function of its subset of input variables. A set of objectives for each controller may also be generated. A preferred charge rate for each controller is generated as a function of the set of objectives, the charging constraints, or both, using an algorithm that accounts for each of the preferred charge rates of the controllers and/or does not violate any of the charging constraints. A current flow between the battery and the battery charger is then provided at the resulting actual charge rate.
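A minimal sketch of the coordination logic reads as follows; the controller names, the averaging rule for combining preferred rates, and the interval form of the constraints are all assumptions, since the abstract leaves the algorithm open.

```python
# Combine per-controller preferred charge rates without violating any
# controller's constraints (intervals here; the patent's form may differ).
def actual_charge_rate(preferred_rates, constraints):
    """Average the preferred rates, then clip into every allowed interval."""
    rate = sum(preferred_rates) / len(preferred_rates)
    lo = max(c[0] for c in constraints)   # tightest lower bound
    hi = min(c[1] for c in constraints)   # tightest upper bound
    return min(max(rate, lo), hi)

# Three controllers monitoring different input variables (e.g. price, grid
# load, battery temperature) propose rates in amps and allowed intervals:
prefs = [10.0, 6.0, 8.0]
cons = [(0.0, 12.0), (2.0, 9.0), (0.0, 15.0)]
print(actual_charge_rate(prefs, cons))    # 8.0, within every constraint
```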
NASA Astrophysics Data System (ADS)
Knapp, Alan; Smith, Melinda; Collins, Scott; Blair, John; Briggs, John
2010-05-01
Understanding and predicting the dynamics of ecological systems has always been central to Ecology. Today, ecologists recognize that in addition to natural and human-caused disturbances, a fundamentally different type of ecosystem change is being driven by the combined and cumulative effects of anthropogenic activities affecting earth's climate and biogeochemical cycles. This type of change is historically unprecedented in magnitude, and as a consequence, such alterations are leading to trajectories of change in ecological responses that differ radically from those observed in the past. Through both short- and long-term experiments, we have been trying to better understand the mechanisms and consequences of ecological change in grassland ecosystems likely to result from changes in precipitation regimes. We have manipulated a key resource for most grasslands (water) and modulators of water availability (temperature) in field experiments that vary from 1-17 years in duration, and used even longer-term monitoring data from the Konza Prairie LTER program to assess how grassland communities and ecosystems will respond to changes in water availability. Trajectories of change in aboveground net primary production (ANPP) in sites subjected to 17 years of soil water augmentation were strongly non-linear with a marked increase in the stimulation of ANPP after year 8 (from 25% to 65%). Lags in alterations in grassland community composition are posited to be responsible for the form of this trajectory of change. In contrast, responses in ANPP to chronic increases in soil moisture variability appear to have decreased over a 10-yr period of manipulation, although the net effects of more variable precipitation inputs were to reduce ANPP, alter the genetic structure of the dominant grass species, increase soil nitrogen availability and reduce soil respiration. The loss of sensitivity to increased resource variability was not reflected in adjacent plots where precipitation was manipulated for only a single year. And when similar short-term experimental manipulations of precipitation variability were conducted in more arid grasslands, responses in ANPP were opposite those in mesic grassland. This suggests that grassland responses to alterations in precipitation inputs may vary dramatically depending on the long-term hydrologic regime.
Ambrose, Sophie E; Walker, Elizabeth A; Unflat-Berry, Lauren M; Oleson, Jacob J; Moeller, Mary Pat
2015-01-01
The primary objective of this study was to examine the quantity and quality of caregiver talk directed to children who are hard of hearing (CHH) compared with children with normal hearing (CNH). For the CHH only, the study explored how caregiver input changed as a function of child age (18 months versus 3 years), which child and family factors contributed to variance in caregiver linguistic input at 18 months and 3 years, and how caregiver talk at 18 months related to child language outcomes at 3 years. Participants were 59 CNH and 156 children with bilateral, mild-to-severe hearing loss. When children were approximately 18 months and/or 3 years of age, caregivers and children participated in a 5-min semistructured, conversational interaction. Interactions were transcribed and coded for two features of caregiver input representing quantity (number of total utterances and number of total words) and four features representing quality (number of different words, mean length of utterance in morphemes, proportion of utterances that were high level, and proportion of utterances that were directing). In addition, at the 18-month visit, parents completed a standardized questionnaire regarding their child's communication development. At the 3-year visit, a clinician administered a standardized language measure. At the 18-month visit, the CHH were exposed to a greater proportion of directing utterances than the CNH. At the 3-year visit, there were significant differences between the CNH and CHH for number of total words and all four of the quality variables, with the CHH being exposed to fewer words and lower quality input. Caregivers generally provided higher quality input to CHH at the 3-year visit compared with the 18-month visit. At the 18-month visit, quantity variables, but not quality variables, were related to several child and family factors. At the 3-year visit, the variable most strongly related to caregiver input was child language. Longitudinal analyses indicated that quality, but not quantity, of caregiver linguistic input at 18 months was related to child language abilities at 3 years, with directing utterances accounting for significant unique variance in child language outcomes. Although caregivers of CHH increased their use of quality features of linguistic input over time, the differences when compared with CNH suggest that some caregivers may need additional support to provide their children with optimal language learning environments. This is particularly important given the relationships that were identified between quality features of caregivers' linguistic input and children's language abilities. Family supports should include a focus on developing a style that is conversational eliciting as opposed to directive.
A Framework to Guide the Assessment of Human-Machine Systems.
Stowers, Kimberly; Oglesby, James; Sonesh, Shirley; Leyva, Kevin; Iwig, Chelsea; Salas, Eduardo
2017-03-01
We have developed a framework for guiding measurement in human-machine systems. The assessment of safety and performance in human-machine systems often relies on direct measurement, such as tracking reaction time and accidents. However, safety and performance emerge from the combination of several variables. The assessment of precursors to safety and performance are thus an important part of predicting and improving outcomes in human-machine systems. As part of an in-depth literature analysis involving peer-reviewed, empirical articles, we located and classified variables important to human-machine systems, giving a snapshot of the state of science on human-machine system safety and performance. Using this information, we created a framework of safety and performance in human-machine systems. This framework details several inputs and processes that collectively influence safety and performance. Inputs are divided according to human, machine, and environmental inputs. Processes are divided into attitudes, behaviors, and cognitive variables. Each class of inputs influences the processes and, subsequently, outcomes that emerge in human-machine systems. This framework offers a useful starting point for understanding the current state of the science and measuring many of the complex variables relating to safety and performance in human-machine systems. This framework can be applied to the design, development, and implementation of automated machines in spaceflight, military, and health care settings. We present a hypothetical example in our write-up of how it can be used to aid in project success.
Creating a non-linear total sediment load formula using polynomial best subset regression model
NASA Astrophysics Data System (ADS)
Okcu, Davut; Pektas, Ali Osman; Uyumaz, Ali
2016-08-01
The aim of this study is to derive a new total sediment load formula which is more accurate and has fewer application constraints than the well-known formulae of the literature. The five best-known stream-power-concept sediment formulae approved by ASCE are used for benchmarking on a wide range of datasets that includes both field and flume (lab) observations. The dimensionless parameters of these widely used formulae are used as inputs in a new regression approach, called Polynomial Best Subset Regression (PBSR) analysis. The aim of PBSR analysis is to fit and test all possible combinations of the input variables and select the best subset. All input variables, together with their second and third powers, are included in the regression to test possible relations between the explanatory variables and the dependent variable. In selecting the best subset, a multistep approach is used that depends on significance values and on the degree of multicollinearity among inputs. The new formula is compared to the others on a holdout dataset, and detailed performance investigations are conducted for the field and lab subsets within this holdout data. Different goodness-of-fit statistics are used, as they represent different perspectives on model accuracy. After the detailed comparisons were carried out, we identified the most accurate equation, which is also applicable to both flume and river data. On the field dataset in particular, the prediction performance of the proposed formula outperformed the benchmark formulations.
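The PBSR procedure can be sketched compactly: expand each candidate input with its second and third powers, fit every small subset by least squares, and keep the subset with the best adjusted R². The data, the pool of three base variables, and the subset-size cap below are synthetic assumptions; the published method adds significance and multicollinearity screening on top of this.

```python
# Polynomial best-subset regression on synthetic dimensionless inputs.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
X = rng.uniform(0.1, 1.0, size=(200, 3))            # 3 dimensionless inputs
y = 2.0 * X[:, 0] ** 2 + 0.5 * X[:, 2] + rng.normal(scale=0.05, size=200)

# Candidate pool: each variable with its 2nd and 3rd powers (9 terms).
pool = np.column_stack([X ** p for p in (1, 2, 3)])
n, m = pool.shape

def adj_r2(cols):
    A = np.column_stack([np.ones(n), pool[:, list(cols)]])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1 - resid.var() / y.var()
    k = len(cols)
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

best = max((c for k in range(1, 4) for c in combinations(range(m), k)),
           key=adj_r2)
print("best subset of candidate terms:", best,
      "adjusted R2:", round(adj_r2(best), 3))
```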
Sobol' sensitivity analysis for stressor impacts on honeybee ...
We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol', to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate that queen strength, forager life span, and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on the timescale. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more...
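Because VarroaPop itself is not distributed as an installable library, the sketch below demonstrates the same Sobol' first- and second-order decomposition on a toy stand-in response using the SALib package; the parameter names, bounds, and response function are illustrative only.

```python
# Sobol' sensitivity indices on a toy colony-response function via SALib.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["queen_strength", "forager_lifespan", "pesticide_toxicity"],
    "bounds": [[1, 5], [4, 16], [0, 1]],
}

X = saltelli.sample(problem, 1024)          # Saltelli sampling scheme

# Toy response: colony size grows with queen strength and forager lifespan,
# shrinks with toxicity, with a toxicity-lifespan interaction.
Y = X[:, 0] * X[:, 1] - 8.0 * X[:, 2] * X[:, 1]

Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], np.round(Si["S1"], 2))))   # first-order
print(np.round(Si["S2"], 2))                                # second-order
```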
Observation and simulation of net primary productivity in Qilian Mountain, western China.
Zhou, Y; Zhu, Q; Chen, J M; Wang, Y Q; Liu, J; Sun, R; Tang, S
2007-11-01
We modeled net primary productivity (NPP) at high spatial resolution using an advanced spaceborne thermal emission and reflection radiometer (ASTER) image of a Qilian Mountain study area using the boreal ecosystem productivity simulator (BEPS). Two key driving variables of the model, leaf area index (LAI) and land cover type, were derived from ASTER and moderate resolution imaging spectroradiometer (MODIS) data. Other spatially explicit inputs included daily meteorological data (radiation, precipitation, temperature, humidity), available soil water holding capacity (AWC), and forest biomass. NPP was estimated for coniferous forests and other land cover types in the study area. The result showed that the NPP of coniferous forests in the study area was about 4.4 t C ha⁻¹ y⁻¹. The correlation coefficient between the modeled NPP and ground measurements was 0.84, with a mean relative error of about 13.9%.
NASA Astrophysics Data System (ADS)
Rahmati, Mehdi
2017-08-01
Developing accurate and reliable pedo-transfer functions (PTFs) to predict soil characteristics that are not readily available is one of the topics of greatest concern in soil science, and selecting appropriate predictors is a crucial factor in PTF development. Group method of data handling (GMDH), which finds an approximate relationship between a set of input and output variables, not only provides an explicit procedure to select the most essential PTF input variables, but also results in more accurate and reliable estimates than other commonly applied methodologies. The current research therefore applied GMDH, in comparison with multivariate linear regression (MLR) and artificial neural networks (ANN), to develop several PTFs to predict soil cumulative infiltration point-wise at specific times (0.5-45 min) using soil readily available characteristics (RACs). Soil infiltration curves and several soil RACs, including soil primary particles (clay (CC), silt (Si), and sand (Sa)), saturated hydraulic conductivity (Ks), bulk (Db) and particle (Dp) densities, organic carbon (OC), wet-aggregate stability (WAS), electrical conductivity (EC), and soil antecedent (θi) and field-saturated (θfs) water contents, were measured at 134 different points in the Lighvan watershed, northwest of Iran. Applying the GMDH, MLR, and ANN methodologies, several PTFs were developed to predict cumulative infiltration using two sets of selected soil RACs, including and excluding Ks. According to the test data, PTFs developed by the GMDH and MLR procedures using all soil RACs, including Ks, resulted in more accurate (E values of 0.673-0.963) and reliable (CV values lower than 11 percent) predictions of cumulative infiltration at different specific time steps. In contrast, the ANN procedure had lower accuracy (E values of 0.356-0.890) and reliability (CV values up to 50 percent) compared with GMDH and MLR. The results also revealed that excluding Ks from the input variable list caused around a 30 percent decrease in PTF accuracy for all applied procedures. However, Ks exclusion appears to yield more practical PTFs, especially in the case of the GMDH network, since the remaining input variables are less time-consuming to measure than Ks. In general, it is concluded that GMDH provides more accurate and reliable estimates of cumulative infiltration (a non-readily available soil characteristic) with a minimal set of input variables (2-4 inputs) and can be a promising strategy for modeling soil infiltration, combining the advantages of the ANN and MLR methodologies.
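A single GMDH layer can be sketched briefly: fit a quadratic polynomial neuron to every pair of inputs, score each neuron on held-out data, and pass the best survivors to the next layer. Everything below (the synthetic soil data, the train/test split, the number of survivors) is an assumption for illustration.

```python
# One layer of a GMDH network: quadratic neurons on input pairs,
# selected by held-out error.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)
X = rng.normal(size=(150, 5))                 # stand-ins for soil RACs
y = X[:, 0] * X[:, 1] + 0.3 * X[:, 3] + rng.normal(scale=0.1, size=150)
train, test = slice(0, 100), slice(100, 150)

def neuron(xi, xj, y_tr, xi_te, xj_te, y_te):
    # y ~ a + b*xi + c*xj + d*xi^2 + e*xj^2 + f*xi*xj
    def design(u, v):
        return np.column_stack([np.ones_like(u), u, v, u*u, v*v, u*v])
    beta, *_ = np.linalg.lstsq(design(xi, xj), y_tr, rcond=None)
    return np.mean((design(xi_te, xj_te) @ beta - y_te) ** 2)

scores = {(i, j): neuron(X[train, i], X[train, j], y[train],
                         X[test, i], X[test, j], y[test])
          for i, j in combinations(range(5), 2)}
best_pairs = sorted(scores, key=scores.get)[:3]
print("pairs surviving to the next layer:", best_pairs)
```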
How model and input uncertainty impact maize yield simulations in West Africa
NASA Astrophysics Data System (ADS)
Waha, Katharina; Huth, Neil; Carberry, Peter; Wang, Enli
2015-02-01
Crop models are common tools for simulating crop yields and crop production in studies on food security and global change. Various uncertainties exist, however, not only in the model design and model parameters, but also, and perhaps more importantly, in the soil, climate, and management input data. We analyze the performance of the point-scale crop model APSIM and the global-scale crop model LPJmL with different climate and soil conditions under different agricultural management in the low-input maize-growing areas of Burkina Faso, West Africa. We test the models' response to different levels of input information, from little to detailed information on soil, climate (1961-2000), and agricultural management, and compare the models' ability to represent the observed spatial (between locations) and temporal (between years) variability in crop yields. We found that the resolution of the soil, climate, and management information influences the simulated crop yields in both models. However, the difference between models is larger than between input data sets, and larger between simulations with different climate and management information than between simulations with different soil information. The observed spatial variability is represented well by both models, even with little information on soils and management, but APSIM simulates a higher variation between single locations than LPJmL. The agreement of simulated and observed temporal variability is lower due to non-climatic factors, e.g., investment in agricultural research and development between 1987 and 1991 in Burkina Faso, which resulted in a doubling of maize yields. The findings of our study highlight the importance of scale and model choice and show that the most detailed input data do not necessarily improve model performance.
Konstantinidis, Spyridon; Titchener-Hooker, Nigel; Velayudhan, Ajoy
2017-08-01
Bioprocess development studies often involve the investigation of numerical and categorical inputs via the adoption of Design of Experiments (DoE) techniques. An attractive alternative is the deployment of a grid-compatible Simplex variant which has been shown to yield optima rapidly and consistently. In this work, the method is combined with dummy variables and deployed in three case studies wherein the search spaces comprise both categorical and numerical inputs, a situation intractable by traditional Simplex methods. The first study employs in silico data and lays out the dummy variable methodology. The latter two employ experimental data from chromatography-based studies performed with the filter-plate and miniature column High Throughput (HT) techniques. The solute of interest in the former case study was a monoclonal antibody, whereas the latter dealt with the separation of a binary system of model proteins. The implemented approach prevented the stranding of the Simplex method at local optima, due to the arbitrary handling of the categorical inputs, and allowed for the concurrent optimization of numerical and categorical, multilevel and/or dichotomous, inputs. The deployment of the Simplex method, combined with dummy variables, was therefore entirely successful in identifying and characterizing global optima in all three case studies. The Simplex-based method was further shown to be of equivalent efficiency to a DoE-based approach, represented here by D-Optimal designs. Such an approach failed, however, to both capture trends and identify optima, and led to poor operating conditions. It is suggested that the Simplex variant is suited to development activities involving numerical and categorical inputs in early bioprocess development.
Input-Based Grammar Pedagogy: A Comparison of Two Possibilities
ERIC Educational Resources Information Center
Marsden, Emma
2005-01-01
This article presents arguments for using listening and reading activities as an option for techniques in grammar pedagogy. It describes two possible approaches: Processing Instruction (PI) and Enriched Input (EI), and examples of their key features are included in the appendices. The article goes on to report on a classroom based quasi-experiment…
Sympathovagal imbalance in hyperthyroidism.
Burggraaf, J; Tulen, J H; Lalezari, S; Schoemaker, R C; De Meyer, P H; Meinders, A E; Cohen, A F; Pijl, H
2001-07-01
We assessed sympathovagal balance in thyrotoxicosis. Fourteen patients with Graves' hyperthyroidism were studied before and after 7 days of treatment with propranolol (40 mg 3 times a day) and in the euthyroid state. Data were compared with those obtained in a group of age-, sex-, and weight-matched controls. Autonomic inputs to the heart were assessed by power spectral analysis of heart rate variability. Systemic exposure to sympathetic neurohormones was estimated on the basis of 24-h urinary catecholamine excretion. The spectral power in the high-frequency domain was considerably reduced in hyperthyroid patients, indicating diminished vagal inputs to the heart. Increased heart rate and mid-frequency/high-frequency power ratio in the presence of reduced total spectral power and increased urinary catecholamine excretion strongly suggest enhanced sympathetic inputs in thyrotoxicosis. All abnormal features of autonomic balance were completely restored to normal in the euthyroid state. beta-Adrenoceptor antagonism reduced heart rate in hyperthyroid patients but did not significantly affect heart rate variability or catecholamine excretion. This is in keeping with the concept of a joint disruption of sympathetic and vagal inputs to the heart underlying changes in heart rate variability. Thus thyrotoxicosis is characterized by profound sympathovagal imbalance, brought about by increased sympathetic activity in the presence of diminished vagal tone.
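The spectral indices used in this study can be reproduced in outline from an evenly resampled RR-interval series. In the sketch below the tachogram is synthetic, the 4 Hz resampling is assumed, and the band edges follow common HRV conventions rather than the paper's exact definitions.

```python
# Band powers of a synthetic RR tachogram via Welch's method.
import numpy as np
from scipy.signal import welch

fs = 4.0                                  # assumed resampling rate (Hz)
t = np.arange(0, 300, 1 / fs)             # 5 minutes of data
rr = (0.8
      + 0.03 * np.sin(2 * np.pi * 0.10 * t)    # mid-frequency component
      + 0.02 * np.sin(2 * np.pi * 0.25 * t)    # high-frequency (vagal)
      + np.random.default_rng(6).normal(scale=0.005, size=t.size))

f, pxx = welch(rr, fs=fs, nperseg=512)

def band_power(lo, hi):
    sel = (f >= lo) & (f < hi)
    return pxx[sel].sum() * (f[1] - f[0])   # rectangle-rule integral

mf = band_power(0.07, 0.15)   # mid-frequency (sympathetic + vagal)
hf = band_power(0.15, 0.40)   # high-frequency (vagal)
print("MF/HF ratio:", round(mf / hf, 2))
```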
Sensitivity and uncertainty of input sensor accuracy for grass-based reference evapotranspiration
USDA-ARS?s Scientific Manuscript database
Quantification of evapotranspiration (ET) in agricultural environments is becoming of increasing importance throughout the world, thus understanding input variability of relevant sensors is of paramount importance as well. The Colorado Agricultural and Meteorological Network (CoAgMet) and the Florid...
Assessment of input uncertainty by seasonally categorized latent variables using SWAT
USDA-ARS?s Scientific Manuscript database
Watershed processes have been explored with sophisticated simulation models for the past few decades. It has been stated that uncertainty attributed to alternative sources such as model parameters, forcing inputs, and measured data should be incorporated during the simulation process. Among varyin...
Speaker Invariance for Phonetic Information: an fMRI Investigation
Salvata, Caden; Blumstein, Sheila E.; Myers, Emily B.
2012-01-01
The current study explored how listeners map the variable acoustic input onto a common sound structure representation while being able to retain phonetic detail to distinguish among the identity of talkers. An adaptation paradigm was utilized to examine areas which showed an equal neural response (equal release from adaptation) to phonetic change when spoken by the same speaker and when spoken by two different speakers, and insensitivity (failure to show release from adaptation) when the same phonetic input was spoken by a different speaker. Neural areas which showed speaker invariance were located in the anterior portion of the middle superior temporal gyrus bilaterally. These findings provide support for the view that speaker normalization processes allow for the translation of a variable speech input to a common abstract sound structure. That this process appears to occur early in the processing stream, recruiting temporal structures, suggests that this mapping takes place prelexically, before sound structure input is mapped on to lexical representations. PMID:23264714
The input and output management of solid waste using DEA models: A case study at Jengka, Pahang
NASA Astrophysics Data System (ADS)
Mohamed, Siti Rosiah; Ghazali, Nur Fadzrina Mohd; Mohd, Ainun Hafizah
2017-08-01
Data Envelopment Analysis (DEA), as a tool for obtaining performance indices, has been used extensively across many organizational sectors. Improving the efficiency of Decision Making Units (DMUs) can be impractical because some inputs and outputs are uncontrollable, and in certain situations this produces weak efficiency scores that often reflect the impact of the operating environment. Based on data from Alam Flora Sdn. Bhd. Jengka, this study determines the efficiency of solid waste management (SWM) in the town of Jengka, Pahang, using the CCRI and CCRO models of DEA and a duality formulation with vector average input and output. Three input variables (collection length in meters, collection time per week in hours, and number of garbage trucks) and two output variables (collection frequency and total solid waste collected in kilograms) are analyzed. In conclusion, only three of the 23 roads are efficient, achieving an efficiency score of 1; the remaining 20 roads are managed inefficiently.
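The CCR input-oriented score reduces to a small linear program (envelopment form): minimize θ such that a nonnegative combination of all DMUs uses no more than θ times DMU k's inputs while producing at least its outputs. The sketch below solves it with scipy for three invented roads; the data are placeholders, not the Alam Flora figures.

```python
# Input-oriented CCR efficiency via linear programming (envelopment form).
import numpy as np
from scipy.optimize import linprog

X = np.array([[300.0, 2.0, 1.0],      # inputs per DMU (road): length,
              [500.0, 3.0, 2.0],      # hours per week, trucks
              [400.0, 2.0, 1.0]])
Y = np.array([[3.0,  900.0],          # outputs: collections, kg collected
              [3.0, 1100.0],
              [4.0, 1300.0]])

def ccr_efficiency(k):
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                  # minimize theta
    A_in = np.c_[-X[k], X.T]                     # X^T lam <= theta * x_k
    A_out = np.c_[np.zeros(Y.shape[1]), -Y.T]    # Y^T lam >= y_k
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[1]), -Y[k]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun

for k in range(3):
    print(f"DMU {k}: efficiency = {ccr_efficiency(k):.3f}")
```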
NASA Technical Reports Server (NTRS)
Fortenbaugh, R. L.
1980-01-01
Equations incorporated in a VATOL six degree of freedom off-line digital simulation program and data for the Vought SF-121 VATOL aircraft concept which served as the baseline for the development of this program are presented. The equations and data are intended to facilitate the development of a piloted VATOL simulation. The equation presentation format is to state the equations which define a particular model segment. Listings of constants required to quantify the model segment, input variables required to exercise the model segment, and output variables required by other model segments are included. In several instances a series of input or output variables are followed by a section number in parentheses which identifies the model segment of origination or termination of those variables.
Mathematical models of the simplest fuzzy PI/PD controllers with skewed input and output fuzzy sets.
Mohan, B M; Sinha, Arpita
2008-07-01
This paper unveils mathematical models for fuzzy PI/PD controllers which employ two skewed fuzzy sets for each of the two-input variables and three skewed fuzzy sets for the output variable. The basic constituents of these models are Gamma-type and L-type membership functions for each input, trapezoidal/triangular membership functions for output, intersection/algebraic product triangular norm, maximum/drastic sum triangular conorm, Mamdani minimum/Larsen product/drastic product inference method, and center of sums defuzzification method. The existing simplest fuzzy PI/PD controller structures derived via symmetrical fuzzy sets become special cases of the mathematical models revealed in this paper. Finally, a numerical example along with its simulation results are included to demonstrate the effectiveness of the simplest fuzzy PI controllers.
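A numeric sketch of this controller family is short enough to show in full. Below, the input breakpoints (the "skew"), the output-set centroids and areas, and the four-rule base are illustrative assumptions; the structure (L-type and Gamma-type input sets, algebraic product t-norm, Larsen product inference, center-of-sums defuzzification) follows the ingredients named in the abstract.

```python
# Minimal fuzzy PI increment with skewed input sets and center of sums.
def l_type(x, a, b):        # membership 1 until a, falling to 0 at b
    return 1.0 if x <= a else 0.0 if x >= b else (b - x) / (b - a)

def gamma_type(x, a, b):    # membership 0 until a, rising to 1 at b
    return 0.0 if x <= a else 1.0 if x >= b else (x - a) / (b - a)

# Output fuzzy sets (triangles) as (centroid, area) pairs for N, Z, P.
OUT = {"N": (-1.0, 1.0), "Z": (0.0, 1.0), "P": (1.0, 1.0)}
RULES = {("N", "N"): "N", ("N", "P"): "Z",
         ("P", "N"): "Z", ("P", "P"): "P"}

def fuzzy_pi_increment(e, de, skew=0.2):
    mu_e = {"N": l_type(e, -1 + skew, skew),
            "P": gamma_type(e, -skew, 1 - skew)}
    mu_de = {"N": l_type(de, -1 + skew, skew),
             "P": gamma_type(de, -skew, 1 - skew)}
    num = den = 0.0
    for (a, b), out in RULES.items():
        w = mu_e[a] * mu_de[b]           # algebraic product t-norm
        c, area = OUT[out]
        num += w * area * c              # Larsen product scales the area
        den += w * area
    return num / den if den else 0.0     # center of sums defuzzification

print(fuzzy_pi_increment(0.4, -0.1))
```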
DOE Office of Scientific and Technical Information (OSTI.GOV)
Behrang, M.A.; Assareh, E.; Ghanbarzadeh, A.
2010-08-15
The main objective of the present study is to predict daily global solar radiation (GSR) on a horizontal surface, based on meteorological variables, using different artificial neural network (ANN) techniques. Daily mean air temperature, relative humidity, sunshine hours, evaporation, and wind speed values between 2002 and 2006 for Dezful city in Iran (32°16'N, 48°25'E) are used in this study. In order to consider the effect of each meteorological variable on daily GSR prediction, the following six combinations of input variables are considered: (I) day of the year, daily mean air temperature, and relative humidity; (II) day of the year, daily mean air temperature, and sunshine hours; (III) day of the year, daily mean air temperature, relative humidity, and sunshine hours; (IV) day of the year, daily mean air temperature, relative humidity, sunshine hours, and evaporation; (V) day of the year, daily mean air temperature, relative humidity, sunshine hours, and wind speed; (VI) day of the year, daily mean air temperature, relative humidity, sunshine hours, evaporation, and wind speed; in each case with daily GSR as output. Multi-layer perceptron (MLP) and radial basis function (RBF) neural networks are applied for daily GSR modeling based on the six proposed combinations. The measured data between 2002 and 2005 are used to train the neural networks, while the data for 214 days from 2006 are used as testing data. The comparison of results from the ANNs and different conventional GSR prediction (CGSRP) models shows very good improvements; i.e., the predicted values of the best ANN model (MLP-V) have a mean absolute percentage error (MAPE) of about 5.21%, versus 10.02% for the best CGSRP model (CGSRP 5).
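Combination (V) can be sketched with an off-the-shelf MLP; the synthetic data, network size, and train/test split below are placeholders, with MAPE computed as in the comparison quoted above.

```python
# MLP regression on stand-in meteorological inputs (combination V).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
n = 400
X = np.column_stack([
    rng.integers(1, 366, n),        # day of the year
    rng.uniform(5, 45, n),          # daily mean air temperature (C)
    rng.uniform(10, 90, n),         # relative humidity (%)
    rng.uniform(0, 13, n),          # sunshine hours
    rng.uniform(0, 12, n),          # wind speed (m/s)
])
y = 15 + 0.8 * X[:, 3] - 0.05 * X[:, 2] + rng.normal(0, 1, n)  # toy GSR

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X[:300], y[:300])
pred = model.predict(X[300:])
mape = 100 * np.mean(np.abs((y[300:] - pred) / y[300:]))
print(f"MAPE: {mape:.2f}%")
```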
Fuzzy Neuron: Method and Hardware Realization
NASA Technical Reports Server (NTRS)
Krasowski, Michael J.; Prokop, Norman F.
2014-01-01
This innovation represents a method by which single-to-multi-input, single-to-many-output system transfer functions can be estimated from input/output data sets. This innovation can be run in the background while a system is operating under other means (e.g., through human operator effort), or may be utilized offline using data sets created from observations of the estimated system. It utilizes a set of fuzzy membership functions spanning the input space for each input variable. Linear combiners associated with combinations of input membership functions are used to create the output(s) of the estimator. Coefficients are adjusted online through the use of learning algorithms.
Using SMAP data to improve drought early warning over the US Great Plains
NASA Astrophysics Data System (ADS)
Fu, R.; Fernando, N.; Tang, W.
2015-12-01
A drought prone region such as the Great Plains of the United States (US GP) requires credible and actionable drought early warning. Such information cannot simply be extracted from available climate forecasts because of their large uncertainties at regional scales, and unclear connections to the needs of the decision makers. In particular, current dynamic seasonal predictions and climate projections, such as those produced by the NOAA North American Multi-Model Ensemble experiment (NMME) are much more reliable for winter and spring than for the summer season for the US GP. To mitigate the weaknesses of dynamic prediction/projections, we have identified three key processes behind the spring-to-summer dry memory through observational studies, as the scientific basis for a statistical drought early warning system. This system uses percentile soil moisture anomalies in spring as a key input to provide a probabilistic summer drought early warning. The latter outperforms the dynamic prediction over the US Southern Plains and has been used by the Texas state water agency to support state drought preparedness. A main source of uncertainty for this drought early warning system is the soil moisture input obtained from the NOAA Climate Forecasting System (CFS). We are testing use of the beta version of NASA Soil Moisture Active Passive (SMAP) soil moisture data, along with the Soil Moisture and Ocean Salinity (SMOS), and the long-term Essential Climate Variable Soil Moisture (ECV-SM) soil moisture data, to reduce this uncertainty. Preliminary results based on ECV-SM suggests satellite based soil moisture data could improve early warning of rainfall anomalies over the western US GP with less dense vegetation. The skill degrades over the eastern US GP where denser vegetation is found. We evaluate our SMAP-based drought early warning for 2015 summer against observations.
Postural Stability in Older Adults With Alzheimer Disease.
Mesbah, Normala; Perry, Meredith; Hill, Keith D; Kaur, Mandeep; Hale, Leigh
2017-03-01
The prevalence of adults with Alzheimer disease (AD) aged >65 years is increasing and estimated to quadruple by 2051. The aim of this study was to investigate postural stability in people with mild to moderate AD and factors contributing to postural instability compared with healthy peers (controls). A computerized systematic search of databases and a hand search of reference lists for articles published from 1984 onward (English-language articles only) were conducted on June 2, 2015, using the main key words "postural stability" and "Alzheimer's disease." Sixty-seven studies were assessed for eligibility (a confirmed diagnosis of AD, comparison of measured postural stability between participants with AD and controls, measured factors potentially contributing to postural instability). Data were extracted, and Downs and Black criteria were applied to evaluate study quality. Eighteen articles were analyzed using qualitative synthesis and reported based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Strength of evidence was guided by the Grading of Recommendations Assessment, Development and Evaluation. Strong evidence was found that: (1) older adults with mild to moderate AD have reduced static and functional postural stability compared with healthy peers (controls) and (2) attentional demand during dual-task activity and loss of visual input were key factors contributing to postural instability. Meta-analysis was not possible due to heterogeneity of the data. Postural stability is impaired in older adults with mild to moderate AD. Decreasing visual input and concentrating on multiple tasks decrease postural stability. To reduce falls risk, more research discerning appropriate strategies for the early identification of impairment of postural stability is needed. Standardization of population description and consensus on outcome measures and the variables used to measure postural instability and its contributing factors are necessary to ensure meaningful synthesis of data. © 2017 American Physical Therapy Association
Group interaction and flight crew performance
NASA Technical Reports Server (NTRS)
Foushee, H. Clayton; Helmreich, Robert L.
1988-01-01
The application of human-factors analysis to the performance of aircraft-operation tasks by the crew as a group is discussed in an introductory review and illustrated with anecdotal material. Topics addressed include the function of a group in the operational environment, the classification of group performance factors (input, process, and output parameters), input variables and the flight crew process, and the effect of process variables on performance. Consideration is given to aviation safety issues, techniques for altering group norms, ways of increasing crew effort and coordination, and the optimization of group composition.
Measuring phenological variability from satellite imagery
Reed, Bradley C.; Brown, Jesslyn F.; Vanderzee, D.; Loveland, Thomas R.; Merchant, James W.; Ohlen, Donald O.
1994-01-01
Vegetation phenological phenomena are closely related to seasonal dynamics of the lower atmosphere and are therefore important elements in global models and vegetation monitoring. Normalized difference vegetation index (NDVI) data derived from the National Oceanic and Atmospheric Administration's Advanced Very High Resolution Radiometer (AVHRR) satellite sensor offer a means of efficiently and objectively evaluating phenological characteristics over large areas. Twelve metrics linked to key phenological events were computed based on time-series NDVI data collected from 1989 to 1992 over the conterminous United States. These measures include the onset of greenness, time of peak NDVI, maximum NDVI, rate of greenup, rate of senescence, and integrated NDVI. Measures of central tendency and variability of the measures were computed and analyzed for various land cover types. Results from the analysis showed strong coincidence between the satellite-derived metrics and predicted phenological characteristics. In particular, the metrics identified interannual variability of spring wheat in North Dakota, characterized the phenology of four types of grasslands, and established the phenological consistency of deciduous and coniferous forests. These results have implications for large-area land cover mapping and monitoring. The utility of remotely sensed data as input to vegetation mapping is demonstrated by showing the distinct phenology of several land cover types. More stable information contained in ancillary data should be incorporated into the mapping process, particularly in areas with high phenological variability. In a regional or global monitoring system, an increase in variability in a region may serve as a signal to perform more detailed land cover analysis with higher resolution imagery.
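Several of the listed metrics are simple to compute from a smoothed annual NDVI series. The sketch below shows one plausible formulation; the amplitude-fraction onset threshold is an illustrative assumption, not the published delayed-moving-average approach.

```python
# Hedged sketch of a few NDVI phenology metrics for one year of
# composite-period data: onset of greenness (threshold-based),
# time and value of peak NDVI, and integrated NDVI.
import numpy as np

def phenology_metrics(ndvi, onset_frac=0.2):
    """ndvi: 1-D array of composite-period NDVI values for one year."""
    peak_i = int(np.argmax(ndvi))
    amplitude = float(ndvi.max() - ndvi.min())
    onset_level = float(ndvi.min()) + onset_frac * amplitude
    # First composite before the peak where NDVI crosses the level
    rising = np.nonzero(ndvi[: peak_i + 1] >= onset_level)[0]
    return {
        "onset_of_greenness": int(rising[0]) if rising.size else None,
        "time_of_peak_ndvi": peak_i,
        "max_ndvi": float(ndvi.max()),
        "integrated_ndvi": float(ndvi.sum()),  # sum over composites
    }

# Toy seasonal curve: 26 biweekly composites
t = np.arange(26)
ndvi = 0.2 + 0.5 * np.exp(-0.5 * ((t - 14) / 4.0) ** 2)
print(phenology_metrics(ndvi))
```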
Simulating maize yield and biomass with spatial variability of soil field capacity
USDA-ARS?s Scientific Manuscript database
Spatial variability in field soil water and other properties is a challenge for system modelers who use only representative values for model inputs, rather than their distributions. In this study, we compared simulation results from a calibrated model with spatial variability of soil field capacity ...
CALiPER Exploratory Study: Accounting for Uncertainty in Lumen Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bergman, Rolf; Paget, Maria L.; Richman, Eric E.
2011-03-31
With a well-defined and shared understanding of uncertainty in lumen measurements, testing laboratories can better evaluate their processes, contributing to greater consistency and credibility of lighting testing, a key component of the U.S. Department of Energy (DOE) Commercially Available LED Product Evaluation and Reporting (CALiPER) program. Reliable lighting testing is a crucial underlying factor contributing toward the success of many energy-efficient lighting efforts, such as the DOE GATEWAY demonstrations, Lighting Facts Label, ENERGY STAR® energy efficient lighting programs, and many others. Uncertainty in measurements is inherent to all testing methodologies, including photometric and other lighting-related testing. Uncertainty exists for all equipment, processes, and systems of measurement, in individual as well as combined ways. A major issue with testing and the resulting accuracy of the tests is the uncertainty of the complete process. Individual equipment uncertainties are typically identified, but their relative value in practice and their combined value with other equipment and processes in the same test are elusive concepts, particularly for complex types of testing such as photometry. The total combined uncertainty of a measurement result is important for repeatable and comparative measurements for light-emitting diode (LED) products in comparison with other technologies as well as competing products. This study provides a detailed and step-by-step method for determining uncertainty in lumen measurements, working closely with related standards efforts and key industry experts. This report uses the structure proposed in the Guide to the Expression of Uncertainty in Measurement (GUM) for evaluating and expressing uncertainty in measurements. The steps of the procedure are described, and a spreadsheet format adapted for integrating sphere and goniophotometric uncertainty measurements is provided for entering parameters, ordering the information, calculating intermediate values and, finally, obtaining expanded uncertainties. Using this basis and examining each step of the photometric measurement and calibration methods, mathematical uncertainty models are developed. Determination of estimated values of input variables is discussed. Guidance is provided for the evaluation of the standard uncertainties of each input estimate, covariances associated with input estimates, and the calculation of the result measurements. With this basis, the combined uncertainty of the measurement results and, finally, the expanded uncertainty can be determined.
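The combination step this report builds on can be illustrated with first-order (GUM-style) propagation of uncorrelated standard uncertainties through a measurement equation; the measurement function and numbers below are placeholders, not CALiPER's spreadsheet model.

```python
# Hedged sketch: combined standard uncertainty by first-order
# propagation (numerical sensitivity coefficients), then an
# expanded uncertainty with coverage factor k = 2.
import numpy as np

def combined_uncertainty(f, x, u, eps=1e-6):
    """f: measurement function of a parameter vector;
    x: input estimates; u: standard uncertainties (uncorrelated)."""
    x = np.asarray(x, dtype=float)
    grads = []
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps * max(abs(x[i]), 1.0)
        grads.append((f(x + dx) - f(x - dx)) / (2 * dx[i]))
    grads = np.asarray(grads)
    return float(np.sqrt(np.sum((grads * np.asarray(u)) ** 2)))

# Placeholder model: flux = reference flux * detector ratio / calib
f = lambda p: p[0] * p[1] / p[2]
x = [1000.0, 1.02, 0.998]       # estimates (illustrative)
u = [5.0, 0.004, 0.002]         # standard uncertainties
uc = combined_uncertainty(f, x, u)
print("combined:", uc, "expanded (k=2):", 2 * uc)
```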
Simulated lumped-parameter system reduced-order adaptive control studies
NASA Technical Reports Server (NTRS)
Johnson, C. R., Jr.; Lawrence, D. A.; Taylor, T.; Malakooti, M. V.
1981-01-01
Two methods of interpreting the misbehavior of reduced order adaptive controllers are discussed. The first method is based on system input-output description and the second is based on state variable description. The implementation of the single input, single output, autoregressive, moving average system is considered.
Encrypting Digital Camera with Automatic Encryption Key Deletion
NASA Technical Reports Server (NTRS)
Oakley, Ernest C. (Inventor)
2007-01-01
A digital video camera includes an image sensor capable of producing a frame of video data representing an image viewed by the sensor, an image memory for storing video data such as previously recorded frame data in a video frame location of the image memory, a read circuit for fetching the previously recorded frame data, an encryption circuit having an encryption key input connected to receive the previously recorded frame data from the read circuit as an encryption key, an un-encrypted data input connected to receive the frame of video data from the image sensor and an encrypted data output port, and a write circuit for writing a frame of encrypted video data received from the encrypted data output port of the encryption circuit to the memory and overwriting the video frame location storing the previously recorded frame data.
A new polytopic approach for the unknown input functional observer design
NASA Astrophysics Data System (ADS)
Bezzaoucha, Souad; Voos, Holger; Darouach, Mohamed
2018-03-01
In this paper, a constructive procedure to design functional unknown input observers for nonlinear continuous-time systems is proposed under the polytopic Takagi-Sugeno framework. An equivalent representation for the nonlinear model is achieved using the sector nonlinearity transformation. Applying Lyapunov theory and L2 attenuation, linear matrix inequality conditions are deduced and solved for feasibility to obtain the observer design matrices. To cope with the effect of unknown inputs, the classical approach of decoupling the unknown input for the linear case is used. Both algebraic and solver-based solutions are proposed (relaxed conditions). Necessary and sufficient conditions for the existence of the functional polytopic observer are given. For both approaches, the general and particular cases (measurable premise variables, full state estimation with full and reduced order cases) are considered, and it is shown that the proposed conditions correspond to the ones presented for the standard linear case. To illustrate the proposed theoretical results, detailed numerical simulations are presented for a quadrotor aerial robot landing and a wastewater treatment plant. Both systems are highly nonlinear and represented in a T-S polytopic form with unmeasurable premise variables and unknown inputs.
ExcelAutomat: a tool for systematic processing of files as applied to quantum chemical calculations
NASA Astrophysics Data System (ADS)
Laloo, Jalal Z. A.; Laloo, Nassirah; Rhyman, Lydia; Ramasami, Ponnadurai
2017-07-01
The processing of the input and output files of quantum chemical calculations often necessitates a spreadsheet as a key component of the workflow. Spreadsheet packages with a built-in programming language editor can automate the steps involved and thus provide a direct link between processing files and the spreadsheet. This helps to reduce user-interventions as well as the need to switch between different programs to carry out each step. The ExcelAutomat tool is the implementation of this method in Microsoft Excel (MS Excel) using the default Visual Basic for Application (VBA) programming language. The code in ExcelAutomat was adapted to work with the platform-independent open-source LibreOffice Calc, which also supports VBA. ExcelAutomat provides an interface through the spreadsheet to automate repetitive tasks such as merging input files, splitting, parsing and compiling data from output files, and generation of unique filenames. Selected extracted parameters can be retrieved as variables which can be included in custom codes for a tailored approach. ExcelAutomat works with Gaussian files and is adapted for use with other computational packages including the non-commercial GAMESS. ExcelAutomat is available as a downloadable MS Excel workbook or as a LibreOffice workbook.
NASA Astrophysics Data System (ADS)
Snow, Michael G.; Bajaj, Anil K.
2015-08-01
This work presents an uncertainty quantification (UQ) analysis of a comprehensive model for an electrostatically actuated microelectromechanical system (MEMS) switch. The goal is to elucidate the effects of parameter variations on certain key performance characteristics of the switch. A sufficiently detailed model of the electrostatically actuated switch in the basic configuration of a clamped-clamped beam is developed. This multi-physics model accounts for various physical effects, including the electrostatic fringing field, finite length of electrodes, squeeze film damping, and contact between the beam and the dielectric layer. The performance characteristics of immediate interest are the static and dynamic pull-in voltages for the switch. Numerical approaches for evaluating these characteristics are developed and described. Using Latin Hypercube Sampling and other sampling methods, the model is evaluated to find these performance characteristics when variability in the model's geometric and physical parameters is specified. Response surfaces of these results are constructed via a Multivariate Adaptive Regression Splines (MARS) technique. Using a Direct Simulation Monte Carlo (DSMC) technique on these response surfaces gives smooth probability density functions (PDFs) of the output characteristics when input probability characteristics are specified. The relative variation in the two pull-in voltages due to each of the input parameters is used to determine the critical parameters.
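The pipeline shape described here (space-filling sampling of an expensive model, a cheap surrogate fitted to the results, then dense Monte Carlo on the surrogate) can be sketched as follows; a quadratic polynomial response surface stands in for MARS, and the pull-in model is a toy stand-in.

```python
# Hedged sketch of the UQ pipeline: Latin Hypercube samples,
# surrogate fit, then Monte Carlo on the surrogate for output PDFs.
import numpy as np
from scipy.stats import qmc
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

def pull_in_voltage(params):                 # toy stand-in model
    thickness, gap, modulus = params.T
    return 10.0 * np.sqrt(modulus * thickness**3) * gap

lo, hi = np.full(3, 0.9), np.full(3, 1.1)    # normalized parameter ranges
sampler = qmc.LatinHypercube(d=3, seed=0)
X = qmc.scale(sampler.random(200), lo, hi)   # 200 "expensive" runs
y = pull_in_voltage(X)

surface = make_pipeline(PolynomialFeatures(2), LinearRegression())
surface.fit(X, y)                            # cheap response surface

rng = np.random.default_rng(1)
Xmc = rng.normal(1.0, 0.03, size=(100_000, 3))   # specified input PDFs
pdf_samples = surface.predict(Xmc)           # smooth output distribution
print(np.percentile(pdf_samples, [5, 50, 95]))
```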
Assessment of uncertainties of the models used in thermal-hydraulic computer codes
NASA Astrophysics Data System (ADS)
Gricay, A. S.; Migrov, Yu. A.
2015-09-01
The article deals with matters concerned with the problem of determining the statistical characteristics of variable parameters (the variation range and distribution law) in analyzing the uncertainty and sensitivity of calculation results to uncertainty in input data. A comparative analysis of modern approaches to uncertainty in input data is presented. The need to develop an alternative method for estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular, in the closing correlations of the loop thermal hydraulics block, is shown. Such a method shall feature the minimal degree of subjectivism and must be based on objective quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. By using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law in the above-mentioned range provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated taking as an example the problem of estimating the uncertainty of a parameter appearing in the model describing transition to post-burnout heat transfer that is used in the thermal-hydraulic computer code KORSAR. The performed study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law in the above-mentioned range by the Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes. In some cases, application of the method can make it possible to achieve a smaller degree of conservatism in the expert estimates of uncertainties pertinent to the model parameters used in computer codes.
NASA Astrophysics Data System (ADS)
Pistolesi, Marco; Cioni, Raffaello; Rosi, Mauro; Aguilera, Eduardo
2014-02-01
The ice-capped Cotopaxi volcano is known worldwide for the large-scale, catastrophic lahars that have occurred in connection with historical explosive eruptions. The most recent large-scale lahar event occurred in 1877, when scoria flows partially melted ice and snow of the summit glacier, generating debris flows that severely impacted all the river valleys originating from the volcano. The 1877 lahars have been considered in recent years as a maximum expected event to define the hazard associated with lahar generation at Cotopaxi. Conversely, recent field-based studies have shown that such debris flows have occurred several times during the last 800 years of activity at Cotopaxi, and that the scale of lahars has been variable, including events much larger than that of 1877. Despite the rapid retreat of the summit ice cap over the past century, there are in fact no data clearly suggesting that future events will be smaller than those observed in the deposits of the last 800 years of activity. In addition, geological field data prove that the lahar triggering mechanism also has to be considered as a key input parameter and, under appropriate eruptive mechanisms, a hazard scenario of a lahar with a volume three times larger than the 1877 event is likely. In order to analyze the impact scenarios in the southern drainage system of the volcano, simulations of inundation areas were performed with a semi-empirical model (LAHARZ), using input parameters including variable water volume. Results indicate that a lahar three times larger than the 1877 event would invade much wider areas than those flooded by the 1877 lahars along the southern valley system, eventually impacting highly urbanized areas such as the city of Latacunga.
Double-Barrier Memristive Devices for Unsupervised Learning and Pattern Recognition.
Hansen, Mirko; Zahari, Finn; Ziegler, Martin; Kohlstedt, Hermann
2017-01-01
The use of interface-based resistive switching devices for neuromorphic computing is investigated. In a combined experimental and numerical study, the important device parameters and their impact on a neuromorphic pattern recognition system are studied. The memristive cells consist of a layer sequence Al/Al2O3/NbxOy/Au and are fabricated on a 4-inch wafer. The key functional ingredients of the devices are a 1.3 nm thick Al2O3 tunnel barrier and a 2.5 nm thick NbxOy memristive layer. Voltage pulse measurements are used to study the electrical conditions for the emulation of synaptic functionality of single cells for later use in a recognition system. The results are evaluated and modeled in the framework of the plasticity model of Ziegler et al. Based on this model, which is matched to experimental data from 84 individual devices, the network performance with regard to yield, reliability, and variability is investigated numerically. As the network model, a computing scheme for pattern recognition and unsupervised learning based on the work of Querlioz et al. (2011), Sheridan et al. (2014), and Zahari et al. (2015) is employed. This is a two-layer feedforward network with a crossbar array of memristive devices, leaky integrate-and-fire output neurons including a winner-takes-all strategy, and a stochastic coding scheme for the input pattern. As input pattern, the full data set of digits from the MNIST database is used. The numerical investigation indicates that the experimentally obtained yield, reliability, and variability of the memristive cells are suitable for such a network. Furthermore, evidence is presented that their strong I-V non-linearity might avoid the need for selector devices in crossbar array structures.
Casula, Elias P; Mayer, Isabella M S; Desikan, Mahalekshmi; Tabrizi, Sarah J; Rothwell, John C; Orth, Michael
2018-03-01
In Huntington's disease there is evidence of structural damage in the motor system, but it is still unclear how to link this to the behavioral disorder of movement. One feature of choreic movement is variable timing and coordination between sequences of actions. We postulate this results from desynchronization of neural activity in cortical motor areas. The objective of this study was to explore the ability to synchronize activity in a motor network using transcranial magnetic stimulation and to relate this to timing of motor performance. We examined synchronization in oscillatory activity of cortical motor areas in response to an external input produced by a pulse of transcranial magnetic stimulation. We combined this with EEG to compare the response of 16 presymptomatic Huntington's disease participants with 16 age-matched healthy volunteers to test whether the strength of synchronization relates to the variability of motor performance at the following 2 tasks: a grip force task and a speeded-tapping task. Phase synchronization in response to M1 stimulation was lower in Huntington's disease than healthy volunteers (P < .01), resulting in a reduced cortical activity at global (P < .02) and local levels (P < .01). Participants who showed better timed motor performance also showed stronger oscillatory synchronization (r = -0.356; P < .05) and higher cortical activity (r = -0.393; P < .05). Our data may model the ability of the motor command to respond to more subtle, physiological inputs from other brain areas. This novel insight indicates that impairments of the timing accuracy of synchronization and desynchronization could be a physiological basis for some key clinical features of Huntington's disease. © 2018 International Parkinson and Movement Disorder Society.
The impact of AMO and NAO in Western Iberia during the Late Holocene
NASA Astrophysics Data System (ADS)
Hernandez, A.; Leira, M.; Trigo, R.; Vázquez-Loureiro, D.; Carballeira, R.; Sáez, A.
2016-12-01
High mountain lakes in the Iberian Peninsula are particularly sensitive to the influence of North Atlantic large-scale modes of climate variability due to their geographical position and the reduced anthropic disturbances. In this context, Serra da Estrela (Portugal), the westernmost range of the Sistema Central, constitutes a physical barrier to air masses coming from the Atlantic Ocean. However, long-term climate reconstructions have not yet been conducted. We present a climate reconstruction of this region based on facies analysis, X-ray fluorescence core scanning, elemental and isotope geochemistry on bulk organic matter, and a preliminary study of diatom assemblages from the sedimentary record of Lake Peixão (1677 m a.s.l.; Serra da Estrela) for the last ca. 3500 years. A multivariate statistical analysis has been performed to recognize the main environmental factors controlling the sedimentary infill. Our results reveal that two main processes explain 70% of the total variance: changes in primary productivity, reflected in organic matter accumulation, explain 53%, and variations in runoff, related to the input of external particles, explain 17%. Additionally, evidence of changes in productivity and water-level regime recorded as variations in diatom assemblages correlates well with our interpretations. A comparison between the lake productivity changes and previous Atlantic Multidecadal Oscillation (AMO) reconstructions shows a good correlation, suggesting this climate mode as the main driver of lacustrine primary productivity at multi-decadal scales. In turn, changes in terrigenous inputs, linked to precipitation, seem to be more influenced by the winter North Atlantic Oscillation (NAO) variability. Hence, our results highlight that although the climate regime in this area is clearly influenced by the NAO, the AMO also plays a key role at long-term time scales.
Gravity dependence of the effect of optokinetic stimulation on the subjective visual vertical.
Ward, Bryan K; Bockisch, Christopher J; Caramia, Nicoletta; Bertolini, Giovanni; Tarnutzer, Alexander Andrea
2017-05-01
Accurate and precise estimates of direction of gravity are essential for spatial orientation. According to Bayesian theory, multisensory vestibular, visual, and proprioceptive input is centrally integrated in a weighted fashion based on the reliability of the component sensory signals. For otolithic input, a decreasing signal-to-noise ratio was demonstrated with increasing roll angle. We hypothesized that the weights of vestibular (otolithic) and extravestibular (visual/proprioceptive) sensors are roll-angle dependent and predicted an increased weight of extravestibular cues with increasing roll angle, potentially following the Bayesian hypothesis. To probe this concept, the subjective visual vertical (SVV) was assessed in different roll positions (≤±120°, steps = 30°, n = 10) with/without presenting an optokinetic stimulus (velocity = ±60°/s). The optokinetic stimulus biased the SVV toward the direction of stimulus rotation for roll angles ≥±30° (P < 0.005). Offsets grew from 3.9 ± 1.8° (upright) to 22.1 ± 11.8° (±120° roll tilt, P < 0.001). Trial-to-trial variability increased with roll angle, demonstrating a nonsignificant increase when providing optokinetic stimulation. Variability and optokinetic bias were correlated (R² = 0.71, slope = 0.71, 95% confidence interval = 0.57-0.86). An optimal-observer model combining an optokinetic bias with vestibular input reproduced measured errors closely. These findings support the hypothesis of a weighted multisensory integration when estimating direction of gravity with optokinetic stimulation. Visual input was weighted more when vestibular input became less reliable, i.e., at larger roll-tilt angles. However, according to Bayesian theory, the variability of combined cues is always lower than the variability of each source cue. If the observed increase in variability, although nonsignificant, is true, either it must depend on an additional source of variability, added after SVV computation, or it would conflict with the Bayesian hypothesis. NEW & NOTEWORTHY Applying a rotating optokinetic stimulus while recording the subjective visual vertical in different whole body roll angles, we noted the optokinetic-induced bias to correlate with the roll angle. These findings allow the hypothesis that the established optimal weighting of single-sensory cues depending on their reliability to estimate direction of gravity could be extended to a bias caused by visual self-motion stimuli. Copyright © 2017 the American Physiological Society.
Stochastic empirical loading and dilution model (SELDM) version 1.0.0
Granato, Gregory E.
2013-01-01
The Stochastic Empirical Loading and Dilution Model (SELDM) is designed to transform complex scientific data into meaningful information about the risk of adverse effects of runoff on receiving waters, the potential need for mitigation measures, and the potential effectiveness of such management measures for reducing these risks. The U.S. Geological Survey developed SELDM in cooperation with the Federal Highway Administration to help develop planning-level estimates of event mean concentrations, flows, and loads in stormwater from a site of interest and from an upstream basin. Planning-level estimates are defined as the results of analyses used to evaluate alternative management measures; planning-level estimates are recognized to include substantial uncertainties (commonly orders of magnitude). SELDM uses information about a highway site, the associated receiving-water basin, precipitation events, stormflow, water quality, and the performance of mitigation measures to produce a stochastic population of runoff-quality variables. SELDM provides input statistics for precipitation, prestorm flow, runoff coefficients, and concentrations of selected water-quality constituents from National datasets. Input statistics may be selected on the basis of the latitude, longitude, and physical characteristics of the site of interest and the upstream basin. The user also may derive and input statistics for each variable that are specific to a given site of interest or a given area. SELDM is a stochastic model because it uses Monte Carlo methods to produce the random combinations of input variable values needed to generate the stochastic population of values for each component variable. SELDM calculates the dilution of runoff in the receiving waters and the resulting downstream event mean concentrations and annual average lake concentrations. Results are ranked, and plotting positions are calculated, to indicate the level of risk of adverse effects caused by runoff concentrations, flows, and loads on receiving waters by storm and by year. Unlike deterministic hydrologic models, SELDM is not calibrated by changing values of input variables to match a historical record of values. Instead, input values for SELDM are based on site characteristics and representative statistics for each hydrologic variable. Thus, SELDM is an empirical model based on data and statistics rather than theoretical physiochemical equations. SELDM is a lumped parameter model because the highway site, the upstream basin, and the lake basin each are represented as a single homogeneous unit. Each of these source areas is represented by average basin properties, and results from SELDM are calculated as point estimates for the site of interest. Use of the lumped parameter approach facilitates rapid specification of model parameters to develop planning-level estimates with available data. The approach allows for parsimony in the required inputs to and outputs from the model and flexibility in the use of the model. For example, SELDM can be used to model runoff from various land covers or land uses by using the highway-site definition as long as representative water quality and impervious-fraction data are available.
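The core stochastic mass-balance step can be illustrated in a few lines: draw event flows and event mean concentrations (EMCs) for the site and the upstream basin, mix them, and express the ranked results as an exceedance probability. The distributions and threshold below are illustrative stand-ins, not SELDM's national statistics.

```python
# Hedged sketch of Monte Carlo dilution of runoff in a receiving
# stream: downstream EMC is the flow-weighted mix of site and
# upstream concentrations, evaluated over many synthetic storms.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000                                             # synthetic storms
q_site = rng.lognormal(mean=-1.0, sigma=0.8, size=n)   # site flow, m^3/s
q_up = rng.lognormal(mean=0.5, sigma=0.6, size=n)      # upstream flow
c_site = rng.lognormal(mean=3.0, sigma=0.7, size=n)    # site EMC, mg/L
c_up = rng.lognormal(mean=1.0, sigma=0.5, size=n)      # upstream EMC

# Flow-weighted mixing gives the downstream event mean concentration
c_down = (q_site * c_site + q_up * c_up) / (q_site + q_up)

# Rank results to express risk as an exceedance probability
threshold = 10.0                                       # mg/L, illustrative
print("P(downstream EMC > 10 mg/L) =", np.mean(c_down > threshold))
```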
Atzori, A S; Tedeschi, L O; Cannas, A
2013-05-01
The economic efficiency of dairy farms is the main goal of farmers. The objective of this work was to use routinely available information at the dairy farm level to develop an index of profitability to rank dairy farms and to assist the decision-making process of farmers to increase the economic efficiency of the entire system. A stochastic modeling approach was used to study the relationships between inputs and profitability (i.e., income over feed cost; IOFC) of dairy cattle farms. The IOFC was calculated as: milk revenue + value of male calves + culling revenue - herd feed costs. Two databases were created. The first one was a development database, which was created from technical and economic variables collected in 135 dairy farms. The second one was a synthetic database (sDB) created from 5,000 synthetic dairy farms using the Monte Carlo technique and based on the characteristics of the development database data. The sDB was used to develop a ranking index as follows: (1) principal component analysis (PCA), excluding IOFC, was used to identify principal components (sPC); and (2) coefficient estimates of a multiple regression of the IOFC on the sPC were obtained. Then, the eigenvectors of the sPC were used to compute the principal component values for the original 135 dairy farms that were used with the multiple regression coefficient estimates to predict IOFC (dRI; ranking index from development database). The dRI was used to rank the original 135 dairy farms. The PCA explained 77.6% of the sDB variability and 4 sPC were selected. The sPC were associated with herd profile, milk quality and payment, poor management, and reproduction based on the significant variables of the sPC. The mean IOFC in the sDB was 0.1377 ± 0.0162 euros per liter of milk (€/L). The dRI explained 81% of the variability of the IOFC calculated for the 135 original farms. When the number of farms below and above 1 standard deviation (SD) of the dRI were calculated, we found that 21 farms had dRI<-1 SD, 32 farms were between -1 SD and 0, 67 farms were between 0 and +1 SD, and 15 farms had dRI>+1 SD. The top 10% of the farms had a dRI greater than 0.170 €/L, whereas the bottom 10% farms had a dRI lower than 0.116 €/L. This stochastic approach allowed us to understand the relationships among the inputs of the studied dairy farms and to develop a ranking index for comparison purposes. The developed methodology may be improved by using more inputs at the dairy farm level and considering the actual cost to measure profitability. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
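A compact sketch of the ranking-index construction (hypothetical data and column names, not the authors' code): compute IOFC from its components, run PCA on the remaining farm inputs, regress IOFC on the component scores, and score each farm with the fitted model.

```python
# Hedged sketch: IOFC and a PCA-based ranking index for dairy farms.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

farms = pd.read_csv("farms.csv")            # hypothetical data set
# IOFC = milk revenue + male calf value + culling revenue - feed costs
iofc = (farms["milk_revenue"] + farms["calf_value"]
        + farms["culling_revenue"] - farms["feed_costs"])
inputs = farms.drop(columns=["milk_revenue", "calf_value",
                             "culling_revenue", "feed_costs"])

Z = StandardScaler().fit_transform(inputs)  # standardized inputs
pca = PCA(n_components=4).fit(Z)            # 4 components, as above
scores = pca.transform(Z)

reg = LinearRegression().fit(scores, iofc)  # IOFC regressed on components
ranking_index = reg.predict(scores)         # dRI-style score per farm
print(farms.assign(dRI=ranking_index).sort_values("dRI").head())
```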
Evaluating variable rate fungicide applications for control of Sclerotinia
USDA-ARS?s Scientific Manuscript database
Oklahoma peanut growers continue to try to increase yields and reduce input costs. Perhaps the largest input in a peanut crop is fungicide applications. This is especially true for areas in the state that have high disease pressure from Sclerotinia. On average, a single fungicide application cost...
Human encroachment on the coastal zone has led to a rise in the delivery of nitrogen (N) to estuarine and near-shore waters. Potential routes of anthropogenic N inputs include export from estuaries, atmospheric deposition, and dissolved N inputs from groundwater outflow. Stable...
Learning a Novel Pattern through Balanced and Skewed Input
ERIC Educational Resources Information Center
McDonough, Kim; Trofimovich, Pavel
2013-01-01
This study compared the effectiveness of balanced and skewed input at facilitating the acquisition of the transitive construction in Esperanto, characterized by the accusative suffix "-n" and variable word order (SVO, OVS). Thai university students (N = 98) listened to 24 sentences under skewed (one noun with high token frequency) or…
Code of Federal Regulations, 2014 CFR
2014-10-01
... must be considered as essential variables: Number of passes; thickness of plate; heat input per pass... not be used. The number of passes, thickness of plate, and heat input per pass may not vary more than... machine heat processes, provided such surfaces are remelted in the subsequent welding process. Where there...
Code of Federal Regulations, 2013 CFR
2013-10-01
... must be considered as essential variables: Number of passes; thickness of plate; heat input per pass... not be used. The number of passes, thickness of plate, and heat input per pass may not vary more than... machine heat processes, provided such surfaces are remelted in the subsequent welding process. Where there...
Code of Federal Regulations, 2012 CFR
2012-10-01
... must be considered as essential variables: Number of passes; thickness of plate; heat input per pass... not be used. The number of passes, thickness of plate, and heat input per pass may not vary more than... machine heat processes, provided such surfaces are remelted in the subsequent welding process. Where there...
Code of Federal Regulations, 2011 CFR
2011-10-01
... must be considered as essential variables: Number of passes; thickness of plate; heat input per pass... not be used. The number of passes, thickness of plate, and heat input per pass may not vary more than... machine heat processes, provided such surfaces are remelted in the subsequent welding process. Where there...
Mu, Zhijian; Huang, Aiying; Ni, Jiupai; Xie, Deti
2014-01-01
Organic soils are an important source of N2O, but global estimates of these fluxes remain uncertain because measurements are sparse. We tested the hypothesis that N2O fluxes can be predicted from estimates of mineral nitrogen input, calculated from readily-available measurements of CO2 flux and soil C/N ratio. From studies of organic soils throughout the world, we compiled a data set of annual CO2 and N2O fluxes which were measured concurrently. The input of soil mineral nitrogen in these studies was estimated from applied fertilizer nitrogen and organic nitrogen mineralization. The latter was calculated by dividing the rate of soil heterotrophic respiration by soil C/N ratio. This index of mineral nitrogen input explained up to 69% of the overall variability of N2O fluxes, whereas CO2 flux or soil C/N ratio alone explained only 49% and 36% of the variability, respectively. Including water table level in the model, along with mineral nitrogen input, further improved the model with the explanatory proportion of variability in N2O flux increasing to 75%. Unlike grassland or cropland soils, forest soils were evidently nitrogen-limited, so water table level had no significant effect on N2O flux. Our proposed approach, which uses the product of soil-derived CO2 flux and the inverse of soil C/N ratio as a proxy for nitrogen mineralization, shows promise for estimating regional or global N2O fluxes from organic soils, although some further enhancements may be warranted.
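The proxy is straightforward to express in code: mineral-N input is fertilizer N plus heterotrophic respiration divided by soil C/N, and annual N2O flux is regressed on that index plus water table level. The synthetic data below only illustrate the model form, not the compiled data set.

```python
# Hedged sketch of the mineral-N-input proxy for N2O fluxes.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 60
co2_flux = rng.uniform(100, 1500, n)      # soil CO2-C flux, g m-2 yr-1
cn_ratio = rng.uniform(12, 30, n)         # soil C/N ratio
fert_n = rng.uniform(0, 20, n)            # fertilizer N, g N m-2 yr-1
water_table = rng.uniform(-60, 0, n)      # water table level, cm

# Mineralized N approximated as heterotrophic respiration / C/N
n_input = fert_n + co2_flux / cn_ratio    # mineral-N input index
n2o = 0.02 * n_input - 0.01 * water_table + rng.normal(0, 0.2, n)

X = np.column_stack([n_input, water_table])
model = LinearRegression().fit(X, n2o)
print("R^2 =", model.score(X, n2o))       # explained variability
```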
Sensitivity analysis of a sound absorption model with correlated inputs
NASA Astrophysics Data System (ADS)
Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.
2017-04-01
Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models depending on macroscopic parameters. Since these parameters emerge from the structure at the microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distribution of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. Finally, the test results show that correlation has a very important impact on the outcome of the sensitivity analysis; the effect of correlation strength among input variables on the sensitivity analysis is also assessed.
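The reference method (CRM) can be sketched with a Gaussian-copula sample of correlated inputs and a binned estimate of each correlation ratio, eta^2 = Var(E[Y|Xi]) / Var(Y); the model and correlation matrix below are illustrative stand-ins, not the JCA model.

```python
# Hedged sketch of a correlation-ratio sensitivity estimate with
# correlated inputs sampled through a Cholesky factor.
import numpy as np

rng = np.random.default_rng(0)
corr = np.array([[1.0, 0.6, 0.0],
                 [0.6, 1.0, 0.2],
                 [0.0, 0.2, 1.0]])
L = np.linalg.cholesky(corr)
X = rng.standard_normal((50_000, 3)) @ L.T        # correlated inputs
Y = X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * X[:, 2]  # stand-in model

def correlation_ratio(x, y, bins=50):
    # eta^2 estimated by binning on quantiles of x
    edges = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])
    idx = np.digitize(x, edges)
    means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.array([(idx == b).sum() for b in range(bins)])
    return np.sum(counts * (means - y.mean()) ** 2) / (y.size * y.var())

print([round(correlation_ratio(X[:, i], Y), 3) for i in range(3)])
```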
Code of Federal Regulations, 2010 CFR
2010-07-01
..., Demand Side Variability, and Network Variability studies, including input data, processing programs, and... should include the product or product groups carried under each listed contract; (k) Spreadsheets and...
Investigation of energy management strategies for photovoltaic systems - An analysis technique
NASA Technical Reports Server (NTRS)
Cull, R. C.; Eltimsahy, A. H.
1982-01-01
Progress is reported in formulating energy management strategies for stand-alone PV systems, developing an analytical tool that can be used to investigate these strategies, applying this tool to determine the proper control algorithms and control variables (controller inputs and outputs) for a range of applications, and quantifying the relative performance and economics when compared to systems that do not apply energy management. The analysis technique developed may be broadly applied to a variety of systems to determine the most appropriate energy management strategies, control variables and algorithms. The only inputs required are statistical distributions for stochastic energy inputs and outputs of the system and the system's device characteristics (efficiency and ratings). Although the formulation was originally driven by stand-alone PV system needs, the techniques are also applicable to hybrid and grid connected systems.
Troyer, T W; Miller, K D
1997-07-01
To understand the interspike interval (ISI) variability displayed by visual cortical neurons (Softky & Koch, 1993), it is critical to examine the dynamics of their neuronal integration, as well as the variability in their synaptic input current. Most previous models have focused on the latter factor. We match a simple integrate-and-fire model to the experimentally measured integrative properties of cortical regular spiking cells (McCormick, Connors, Lighthall, & Prince, 1985). After setting RC parameters, the post-spike voltage reset is set to match experimental measurements of neuronal gain (obtained from in vitro plots of firing frequency versus injected current). Examination of the resulting model leads to an intuitive picture of neuronal integration that unifies the seemingly contradictory 1/√N and random walk pictures that have previously been proposed. When ISIs are dominated by post-spike recovery, 1/√N arguments hold and spiking is regular; after the "memory" of the last spike becomes negligible, spike threshold crossing is caused by input variance around a steady state and spiking is Poisson. In integrate-and-fire neurons matched to cortical cell physiology, steady-state behavior is predominant, and ISIs are highly variable at all physiological firing rates and for a wide range of inhibitory and excitatory inputs.
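A minimal simulation of this picture (illustrative parameters, not the fitted values from the paper): leaky integration of a noisy drive with a post-spike reset, followed by the coefficient of variation (CV) of the interspike intervals.

```python
# Hedged sketch: leaky integrate-and-fire neuron with noisy input;
# ISI variability summarized by the CV = std(ISI) / mean(ISI).
import numpy as np

tau, v_th, v_reset = 20e-3, 1.0, 0.5     # s; normalized voltages
dt, t_max = 0.1e-3, 50.0                 # time step and duration, s
mu, sigma = 1.05, 0.5                    # mean / SD of input drive

rng = np.random.default_rng(0)
v, t, spikes = v_reset, 0.0, []
while t < t_max:
    noise = sigma * np.sqrt(dt / tau) * rng.standard_normal()
    v += dt / tau * (mu - v) + noise     # leaky integration
    if v >= v_th:
        spikes.append(t)
        v = v_reset                      # post-spike voltage reset
    t += dt

isi = np.diff(spikes)
print("rate:", len(spikes) / t_max, "Hz; CV:", isi.std() / isi.mean())
```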
Analysis on electronic control unit of continuously variable transmission
NASA Astrophysics Data System (ADS)
Cao, Shuanggui
A continuously variable transmission (CVT) system can ensure that the engine works along the line of best fuel economy, improving fuel economy, saving fuel, and reducing harmful gas emissions. At the same time, a continuously variable transmission allows vehicle speed to change more smoothly and improves ride comfort. Although CVT technology has developed considerably, many shortcomings remain. The CVT systems of ordinary vehicles still suffer from low efficiency, poor starting performance, low transmitted power, imperfect control, high cost, and other issues. Therefore, many scholars have begun to study new types of continuously variable transmission. A transmission system under electronic control can achieve automatic control of power transmission and give full play to the characteristics of the engine to achieve optimal control of the powertrain, so the vehicle always travels near its best operating condition. The electronic control unit is composed of a core processor, input and output circuit modules, and other auxiliary circuit modules. The input module collects and processes the many signals sent by sensors, such as the throttle angle, brake signal, engine speed signal, speed signals of the input and output shafts of the transmission, manual shift signal, mode selection signal, gear position signal, and speed ratio signal, so as to provide the corresponding processed inputs to the controller core.
Spatial patterns of throughfall isotopic composition at the event and seasonal timescales
Scott T. Allen; Richard F. Keim; Jeffrey J. McDonnell
2015-01-01
Spatial variability of throughfall isotopic composition in forests is indicative of complex processes occurring in the canopy and remains insufficiently understood to properly characterize precipitation inputs to the catchment water balance. Here we investigate variability of throughfall isotopic composition with the objectives: (1) to quantify the spatial variability...
NASA Astrophysics Data System (ADS)
Garousi Nejad, I.; He, S.; Tang, Q.; Ogden, F. L.; Steinke, R. C.; Frazier, N.; Tarboton, D. G.; Ohara, N.; Lin, H.
2017-12-01
Spatial scale is one of the main considerations in hydrological modeling of snowmelt in mountainous areas. The size of model elements controls the degree to which variability can be explicitly represented versus what needs to be parameterized using effective properties such as averages or other subgrid variability parameterizations that may degrade the quality of model simulations. For snowmelt modeling terrain parameters such as slope, aspect, vegetation and elevation play an important role in the timing and quantity of snowmelt that serves as an input to hydrologic runoff generation processes. In general, higher resolution enhances the accuracy of the simulation since fine meshes represent and preserve the spatial variability of atmospheric and surface characteristics better than coarse resolution. However, this increases computational cost and there may be a scale beyond which the model response does not improve due to diminishing sensitivity to variability and irreducible uncertainty associated with the spatial interpolation of inputs. This paper examines the influence of spatial resolution on the snowmelt process using simulations of and data from the Animas River watershed, an alpine mountainous area in Colorado, USA, using an unstructured distributed physically based hydrological model developed for a parallel computing environment, ADHydro. Five spatial resolutions (30 m, 100 m, 250 m, 500 m, and 1 km) were used to investigate the variations in hydrologic response. This study demonstrated the importance of choosing the appropriate spatial scale in the implementation of ADHydro to obtain a balance between representing spatial variability and the computational cost. According to the results, variation in the input variables and parameters due to using different spatial resolution resulted in changes in the obtained hydrological variables, especially snowmelt, both at the basin-scale and distributed across the model mesh.
NASA Astrophysics Data System (ADS)
Srinivas, Kadivendi; Vundavilli, Pandu R.; Manzoor Hussain, M.; Saiteja, M.
2016-09-01
Welding input parameters such as current, gas flow rate, and torch angle play a significant role in determining the mechanical properties of the weld joint. Traditionally, it is necessary to determine the weld input parameters for every new welded product to obtain a quality weld joint, which is time consuming. In the present work, the effect of plasma arc welding parameters on mild steel was studied using a neural network approach. To obtain a response equation that governs the input-output relationships, conventional regression analysis was also performed. The experimental data were constructed based on a Taguchi design, and the training data required for the neural networks were randomly generated by varying the input variables within their respective ranges. The responses were calculated for each combination of input variables by using the response equations obtained through the conventional regression analysis. The performances of a Levenberg-Marquardt back-propagation neural network and a radial basis neural network (RBNN) were compared on various randomly generated test cases, which are different from the training cases. Interestingly, for these test cases the RBNN analysis gave better results than the feed-forward back-propagation neural network analysis, and the RBNN showed a pattern of increasing performance as the data points moved away from the initial input values.
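An RBNN of the kind compared here can be sketched with Gaussian features centered on k-means centers followed by a linear least-squares solve; the input ranges and the response function below are stand-ins, not the paper's regression equations.

```python
# Hedged sketch of a radial basis neural network (RBNN) regressor
# for three welding inputs: current, gas flow rate, torch angle.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in input ranges: current (A), gas flow (L/min), angle (deg)
X = rng.uniform([100, 10, 60], [200, 20, 90], size=(200, 3))
y = 0.5 * X[:, 0] - 2.0 * X[:, 1] + 0.3 * X[:, 2]   # stand-in response

km = KMeans(n_clusters=15, n_init=10, random_state=0).fit(X)
centers = km.cluster_centers_
width = np.mean(np.linalg.norm(X[:, None, :] - centers[None], axis=2))

def rbf_features(X):
    # Gaussian activations of each hidden unit
    d = np.linalg.norm(X[:, None, :] - centers[None], axis=2)
    return np.exp(-((d / width) ** 2))

# Output layer solved directly by linear least squares
w, *_ = np.linalg.lstsq(rbf_features(X), y, rcond=None)
print(rbf_features(X[:5]) @ w, y[:5])    # predictions vs. targets
```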
Computing the structural influence matrix for biological systems.
Giordano, Giulia; Cuba Samaniego, Christian; Franco, Elisa; Blanchini, Franco
2016-06-01
We consider the problem of identifying structural influences of external inputs on steady-state outputs in a biological network model. We speak of a structural influence if, upon a perturbation due to a constant input, the ensuing variation of the steady-state output value has the same sign as the input (positive influence), the opposite sign (negative influence), or is zero (perfect adaptation), for any feasible choice of the model parameters. All these signs and zeros can constitute a structural influence matrix, whose (i, j) entry indicates the sign of steady-state influence of the jth system variable on the ith variable (the output caused by an external persistent input applied to the jth variable). Each entry is structurally determinate if the sign does not depend on the choice of the parameters, but is indeterminate otherwise. In principle, determining the influence matrix requires exhaustive testing of the system steady-state behaviour in the widest range of parameter values. Here we show that, in a broad class of biological networks, the influence matrix can be evaluated with an algorithm that tests the system steady-state behaviour only at a finite number of points. This algorithm also allows us to assess the structural effect of any perturbation, such as variations of relevant parameters. Our method is applied to nontrivial models of biochemical reaction networks and population dynamics drawn from the literature, providing a parameter-free insight into the system dynamics.
NASA Astrophysics Data System (ADS)
Wilson, Emily L.; DiGregorio, A. J.; Riot, Vincent J.; Ammons, Mark S.; Bruner, William W.; Carter, Darrell; Mao, Jianping; Ramanathan, Anand; Strahan, Susan E.; Oman, Luke D.; Hoffman, Christine; Garner, Richard M.
2017-03-01
We present a design for a 4U (20 cm × 20 cm × 10 cm) occultation-viewing laser heterodyne radiometer (LHR) that measures methane (CH4), carbon dioxide (CO2), and water vapor (H2O) in the limb and is designed for deployment on a 6U CubeSat. The LHR design collects sunlight that has undergone absorption by the trace gases and mixes it with a distributed feedback (DFB) laser centered at 1640 nm that scans across CO2, CH4, and H2O absorption features. Upper troposphere/lower stratosphere measurements of these gases provide key inputs to stratospheric circulation models: measuring stratospheric circulation and its variability is essential for projecting how climate change will affect stratospheric ozone.
Concepts for Multi-Speed Rotorcraft Drive System - Status of Design and Testing at NASA GRC
NASA Technical Reports Server (NTRS)
Stevens, Mark A.; Lewicki, David G.; Handschuh, Robert F.
2015-01-01
In several studies and ongoing developments for advanced rotorcraft, the need for variable, multi-speed-capable rotors has been raised. Speed changes of up to 50% have been proposed for future rotorcraft to improve vehicle performance. A rotor speed change during operation not only requires a rotor that can perform effectively over the operating speed/load range, but also requires a propulsion system possessing these same capabilities. A study was completed investigating possible drive system arrangements that can accommodate up to a 50% speed change. Key drivers were identified, from which simplicity and weight were judged as central. This paper presents the current status of two gear train concepts coupled with the first of two clutch types developed and tested thus far, with focus on design lessons learned and areas requiring development. Also, a third concept is presented: a dual-input planetary differential, as leveraged from a simple planetary with fixed carrier.
NASA Astrophysics Data System (ADS)
Roelofs, W. S. C.; Mathijssen, S. G. J.; Janssen, R. A. J.; de Leeuw, D. M.; Kemerink, M.
2012-02-01
The width and shape of the density of states (DOS) are key parameters to describe the charge transport of organic semiconductors. Here we extract the DOS using scanning Kelvin probe microscopy on a self-assembled monolayer field effect transistor (SAMFET). The semiconductor is only a single monolayer which has allowed extraction of the DOS over a wide energy range, pushing the methodology to its fundamental limit. The measured DOS consists of an exponential distribution of deep states with additional localized states on top. The charge transport has been calculated in a generic variable range-hopping model that allows any DOS as input. We show that with the experimentally extracted DOS an excellent agreement between measured and calculated transfer curves is obtained. This shows that detailed knowledge of the density of states is a prerequisite to consistently describe the transfer characteristics of organic field effect transistors.
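The "any DOS as input" ingredient can be illustrated by building a DOS from an exponential tail of deep states plus a Gaussian bump of additional localized states and integrating it against Fermi-Dirac occupation to obtain a carrier density; all values are illustrative, and the full hopping-mobility calculation is not reproduced here.

```python
# Hedged sketch: exponential DOS tail plus localized trap states,
# numerically integrated against Fermi-Dirac occupation.
import numpy as np

kT = 0.025                                  # eV, room temperature
E = np.linspace(-1.0, 0.5, 3001)            # energy grid, eV

def dos(E, N0=1e21, E0=0.15, Nt=5e19, Et=-0.4, w=0.05):
    # Exponential tail of deep states below the transport level (E=0)
    exp_tail = (N0 / E0) * np.exp(E / E0) * (E <= 0)
    # Gaussian bump of additional localized states on top
    trap = Nt / (w * np.sqrt(2 * np.pi)) * np.exp(-((E - Et) / w) ** 2)
    return exp_tail + trap                  # states / (cm^3 eV)

def carrier_density(E_f):
    occ = 1.0 / (1.0 + np.exp((E - E_f) / kT))   # Fermi-Dirac
    return np.sum(dos(E) * occ) * (E[1] - E[0])  # cm^-3

print(carrier_density(-0.3))
```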
Dissuasive exit signage for building fire evacuation.
Olander, Joakim; Ronchi, Enrico; Lovreglio, Ruggiero; Nilsson, Daniel
2017-03-01
This work presents the result of a questionnaire study which investigates the design of dissuasive emergency signage, i.e. signage conveying a message of not utilizing a specific exit door. The work analyses and tests a set of key features of dissuasive emergency signage using the Theory of Affordances. The variables having the largest impact on observer preference, interpretation and noticeability of the signage have been identified. Results show that features which clearly negate the exit-message of the original positive exit signage are most effective, for instance a red X-marking placed across the entirety of the exit signage conveys a clear dissuasive message. Other features of note are red flashing lights and alternation of colour. The sense of urgency conveyed by the sign is largely affected by sensory inputs such as red flashing lights or other features which cause the signs to break the tendencies of normalcy. Copyright © 2016 Elsevier Ltd. All rights reserved.
BOREAS RSS-8 BIOME-BGC SSA Simulation of Annual Water and Carbon Fluxes
NASA Technical Reports Server (NTRS)
Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Kimball, John
2000-01-01
The BOREAS RSS-8 team performed research to evaluate the effect of seasonal weather and landcover heterogeneity on boreal forest regional water and carbon fluxes using a process-level ecosystem model, BIOME-BGC, coupled with remote sensing-derived parameter maps of key state variables. This data set contains derived maps of landcover type and crown and stem biomass as model inputs to determine annual evapotranspiration, gross primary production, autotrophic respiration, and net primary productivity within the BOREAS SSA-MSA, at a 30-m spatial resolution. Model runs were conducted over a 3-year period from 1994-1996; images are provided for each of those years. The data are stored in binary image format. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).
A dynamic plug flow reactor model for a vanadium redox flow battery cell
NASA Astrophysics Data System (ADS)
Li, Yifeng; Skyllas-Kazacos, Maria; Bao, Jie
2016-04-01
A dynamic plug flow reactor model for a single cell VRB system is developed based on material balance, and the Nernst equation is employed to calculate cell voltage with consideration of activation and concentration overpotentials. Simulation studies were conducted under various conditions to investigate the effects of several key operation variables including electrolyte flow rate, upper SOC limit and input current magnitude on the cell charging performance. The results show that all three variables have a great impact on performance, particularly on the possibility of gassing during charging at high SOCs or inadequate flow rates. Simulations were also carried out to study the effects of electrolyte imbalance during long term charging and discharging cycling. The results show the minimum electrolyte flow rate needed for operation within a particular SOC range in order to avoid gassing side reactions during charging. The model also allows scheduling of partial electrolyte remixing operations to restore capacity and also avoid possible gassing side reactions during charging. Simulation results also suggest the proper placement for cell voltage monitoring and highlight potential problems associated with setting the upper charging cut-off limit based on the inlet SOC calculated from the open-circuit cell voltage measurement.
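A minimal sketch of the Nernst-equation step described above, assuming equal SOC in the two half-cells, ideal activities, and neglecting proton-concentration terms and overpotentials; the formal potential value is an assumption, not taken from the paper.

```python
import numpy as np

R, T, F = 8.314, 298.15, 96485.0   # gas constant, temperature (K), Faraday constant
E0 = 1.4                           # V, formal cell potential of the VRB (assumed)

def ocv(soc):
    """Open-circuit voltage of a single VRB cell from the Nernst equation."""
    soc = np.clip(soc, 1e-6, 1 - 1e-6)      # keep the logarithm finite
    return E0 + (2 * R * T / F) * np.log(soc / (1 - soc))

for soc in (0.2, 0.5, 0.8, 0.95):
    print(f"SOC {soc:.2f}: OCV = {ocv(soc):.3f} V")
```

The steep rise of the logarithmic term near full charge is what makes the upper SOC limit, and the inlet-to-outlet SOC difference set by the flow rate, so critical for avoiding gassing side reactions.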
Prospects and pitfalls of occupational hazard mapping: 'between these lines there be dragons'.
Koehler, Kirsten A; Volckens, John
2011-10-01
Hazard data mapping is a promising new technique that can enhance the process of occupational exposure assessment and risk communication. Hazard maps have the potential to improve worker health by providing key input for the design of hazard intervention and control strategies. Hazard maps are developed with aid from direct-reading instruments, which can collect highly spatially and temporally resolved data in a relatively short period of time. However, quantifying spatial-temporal variability in the occupational environment is not a straightforward process, and our lack of understanding of how to ascertain and model spatial and temporal variability is a limiting factor in the use and interpretation of workplace hazard maps. We provide an example of how sources of and exposures to workplace hazards may be mischaracterized in a hazard map due to a lack of completeness and representativeness of collected measurement data. Based on this example, we believe that a major priority for research in this emerging area should focus on the development of a statistical framework to quantify uncertainty in spatially and temporally varying data. In conjunction with this need is one for the development of guidelines and procedures for the proper sampling, generation, and evaluation of workplace hazard maps.
NASA Astrophysics Data System (ADS)
Goienetxea Uriarte, A.; Ruiz Zúñiga, E.; Urenda Moris, M.; Ng, A. H. C.
2015-05-01
Discrete Event Simulation (DES) is nowadays widely used to support decision makers in system analysis and improvement. However, the use of simulation for improving stochastic logistic processes is not common among healthcare providers. The process of improving healthcare systems involves the necessity to deal with trade-off optimal solutions that take multiple variables and objectives into consideration. Complementing DES with Multi-Objective Optimization (SMO) creates a superior base for finding these solutions and, in consequence, facilitates the decision-making process. This paper presents how SMO has been applied for system improvement analysis in a Swedish Emergency Department (ED). A significant number of input variables, constraints and objectives were considered when defining the optimization problem. As a result of the project, the decision makers were provided with a range of optimal solutions which considerably reduce the length of stay and waiting times for the ED patients. SMO has proved to be an appropriate technique to support healthcare system design and improvement processes. A key factor for the success of this project has been the involvement and engagement of the stakeholders during the whole process.
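The trade-off solutions mentioned above form a Pareto front. As a hedged illustration of that post-processing step, not of the SMO software used in the project, the sketch below filters simulated ED configurations (invented numbers) down to the non-dominated ones:

```python
import numpy as np

def pareto_front(points):
    """Return indices of non-dominated points when every objective is minimised."""
    pts = np.asarray(points, dtype=float)
    front = []
    for i, p in enumerate(pts):
        # p is dominated if some other point is no worse everywhere, better somewhere
        dominated = ((pts <= p).all(axis=1) & (pts < p).any(axis=1)).any()
        if not dominated:
            front.append(i)
    return front

# (length of stay, waiting time) pairs from hypothetical simulation runs
solutions = [(4.2, 1.1), (3.8, 1.6), (5.0, 0.9), (4.5, 1.5)]
print([solutions[i] for i in pareto_front(solutions)])   # drops the dominated (4.5, 1.5)
```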
Seasonal dynamics of snail populations in coastal Kenya: Model calibration and snail control
NASA Astrophysics Data System (ADS)
Gurarie, D.; King, C. H.; Yoon, N.; Wang, X.; Alsallaq, R.
2017-10-01
A proper snail population model is important for accurately predicting Schistosoma transmission. Field data show that the overall snail population and that of shedding snails have a strong pattern of seasonal variation. Because human hosts are infected by the cercariae released from shedding snails, the abundance of the snail population sets ultimate limits on human infection. For developing a predictive dynamic model of schistosome infection and control strategies we need realistic snail population dynamics. Here we propose two such models based on underlying environmental factors and snail population biology. The models consist of two-stage (young-adult) populations with resource-dependent reproduction, survival, and maturation. The key input in the system is seasonal rainfall, which creates snail habitats and resources (small vegetation). The models were tested, calibrated and validated using a dataset collected in Msambweni (coastal Kenya). Seasonal rainfall in Msambweni is highly variable, with intermittent wet and dry seasons. Typical snail patterns follow precipitation peaks with a 2-4-month time lag. Our models are able to reproduce such seasonal variability over an extended period of time (a 3-year study). We applied them to explore the optimal seasonal timing for implementing snail control.
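A toy Python version of the model structure described, a two-stage population with rainfall-driven resources, with all rates invented for illustration:

```python
import numpy as np

months = np.arange(36)
rain = 80 + 70 * np.sin(2 * np.pi * months / 12)   # mm/month, toy seasonal forcing

young, adult, resource = 50.0, 100.0, 1.0
total = []
for m in months:
    resource += 0.01 * rain[m] - 0.5 * resource    # vegetation grows with rain, decays
    K = 500.0 * resource                           # resource-dependent carrying capacity
    births = 2.0 * adult * max(0.0, 1.0 - (young + adult) / K)
    matured = 0.3 * young                          # maturation into the adult stage
    young += births - matured - 0.2 * young        # juvenile mortality
    adult += matured - 0.1 * adult                 # adult mortality
    total.append(young + adult)

print(f"peak month within the first year: {int(np.argmax(total[:12]))}")
```

Because the resource pool and the maturation step each act as a delay, population peaks in this toy setting trail the rainfall peaks, qualitatively like the 2-4-month lag seen in the field data.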
Trade-offs Between Socio-economic Development and Ecosystem Health under Changing Water Availability
NASA Astrophysics Data System (ADS)
Nazemi, A.; Hassanzadeh, E.; Elshorbagy, A. A.; Wheater, H. S.; Gober, P.; Jardine, T.; Lindenschmidt, K. E.
2017-12-01
Natural and human water systems at regional scales are often developed around key characteristics of streamflow. As a result, changes in streamflow regime can affect both socio-economic activities and freshwater ecosystems. In addition to natural variability and/or climate change, extensive water resource management to support socio-economic growth has also changed streamflow regimes. This study aims at understanding the trade-offs between agricultural expansion in the province of Saskatchewan, Canada, and alterations in the ecohydrological characteristics of the Saskatchewan River Delta (SRD) located downstream. Changes in climate along with extensive water resource management have altered the upstream flow regime. Moreover, Saskatchewan is investigating the possible expansion of irrigated agriculture to boost the provincial economy. To evaluate trade-offs across a range of possible scenarios for streamflow changes, the potential increase in provincial net benefit versus potential vulnerability of the SRD was assessed using perturbed flow realizations along with scenarios of irrigation expansion as input to an integrated water resource system model. This study sheds light on the potential variability in trade-offs between economic benefits and ecosystem health under a range of streamflow conditions, with the aim of informing decisions that can benefit both natural and human water systems.
Propagation of variability in railway dynamic simulations: application to virtual homologation
NASA Astrophysics Data System (ADS)
Funfschilling, Christine; Perrin, Guillaume; Kraft, Sönke
2012-01-01
Railway dynamic simulations are increasingly used to predict and analyse the behaviour of the vehicle and of the track during their whole life cycle. Up to now, however, no simulation has been used in the certification procedure, even though the expected benefits are important: cheaper and shorter procedures, more objectivity, better knowledge of the behaviour around critical situations. Deterministic simulations are nevertheless too poor to represent the whole physics of the track/vehicle system, which contains several sources of variability: variability of the mechanical parameters of a train among a class of vehicles (mass, stiffness and damping of different suspensions), variability of the contact parameters (friction coefficient, wheel and rail profiles) and variability of the track design and quality. This variability plays an important role in the safety, in the ride quality, and thus in the certification criteria. When using the simulation for certification purposes, it seems therefore crucial to take into account the variability of the different inputs. The main goal of this article is thus to propose a method to introduce variability in railway dynamics. A four-step method is described, namely the definition of the stochastic problem, the modelling of the input variability, the propagation, and the analysis of the output. Each step is illustrated with railway examples.
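A schematic Monte Carlo rendering of the four steps, with an invented placeholder standing in for the multibody vehicle/track simulation and illustrative input distributions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

def vehicle_track_sim(mass, friction, irregularity):
    """Placeholder for a full railway dynamics run; returns a safety
    criterion (e.g. a derailment ratio Y/Q) for one draw of the inputs."""
    return 0.30 + 0.10 * friction + 0.02 * irregularity - 1e-6 * mass

# Steps 1-2: define the stochastic problem and model the input variability
mass = rng.normal(40e3, 2e3, n)              # vehicle mass (kg)
friction = rng.uniform(0.25, 0.45, n)        # wheel/rail friction coefficient
irregularity = rng.lognormal(0.0, 0.4, n)    # track quality index

# Step 3: propagate the variability through the simulation
yq = vehicle_track_sim(mass, friction, irregularity)

# Step 4: analyse the output against a certification limit
print(f"mean Y/Q = {yq.mean():.3f}, 99th percentile = {np.percentile(yq, 99):.3f}")
print(f"P(Y/Q > 0.8) = {(yq > 0.8).mean():.4f}")
```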
NASA Astrophysics Data System (ADS)
Yang, Can; Ma, Cheng; Hu, Linxi; He, Guangqiang
2018-06-01
We present a hierarchical modulation coherent communication protocol, which simultaneously achieves classical optical communication and continuous-variable quantum key distribution. Our hierarchical modulation scheme consists of a quadrature phase-shift keying modulation for classical communication and a four-state discrete modulation for continuous-variable quantum key distribution. The simulation results based on practical parameters show that it is feasible to transmit both quantum information and classical information on a single carrier. We obtained a secure key rate of 10^-3 to 10^-1 bits/pulse within 40 kilometers, and in the meantime the maximum bit error rate for classical information is about 10^-7. Because the continuous-variable quantum key distribution protocol is compatible with standard telecommunication technology, we think our hierarchical modulation scheme can be used to upgrade digital communication systems to extend system function in the future.
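A sketch of the constellation construction implied by the scheme: a large QPSK point chosen by two classical bits plus a small four-state displacement carrying the quantum modulation. The amplitudes are arbitrary illustrative values, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
A_classical = 10.0    # QPSK carrier amplitude (assumed)
a_quantum = 0.1       # four-state CV-QKD displacement (assumed, << A_classical)

classical_bits = rng.integers(0, 4, n)       # 2 classical bits per symbol
quantum_states = rng.integers(0, 4, n)       # four-state QKD modulation

qpsk = A_classical * np.exp(1j * (np.pi / 4 + np.pi / 2 * classical_bits))
displacement = a_quantum * np.exp(1j * (np.pi / 4 + np.pi / 2 * quantum_states))
symbols = qpsk + displacement                # transmitted coherent-state amplitudes

# Receiver side (conceptually): decide the QPSK quadrant for the classical bits,
# subtract the carrier, and keep the small residual quadratures for the QKD run.
residual = symbols - qpsk
print(f"mean residual amplitude = {np.abs(residual).mean():.3f}")   # ~ a_quantum
```

Because the quantum displacement is orders of magnitude below the classical decision distance, it perturbs the classical bit error rate only marginally, which is consistent with the low error rate reported above.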
Continuous variable quantum key distribution with modulated entangled states.
Madsen, Lars S; Usenko, Vladyslav C; Lassen, Mikael; Filip, Radim; Andersen, Ulrik L
2012-01-01
Quantum key distribution enables two remote parties to grow a shared key, which they can use for unconditionally secure communication over a certain distance. The maximal distance depends on the loss and the excess noise of the connecting quantum channel. Several quantum key distribution schemes based on coherent states and continuous variable measurements are resilient to high loss in the channel, but are strongly affected by small amounts of channel excess noise. Here we propose and experimentally address a continuous variable quantum key distribution protocol that uses modulated fragile entangled states of light to greatly enhance the robustness to channel noise. We experimentally demonstrate that the resulting quantum key distribution protocol can tolerate more noise than the benchmark set by the ideal continuous variable coherent state protocol. Our scheme represents a very promising avenue for extending the distance for which secure communication is possible.
Daniel J. Miller; Kelly M. Burnett
2008-01-01
Debris flows are important geomorphic agents in mountainous terrains that shape channel environments and add a dynamic element to sediment supply and channel disturbance. Identification of channels susceptible to debris-flow inputs of sediment and organic debris, and quantification of the likelihood and magnitude of those inputs, are key tasks for characterizing...
Carrillo, Presentación; Medina-Sánchez, Juan M.; Herrera, Guillermo; Durán, Cristina; Segovia, María; Cortés, Dolores; Salles, Soluna; Korbee, Nathalie; L. Figueroa, Félix; Mercado, Jesús M.
2015-01-01
Some of the most important effects of global change on coastal marine systems include increasing nutrient inputs and higher levels of ultraviolet radiation (UVR, 280–400 nm), which could affect primary producers, a key trophic link to the functioning of marine food webs. However, interactive effects of both factors on the phytoplankton community have not been assessed for the Mediterranean Sea. An in situ factorial experiment, with two levels of ultraviolet solar radiation (UVR+PAR vs. PAR) and nutrients (control vs. P-enriched), was performed to evaluate single and UVR×P effects on metabolic, enzymatic, stoichiometric and structural phytoplanktonic variables. While most phytoplankton variables were not affected by UVR, dissolved phosphatase (APAEX) and algal P content increased in the presence of UVR, which was interpreted as an acclimation mechanism of algae to oligotrophic marine waters. Synergistic UVR×P interactive effects were positive on photosynthetic variables (i.e., maximal electron transport rate, ETRmax), but negative on primary production and phytoplankton biomass because the pulse of P unmasked the inhibitory effect of UVR. This unmasking effect might be related to greater photodamage caused by an excess of electron flux after a P pulse (higher ETRmax) without an efficient release of carbon as the mechanism to dissipate the reducing power of photosynthetic electron transport. PMID:26599583
[Airports and air quality: a critical synthesis of the literature].
Cattani, Giorgio; Di Menno di Bucchianico, Alessandro; Gaeta, Alessandra; Romani, Daniela; Fontana, Luca; Iavicoli, Ivo
2014-01-01
This work reviewed the existing literature on airport-related activities that could worsen surrounding air quality; its aim is to underline the progress coming from recent-year studies, the knowledge emerging from new approaches, and the development of semi-empiric analytical methods, as well as the questions still needing to be clarified. To estimate pollution levels, their spatial and temporal variability, and the sources' relative contributions, integrated assessments using both fixed-point measurements and model outputs are needed. The general picture emerging from the studies was a non-negligible and highly spatially variable (within 2-3 km from the fence line) airport contribution, even if it is often not dominant compared to other concomitant pollution sources. Results were highly airport-specific. Traffic volumes, landscape and meteorology were the key variables that drove the impacts. Results were thus hardly exportable to other contexts. Airport-related pollutant sources were found to be characterized by unusual emission patterns (particularly ultrafine particles, black carbon and nitrogen oxides during take-off); high time-resolution measurements make it possible to depict the rapidly changing take-off effect on air quality, which could not be adequately observed otherwise. Few studies used high time-resolution data successfully as statistical model inputs to estimate the aircraft take-off contribution to the observed average levels. These findings should not be neglected when the exposure of people living near airports is to be assessed.
Axial and Centrifugal Compressor Mean Line Flow Analysis Method
NASA Technical Reports Server (NTRS)
Veres, Joseph P.
2009-01-01
This paper describes a method to estimate key aerodynamic parameters of single and multistage axial and centrifugal compressors. This mean-line compressor code COMDES provides the capability of sizing single and multistage compressors quickly during the conceptual design process. Based on the compressible fluid flow equations and the Euler equation, the code can estimate rotor inlet and exit blade angles when run in the design mode. The design point rotor efficiency and stator losses are inputs to the code, and are modeled at off design. When run in the off-design analysis mode, it can be used to generate performance maps based on simple models for losses due to rotor incidence and inlet guide vane reset angle. The code can provide an improved understanding of basic aerodynamic parameters such as diffusion factor, loading levels and incidence, when matching multistage compressor blade rows at design and at part-speed operation. Rotor loading levels and relative velocity ratio are correlated to the onset of compressor surge. NASA Stage 37 and the three-stage NASA 74-A axial compressors were analyzed and the results compared to test data. The code has been used to generate the performance map for the NASA 76-B three-stage axial compressor featuring variable geometry. The compressor stages were aerodynamically matched at off-design speeds by adjusting the variable inlet guide vane and variable stator geometry angles to control the rotor diffusion factor and incidence angles.
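The core of any mean-line method is the Euler work equation. The sketch below is a schematic single-stage calculation, not the COMDES code: it turns wheel speed and swirl change into a stage total-pressure ratio under an assumed polytropic efficiency.

```python
cp, gamma = 1005.0, 1.4    # air: specific heat (J/kg/K) and ratio of specific heats

def stage_pressure_ratio(U, dCu, Tt_in, eta_poly=0.90):
    """Euler work -> total temperature rise -> pressure ratio for one axial stage.
    U: mean wheel speed (m/s); dCu: swirl velocity change across the rotor (m/s)."""
    work = U * dCu                        # J/kg, Euler turbomachinery equation
    Tt_out = Tt_in + work / cp
    return (Tt_out / Tt_in) ** (eta_poly * gamma / (gamma - 1))

print(f"stage PR = {stage_pressure_ratio(U=350.0, dCu=120.0, Tt_in=288.15):.2f}")
```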
Development of a Stochastically-driven, Forward Predictive Performance Model for PEMFCs
NASA Astrophysics Data System (ADS)
Harvey, David Benjamin Paul
A one-dimensional multi-scale coupled, transient, and mechanistic performance model for a PEMFC membrane electrode assembly has been developed. The model explicitly includes each of the 5 layers within a membrane electrode assembly and solves for the transport of charge, heat, mass, species, dissolved water, and liquid water. Key features of the model include the use of a multi-step implementation of the HOR reaction on the anode, agglomerate catalyst sub-models for both the anode and cathode catalyst layers, a unique approach that links the composition of the catalyst layer to key properties within the agglomerate model and the implementation of a stochastic input-based approach for component material properties. The model employs a new methodology for validation using statistically varying input parameters and statistically-based experimental performance data; this model represents the first stochastic input driven unit cell performance model. The stochastic input driven performance model was used to identify optimal ionomer content within the cathode catalyst layer, demonstrate the role of material variation in potential low performing MEA materials, provide explanation for the performance of low-Pt loaded MEAs, and investigate the validity of transient-sweep experimental diagnostic methods.
African crop yield reductions due to increasingly unbalanced Nitrogen and Phosphorus consumption
NASA Astrophysics Data System (ADS)
van der Velde, Marijn; Folberth, Christian; Balkovič, Juraj; Ciais, Philippe; Fritz, Steffen; Janssens, Ivan A.; Obersteiner, Michael; See, Linda; Skalský, Rastislav; Xiong, Wei; Peñuealas, Josep
2014-05-01
The impact of soil nutrient depletion on crop production has been known for decades, but robust assessments of the impact of increasingly unbalanced nitrogen (N) and phosphorus (P) application rates on crop production are lacking. Here, we use crop response functions based on 741 FAO maize crop trials and EPIC crop modeling across Africa to examine maize yield deficits resulting from unbalanced N:P applications under low, medium, and high input scenarios, for past (1975), current, and future N:P mass ratios of, respectively, 1:0.29, 1:0.15, and 1:0.05. At low N inputs (10 kg/ha), current yield deficits amount to 10% but will increase up to 27% under the assumed future N:P ratio, while at medium N inputs (50 kg N/ha), future yield losses could amount to over 40%. The EPIC crop model was then used to simulate maize yields across Africa. The model results showed relative median future yield reductions at low N inputs of 40%, and 50% at medium and high inputs, albeit with large spatial variability. Dominant low-quality soils such as Ferralsols, which are strongly adsorbing P, and Arenosols with a low nutrient retention capacity, are associated with a strong yield decline, although Arenosols show very variable crop yield losses at low inputs. Optimal N:P ratios, i.e. those where the lowest amount of applied P produces the highest yield (given N input), were calculated with EPIC to be as low as 1:0.5. Finally, we estimated the additional P required given current N inputs, and given N inputs that would allow Africa to close yield gaps (ca. 70%). At current N inputs, P consumption would have to increase 2.3-fold to be optimal, and to increase 11.7-fold to close yield gaps. The P demand to overcome these yield deficits would provide a significant additional pressure on current global extraction of P resources.
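The mass-ratio arithmetic behind these scenarios is simple; the ratios come from the abstract, while the worked computation is only illustrative:

```python
n_rate = 50.0                            # kg N/ha, the medium-input scenario
ratios = {"1975": 0.29, "current": 0.15, "future": 0.05, "optimal": 0.5}

for label, p_per_n in ratios.items():    # implied P application at this N rate
    print(f"{label:>8}: {n_rate * p_per_n:5.1f} kg P/ha")

# shortfall of the future ratio relative to the calculated optimum
print(f"optimal/future P ratio: {0.5 / 0.05:.0f}x")
```

At 50 kg N/ha the future ratio implies only 2.5 kg P/ha against an optimal 25 kg P/ha, which is the sense in which P consumption must rise several-fold to keep yields from declining.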
Nearest private query based on quantum oblivious key distribution
NASA Astrophysics Data System (ADS)
Xu, Min; Shi, Run-hua; Luo, Zhen-yu; Peng, Zhen-wan
2017-12-01
Nearest private query is a special private query which involves two parties, a user and a data owner, where the user has a private input (e.g., an integer) and the data owner has a private data set, and the user wants to query which element in the owner's private data set is the nearest to his input without revealing their respective private information. In this paper, we first present a quantum protocol for nearest private query, which is based on quantum oblivious key distribution (QOKD). Compared to the related classical protocols, our protocol has the advantages of higher security and better feasibility, so it has better prospects for application.
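For orientation, the functionality being protected is just a nearest-element query. The sketch below shows the non-private baseline; the quantum protocol's contribution is computing exactly this answer without either party revealing its data:

```python
def nearest(owner_set, user_input):
    """Non-private baseline: the element of the owner's set closest to the input."""
    return min(owner_set, key=lambda e: abs(e - user_input))

print(nearest([3, 8, 15, 42], 11))   # -> 8
```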
Balancing Authority Cooperation Concepts - Intra-Hour Scheduling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunsaker, Matthew; Samaan, Nader; Milligan, Michael
2013-03-29
The overall objective of this study was to understand, on an Interconnection-wide basis, the effects of intra-hour scheduling compared to hourly scheduling. Moreover, the study sought to understand how the benefits of intra-hour scheduling would change by altering the input assumptions in different scenarios. This report describes the results of three separate scenarios with differing key assumptions and compares the production costs of hourly scheduling and 10-minute scheduling. The different scenarios were chosen to provide insight into how the estimated benefits might change by altering input assumptions. Several key assumptions differed among the three scenarios; however, most assumptions were similar and/or unchanged among the scenarios.
USDA-ARS's Scientific Manuscript database
Precipitation patterns and nutrient inputs impact transport of nitrate (NO3-N) and phosphorus (TP) from Midwest watersheds. Nutrient concentrations and yields from two subsurface-drained watersheds, the Little Cobb River (LCR) in southern Minnesota and the South Fork Iowa River (SFIR) in northern Io...
Software development guidelines
NASA Technical Reports Server (NTRS)
Kovalevsky, N.; Underwood, J. M.
1979-01-01
Analysis, modularization, flowcharting, existing programs and subroutines, compatibility, input and output data, adaptability to checkout, and general-purpose subroutines are summarized. Statement ordering and numbering, specification statements, variable names, arrays, arithmetical expressions and statements, control statements, input/output, and subroutines are outlined. Intermediate results, desk checking, checkout data, dumps, storage maps, diagnostics, and program timing are reviewed.
Testing an Instructional Model in a University Educational Setting from the Student's Perspective
ERIC Educational Resources Information Center
Betoret, Fernando Domenech
2006-01-01
We tested a theoretical model that hypothesized relationships between several variables from input, process and product in an educational setting, from the university student's perspective, using structural equation modeling. In order to carry out the analysis, we measured in sequential order the input (referring to students' personal…
Estuaries in the Pacific Northwest have major intraannual and within estuary variation in sources and magnitudes of nutrient inputs. To develop an approach for setting nutrient criteria for these systems, we conducted a case study for Yaquina Bay, OR based on a synthesis of resea...
Model predictive controller design for boost DC-DC converter using T-S fuzzy cost function
NASA Astrophysics Data System (ADS)
Seo, Sang-Wha; Kim, Yong; Choi, Han Ho
2017-11-01
This paper proposes a Takagi-Sugeno (T-S) fuzzy method to select the cost function weights of finite-control-set model predictive DC-DC converter control algorithms. The proposed method updates the cost function weights at every sample time by using T-S-type fuzzy rules derived from the common optimal control engineering knowledge that a state or input variable with an excessively large magnitude can be penalised by increasing the weight corresponding to that variable. The best control input is determined via the online optimisation of the T-S fuzzy cost function over all the possible control input sequences. This paper implements the proposed model predictive control algorithm in real time on a Texas Instruments TMS320F28335 floating-point Digital Signal Processor (DSP). Some experimental results are given to illustrate the practicality and effectiveness of the proposed control system under several operating conditions. The results verify that our method can yield not only good transient and steady-state responses (fast recovery time, small overshoot, zero steady-state error, etc.) but also insensitivity to abrupt load or input voltage parameter variations.
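A hedged sketch of the finite-control-set loop with fuzzy-adapted weights. The converter model, membership function, and all numbers are illustrative, not the paper's design:

```python
import numpy as np

L, C, Rload, Vin, Ts = 1e-3, 470e-6, 20.0, 12.0, 20e-6   # boost converter (assumed)
v_ref, iL_ref = 24.0, 2.4                                # steady-state targets

def predict(iL, vC, u):
    """One-step Euler prediction of the boost converter; u = 1 means switch on."""
    iL_next = iL + Ts * (Vin - (1 - u) * vC) / L
    vC_next = vC + Ts * ((1 - u) * iL - vC / Rload) / C
    return iL_next, vC_next

def fuzzy_weight(err, err_max, w_small=1.0, w_large=10.0):
    """T-S-style rule: interpolate the weight by the 'error is large' membership."""
    mu = min(abs(err) / err_max, 1.0)
    return (1 - mu) * w_small + mu * w_large

iL, vC = 0.0, Vin
for _ in range(5000):
    w_v = fuzzy_weight(v_ref - vC, 12.0)
    w_i = fuzzy_weight(iL_ref - iL, 5.0)
    costs = [w_v * (v_ref - predict(iL, vC, u)[1]) ** 2 +
             w_i * (iL_ref - predict(iL, vC, u)[0]) ** 2 for u in (0, 1)]
    iL, vC = predict(iL, vC, int(np.argmin(costs)))

print(f"vC settles near {vC:.2f} V (target {v_ref} V)")
```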
Eye-gaze and intent: Application in 3D interface control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schryver, J.C.; Goldberg, J.H.
1993-06-01
Computer interface control is typically accomplished with an input "device" such as a keyboard, mouse, trackball, etc. An input device translates a user's input actions, such as mouse clicks and key presses, into appropriate computer commands. To control the interface, the user must first convert intent into the syntax of the input device. A more natural means of computer control is possible when the computer can directly infer user intent, without need of intervening input devices. We describe an application of eye-gaze-contingent control of an interactive three-dimensional (3D) user interface. A salient feature of the user interface is natural input, with a heightened impression of controlling the computer directly by the mind. With this interface, input of rotation and translation are intuitive, whereas other abstract features, such as zoom, are more problematic to match with user intent. This paper describes successes with implementation to date, and ongoing efforts to develop a more sophisticated intent inferencing methodology.
Case studies in Bayesian microbial risk assessments.
Kennedy, Marc C; Clough, Helen E; Turner, Joanne
2009-12-21
The quantification of uncertainty and variability is a key component of quantitative risk analysis. Recent advances in Bayesian statistics make it ideal for integrating multiple sources of information, of different types and quality, and providing a realistic estimate of the combined uncertainty in the final risk estimates. We present two case studies related to foodborne microbial risks. In the first, we combine models to describe the sequence of events resulting in illness from consumption of milk contaminated with VTEC O157. We used Monte Carlo simulation to propagate uncertainty in some of the inputs to computer models describing the farm and pasteurisation process. Resulting simulated contamination levels were then assigned to consumption events from a dietary survey. Finally we accounted for uncertainty in the dose-response relationship and uncertainty due to limited incidence data to derive uncertainty about yearly incidences of illness in young children. Options for altering the risk were considered by running the model with different hypothetical policy-driven exposure scenarios. In the second case study we illustrate an efficient Bayesian sensitivity analysis for identifying the most important parameters of a complex computer code that simulated VTEC O157 prevalence within a managed dairy herd. This was carried out in 2 stages, first to screen out the unimportant inputs, then to perform a more detailed analysis on the remaining inputs. The method works by building a Bayesian statistical approximation to the computer code using a number of known code input/output pairs (training runs). We estimated that the expected total number of children aged 1.5-4.5 who become ill due to VTEC O157 in milk is 8.6 per year, with 95% uncertainty interval (0,11.5). The most extreme policy we considered was banning on-farm pasteurisation of milk, which reduced the estimate to 6.4 with 95% interval (0,11). In the second case study the effective number of inputs was reduced from 30 to 7 in the screening stage, and just 2 inputs were found to explain 82.8% of the output variance. A combined total of 500 runs of the computer code were used. These case studies illustrate the use of Bayesian statistics to perform detailed uncertainty and sensitivity analyses, integrating multiple information sources in a way that is both rigorous and efficient.
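A toy rendering of the first case study's chain, with every distribution invented, to show how Monte Carlo draws through farm, pasteurisation, consumption, and dose-response stages combine into an uncertainty interval on annual cases:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

conc_raw = rng.lognormal(2.0, 1.0, n)          # organisms per litre in raw milk (toy)
log_reduction = rng.normal(5.0, 0.5, n)        # pasteurisation log10 reduction (toy)
serving = rng.uniform(0.1, 0.3, n)             # litres consumed per serving (toy)

dose = conc_raw * 10.0 ** (-log_reduction) * serving
r = rng.beta(2, 2000, n)                       # uncertain dose-response parameter (toy)
p_ill = 1.0 - np.exp(-r * dose)                # exponential dose-response model

exposures_per_year = 2e8                       # population-level servings (toy)
cases = p_ill * exposures_per_year
print(np.percentile(cases, [2.5, 50, 97.5]))   # uncertainty interval on yearly cases
```

The study's second stage, the Bayesian emulator-based sensitivity analysis, replaces the raw code with a statistical approximation so that the same kind of interval can be produced from far fewer model runs.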
Learning from adaptive neural dynamic surface control of strict-feedback systems.
Wang, Min; Wang, Cong
2015-06-01
Learning plays an essential role in autonomous control systems. However, how to achieve learning in a nonstationary environment for nonlinear systems is a challenging problem. In this paper, we present a learning method for a class of nth-order strict-feedback systems using adaptive dynamic surface control (DSC) technology, which achieves the human-like ability of learning by doing and doing with learned knowledge. To achieve the learning, this paper first proposes stable adaptive DSC with auxiliary first-order filters, which ensures the boundedness of all the signals in the closed-loop system and the convergence of tracking errors in a finite time. With the help of DSC, the derivative of the filter output variable is used as the neural network (NN) input instead of traditional intermediate variables. As a result, the proposed adaptive DSC method greatly reduces the dimension of NN inputs, especially for high-order systems. After the stable DSC design, we decompose the stable closed-loop system into a series of linear time-varying perturbed subsystems. Using a recursive design, the recurrent property of the NN input variables is easily verified, since the complexity is overcome using DSC. Subsequently, the partial persistent excitation condition of the radial basis function NN is satisfied. By combining a state transformation, accurate approximations of the closed-loop system dynamics are recursively achieved in a local region along recurrent orbits. Then, a learning control method using the learned knowledge is proposed to achieve closed-loop stability and improved control performance. Simulation studies are performed to demonstrate that the proposed scheme can not only reuse the learned knowledge to achieve better control performance, with a faster tracking convergence rate and a smaller tracking error, but also greatly alleviate the computational burden by reducing the number and complexity of NN input variables.
Allocation of control rights in the PPP Project: a cooperative game model
NASA Astrophysics Data System (ADS)
Zhang, Yunhua; Feng, Jingchun; Yang, Shengtao
2017-06-01
Reasonable allocation of control rights is the key to the success of Public-Private Partnership (PPP) projects. PPPs are services or ventures which are financed and operated through cooperation between governmental and private-sector actors and which involve reasonable sharing of control rights between these two partners. Once a professional firm with capital and technology participates in a PPP project firm as a shareholder, the project's participants and input resources become diversified, and the allocation of control rights of the PPP project tends to become complicated. According to this diversification of participants and input resources, the key participants are divided into professional firms and pure investors. Based on the cost of repurchasing the different input resources in markets, the cooperative game relationship between these two parties is analyzed, on the basis of which an allocation model of the cooperative game for control rights is constructed to determine the optimal allocation ratio of control rights and to verify that the share of control rights is in proportion to the cost of repurchase.
1992 NASA Life Support Systems Analysis workshop
NASA Technical Reports Server (NTRS)
Evanich, Peggy L.; Crabb, Thomas M.; Gartrell, Charles F.
1992-01-01
The 1992 Life Support Systems Analysis Workshop was sponsored by NASA's Office of Aeronautics and Space Technology (OAST) to integrate the inputs from, disseminate information to, and foster communication among NASA, industry, and academic specialists. The workshop continued discussion and definition of key issues identified in the 1991 workshop, including: (1) modeling and experimental validation; (2) definition of systems analysis evaluation criteria; (3) integration of modeling at multiple levels; and (4) assessment of process control modeling approaches. Through both the 1991 and 1992 workshops, NASA has continued to seek input from industry and university chemical process modeling and analysis experts, and to introduce and apply new systems analysis approaches to life support systems. The workshop included technical presentations, discussions, and interactive planning, with sufficient time allocated for discussion of both technology status and technology development recommendations. Key personnel currently involved with life support technology developments from NASA, industry, and academia provided input to the status and priorities of current and future systems analysis methods and requirements.
NASA Astrophysics Data System (ADS)
Dumedah, Gift; Walker, Jeffrey P.
2017-03-01
The sources of uncertainty in land surface models are numerous and varied, from inaccuracies in forcing data to uncertainties in model structure and parameterizations. The majority of these uncertainties are strongly tied to the overall makeup of the model, but the input forcing data set is independent, with its accuracy usually defined by the monitoring or observation system. The impact of input forcing data on model estimation accuracy has been collectively acknowledged to be significant, yet its quantification and the level of uncertainty that is acceptable in the context of the land surface model to obtain a competitive estimation remain mostly unknown. A better understanding is needed about how models respond to input forcing data and what changes in these forcing variables can be accommodated without deteriorating the optimal estimation of the model. As a result, this study determines the level of forcing data uncertainty that is acceptable in the Joint UK Land Environment Simulator (JULES) to competitively estimate soil moisture in the Yanco area in south eastern Australia. The study employs hydro-genomic mapping to examine the temporal evolution of model decision variables from an archive of values obtained from soil moisture data assimilation. The data assimilation (DA) was undertaken using the advanced Evolutionary Data Assimilation. Our findings show that the input forcing data have a significant impact on model output: 35% in root mean square error (RMSE) for the 5 cm depth of soil moisture and 15% in RMSE for the 15 cm depth. This specific quantification is crucial to illustrate the significance of the input forcing data spread. The acceptable uncertainty determined based on the dominant pathway has been validated and shown to be reliable for all forcing variables, so as to provide optimal soil moisture. These findings are crucial for DA in order to account for uncertainties that are meaningful from the model standpoint. Moreover, our results point to a proper treatment of input forcing data in land surface and hydrological model estimation in general.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, Daniel D; Wernicke, A Gabriella; Nori, Dattatreyudu
Purpose/Objective(s): The aim of this study is to build a toxicity estimator using an artificial neural network (ANN) for head and neck (H&N) cancer patients. Materials/Methods: An ANN can combine variables into a predictive model during training and consider all possible correlations of variables. We constructed an ANN based on the data from 73 patients with advanced H&N cancer treated with external beam radiotherapy and/or chemotherapy at our institution. For the toxicity estimator we defined input data including age, sex, site, stage, pathology, status of chemo, technique of external beam radiation therapy (EBRT), length of treatment, dose of EBRT, status of post operation, length of follow-up, and the status of local recurrences and distant metastasis. These data were digitized based on significance and fed to the ANN as input nodes. We used 20 hidden nodes (for the 13 input nodes) to take care of the correlations of the input nodes. For training the ANN, we divided the data into three subsets: a training set, a validation set and a test set. Finally, we built the estimator for toxicity from the ANN output. Results: We used 13 input variables, including the status of local recurrences and distant metastasis, and 20 hidden nodes for correlations. We used 59 patients for the training set, 7 patients for the validation set and 7 patients for the test set, and fed the inputs to the Matlab neural network fitting tool. We trained the network to within 15% error on the outcome. In the end we obtained a toxicity estimator with 74% accuracy. Conclusion: We proved in principle that an ANN can be a very useful tool for predicting RT outcomes for high-risk H&N patients. Currently we are improving the results using cross validation.
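A structural sketch of the described network in scikit-learn: 13 inputs, one hidden layer of 20 nodes, and held-out validation/test data. The data here are random stand-ins, so the printed accuracy is meaningless; only the architecture mirrors the abstract:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(73, 13))     # 13 digitized clinical inputs (random stand-in)
y = rng.integers(0, 2, size=73)   # toxicity outcome (random stand-in)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                    early_stopping=True, validation_fraction=0.1, random_state=0)
clf.fit(scaler.transform(X_tr), y_tr)
print("test accuracy:", clf.score(scaler.transform(X_te), y_te))
```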
SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos
NASA Astrophysics Data System (ADS)
Ahlfeld, R.; Belkouchi, B.; Montomoli, F.
2016-09-01
A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, that was previously only introduced as tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work, is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10 different input distributions or histograms.
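The moment-to-quadrature step at the heart of this construction is standard and compact enough to sketch. Given raw moments of an input distribution, a Cholesky factorisation of the Hankel moment matrix yields the three-term recurrence coefficients of the orthogonal polynomials, and the Golub-Welsch eigenvalue step turns them into Gaussian nodes and weights (a generic illustration, not the SAMBA code):

```python
import numpy as np

def gauss_from_moments(mu):
    """n-point Gauss rule from raw moments mu[0..2n]; requires the Hankel
    moment matrix to be positive definite (the condition quoted above)."""
    n = (len(mu) - 1) // 2
    H = np.array([[mu[i + j] for j in range(n + 1)] for i in range(n + 1)])
    R = np.linalg.cholesky(H).T                    # upper-triangular factor
    d = np.diag(R)
    alpha, beta = np.empty(n), np.empty(n - 1)
    alpha[0] = R[0, 1] / d[0]
    for k in range(1, n):
        alpha[k] = R[k, k + 1] / d[k] - R[k - 1, k] / d[k - 1]
        beta[k - 1] = d[k] / d[k - 1]
    J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)   # Jacobi matrix
    nodes, V = np.linalg.eigh(J)
    return nodes, mu[0] * V[0, :] ** 2             # Golub-Welsch weights

# Moments 1, 0, 1, 0, 3 of a standard normal: recovers nodes +/-1, weights 1/2
print(gauss_from_moments([1, 0, 1, 0, 3]))
```

Because only moments enter, the same routine serves continuous distributions, discrete data sets, or histograms, which is exactly the flexibility the abstract emphasises.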
Bittner, John W.; Biscardi, Richard W.
1991-01-01
An electronic measurement circuit for high speed comparison of the relative amplitudes of a predetermined number of electrical input signals independent of variations in the magnitude of the sum of the signals. The circuit includes a high speed electronic switch that is operably connected to receive on its respective input terminals one of said electrical input signals and to have its common terminal serve as an input for a variable-gain amplifier-detector circuit that is operably connected to feed its output to a common terminal of a second high speed electronic switch. The respective terminals of the second high speed electronic switch are operably connected to a plurality of integrating sample and hold circuits, which in turn have their outputs connected to a summing logic circuit that is operable to develop first, second and third output voltages, the first output voltage being proportional to a predetermined ratio of sums and differences between the compared input signals, the second output voltage being proportional to a second summed ratio of predetermined sums and differences between said input signals, and the third output voltage being proportional to the sum of signals to the summing logic circuit. A servo system that is operably connected to receive said third output signal and compare it with a reference voltage to develop a slowly varying feedback voltage to control the variable-gain amplifier in said common amplifier-detector circuit in order to make said first and second output signals independent of variations in the magnitude of the sum of said input signals.
An exact algebraic solution of the infimum in H-infinity optimization with output feedback
NASA Technical Reports Server (NTRS)
Chen, Ben M.; Saberi, Ali; Ly, Uy-Loi
1991-01-01
This paper presents a simple and noniterative procedure for the computation of the exact value of the infimum in the standard H-infinity-optimal control with output feedback. The problem formulation is general and does not place any restrictions on the direct feedthrough terms between the control input and the controlled output variables, and between the disturbance input and the measurement output variables. The method is applicable to systems that satisfy (1) the transfer function from the control input to the controlled output is right-invertible and has no invariant zeros on the jω axis and, (2) the transfer function from the disturbance to the measurement output is left-invertible and has no invariant zeros on the jω axis. A set of necessary and sufficient conditions for the solvability of the H-infinity-almost disturbance decoupling problem via measurement feedback with internal stability is also given.
Scenario planning for water resource management in semi arid zone
NASA Astrophysics Data System (ADS)
Gupta, Rajiv; Kumar, Gaurav
2018-06-01
Scenario planning for water resource management in a semi-arid zone is performed using the systems input-output approach of time-domain analysis. This approach derives the future weights of the input variables of the hydrological system from their precedent weights. The input variables considered here are precipitation, evaporation, population and crop irrigation. Ingles & De Souza's method and the Thornthwaite model have been used to estimate runoff and evaporation, respectively. The difference between precipitation inflow and the sum of runoff and evaporation has been approximated as groundwater recharge. Population and crop irrigation determine the total water demand. The extent to which groundwater recharge compensates the total water demand has been analyzed, and further compensation has been evaluated by proposing efficient methods of water conservation. The best measure to adopt for water conservation is suggested based on a cost-benefit analysis. A case study of nine villages in the Chirawa region of district Jhunjhunu, Rajasthan (India) validates the model.
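A skeleton of the bookkeeping described, with invented coefficients; fixed runoff and evaporation fractions stand in for the Ingles & De Souza and Thornthwaite estimates:

```python
def water_balance(precip_mm, area_km2, population, crop_ha,
                  runoff_frac=0.15, evap_frac=0.60,
                  lpcd=70.0, crop_need_mm=450.0):
    """Annual balance in million cubic metres (MCM); all coefficients illustrative."""
    inflow = precip_mm * 1e-3 * area_km2 * 1e6 / 1e6          # rainfall volume, MCM
    recharge = inflow * (1.0 - runoff_frac - evap_frac)       # residual to groundwater
    domestic = population * lpcd * 365 / 1e9                  # litres -> MCM
    irrigation = crop_ha * crop_need_mm * 10 / 1e6            # 1 ha-mm = 10 m^3
    demand = domestic + irrigation
    return recharge, demand, demand - recharge

rech, dem, deficit = water_balance(450.0, 120.0, 60_000, 4_000)
print(f"recharge {rech:.1f} MCM, demand {dem:.1f} MCM, deficit {deficit:.1f} MCM")
```

The deficit is the quantity the proposed conservation measures must cover, and comparing the cost of each measure per MCM recovered is the cost-benefit step mentioned above.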
Automatic insulation resistance testing apparatus
Wyant, Francis J.; Nowlen, Steven P.; Luker, Spencer M.
2005-06-14
An apparatus and method for automatic measurement of insulation resistances of a multi-conductor cable. In one embodiment of the invention, the apparatus comprises a power supply source, an input measuring means, an output measuring means, a plurality of input relay controlled contacts, a plurality of output relay controlled contacts, a relay controller and a computer. In another embodiment of the invention the apparatus comprises a power supply source, an input measuring means, an output measuring means, an input switching unit, an output switching unit and a control unit/data logger. Embodiments of the apparatus of the invention may also incorporate cable fire testing means. The apparatus and methods of the present invention use either voltage or current for input and output measured variables.
A stochastic model of input effectiveness during irregular gamma rhythms.
Dumont, Grégory; Northoff, Georg; Longtin, André
2016-02-01
Gamma-band synchronization has been linked to attention and communication between brain regions, yet the underlying dynamical mechanisms are still unclear. How does the timing and amplitude of inputs to cells that generate an endogenously noisy gamma rhythm affect the network activity and rhythm? How does such "communication through coherence" (CTC) survive in the face of rhythm and input variability? We present a stochastic modelling approach to this question that yields a very fast computation of the effectiveness of inputs to cells involved in gamma rhythms. Our work is partly motivated by recent optogenetic experiments (Cardin et al. Nature, 459(7247), 663-667 2009) that tested the gamma phase-dependence of network responses by first stabilizing the rhythm with periodic light pulses to the interneurons (I). Our computationally efficient model E-I network of stochastic two-state neurons exhibits finite-size fluctuations. Using the Hilbert transform and Kuramoto index, we study how the stochastic phase of its gamma rhythm is entrained by external pulses. We then compute how this rhythmic inhibition controls the effectiveness of external input onto pyramidal (E) cells, and how variability shapes the window of firing opportunity. For transferring the time variations of an external input to the E cells, we find a tradeoff between the phase selectivity and depth of rate modulation. We also show that the CTC is sensitive to the jitter in the arrival times of spikes to the E cells, and to the degree of I-cell entrainment. We further find that CTC can occur even if the underlying deterministic system does not oscillate; quasicycle-type rhythms induced by the finite-size noise retain the basic CTC properties. Finally a resonance analysis confirms the relative importance of the I cell pacing for rhythm generation. Analysis of whole network behaviour, including computations of synchrony, phase and shifts in excitatory-inhibitory balance, can be further sped up by orders of magnitude using two coupled stochastic differential equations, one for each population. Our work thus yields a fast tool to numerically and analytically investigate CTC in a noisy context. It shows that CTC can be quite vulnerable to rhythm and input variability, which both decrease phase preference.
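A hedged sketch of the reduced description mentioned in the last sentence: two coupled stochastic rate equations, one per population, integrated by Euler-Maruyama. Parameters are illustrative and not tuned to reproduce a specific gamma frequency:

```python
import numpy as np

rng = np.random.default_rng(7)
dt, steps = 1e-4, 100_000                    # 10 s at 0.1 ms resolution
tauE, tauI = 5e-3, 10e-3                     # population time constants (s)
wEE, wEI, wIE, wII, drive = 1.6, 2.0, 2.2, 0.5, 0.5
sigma = 0.05                                 # finite-size noise strength (assumed)

f = lambda x: 1.0 / (1.0 + np.exp(-x))       # population gain function
E, I = 0.1, 0.1
trace = np.empty(steps)
for t in range(steps):
    dE = (-E + f(wEE * E - wEI * I + drive)) / tauE
    dI = (-I + f(wIE * E - wII * I)) / tauI
    E += dt * dE + sigma * np.sqrt(dt) * rng.standard_normal()
    I += dt * dI + sigma * np.sqrt(dt) * rng.standard_normal()
    trace[t] = E

freqs = np.fft.rfftfreq(steps, dt)
power = np.abs(np.fft.rfft(trace - trace.mean())) ** 2
print(f"spectral peak near {freqs[power.argmax()]:.0f} Hz")
```

Even when the deterministic part of such a system sits at a stable fixed point, the noise terms can sustain a quasicycle whose spectral peak plays the role of the rhythm, which is the regime the abstract highlights.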
Mu, Zhijian; Huang, Aiying; Ni, Jiupai; Xie, Deti
2014-01-01
Organic soils are an important source of N2O, but global estimates of these fluxes remain uncertain because measurements are sparse. We tested the hypothesis that N2O fluxes can be predicted from estimates of mineral nitrogen input, calculated from readily-available measurements of CO2 flux and soil C/N ratio. From studies of organic soils throughout the world, we compiled a data set of annual CO2 and N2O fluxes which were measured concurrently. The input of soil mineral nitrogen in these studies was estimated from applied fertilizer nitrogen and organic nitrogen mineralization. The latter was calculated by dividing the rate of soil heterotrophic respiration by soil C/N ratio. This index of mineral nitrogen input explained up to 69% of the overall variability of N2O fluxes, whereas CO2 flux or soil C/N ratio alone explained only 49% and 36% of the variability, respectively. Including water table level in the model, along with mineral nitrogen input, further improved the model with the explanatory proportion of variability in N2O flux increasing to 75%. Unlike grassland or cropland soils, forest soils were evidently nitrogen-limited, so water table level had no significant effect on N2O flux. Our proposed approach, which uses the product of soil-derived CO2 flux and the inverse of soil C/N ratio as a proxy for nitrogen mineralization, shows promise for estimating regional or global N2O fluxes from organic soils, although some further enhancements may be warranted. PMID:24798347
Antanasijević, Davor Z; Pocajt, Viktor V; Povrenović, Dragan S; Ristić, Mirjana Đ; Perić-Grujić, Aleksandra A
2013-01-15
This paper describes the development of an artificial neural network (ANN) model for the forecasting of annual PM(10) emissions at the national level, using widely available sustainability and economical/industrial parameters as inputs. The inputs for the model were selected and optimized using a genetic algorithm and the ANN was trained using the following variables: gross domestic product, gross inland energy consumption, incineration of wood, motorization rate, production of paper and paperboard, sawn wood production, production of refined copper, production of aluminum, production of pig iron and production of crude steel. The wide availability of the input parameters used in this model can overcome a lack of data and basic environmental indicators in many countries, which can prevent or seriously impede PM emission forecasting. The model was trained and validated with the data for 26 EU countries for the period from 1999 to 2006. PM(10) emission data, collected through the Convention on Long-range Transboundary Air Pollution - CLRTAP and the EMEP Programme or as emission estimations by the Regional Air Pollution Information and Simulation (RAINS) model, were obtained from Eurostat. The ANN model has shown very good performance and demonstrated that the forecast of PM(10) emission up to two years can be made successfully and accurately. The mean absolute error for two-year PM(10) emission prediction was only 10%, which is more than three times better than the predictions obtained from the conventional multi-linear regression and principal component regression models that were trained and tested using the same datasets and input variables.
Karmakar, Chandan; Udhayakumar, Radhagayathri K.; Li, Peng; Venkatesh, Svetha; Palaniswami, Marimuthu
2017-01-01
Distribution entropy (DistEn) is a recently developed measure of complexity that is used to analyse heart rate variability (HRV) data. Its calculation requires two input parameters: the embedding dimension m and the number of bins M, which replaces the tolerance parameter r used by the existing approximate entropy (ApEn) and sample entropy (SampEn) measures. The performance of DistEn can also be affected by the data length N. In our previous studies, we analyzed the stability and performance of DistEn with respect to one parameter (m or M) or a combination of two parameters (N and M). However, the impact of varying all three input parameters on DistEn has not yet been studied. Since DistEn is predominantly aimed at analysing short-length HRV signals, it is important to comprehensively study the stability, consistency and performance of the measure using multiple case studies. In this study, we examined the impact of changing input parameters on DistEn for synthetic and physiological signals. We also compared the variations of DistEn and its performance in distinguishing physiological (Elderly from Young) and pathological (Healthy from Arrhythmia) conditions with ApEn and SampEn. The results showed that DistEn values are minimally affected by variations of the input parameters compared to ApEn and SampEn. DistEn also showed the most consistent and best performance in differentiating physiological and pathological conditions across various input parameter settings among the reported complexity measures. In conclusion, DistEn is found to be the best measure for analysing short-length HRV time series. PMID:28979215
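For readers unfamiliar with the measure, a minimal numpy sketch of the DistEn calculation as commonly described in the literature follows: embed the series with dimension m, histogram the pairwise Chebyshev distances into M bins, and take the normalized Shannon entropy. The parameter values and synthetic series are illustrative only, not the authors' code.

    import numpy as np

    def dist_en(x, m=2, M=512, tau=1):
        """Distribution entropy of a 1-D series (a sketch, not the authors' code)."""
        x = np.asarray(x, dtype=float)
        n = len(x) - (m - 1) * tau
        # State-space reconstruction: n overlapping m-dimensional vectors.
        vectors = np.column_stack([x[i * tau:i * tau + n] for i in range(m)])
        # Chebyshev (max-norm) distances between all distinct vector pairs.
        d = np.abs(vectors[:, None, :] - vectors[None, :, :]).max(axis=2)
        dists = d[np.triu_indices(n, k=1)]
        # Empirical distribution of the distances, estimated with M bins.
        p, _ = np.histogram(dists, bins=M)
        p = p[p > 0] / p.sum()
        # Shannon entropy of the distance distribution, normalized by log2(M).
        return -(p * np.log2(p)).sum() / np.log2(M)

    rr = np.random.default_rng(0).normal(0.8, 0.05, 300)   # synthetic RR intervals (s)
    print(round(dist_en(rr, m=2, M=512), 3))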
Hoppie, Lyle O.
1982-01-12
Disclosed are several embodiments of a regenerative braking device for an automotive vehicle. The device includes a plurality of rubber rollers (24, 26) mounted for rotation between an input shaft (14) connectable to the vehicle drivetrain and an output shaft (16) which is drivingly connected to the input shaft by a variable ratio transmission (20). When the transmission ratio is such that the input shaft rotates faster than the output shaft, the rubber rollers are torsionally stressed to accumulate energy, thereby slowing the vehicle. When the transmission ratio is such that the output shaft rotates faster than the input shaft, the rubber rollers are torsionally relaxed to deliver accumulated energy, thereby accelerating or driving the vehicle.
Post, R.F.
1958-11-11
An electronic computer circuit is described for producing an output voltage proportional to the product or quotient of the voltages of a pair of input signals. In essence, the disclosed invention provides a computer having two channels adapted to receive separate input signals and each having amplifiers with like fixed amplification factors and like negative feedback amplifiers. One of the channels receives a constant signal for comparison purposes, whereby a difference signal is produced to control the amplification factors of the variable feedback amplifiers. The output of the other channel is thereby proportional to the product or quotient of the input signals, depending upon the relation of input to fixed signals in the first mentioned channel.
Advanced Terrain Representation for the Microticcit Workstation: System Maintenance Manual
1986-02-01
/* ... enter the password. */
/* Inputs:  passwd - password to compare user's entry to */
/* Outputs: TRUE - if password entered correctly... */
#include "atrdefs.h"
#include "ctype.h"
extern char window[];    /* useable portion of screen */
getpw( passwd )
char passwd[];
{
    int c...
    /* blank input window */
    pcvgcp(&row, &col);
    curs_off();
    nchars = ntries = 0;
    len = strlen( passwd );
    pcvwca(len, ' ', REVIDEO);
    /* process keys till user
Logistic regression modeling to assess groundwater vulnerability to contamination in Hawaii, USA
NASA Astrophysics Data System (ADS)
Mair, Alan; El-Kadi, Aly I.
2013-10-01
Capture zone analysis combined with a subjective susceptibility index is currently used in Hawaii to assess vulnerability to contamination of drinking water sources derived from groundwater. In this study, we developed an alternative objective approach that combines well capture zones with multiple-variable logistic regression (LR) modeling and applied it to the highly-utilized Pearl Harbor and Honolulu aquifers on the island of Oahu, Hawaii. Input for the LR models utilized explanatory variables based on hydrogeology, land use, and well geometry/location. A suite of 11 target contaminants detected in the region, including elevated nitrate (> 1 mg/L), four chlorinated solvents, four agricultural fumigants, and two pesticides, was used to develop the models. We then tested the ability of the new approach to accurately separate groups of wells with low and high vulnerability, and the suitability of nitrate as an indicator of other types of contamination. Our results produced contaminant-specific LR models that accurately identified groups of wells with the lowest/highest reported detections and the lowest/highest nitrate concentrations. Current and former agricultural land uses were identified as significant explanatory variables for eight of the 11 target contaminants, while elevated nitrate was a significant variable for five contaminants. The utility of the combined approach is contingent on the availability of hydrologic and chemical monitoring data for calibrating groundwater and LR models. Application of the approach using a reference site with sufficient data could help identify key variables in areas with similar hydrogeology and land use but limited data. In addition, elevated nitrate may also be a suitable indicator of groundwater contamination in areas with limited data. The objective LR modeling approach developed in this study is flexible enough to address a wide range of contaminants and represents a suitable addition to the current subjective approach.
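A schematic of the multiple-variable logistic regression step, assuming a hypothetical table of wells with explanatory variables and a binary detection outcome; the variable names and coefficients are invented for illustration, and scikit-learn is used for brevity.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n_wells = 200
    # Hypothetical per-well explanatory variables: recharge (mm/yr), fraction of
    # the capture zone under current/former agriculture, and well depth (m).
    X = np.column_stack([
        rng.uniform(100, 800, n_wells),
        rng.uniform(0.0, 1.0, n_wells),
        rng.uniform(50, 400, n_wells),
    ])
    # Hypothetical binary outcome: contaminant detected (1) or not (0).
    logit = -2.0 + 0.002 * X[:, 0] + 2.5 * X[:, 1] - 0.004 * X[:, 2]
    y = (rng.random(n_wells) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

    Xz = (X - X.mean(axis=0)) / X.std(axis=0)    # standardize for a stable fit
    model = LogisticRegression().fit(Xz, y)
    # Predicted detection probability ranks wells from low to high vulnerability.
    vulnerability = model.predict_proba(Xz)[:, 1]
    print(model.coef_.round(2), vulnerability[:5].round(2))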
ERIC Educational Resources Information Center
Ramírez-Esparza, Nairán; García-Sierra, Adrián; Kuhl, Patricia K.
2017-01-01
This study tested the impact of child-directed language input on language development in Spanish-English bilingual infants (N = 25, 11- and 14-month-olds from the Seattle metropolitan area), across languages and independently for each language, controlling for socioeconomic status. Language input was characterized by social interaction variables,…
TRANDESNF: A computer program for transonic airfoil design and analysis in nonuniform flow
NASA Technical Reports Server (NTRS)
Chang, J. F.; Lan, C. Edward
1987-01-01
The use of a transonic airfoil code for analysis, inverse design, and direct optimization of an airfoil immersed in propfan slipstream is described. A summary of the theoretical method, program capabilities, input format, output variables, and program execution are described. Input data of sample test cases and the corresponding output are given.
MURI: Impact of Oceanographic Variability on Acoustic Communications
2011-09-01
multiplexing (OFDM), multiple-input/multiple-output (MIMO) transmissions, and multi-user single-input/multiple-output (SIMO) communications. Lastly... MIMO-OFDM communications: Receiver design for Doppler distorted underwater acoustic channels," Proc. Asilomar Conf. on Signals, Systems, and... (MIMO) will be of particular interest. Validating experimental data will be obtained during the ONR acoustic communications experiment in summer 2008
NASA Astrophysics Data System (ADS)
Carter, Jeffrey R.; Simon, Wayne E.
1990-08-01
Neural networks are trained using Recursive Error Minimization (REM) equations to perform statistical classification. Using REM equations with continuous input variables reduces the required number of training experiences by factors of one to two orders of magnitude over standard back propagation. Replacing the continuous input variables with discrete binary representations reduces the number of connections by a factor proportional to the number of variables, reducing the required number of experiences by another order of magnitude. Undesirable effects of using recurrent experience to train neural networks for statistical classification problems are demonstrated, and nonrecurrent experience is used to avoid these undesirable effects. 1. THE 1-4I PROBLEM The statistical classification problem which we address is that of assigning points in d-dimensional space to one of two classes. The first class has a covariance matrix of I (the identity matrix); the covariance matrix of the second class is 4I. For this reason the problem is known as the 1-4I problem. Both classes have equal probability of occurrence and samples from both classes may appear anywhere throughout the d-dimensional space. Most samples near the origin of the coordinate system will be from the first class while most samples away from the origin will be from the second class. Since the two classes completely overlap it is impossible to have a classifier with zero error. The minimum possible error is known as the Bayes error and
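For the 1-4I problem the Bayes rule has a closed form: the log-likelihood ratio of N(0, I) against N(0, 4I) reduces to a threshold on the squared radius, with class 1 chosen when ||x||^2 < (8/3) d ln 2. A short Monte Carlo check of the resulting error (dimension and sample count are arbitrary here):

    import numpy as np

    d, n = 4, 200_000
    rng = np.random.default_rng(2)

    x1 = rng.normal(0.0, 1.0, (n, d))    # class 1: covariance I
    x2 = rng.normal(0.0, 2.0, (n, d))    # class 2: covariance 4I (std dev 2)

    # Likelihood-ratio test: choose class 1 iff ||x||^2 < (8/3) * d * ln 2.
    thresh = (8.0 / 3.0) * d * np.log(2.0)
    err1 = np.mean((x1 ** 2).sum(axis=1) >= thresh)   # class 1 called class 2
    err2 = np.mean((x2 ** 2).sum(axis=1) < thresh)    # class 2 called class 1
    print("estimated Bayes error:", round(0.5 * (err1 + err2), 4))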
Fuzzy parametric uncertainty analysis of linear dynamical systems: A surrogate modeling approach
NASA Astrophysics Data System (ADS)
Chowdhury, R.; Adhikari, S.
2012-10-01
Uncertainty propagation in engineering systems poses significant computational challenges. This paper explores the possibility of using a correlated function expansion based metamodelling approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of High-Dimensional Model Representation (HDMR) is proposed for fuzzy finite element analysis of dynamical systems. The HDMR expansion is a set of quantitative model assessment and analysis tools for capturing high-dimensional input-output system behavior based on a hierarchy of functions of increasing dimensions. The input variables may be either finite-dimensional (i.e., a vector of parameters chosen from the Euclidean space R^M) or may be infinite-dimensional as in the function space C^M[0,1]. The computational effort to determine the expansion functions using the alpha cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is integrated with commercial finite element software. Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations.
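The alpha-cut propagation that underlies such a fuzzy analysis can be illustrated without the HDMR surrogate: at each membership level the fuzzy inputs become intervals, and the output interval is obtained by minimizing and maximizing the response over the input box. The sketch below brute-forces a toy one-degree-of-freedom frequency; the paper's approach would substitute the HDMR metamodel for the response function.

    import numpy as np

    def response(k, m):
        """Toy response: natural frequency (Hz) of a one-DOF spring-mass system."""
        return np.sqrt(k / m) / (2.0 * np.pi)

    # Triangular fuzzy numbers (left, peak, right) for stiffness and mass.
    k_tri = (900.0, 1000.0, 1100.0)    # N/m
    m_tri = (0.9, 1.0, 1.1)            # kg

    def alpha_interval(tri, a):
        left, peak, right = tri
        return left + a * (peak - left), right - a * (right - peak)

    for a in np.linspace(0.0, 1.0, 5):          # membership levels (alpha cuts)
        k_lo, k_hi = alpha_interval(k_tri, a)
        m_lo, m_hi = alpha_interval(m_tri, a)
        # Brute-force search over the input box at this membership level.
        kk, mm = np.meshgrid(np.linspace(k_lo, k_hi, 50), np.linspace(m_lo, m_hi, 50))
        f = response(kk, mm)
        print(f"alpha = {a:.2f}: frequency in [{f.min():.3f}, {f.max():.3f}] Hz")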
The method of belief scales as a means for dealing with uncertainty in tough regulatory decisions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pilch, Martin M.
Modeling and simulation is playing an increasing role in supporting tough regulatory decisions, which are typically characterized by variabilities and uncertainties in the scenarios, input conditions, failure criteria, model parameters, and even model form. Variability exists when there is a statistically significant database that is fully relevant to the application. Uncertainty, on the other hand, is characterized by some degree of ignorance. A simple algebraic problem was used to illustrate how various risk methodologies address variability and uncertainty in a regulatory context. These traditional risk methodologies include probabilistic methods (including frequentist and Bayesian perspectives) and second-order methods where variabilities and uncertainties are treated separately. Representing uncertainties with (subjective) probability distributions and using probabilistic methods to propagate subjective distributions can lead to results that are not logically consistent with available knowledge and that may not be conservative. The Method of Belief Scales (MBS) is developed as a means to logically aggregate uncertain input information and to propagate that information through the model to a set of results that are scrutable, easily interpretable by the nonexpert, and logically consistent with the available input information. The MBS, particularly in conjunction with sensitivity analyses, has the potential to be more computationally efficient than other risk methodologies. The regulatory language must be tailored to the specific risk methodology if ambiguity and conflict are to be avoided.
Feeney, Daniel F; Meyer, François G; Noone, Nicholas; Enoka, Roger M
2017-10-01
Motor neurons appear to be activated with a common input signal that modulates the discharge activity of all neurons in the motor nucleus. It has proven difficult for neurophysiologists to quantify the variability in a common input signal, but characterization of such a signal may improve our understanding of how the activation signal varies across motor tasks. Contemporary methods of quantifying the common input to motor neurons rely on compiling discrete action potentials into continuous time series, assuming the motor pool acts as a linear filter, and requiring signals to be of sufficient duration for frequency analysis. We introduce a state-space model in which the discharge activity of motor neurons is modeled as inhomogeneous Poisson processes and propose a method to quantify an abstract latent trajectory that represents the common input received by motor neurons. The approach also approximates the variation in synaptic noise in the common input signal. The model is validated with four data sets: a simulation of 120 motor units, a pair of integrate-and-fire neurons with a Renshaw cell providing inhibitory feedback, the discharge activity of 10 integrate-and-fire neurons, and the discharge times of concurrently active motor units during an isometric voluntary contraction. The simulations revealed that a latent state-space model is able to quantify the trajectory and variability of the common input signal across all four conditions. When compared with the cumulative spike train method of characterizing common input, the state-space approach was more sensitive to the details of the common input current and was less influenced by the duration of the signal. The state-space approach appears to be capable of detecting rather modest changes in common input signals across conditions. NEW & NOTEWORTHY We propose a state-space model that explicitly delineates a common input signal sent to motor neurons and the physiological noise inherent in synaptic signal transmission. This is the first application of a deterministic state-space model to represent the discharge characteristics of motor units during voluntary contractions. Copyright © 2017 the American Physiological Society.
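The generative side of such a model is straightforward to sketch: a slowly varying latent common input drives every motor neuron, whose spikes are inhomogeneous Poisson counts. The sketch below simulates only this forward model, with all parameters invented; the state-space inference step described in the abstract is the harder part and is omitted.

    import numpy as np

    rng = np.random.default_rng(3)
    dt, T, n_units = 0.001, 10.0, 10        # 1 ms bins, 10 s, 10 motor units
    n_bins = int(T / dt)

    # Latent common input: slowly varying first-order autoregressive drive.
    x = np.zeros(n_bins)
    for i in range(1, n_bins):
        x[i] = 0.999 * x[i - 1] + rng.normal(0.0, 0.02)

    # Each unit has its own baseline log-rate and gain on the common input.
    baseline = rng.uniform(1.5, 2.5, n_units)
    gain = rng.uniform(0.5, 1.5, n_units)

    # Inhomogeneous Poisson spike counts for every unit and time bin.
    rates = np.exp(baseline[:, None] + gain[:, None] * x[None, :])  # spikes/s
    spikes = rng.poisson(rates * dt)

    print("mean firing rates (Hz):", (spikes.sum(axis=1) / T).round(1))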
Gehring, Tobias; Händchen, Vitus; Duhme, Jörg; Furrer, Fabian; Franz, Torsten; Pacher, Christoph; Werner, Reinhard F; Schnabel, Roman
2015-10-30
Secret communication over public channels is one of the central pillars of a modern information society. Using quantum key distribution this is achieved without relying on the hardness of mathematical problems, which might be compromised by improved algorithms or by future quantum computers. State-of-the-art quantum key distribution requires composable security against coherent attacks for a finite number of distributed quantum states as well as robustness against implementation side channels. Here we present an implementation of continuous-variable quantum key distribution satisfying these requirements. Our implementation is based on the distribution of continuous-variable Einstein-Podolsky-Rosen entangled light. It is one-sided device independent, which means the security of the generated key is independent of any memoryfree attacks on the remote detector. Since continuous-variable encoding is compatible with conventional optical communication technology, our work is a step towards practical implementations of quantum key distribution with state-of-the-art security based solely on telecom components.
Gehring, Tobias; Händchen, Vitus; Duhme, Jörg; Furrer, Fabian; Franz, Torsten; Pacher, Christoph; Werner, Reinhard F.; Schnabel, Roman
2015-01-01
Secret communication over public channels is one of the central pillars of a modern information society. Using quantum key distribution this is achieved without relying on the hardness of mathematical problems, which might be compromised by improved algorithms or by future quantum computers. State-of-the-art quantum key distribution requires composable security against coherent attacks for a finite number of distributed quantum states as well as robustness against implementation side channels. Here we present an implementation of continuous-variable quantum key distribution satisfying these requirements. Our implementation is based on the distribution of continuous-variable Einstein–Podolsky–Rosen entangled light. It is one-sided device independent, which means the security of the generated key is independent of any memoryfree attacks on the remote detector. Since continuous-variable encoding is compatible with conventional optical communication technology, our work is a step towards practical implementations of quantum key distribution with state-of-the-art security based solely on telecom components. PMID:26514280
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W; Violette, Daniel M.
Addressing other evaluation issues that have been raised in the context of energy efficiency programs, this chapter focuses on methods used to address the persistence of energy savings, which is an important input to the benefit/cost analysis of energy efficiency programs and portfolios. In addition to discussing 'persistence' (which refers to the stream of benefits over time from an energy efficiency measure or program), this chapter provides a summary treatment of these issues: synergies across programs; rebound; dual baselines; and errors in variables (the measurement and/or accuracy of input variables to the evaluation).
NASA Technical Reports Server (NTRS)
Newsom, J. R.; Mukhopadhyay, V.
1983-01-01
A method for designing robust feedback controllers for multiloop systems is presented. Robustness is characterized in terms of the minimum singular value of the system return difference matrix at the plant input. Analytical gradients of the singular values with respect to design variables in the controller are derived. A cumulative measure of the singular values and their gradients with respect to the design variables is used with a numerical optimization technique to increase the system's robustness. Both unconstrained and constrained optimization techniques are evaluated. Numerical results are presented for a two-input/two-output drone flight control system.
Guo, Changning; Doub, William H; Kauffman, John F
2010-08-01
Monte Carlo simulations were applied to investigate how the propagation of uncertainty in both input variables and response measurements affects model predictions for nasal spray product performance design of experiment (DOE) models in the first part of this study, with an initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard the initial assumption and extend the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution to understand the propagation of uncertainty in complex DOE models so that design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association
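The kind of simulation described can be reproduced in miniature: perturb both the DOE input settings and the responses, refit the model each time, and inspect the spread of the coefficients. The two-factor linear model and all noise magnitudes below are made up for illustration.

    import numpy as np

    rng = np.random.default_rng(4)
    # Nominal DOE settings: 2^2 factorial with a center point (two factors).
    X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0]], dtype=float)
    beta_true = np.array([10.0, 2.0, -1.5])      # intercept and two main effects
    y_nominal = beta_true[0] + X @ beta_true[1:]

    coefs = []
    for _ in range(5000):
        X_err = X + rng.normal(0.0, 0.05, X.shape)                   # input variation
        y_err = y_nominal + rng.normal(0.0, 0.2, y_nominal.shape)    # response variation
        A = np.column_stack([np.ones(len(X_err)), X_err])
        coefs.append(np.linalg.lstsq(A, y_err, rcond=None)[0])

    print("coefficient standard deviations:", np.std(coefs, axis=0).round(4))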
NASA Astrophysics Data System (ADS)
Behera, Kishore Kumar; Pal, Snehanshu
2018-03-01
This paper describes a new approach towards optimum utilisation of ferrochrome added during stainless steel making in the AOD converter. The objective of optimisation is to enhance the end-blow chromium content of the steel and reduce the ferrochrome addition during refining. By developing a thermodynamics-based mathematical model, a study has been conducted to compute the optimum trade-off between ferrochrome addition and end-blow chromium content of stainless steel using a predator-prey genetic algorithm, through training on 100 datasets considering different input and output variables such as oxygen, argon and nitrogen blowing rates, duration of blowing, initial bath temperature, chromium and carbon content, and weight of ferrochrome added during refining. Optimisation is performed within constraints imposed on the input parameters, whose values fall within certain ranges. The analysis of Pareto fronts is observed to generate a set of feasible optimal solutions between the two conflicting objectives that provides an effective guideline for better ferrochrome utilisation. It is found that after a certain critical range, further addition of ferrochrome does not affect the chromium percentage of the steel. Single-variable response analysis is performed to study the variation and interaction of all individual input parameters on the output variables.
Variability of perceptual multistability: from brain state to individual trait
Kleinschmidt, Andreas; Sterzer, Philipp; Rees, Geraint
2012-01-01
Few phenomena are as suitable as perceptual multistability to demonstrate that the brain constructively interprets sensory input. Several studies have outlined the neural circuitry involved in generating perceptual inference but only more recently has the individual variability of this inferential process been appreciated. Studies of the interaction of evoked and ongoing neural activity show that inference itself is not merely a stimulus-triggered process but is related to the context of the current brain state into which the processing of external stimulation is embedded. As brain states fluctuate, so does perception of a given sensory input. In multistability, perceptual fluctuation rates are consistent for a given individual but vary considerably between individuals. There has been some evidence for a genetic basis for these individual differences and recent morphometric studies of parietal lobe regions have identified neuroanatomical substrates for individual variability in spontaneous switching behaviour. Moreover, disrupting the function of these latter regions by transcranial magnetic stimulation yields systematic interference effects on switching behaviour, further arguing for a causal role of these regions in perceptual inference. Together, these studies have advanced our understanding of the biological mechanisms by which the brain constructs the contents of consciousness from sensory input. PMID:22371620
Sequential Modular Position and Momentum Measurements of a Trapped Ion Mechanical Oscillator
NASA Astrophysics Data System (ADS)
Flühmann, C.; Negnevitsky, V.; Marinelli, M.; Home, J. P.
2018-04-01
The noncommutativity of position and momentum observables is a hallmark feature of quantum physics. However, this incompatibility does not extend to observables that are periodic in these base variables. Such modular-variable observables have been suggested as tools for fault-tolerant quantum computing and enhanced quantum sensing. Here, we implement sequential measurements of modular variables in the oscillatory motion of a single trapped ion, using state-dependent displacements and a heralded nondestructive readout. We investigate the commutative nature of modular variable observables by demonstrating no-signaling in time between successive measurements, using a variety of input states. Employing a different periodicity, we observe signaling in time. This also requires wave-packet overlap, resulting in quantum interference that we enhance using squeezed input states. The sequential measurements allow us to extract two-time correlators for modular variables, which we use to violate a Leggett-Garg inequality. Signaling in time and Leggett-Garg inequalities serve as efficient quantum witnesses, which we probe here with a mechanical oscillator, a system that has a natural crossover from the quantum to the classical regime.
Moreira, Fabiana Tavares; Prantoni, Alessandro Lívio; Martini, Bruno; de Abreu, Michelle Alves; Stoiev, Sérgio Biato; Turra, Alexander
2016-01-15
Microplastics such as pellets have been reported for many years on sandy beaches around the globe. Nevertheless, high variability is observed in their estimates and distribution patterns across the beach environment are still to be unravelled. Here, we investigate the small-scale temporal and spatial variability in the abundance of pellets in the intertidal zone of a sandy beach and evaluate factors that can increase the variability in data sets. The abundance of pellets was estimated during twelve consecutive tidal cycles, identifying the position of the high tide between cycles and sampling drift-lines across the intertidal zone. We demonstrate that beach dynamic processes such as the overlap of strandlines and artefacts of the methods can increase the small-scale variability. The results obtained are discussed in terms of the methodological considerations needed to understand the distribution of pellets in the beach environment, with special implications for studies focused on patterns of input. Copyright © 2015 Elsevier Ltd. All rights reserved.
Data analytics using canonical correlation analysis and Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Rickman, Jeffrey M.; Wang, Yan; Rollett, Anthony D.; Harmer, Martin P.; Compson, Charles
2017-07-01
A canonical correlation analysis is a generic parametric model used in the statistical analysis of data involving interrelated or interdependent input and output variables. It is especially useful in data analytics as a dimensional reduction strategy that simplifies a complex, multidimensional parameter space by identifying a relatively few combinations of variables that are maximally correlated. One shortcoming of the canonical correlation analysis, however, is that it provides only a linear combination of variables that maximizes these correlations. With this in mind, we describe here a versatile, Monte-Carlo based methodology that is useful in identifying non-linear functions of the variables that lead to strong input/output correlations. We demonstrate that our approach leads to a substantial enhancement of correlations, as illustrated by two experimental applications of substantial interest to the materials science community, namely: (1) determining the interdependence of processing and microstructural variables associated with doped polycrystalline aluminas, and (2) relating microstructural descriptors to the electrical and optoelectronic properties of thin-film solar cells based on CuInSe2 absorbers. Finally, we describe how this approach facilitates experimental planning and process control.
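A bare-bones sketch of the idea: compute the leading canonical correlation (here via orthonormal bases from the SVD), then Monte Carlo over power transforms of individual variables, keeping any proposal that increases the correlation. The data-generating function and search ranges are invented; the study's own methodology is richer.

    import numpy as np

    def first_canonical_corr(X, Y):
        """Leading canonical correlation between the columns of X and Y."""
        Xc = X - X.mean(axis=0)
        Yc = Y - Y.mean(axis=0)
        # Orthonormal bases of the two column spaces; the canonical correlations
        # are the singular values of the product of the bases.
        Ux = np.linalg.svd(Xc, full_matrices=False)[0]
        Uy = np.linalg.svd(Yc, full_matrices=False)[0]
        return np.linalg.svd(Ux.T @ Uy, compute_uv=False)[0]

    rng = np.random.default_rng(5)
    n = 300
    x1 = rng.uniform(0.5, 2.0, n)                  # e.g. a processing variable
    x2 = rng.uniform(0.5, 2.0, n)
    y = (x1 ** 2) * x2 + rng.normal(0.0, 0.1, n)   # nonlinear input/output link
    X, Y = np.column_stack([x1, x2]), y[:, None]

    best, powers = first_canonical_corr(X, Y), np.ones(2)
    for _ in range(2000):                          # Monte Carlo over power transforms
        trial = powers.copy()
        trial[rng.integers(2)] = rng.uniform(0.25, 3.0)
        r = first_canonical_corr(X ** trial, Y)
        if r > best:
            best, powers = r, trial
    print("leading correlation:", round(best, 3), "with powers:", powers.round(2))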
Soares, Marta O.; Palmer, Stephen; Ades, Anthony E.; Harrison, David; Shankar-Hari, Manu; Rowan, Kathy M.
2015-01-01
Cost-effectiveness analysis (CEA) models are routinely used to inform health care policy. Key model inputs include relative effectiveness of competing treatments, typically informed by meta-analysis. Heterogeneity is ubiquitous in meta-analysis, and random effects models are usually used when there is variability in effects across studies. In the absence of observed treatment effect modifiers, various summaries from the random effects distribution (random effects mean, predictive distribution, random effects distribution, or study-specific estimate [shrunken or independent of other studies]) can be used depending on the relationship between the setting for the decision (population characteristics, treatment definitions, and other contextual factors) and the included studies. If covariates have been measured that could potentially explain the heterogeneity, then these can be included in a meta-regression model. We describe how covariates can be included in a network meta-analysis model and how the output from such an analysis can be used in a CEA model. We outline a model selection procedure to help choose between competing models and stress the importance of clinical input. We illustrate the approach with a health technology assessment of intravenous immunoglobulin for the management of adult patients with severe sepsis in an intensive care setting, which exemplifies how risk of bias information can be incorporated into CEA models. We show that the results of the CEA and value-of-information analyses are sensitive to the model and highlight the importance of sensitivity analyses when conducting CEA in the presence of heterogeneity. The methods presented extend naturally to heterogeneity in other model inputs, such as baseline risk. PMID:25712447
NASA Astrophysics Data System (ADS)
Wang, Tian; Cui, Xiaoxin; Ni, Yewen; Liao, Kai; Liao, Nan; Yu, Dunshan; Cui, Xiaole
2017-04-01
With shrinking transistor feature size, the fin-type field-effect transistor (FinFET) has become the most promising option in low-power circuit design due to its superior capability to suppress leakage. To support the VLSI digital system flow based on logic synthesis, we have designed an optimized high-performance low-power FinFET standard cell library based on employing the mixed FBB/RBB technique in the existing stacked structure of each cell. This paper presents the reliability evaluation of the optimized cells under process and operating environment variations based on Monte Carlo analysis. The variations are modelled with Gaussian distributions of the device parameters, and 10000 sweeps are conducted in the simulation to obtain the statistical properties of the worst-case delay and input-dependent leakage for each cell. For comparison, a set of non-optimal cells that adopt the same topology without employing the mixed biasing technique is also generated. Experimental results show that the optimized cells achieve standard deviation reductions of up to 39.1% and 30.7% in worst-case delay and input-dependent leakage, respectively, while the reduction in normalized deviation of worst-case delay and input-dependent leakage can be up to 98.37% and 24.13%, respectively, which demonstrates that our optimized cells are less sensitive to variability and exhibit greater reliability. Project supported by the National Natural Science Foundation of China (No. 61306040), the State Key Development Program for Basic Research of China (No. 2015CB057201), the Beijing Natural Science Foundation (No. 4152020), and Natural Science Foundation of Guangdong Province, China (No. 2015A030313147).
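The flavor of such a Monte Carlo evaluation, reduced to a toy alpha-power-law delay model with Gaussian spread on two device parameters; all numbers are arbitrary and not taken from the cited library.

    import numpy as np

    rng = np.random.default_rng(9)
    n_sweeps = 10_000
    # Gaussian spread on two device parameters (process variation).
    vth = rng.normal(0.35, 0.02, n_sweeps)       # threshold voltage, V
    leff = rng.normal(14e-9, 0.5e-9, n_sweeps)   # effective channel length, m

    vdd = 0.8
    # Toy alpha-power-law delay: proportional to L * Vdd / (Vdd - Vth)^alpha.
    delay = leff * vdd / (vdd - vth) ** 1.3

    print(f"delay sigma/mu = {delay.std() / delay.mean():.2%}")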
Annual land cover change mapping using MODIS time series to improve emissions inventories.
NASA Astrophysics Data System (ADS)
López Saldaña, G.; Quaife, T. L.; Clifford, D.
2014-12-01
Understanding and quantifying land surface changes is necessary for estimating greenhouse gas and ammonia emissions, and for meeting air quality limits and targets. More sophisticated inventory methodologies, at least for key emission sources, are needed because of policy-driven air quality directives. Quantifying land cover changes on an annual basis requires greater spatial and temporal disaggregation of input data. The main aim of this study is to develop a methodology for using Earth Observations (EO) to identify annual land surface changes that will improve emissions inventories from agriculture and land use/land use change and forestry (LULUCF) in the UK. The first goal is to find the best sets of input features that accurately describe the surface dynamics. In order to identify annual and inter-annual land surface changes, a time series of surface reflectance was used to capture seasonal variability. Daily surface reflectance images from the Moderate Resolution Imaging Spectroradiometer (MODIS) at 500 m resolution were used to invert a Bidirectional Reflectance Distribution Function (BRDF) model to create the seamless time series. Given the limited number of cloud-free observations, a BRDF climatology was used to constrain the model inversion and, where no high-scientific-quality observations were available at all, as a gap filler. The Land Cover Map 2007 (LC2007) produced by the Centre for Ecology & Hydrology (CEH) was used for training and testing purposes. A prototype land cover product was created for 2006 to 2008. Several machine learning classifiers were tested, as well as different sets of input features ranging from the BRDF parameters to spectral albedo. We will present the results of the time series development and the first exercises in creating the prototype land cover product.
Welton, Nicky J; Soares, Marta O; Palmer, Stephen; Ades, Anthony E; Harrison, David; Shankar-Hari, Manu; Rowan, Kathy M
2015-07-01
Cost-effectiveness analysis (CEA) models are routinely used to inform health care policy. Key model inputs include relative effectiveness of competing treatments, typically informed by meta-analysis. Heterogeneity is ubiquitous in meta-analysis, and random effects models are usually used when there is variability in effects across studies. In the absence of observed treatment effect modifiers, various summaries from the random effects distribution (random effects mean, predictive distribution, random effects distribution, or study-specific estimate [shrunken or independent of other studies]) can be used depending on the relationship between the setting for the decision (population characteristics, treatment definitions, and other contextual factors) and the included studies. If covariates have been measured that could potentially explain the heterogeneity, then these can be included in a meta-regression model. We describe how covariates can be included in a network meta-analysis model and how the output from such an analysis can be used in a CEA model. We outline a model selection procedure to help choose between competing models and stress the importance of clinical input. We illustrate the approach with a health technology assessment of intravenous immunoglobulin for the management of adult patients with severe sepsis in an intensive care setting, which exemplifies how risk of bias information can be incorporated into CEA models. We show that the results of the CEA and value-of-information analyses are sensitive to the model and highlight the importance of sensitivity analyses when conducting CEA in the presence of heterogeneity. The methods presented extend naturally to heterogeneity in other model inputs, such as baseline risk. © The Author(s) 2015.
Boahen, Kwabena
2013-01-01
A fundamental question in neuroscience is how neurons perform precise operations despite inherent variability. This question also applies to neuromorphic engineering, where low-power microchips emulate the brain using large populations of diverse silicon neurons. Biological neurons in the auditory pathway display precise spike timing, critical for sound localization and interpretation of complex waveforms such as speech, even though they are a heterogeneous population. Silicon neurons are also heterogeneous, due to a key design constraint in neuromorphic engineering: smaller transistors offer lower power consumption and more neurons per unit area of silicon, but also more variability between transistors and thus between silicon neurons. Utilizing this variability in a neuromorphic model of the auditory brain stem with 1,080 silicon neurons, we found that a low-voltage-activated potassium conductance (gKL) enables precise spike timing via two mechanisms: statically reducing the resting membrane time constant and dynamically suppressing late synaptic inputs. The relative contribution of these two mechanisms is unknown because blocking gKL in vitro eliminates dynamic adaptation but also lengthens the membrane time constant. We replaced gKL with a static leak in silico to recover the short membrane time constant and found that silicon neurons could mimic the spike-time precision of their biological counterparts, but only over a narrow range of stimulus intensities and biophysical parameters. The dynamics of gKL were required for precise spike timing robust to stimulus variation across a heterogeneous population of silicon neurons, thus explaining how neural and neuromorphic systems may perform precise operations despite inherent variability. PMID:23554436
NASA Astrophysics Data System (ADS)
Wang, Tianyang; Liang, Ming; Li, Jianyong; Cheng, Weidong; Li, Chuan
2015-10-01
The interfering vibration signals of a gearbox often represent a challenging issue in rolling bearing fault detection and diagnosis, particularly under unknown variable rotational speed conditions. Though some methods have been proposed to remove the gearbox interfering signals based on their discrete frequency nature, such methods may not work well under unknown variable speed conditions. As such, we propose a new approach to address this issue. The new approach consists of three main steps: (a) adaptive gear interference removal, (b) fault characteristic order (FCO) based fault detection, and (c) rotational-order-sideband (ROS) based fault type identification. For gear interference removal, an enhanced adaptive noise cancellation (ANC) algorithm has been developed in this study. The new ANC algorithm does not require an additional accelerometer to provide the reference input. Instead, the reference signal is adaptively constructed from signal maxima and the instantaneous dominant meshing multiple (IDMM) trend. Key ANC parameters such as filter length and step size have also been tailored to suit the variable speed conditions. The main advantage of using ROS for fault type diagnosis is that it is insusceptible to confusion caused by the co-existence of bearing and gear rotational frequency peaks in the identification of the bearing fault characteristic frequency in the FCO sub-order region. The effectiveness of the proposed method has been demonstrated using both simulation and experimental data. Our experimental study also indicates that the proposed method is applicable regardless of whether the bearing and gear rotational speeds are proportional to each other or not.
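The core of an ANC scheme of this kind is an adaptive filter that predicts the gear interference from a reference signal and subtracts it, leaving the bearing component in the residual. The sketch below uses a plain LMS update and an idealized reference; the paper's reference construction from signal maxima and the IDMM trend, and its parameter tailoring, are not reproduced.

    import numpy as np

    def lms_anc(primary, reference, n_taps=32, mu=0.005):
        """Subtract the part of `primary` predictable from `reference` (plain LMS)."""
        w = np.zeros(n_taps)
        residual = np.zeros_like(primary)
        for i in range(n_taps, len(primary)):
            x = reference[i - n_taps:i][::-1]   # most recent reference samples
            y_hat = w @ x                       # predicted gear interference
            e = primary[i] - y_hat              # residual carries the bearing signal
            w += 2.0 * mu * e * x               # LMS weight update
            residual[i] = e
        return residual

    fs = 10_000
    t = np.arange(0, 1.0, 1.0 / fs)
    rng = np.random.default_rng(6)
    gear = np.sin(2 * np.pi * 357.0 * t)    # gear-mesh interference
    bearing = 0.2 * np.sign(np.sin(2 * np.pi * 91.0 * t)) * np.sin(2 * np.pi * 2000.0 * t)
    measured = gear + bearing + 0.05 * rng.normal(size=t.size)

    cleaned = lms_anc(measured, gear)       # idealized reference, for illustration only
    print("variance before/after:", np.var(measured).round(3), np.var(cleaned[32:]).round(3))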
NASA Technical Reports Server (NTRS)
1993-01-01
Using chordic technology, a data entry operator can finger key combinations for text or graphics input. Because only one hand is needed, a disabled person may use it. Strain and fatigue are less than when using a conventional keyboard; input is faster, and the system can be learned in about an hour. Infogrip, Inc. developed chordic input technology with Stennis Space Center (SSC). (NASA is interested in potentially faster human/computer interaction on spacecraft as well as a low cost tactile/visual training system for the handicapped.) The company is now marketing the BAT as an improved system for both disabled and non-disabled computer operators.
ERIC Educational Resources Information Center
Aguilar, Jessica M.; Plante, Elena; Sandoval, Michelle
2018-01-01
Purpose: Variability in the input plays an important role in language learning. The current study examined the role of object variability for new word learning by preschoolers with specific language impairment (SLI). Method: Eighteen 4- and 5-year-old children with SLI were taught 8 new words in 3 short activities over the course of 3 sessions.…
Mathematical Model of Solidification During Electroslag Casting of Pilger Roll
NASA Astrophysics Data System (ADS)
Liu, Fubin; Li, Huabing; Jiang, Zhouhua; Dong, Yanwu; Chen, Xu; Geng, Xin; Zang, Ximin
A mathematical model was developed to describe the interaction of multiple physical fields in the slag bath and the solidification process in the ingot during casting of a pilger roll with variable cross-section produced by the electroslag casting (ESC) process. The commercial software ANSYS was applied to calculate the electromagnetic field, magnetically driven fluid flow, buoyancy-driven flow and heat transfer. The transport phenomena in the slag bath and the solidification characteristics of the ingot are analyzed for the variable cross-section with variable input power, for 9Cr3NiMo steel and a 70%CaF2 - 30%Al2O3 slag system. The calculated results show that the current density distribution, velocity patterns and temperature profiles in the slag bath, and the metal pool profiles in the ingot, differ distinctly across the variable cross-section owing to differences in input power and cooling conditions. The pool shape and the local solidification time (LST) during the pilger roll ESC process are analyzed.
Spike-Threshold Adaptation Predicted by Membrane Potential Dynamics In Vivo
Fontaine, Bertrand; Peña, José Luis; Brette, Romain
2014-01-01
Neurons encode information in sequences of spikes, which are triggered when their membrane potential crosses a threshold. In vivo, the spiking threshold displays large variability suggesting that threshold dynamics have a profound influence on how the combined input of a neuron is encoded in the spiking. Threshold variability could be explained by adaptation to the membrane potential. However, it could also be the case that most threshold variability reflects noise and processes other than threshold adaptation. Here, we investigated threshold variation in auditory neurons responses recorded in vivo in barn owls. We found that spike threshold is quantitatively predicted by a model in which the threshold adapts, tracking the membrane potential at a short timescale. As a result, in these neurons, slow voltage fluctuations do not contribute to spiking because they are filtered by threshold adaptation. More importantly, these neurons can only respond to input spikes arriving together on a millisecond timescale. These results demonstrate that fast adaptation to the membrane potential captures spike threshold variability in vivo. PMID:24722397
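The adaptation mechanism can be written as a single first-order equation: the threshold relaxes toward a function of the membrane potential with a short time constant, so slow depolarizations drag the threshold up with them and only fast transients cross it. A sketch with invented parameter values, not those fitted in the study:

    import numpy as np

    dt, tau_theta = 0.1, 5.0                     # ms; short threshold time constant
    theta0, alpha, v_rest = -50.0, 0.8, -65.0    # mV

    t = np.arange(0.0, 200.0, dt)
    # Membrane potential: slow depolarizing ramp plus brief fast transients.
    v = v_rest + 0.05 * t + 15.0 * ((t % 50.0) < 1.0)

    theta = np.full_like(t, theta0)
    spikes = []
    for i in range(1, len(t)):
        # Threshold relaxes toward a linear function of the membrane potential.
        theta_inf = theta0 + alpha * (v[i] - v_rest)
        theta[i] = theta[i - 1] + dt * (theta_inf - theta[i - 1]) / tau_theta
        if v[i] >= theta[i] and v[i - 1] < theta[i - 1]:
            spikes.append(t[i])

    # Only the fast transients cross threshold; the slow ramp is filtered out.
    print("spike times (ms):", np.round(spikes, 1))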
NASA Astrophysics Data System (ADS)
Beekman, F.; Hardebol, N.; Cloetingh, S.; Tesauro, M.
2006-12-01
Better understanding of the 3D rheological heterogeneity of the European lithosphere provides the key to tie the recorded intraplate deformation pattern to stress fields transmitted into the plate interior from plate boundary forces. The first order strain patterns result from stresses transmitted through the European lithosphere, which is marked by a patchwork of high strength variability arising from inherited structural and compositional heterogeneities and upper mantle thermal perturbations. As the lithospheric rheology depends primarily on its spatial structure, composition and thermal state, the 3D strength model for the European lithosphere relies on a 3D compositional model that yields the compositional heterogeneities and an iteratively calculated thermal cube using Fourier's law for heat conduction. The accurate appraisal of spatial strength variability results from proper mapping and integration of the geophysical, compositional and thermal input parameters. Therefore, much attention has been paid to a proper description of the first order structural and tectonic features that facilitate compilation of the compositional and thermal input models. As such, the 3D strength model reflects the thermo-mechanical structure inherited from Europe's polyphase deformation history. Major 3D spatial mechanical strength variability has been revealed. The East-European and Fennoscandian Craton to the NE exhibit high strength (30-50 × 10^12 N/m) from low mantle temperatures and surface heat flow of 35-60 mW/m^2, while central and western Europe reflect a polyphase Phanerozoic thermo-tectonic history. Here, regions with high rigidity are formed primarily by patches of thermally stabilized Variscan Massifs (e.g. Rhenish, Armorican, Bohemian, and Iberian Massifs) with low heat flow and lithospheric thickness values (50-65 mW/m^2; 110-150 km), yielding strengths of ~15-25 × 10^12 N/m. In contrast, major axes of weakened lithosphere coincide with the Cenozoic Rift System (e.g. Upper and Lower Rhine Grabens, Pannonian Basin and Massif Central), attributed to the presence of tomographically imaged plumes. This study has elucidated the memory of the present-day European lithosphere induced by compositional and thermal heterogeneities. The resulting lateral strength variations carry a clear signature of the lithosphere's past polyphase deformation and also entail active tectonics, tectonically induced topography and surface processes.
Long-distance continuous-variable quantum key distribution by controlling excess noise
NASA Astrophysics Data System (ADS)
Huang, Duan; Huang, Peng; Lin, Dakai; Zeng, Guihua
2016-01-01
Quantum cryptography founded on the laws of physics could revolutionize the way in which communication information is protected. Significant progress in long-distance quantum key distribution based on discrete variables has led to secure quantum communication in real-world conditions becoming available. However, the alternative approach implemented with continuous variables has not yet reached a secure distance beyond 100 km. Here, we overcome the previous range limitation by controlling system excess noise and report such a long-distance continuous-variable quantum key distribution experiment. Our result paves the way to large-scale secure quantum communication with continuous variables and serves as a stepping stone in the quest for a quantum network.
Long-distance continuous-variable quantum key distribution by controlling excess noise.
Huang, Duan; Huang, Peng; Lin, Dakai; Zeng, Guihua
2016-01-13
Quantum cryptography founded on the laws of physics could revolutionize the way in which communication information is protected. Significant progress in long-distance quantum key distribution based on discrete variables has led to secure quantum communication in real-world conditions becoming available. However, the alternative approach implemented with continuous variables has not yet reached a secure distance beyond 100 km. Here, we overcome the previous range limitation by controlling system excess noise and report such a long-distance continuous-variable quantum key distribution experiment. Our result paves the way to large-scale secure quantum communication with continuous variables and serves as a stepping stone in the quest for a quantum network.
Long-distance continuous-variable quantum key distribution by controlling excess noise
Huang, Duan; Huang, Peng; Lin, Dakai; Zeng, Guihua
2016-01-01
Quantum cryptography founded on the laws of physics could revolutionize the way in which communication information is protected. Significant progress in long-distance quantum key distribution based on discrete variables has led to secure quantum communication in real-world conditions becoming available. However, the alternative approach implemented with continuous variables has not yet reached a secure distance beyond 100 km. Here, we overcome the previous range limitation by controlling system excess noise and report such a long-distance continuous-variable quantum key distribution experiment. Our result paves the way to large-scale secure quantum communication with continuous variables and serves as a stepping stone in the quest for a quantum network. PMID:26758727
Manufacturing of ionic polymer-metal composites (IPMCs) that can actuate into complex curves
NASA Astrophysics Data System (ADS)
Stoimenov, Boyko L.; Rossiter, Jonathan M.; Mukai, Toshiharu
2007-04-01
Ionic polymer-metal composites (IPMC) are soft actuators with potential applications in the fields of medicine and biologically inspired robotics. Typically, an IPMC bends with approximately constant curvature when voltage is applied to it. More complex shapes were achieved in the past by pre-shaping the actuator or by segmentation and separate actuation of each segment. There are many applications for which fully independent control of each segment of the IPMC is not required and the use of external wiring is objectionable. In this paper we propose two key elements needed to create an IPMC, which can actuate into a complex curve. The first is a connection between adjacent segments, which enables opposite curvature. This can be achieved by reversing the polarity applied on each side of the IPMC, for example by a through-hole connection. The second key element is a variable curvature segment. The segment is designed to bend with any fraction of its full bending ability under given electrical input by changing the overlap of opposite charge electrodes. We demonstrated the usefulness of these key elements in two devices. One is a bi-stable buckled IPMC beam, also used as a building block in a linear actuator device. The other one is an IPMC, actuating into an S-shaped curve with gradually increasing curvature near the ends. The proposed method of manufacturing holds promise for a wide range of new applications of IPMCs, including applications in which IPMCs are used for sensing.
EPA announced the availability of the final report, Uncertainty and Variability in Physiologically-Based Pharmacokinetic (PBPK) Models: Key Issues and Case Studies. This report summarizes some of the recent progress in characterizing uncertainty and variability in physi...
A cortical motor nucleus drives the basal ganglia-recipient thalamus in singing birds
Goldberg, Jesse H.
2012-01-01
The pallido-recipient thalamus transmits information from the basal ganglia (BG) to the cortex and plays a critical role in motor initiation and learning. Thalamic activity is strongly inhibited by pallidal inputs from the BG, but the role of non-pallidal inputs, such as excitatory inputs from cortex, is unclear. We have recorded simultaneously from presynaptic pallidal axon terminals and postsynaptic thalamocortical neurons in a BG-recipient thalamic nucleus necessary for vocal variability and learning in zebra finches. We found that song-locked rate modulations in the thalamus could not be explained by pallidal inputs alone, and persisted following pallidal lesion. Instead, thalamic activity was likely driven by inputs from a motor ‘cortical’ nucleus also necessary for singing. These findings suggest a role for cortical inputs to the pallido-recipient thalamus in driving premotor signals important for exploratory behavior and learning. PMID:22327474
A state comparison amplifier with feed forward state correction
NASA Astrophysics Data System (ADS)
Mazzarella, Luca; Donaldson, Ross; Collins, Robert; Zanforlin, Ugo; Buller, Gerald; Jeffers, John
2017-04-01
The Quantum State Comparison AMPlifier (SCAMP) is a probabilistic amplifier that works for known sets of coherent states. The input state is mixed with a guess state at a beam splitter and one of the output ports is coupled to a detector. The other output contains the amplified state, which is accepted on the condition that no counts are recorded. The system uses only classical resources and has been shown to achieve high gain and repetition rate. However, the output fidelity is not high enough for most quantum communication purposes. Here we show how the success probability and fidelity are enhanced by repeated comparison stages, conditioning later state choices on the outcomes of earlier detections. A detector firing at an early stage means that a guess is wrong. This knowledge allows us to correct the state perfectly. The system requires fast switching between different input states, but still requires only classical resources. Figures of merit compare favourably with other schemes; most notably, the probability-fidelity product is higher than for unambiguous state discrimination. Due to its simplicity, the system is a candidate to counteract quantum signal degradation in a lossy fibre or as a quantum receiver to improve the key rate of continuous-variable quantum communication. The work was supported by the QComm Project of the UK Engineering and Physical Sciences Research Council (EP/M013472/1).
Application of uncertainty and sensitivity analysis to the air quality SHERPA modelling tool
NASA Astrophysics Data System (ADS)
Pisoni, E.; Albrecht, D.; Mara, T. A.; Rosati, R.; Tarantola, S.; Thunis, P.
2018-06-01
Air quality has significantly improved in Europe over the past few decades. Nonetheless we still find high concentrations in measurements, mainly in specific regions or cities. This dimensional shift, from EU-wide to hot-spot exceedances, calls for a novel approach to regional air quality management (to complement EU-wide existing policies). The SHERPA (Screening for High Emission Reduction Potentials on Air quality) modelling tool was developed in this context. It provides an additional tool to be used in support of regional/local decision makers responsible for the design of air quality plans. It is therefore important to evaluate the quality of the SHERPA model, and its behavior in the face of various kinds of uncertainty. Uncertainty and sensitivity analysis techniques can be used for this purpose. They both reveal the links between assumptions and forecasts, help in model simplification, and may highlight unexpected relationships between inputs and outputs. Thus, a policy-steered SHERPA module - predicting air quality improvement linked to emission reduction scenarios - was evaluated by means of (1) uncertainty analysis (UA) to quantify uncertainty in the model output, and (2) sensitivity analysis (SA) to identify the most influential input sources of this uncertainty. The results of this study provide relevant information about the key variables driving the SHERPA output uncertainty, and advise policy-makers and modellers where to place their efforts for an improved decision-making process.
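In miniature, the UA/SA pair looks like this: sample the uncertain inputs and propagate them through the model to obtain the output distribution (UA), then estimate each input's first-order contribution to the output variance (SA), here with a simple pick-freeze Sobol estimator on a stand-in function rather than the actual SHERPA module.

    import numpy as np

    def model(x):
        """Stand-in response, not the actual SHERPA source-receptor module."""
        return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.2 * x[:, 0] * x[:, 2]

    rng = np.random.default_rng(7)
    N, k = 50_000, 3
    A = rng.uniform(0.0, 1.0, (N, k))
    B = rng.uniform(0.0, 1.0, (N, k))

    # Uncertainty analysis: distribution of the output under input uncertainty.
    yA = model(A)
    print(f"UA: mean = {yA.mean():.3f}, std = {yA.std():.3f}")

    # Sensitivity analysis: first-order Sobol indices by the pick-freeze trick.
    yB = model(B)
    for i in range(k):
        C = B.copy()
        C[:, i] = A[:, i]                      # freeze input i, resample the rest
        S_i = (np.mean(yA * model(C)) - yA.mean() * yB.mean()) / yA.var()
        print(f"S_{i} ~ {S_i:.3f}")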
The water quality of the LOCAR Pang and Lambourn catchments
NASA Astrophysics Data System (ADS)
Neal, C.; Jarvie, H. P.; Wade, A. J.; Neal, M.; Wyatt, R.; Wickham, H.; Hill, L.; Hewitt, N.
The water quality of the Pang and Lambourn, tributaries of the River Thames, in south-eastern England, is described in relation to spatial and temporal dimensions. The river waters are supplied mainly from Chalk-fed aquifer sources and are, therefore, of a calcium-bicarbonate type. The major, minor and trace element chemistry of the rivers is controlled by a combination of atmospheric and pollutant inputs from agriculture and sewage sources superimposed on a background water quality signal linked to geological sources. Water quality does not vary greatly over time or space. However, in detail, there are differences in water quality between the Pang and Lambourn and between sites along the Pang and the Lambourn. These differences reflect hydrological processes, water flow pathways and water quality input fluxes. The Pang’s pattern of water quality change is more variable than that of the Lambourn. The flow hydrograph also shows both a cyclical and "uniform pattern" characteristic of aquifer drainage with, superimposed, a series of "flashier" spiked responses characteristic of karstic systems. The Lambourn, in contrast, shows simpler features without the "flashier" responses. The results are discussed in relation to the newly developed UK community programme LOCAR dealing with Lowland Catchment Research. A descriptive and box model structure is provided to describe the key features of water quality variations in relation to soil, unsaturated and groundwater flows and storage both away from and close to the river.
Aspen-triticale alleycropping system: effects of landscape position and fertilizer rate
W.L. Headlee; R.B. Hall; R.S. Jr. Zalesny
2010-01-01
Short-rotation woody crops offer several key advantages over other potential bioenergy feedstocks, particularly with regard to nutrient inputs and biomass storage. However, a key disadvantage is a lack of income for the grower early in the rotation. Alleycropping offers the opportunity to grow annual crops for income while the trees become established.
Prestressed elastomer for energy storage
Hoppie, Lyle O.; Speranza, Donald
1982-01-01
Disclosed is a regenerative braking device for an automotive vehicle. The device includes a power isolating assembly (14), an infinitely variable transmission (20) interconnecting an input shaft (16) with an output shaft (18), and an energy storage assembly (22). The storage assembly includes a plurality of elastomeric rods (44, 46) mounted for rotation and connected in series between the input and output shafts. The elastomeric rods are prestressed along their rotational or longitudinal axes to inhibit buckling of the rods due to torsional stressing of the rods in response to relative rotation of the input and output shafts.
Feeney, Daniel F; Mani, Diba; Enoka, Roger M
2018-06-07
We investigated the associations between grooved pegboard times, force steadiness (coefficient of variation for force), and variability in an estimate of the common synaptic input to motor neurons innervating the wrist extensor muscles during steady contractions performed by young and older adults. The discharge times of motor units were derived from recordings obtained with high-density surface electrodes while participants performed steady isometric contractions at 10% and 20% of maximal voluntary contraction (MVC) force. The steady contractions were performed with a pinch grip and wrist extension, both independently (single action) and concurrently (double action). The variance in common synaptic input to motor neurons was estimated with a state-space model of the latent common input dynamics. There was a statistically significant association between the coefficient of variation for force during the steady contractions and the estimated variance in common synaptic input in young (r² = 0.31) and older (r² = 0.39) adults, but not between either the mean or the coefficient of variation for interspike interval of single motor units and the coefficient of variation for force. Moreover, the estimated variance in common synaptic input during the double-action task with the wrist extensors at the 20% target was significantly associated with grooved pegboard time (r² = 0.47) for older adults, but not young adults. These findings indicate that longer pegboard times of older adults were associated with worse force steadiness and greater fluctuations in the estimated common synaptic input to motor neurons during steady contractions.
2011-07-25
…testing, the EFTR must be keyed with the same key used to encrypt the Enhanced Flight Termination System (EFTS) message. To ensure identical keys… required to verify the proper state. e. Procedure. (1) Pull up the EFTS graphical user interface (GUI) (Figure 3). (2) Click "Receiver Power On"… commanded-mode steady-state input currents will not exceed their specified values. [Figure 3: EFTS GUI]
Deng, Rongkang; Kao, Joseph P Y; Kanold, Patrick O
2017-05-09
GABAergic activity is important in neocortical development and plasticity. Because the maturation of GABAergic interneurons is regulated by neural activity, the source of excitatory inputs to GABAergic interneurons plays a key role in development. We show, by laser-scanning photostimulation, that layer 4 and layer 5 GABAergic interneurons in the auditory cortex in neonatal mice (
A Model of Career Success: A Longitudinal Study of Emergency Physicians
ERIC Educational Resources Information Center
Pachulicz, Sarah; Schmitt, Neal; Kuljanin, Goran
2008-01-01
Objective and subjective career success were hypothesized to mediate the relationships between sociodemographic variables, human capital indices, individual difference variables, and organizational sponsorship as inputs and a retirement decision and intentions to leave either the specialty of emergency medicine (EM) or medicine as output…
Relationship between cotton yield and soil electrical conductivity, topography, and landsat imagery
USDA-ARS?s Scientific Manuscript database
Understanding spatial and temporal variability in crop yield is a prerequisite to implementing site-specific management of crop inputs. Apparent soil electrical conductivity (ECa), soil brightness, and topography are easily obtained data that can explain yield variability. The objectives of this stu...
Input filter compensation for switching regulators
NASA Technical Reports Server (NTRS)
Kelkar, S. S.; Lee, F. C.
1983-01-01
A novel input filter compensation scheme for a buck regulator that eliminates the interaction between the input filter output impedance and the regulator control loop is presented. The scheme is implemented using a feedforward loop that senses the input filter state variables and uses this information to modulate the duty cycle signal. The feedforward design process presented is seen to be straightforward and the feedforward easy to implement. Extensive experimental data supported by analytical results show that significant performance improvement is achieved with the use of feedforward in the following performance categories: loop stability, audiosusceptibility, output impedance and transient response. The use of feedforward results in isolating the switching regulator from its power source thus eliminating all interaction between the regulator and equipment upstream. In addition the use of feedforward removes some of the input filter design constraints and makes the input filter design process simpler thus making it possible to optimize the input filter. The concept of feedforward compensation can also be extended to other types of switching regulators.
Production Economics of Private Forestry: A Comparison of Industrial and Nonindustrial Forest Owners
David H. Newman; David N. Wear
1993-01-01
This paper compares the production behavior of industrial and nonindustrial private forestland owners in the southeastern U.S. using a restricted profit function. Profits are modeled as a function of two outputs (sawtimber and pulpwood), one variable input (regeneration effort), and two quasi-fixed inputs (land and growing stock). Although an identical profit function is...
Nitrogen input from residential lawn care practices in suburban watersheds in Baltimore county, MD
Neely L. Law; Lawrence E. Band; J. Morgan Grove
2004-01-01
A residential lawn care survey was conducted as part of the Baltimore Ecosystem Study, a Long-term Ecological Research project funded by the National Science Foundation and collaborating agencies, to estimate the nitrogen input to urban watersheds from lawn care practices. The variability in the fertilizer N application rates and the factors affecting the application...
Darold P. Batzer; Brian J. Palik
2007-01-01
Aquatic invertebrates are crucial components of foodwebs in seasonal woodland ponds, and leaf litter is probably the most important food resource for those organisms. We quantified the influence of leaf litter inputs on aquatic invertebrates in two seasonal woodland ponds using an interception experiment. Ponds were hydrologically split using a sandbag-plastic barrier...
J.W. Hornbeck; S.W. Bailey; D.C. Buso; J.B. Shanley
1997-01-01
Chemistry of precipitation and streamwater and resulting input-output budgets for nutrient ions were determined concurrently for three years on three upland, forested watersheds located within an 80 km radius in central New England. Chemistry of precipitation and inputs of nutrients via wet deposition were similar among the three watersheds and were generally typical...
Fusion of Hard and Soft Information in Nonparametric Density Estimation
2015-06-10
Density estimation is needed for generation of input densities to simulation and stochastic optimization models, in analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum… an essential step in simulation analysis and stochastic optimization is the generation of probability densities for input random variables; see for…
Variable input observer for structural health monitoring of high-rate systems
NASA Astrophysics Data System (ADS)
Hong, Jonathan; Laflamme, Simon; Cao, Liang; Dodson, Jacob
2017-02-01
The development of high-rate structural health monitoring methods is intended to provide damage detection on timescales of 10 µs to 10 ms, where speed of detection is critical to maintain structural integrity. Here, a novel Variable Input Observer (VIO) coupled with an adaptive observer is proposed as a potential solution for complex high-rate problems. The VIO is designed to adapt its input space based on real-time identification of the system's essential dynamics. By selecting appropriate time-delayed coordinates, defined by both a time delay and an embedding dimension, the proper input space is chosen, which allows more accurate estimation of the current state and a reduction of the convergence time. The optimal time delay is estimated based on mutual information, and the embedding dimension is based on false nearest neighbors. A simulation of the VIO is conducted on a two degree-of-freedom system with simulated damage. Results are compared with an adaptive Luenberger observer, a fixed time-delay observer, and a Kalman filter. Under its preliminary design, the VIO converges significantly faster than the Luenberger and fixed observers. It performed similarly to the Kalman filter in terms of convergence, but with greater accuracy.
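A sketch of one ingredient of the delay-selection step: estimating mutual information between a signal and its delayed copy, then picking the delay at the minimum of the curve (the canonical choice is the first local minimum). The histogram estimator and test signal are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Histogram-based mutual information between two scalar series.
def mutual_information(x, y, bins=32):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(1), pxy.sum(0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

# Synthetic test signal: noisy sinusoid.
t = np.linspace(0, 50, 5000)
signal = np.sin(t) + 0.1 * np.random.default_rng(11).normal(size=t.size)

# MI between the signal and its d-step delayed copy, for a range of delays.
mi = [mutual_information(signal[:-d], signal[d:]) for d in range(1, 200)]
delay = int(np.argmin(mi)) + 1   # canonical choice: first local minimum
print("selected delay:", delay)
```

False-nearest-neighbors selection of the embedding dimension would follow the same pattern: increase the dimension until the fraction of neighbors that separate under one more coordinate falls below a threshold.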
Family medicine outpatient encounters are more complex than those of cardiology and psychiatry.
Katerndahl, David; Wood, Robert; Jaén, Carlos Roberto
2011-01-01
Comparison studies suggest that the guideline-concordant care provided for specific medical conditions is less optimal in primary care compared with cardiology and psychiatry settings. The purpose of this study is to estimate the relative complexity of patient encounters in general/family practice, cardiology, and psychiatry settings. A secondary analysis of the 2000 National Ambulatory Medical Care Survey data for ambulatory patients seen in general/family practice, cardiology, and psychiatry settings was performed. The complexity for each variable was estimated as the quantity weighted by variability and diversity. There is minimal difference in the unadjusted input and total encounter complexity of general/family practice and cardiology; psychiatry's input is less complex. Cardiology encounters involved more input quantitatively, but the diversity of general/family practice input eliminated the difference. Cardiology also involved more complex output. However, when the duration of visit is factored in, the complexity of care provided per hour in general/family practice is 33% greater relative to cardiology and 5 times greater relative to psychiatry. Care during family physician visits is more complex per hour than the care during visits to cardiologists or psychiatrists. This may account for a lower rate of completion of process items measured for quality of care.
Scaling of global input-output networks
NASA Astrophysics Data System (ADS)
Liang, Sai; Qi, Zhengling; Qu, Shen; Zhu, Ji; Chiu, Anthony S. F.; Jia, Xiaoping; Xu, Ming
2016-06-01
Examining scaling patterns of networks can help understand how structural features relate to the behavior of the networks. Input-output networks consist of industries as nodes and inter-industrial exchanges of products as links. Previous studies consider limited measures for node strengths and link weights, and also ignore the impact of dataset choice. We consider a comprehensive set of indicators in this study that are important in economic analysis, and also examine the impact of dataset choice, by studying input-output networks in individual countries and the entire world. Results show that Burr, Log-Logistic, Log-normal, and Weibull distributions can better describe scaling patterns of global input-output networks. We also find that dataset choice has limited impacts on the observed scaling patterns. Our findings can help examine the quality of economic statistics, estimate missing data in economic statistics, and identify key nodes and links in input-output networks to support economic policymaking.
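As a hedged illustration of the distribution-fitting step, the SciPy sketch below fits the four families named above to synthetic heavy-tailed "link weights" (fisk is SciPy's log-logistic); the data and diagnostic are placeholders, not the study's:

```python
import numpy as np
from scipy import stats

# Hypothetical link weights from an input-output network (inter-industry
# flows); synthetic heavy-tailed data stands in for them here.
rng = np.random.default_rng(1)
weights = stats.lognorm.rvs(s=1.2, scale=50.0, size=5000, random_state=rng)

candidates = {
    "burr": stats.burr,
    "log-logistic": stats.fisk,   # SciPy's name for the log-logistic
    "log-normal": stats.lognorm,
    "weibull": stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(weights)
    ks = stats.kstest(weights, dist.name, args=params)  # goodness of fit
    print(f"{name:12s} KS={ks.statistic:.3f}")
```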
DREAM-3D and the importance of model inputs and boundary conditions
NASA Astrophysics Data System (ADS)
Friedel, Reiner; Tu, Weichao; Cunningham, Gregory; Jorgensen, Anders; Chen, Yue
2015-04-01
Recent work on radiation belt 3D diffusion codes such as the Los Alamos "DREAM-3D" code have demonstrated the ability of such codes to reproduce realistic magnetospheric storm events in the relativistic electron dynamics - as long as sufficient "event-oriented" boundary conditions and code inputs such as wave powers, low energy boundary conditions, background plasma densities, and last closed drift shell (outer boundary) are available. In this talk we will argue that the main limiting factor in our modeling ability is no longer our inability to represent key physical processes that govern the dynamics of the radiation belts (radial, pitch angle and energy diffusion) but rather our limitations in specifying accurate boundary conditions and code inputs. We use here DREAM-3D runs to show the sensitivity of the modeled outcomes to these boundary conditions and inputs, and also discuss alternate "proxy" approaches to obtain the required inputs from other (ground-based) sources.
NASA Astrophysics Data System (ADS)
Selle, B.; Schwientek, M.
2012-04-01
Water quality of ground and surface waters in catchments is typically driven by many complex and interacting processes. While small-scale processes are often studied in great detail, their relevance and interplay at catchment scales often remain poorly understood. For many catchments, extensive monitoring data on water quality have been collected for different purposes. These heterogeneous data sets contain valuable information on catchment-scale processes but are rarely analysed using integrated methods. Principal component analysis (PCA) has previously been applied to this kind of data set. However, a detailed analysis of scores, which are an important result of a PCA, is often missing. Mathematically, PCA expresses measured variables on water quality, e.g. nitrate concentrations, as linear combinations of independent, not directly observable key processes. These computed key processes are represented by principal components. Their scores are interpretable as process intensities which vary in space and time. Subsequently, scores can be correlated with other key variables and catchment characteristics, such as water travel times and land use, that were not considered in the PCA. This detailed analysis of scores represents an extension of the commonly applied PCA which could considerably improve the understanding of processes governing water quality at catchment scales. In this study, we investigated the 170 km² Ammer catchment in SW Germany, which is characterised by an above-average proportion of agricultural (71%) and urban (17%) areas. The Ammer River is mainly fed by karstic springs. For PCA, we separately analysed concentrations from (a) surface waters of the Ammer River and its tributaries, (b) spring waters from the main aquifers and (c) deep groundwater from production wells. This analysis was extended by a detailed analysis of scores. We analysed measured concentrations of major ions and selected organic micropollutants. Additionally, redox-sensitive variables and environmental tracers indicating groundwater age were analysed for deep groundwater from production wells. For deep groundwater, we found that microbial turnover was more strongly influenced by local availability of energy sources than by travel times of groundwater to the wells. Groundwater quality primarily reflected the input of pollutants determined by land use, e.g. agrochemicals. We concluded that for water quality in the Ammer catchment, conservative mixing of waters with different origins is more important than reactive transport processes along the flow path.
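A minimal sketch of the extended score analysis, assuming synthetic data in place of the Ammer measurements: extract components, take the scores as process intensities, then correlate them with a variable not used in the PCA (the travel-time vector here is hypothetical):

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical monitoring matrix: rows = samples, columns = water-quality
# variables (e.g., nitrate, chloride, sulfate, ...). Synthetic data here.
rng = np.random.default_rng(2)
X = rng.normal(size=(120, 6))

# Standardize, then extract principal components as candidate "processes".
Xs = (X - X.mean(0)) / X.std(0)
pca = PCA(n_components=3).fit(Xs)
scores = pca.transform(Xs)          # process "intensities" per sample

# Extended score analysis: correlate scores with a variable outside the
# PCA, e.g. groundwater travel time (hypothetical placeholder vector).
travel_time = rng.normal(size=120)
for k in range(scores.shape[1]):
    r = np.corrcoef(scores[:, k], travel_time)[0, 1]
    print(f"PC{k+1}: explained={pca.explained_variance_ratio_[k]:.2f}, r={r:+.2f}")
```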
Inverse modeling of geochemical and mechanical compaction in sedimentary basins
NASA Astrophysics Data System (ADS)
Colombo, Ivo; Porta, Giovanni Michele; Guadagnini, Alberto
2015-04-01
We study key phenomena driving the feedback between sediment compaction processes and fluid flow in stratified sedimentary basins formed through lithification of sand and clay sediments after deposition. Processes we consider are mechanical compaction of the host rock and geochemical compaction due to quartz cementation in sandstones. Key objectives of our study include (i) the quantification of the influence of the uncertainty of the model input parameters on the model output and (ii) the application of an inverse modeling technique to field-scale data. Proper accounting of the feedback between sediment compaction processes and fluid flow in the subsurface is key to quantify a wide set of environmentally and industrially relevant phenomena. These include, e.g., compaction-driven brine and/or saltwater flow at deep locations and its influence on (a) tracer concentrations observed in shallow sediments, (b) build-up of fluid overpressure, (c) hydrocarbon generation and migration, (d) subsidence due to groundwater and/or hydrocarbon withdrawal, and (e) formation of ore deposits. Main processes driving the diagenesis of sediments after deposition are mechanical compaction due to overburden and precipitation/dissolution associated with reactive transport. The natural evolution of sedimentary basins is characterized by geological time scales, thus preventing direct and exhaustive measurement of the system's dynamical changes. The outputs of compaction models are plagued by uncertainty because of the incomplete knowledge of the models and parameters governing diagenesis. Development of robust methodologies for inverse modeling and parameter estimation under uncertainty is therefore crucial to the quantification of natural compaction phenomena. We employ a numerical methodology based on three building blocks: (i) space-time discretization of the compaction process; (ii) representation of target output variables through a Polynomial Chaos Expansion (PCE); and (iii) model inversion (parameter estimation) within a maximum likelihood framework. In this context, the PCE-based surrogate model enables one to (i) minimize the computational cost associated with the (forward and inverse) modeling procedures leading to uncertainty quantification and parameter estimation, and (ii) compute the full set of Sobol indices quantifying the contribution of each uncertain parameter to the variability of target state variables. Results are illustrated through the simulation of one-dimensional test cases. The analysis focuses on the calibration of model parameters using field cases from the literature. The quality of parameter estimates is then analyzed as a function of number, type and location of data.
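The PCE machinery can be sketched compactly. Assuming two independent standard-normal parameters and a toy stand-in for the compaction simulator, this fits a degree-2 probabilists'-Hermite expansion by least squares and reads first-order Sobol indices off the squared coefficients, which is valid because the basis is orthonormal under the input measure:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def he(n, x):
    # Orthonormal probabilists' Hermite polynomial He_n / sqrt(n!).
    c = np.zeros(n + 1); c[n] = 1.0
    return hermeval(x, c) / math.sqrt(math.factorial(n))

# Multi-indices of total degree <= 2 for two inputs.
alphas = [(i, j) for i in range(3) for j in range(3) if i + j <= 2]

def forward_model(x1, x2):
    # Hypothetical stand-in for the compaction simulator output.
    return 1.0 + 0.8 * x1 + 0.3 * x2 + 0.4 * x1 * x2 + 0.2 * x2**2

# Regression points and least-squares fit of the PCE coefficients.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 2))
Y = forward_model(X[:, 0], X[:, 1])
Phi = np.column_stack([he(a, X[:, 0]) * he(b, X[:, 1]) for a, b in alphas])
coef, *_ = np.linalg.lstsq(Phi, Y, rcond=None)

# With an orthonormal basis, output variance is the sum of squared
# non-constant coefficients; first-order Sobol indices sum the terms
# involving only one input.
var = sum(c**2 for c, (a, b) in zip(coef, alphas) if (a, b) != (0, 0))
S1 = sum(c**2 for c, (a, b) in zip(coef, alphas) if a > 0 and b == 0) / var
S2 = sum(c**2 for c, (a, b) in zip(coef, alphas) if b > 0 and a == 0) / var
print(f"S1={S1:.2f}, S2={S2:.2f}, interaction={1 - S1 - S2:.2f}")
```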
Miller, Derek M; DeMayo, William M; Bourdages, George H; Wittman, Samuel R; Yates, Bill J; McCall, Andrew A
2017-04-01
The integration of inputs from vestibular and proprioceptive sensors within the central nervous system is critical to postural regulation. We recently demonstrated in both decerebrate and conscious cats that labyrinthine and hindlimb inputs converge onto vestibular nucleus neurons. The pontomedullary reticular formation (pmRF) also plays a key role in postural control, and additionally participates in regulating locomotion. Thus, we hypothesized that like vestibular nucleus neurons, pmRF neurons integrate inputs from the limb and labyrinth. To test this hypothesis, we recorded the responses of pmRF neurons to passive ramp-and-hold movements of the hindlimb and to whole-body tilts, in both decerebrate and conscious felines. We found that pmRF neuronal activity was modulated by hindlimb movement in the rostral-caudal plane. Most neurons in both decerebrate (83% of units) and conscious (61% of units) animals encoded both flexion and extension movements of the hindlimb. In addition, hindlimb somatosensory inputs converged with vestibular inputs onto pmRF neurons in both preparations. Pontomedullary reticular formation neurons receiving convergent vestibular and limb inputs likely participate in balance control by governing reticulospinal outflow.
Two-key concurrent responding: response-reinforcement dependencies and blackouts1
Herbert, Emily W.
1970-01-01
Two-key concurrent responding was maintained for three pigeons by a single variable-interval 1-minute schedule of reinforcement in conjunction with a random number generator that assigned feeder operations between keys with equal probability. The duration of blackouts was varied between keys when each response initiated a blackout, and grain arranged by the variable-interval schedule was automatically presented after a blackout (Exp. I). In Exp. II every key peck, except for those that produced grain, initiated a blackout, and grain was dependent upon a response following a blackout. For each pigeon in Exp. I and for one pigeon in Exp. II, the relative frequency of responding on a key approximated, i.e., matched, the relative reciprocal of the duration of the blackout interval on that key. In a third experiment, blackouts scheduled on a variable-interval were of equal duration on the two keys. For one key, grain automatically followed each blackout; for the other key, grain was dependent upon a response and never followed a blackout. The relative frequency of responding on the former key, i.e., the delay key, better approximated the negative exponential function obtained by Chung (1965) than the matching function predicted by Chung and Herrnstein (1967). PMID:16811458
Chien, Jung Hung; Mukherjee, Mukul; Siu, Ka-Chun; Stergiou, Nicholas
2016-05-01
When maintaining postural stability temporally under increased sensory conflict, a more rigid response is used where the available degrees of freedom are essentially frozen. The current study investigated if such a strategy is also utilized during more dynamic situations of postural control as is the case with walking. This study attempted to answer this question by using the Locomotor Sensory Organization Test (LSOT). This apparatus incorporates SOT inspired perturbations of the visual and the somatosensory system. Ten healthy young adults performed the six conditions of the traditional SOT and the corresponding six conditions on the LSOT. The temporal structure of sway variability was evaluated from all conditions. The results showed that in the anterior posterior direction somatosensory input is crucial for postural control for both walking and standing; visual input also had an effect but was not as prominent as the somatosensory input. In the medial lateral direction and with respect to walking, visual input has a much larger effect than somatosensory input. This is possibly due to the added contributions by peripheral vision during walking; in standing such contributions may not be as significant for postural control. In sum, as sensory conflict increases more rigid and regular sway patterns are found during standing confirming the previous results presented in the literature, however the opposite was the case with walking where more exploratory and adaptive movement patterns are present.
A dual-input nonlinear system analysis of autonomic modulation of heart rate
NASA Technical Reports Server (NTRS)
Chon, K. H.; Mullen, T. J.; Cohen, R. J.
1996-01-01
Linear analyses of fluctuations in heart rate and other hemodynamic variables have been used to elucidate cardiovascular regulatory mechanisms. The role of nonlinear contributions to fluctuations in hemodynamic variables has not been fully explored. This paper presents a nonlinear system analysis of the effect of fluctuations in instantaneous lung volume (ILV) and arterial blood pressure (ABP) on heart rate (HR) fluctuations. To successfully employ a nonlinear analysis based on the Laguerre expansion technique (LET), we introduce an efficient procedure for broadening the spectral content of the ILV and ABP inputs to the model by adding white noise. Results from computer simulations demonstrate the effectiveness of broadening the spectral band of input signals to obtain consistent and stable kernel estimates with the use of the LET. Without broadening the band of the ILV and ABP inputs, the LET did not provide stable kernel estimates. Moreover, we extend the LET to the case of multiple inputs in order to accommodate the analysis of the combined effect of ILV and ABP on heart rate. Analyses of data based on the second-order Volterra-Wiener model reveal an important contribution of the second-order kernels to the description of the effect of lung volume and arterial blood pressure on heart rate. Furthermore, physiological effects of the autonomic blocking agents propranolol and atropine on changes in the first- and second-order kernels are also discussed.
Mobile source reference material for activity data collection from the Emissions Inventory Improvement Program (EIIP). Provides complete methods for collecting key inputs to onroad mobile and nonroad mobile emissions models.
Drivers and effects of Karenia mikimotoi blooms in the western English Channel
NASA Astrophysics Data System (ADS)
Barnes, Morvan K.; Tilstone, Gavin H.; Smyth, Timothy J.; Widdicombe, Claire E.; Gloël, Johanna; Robinson, Carol; Kaiser, Jan; Suggett, David J.
2015-09-01
Naturally occurring red tides and harmful algal blooms (HABs) are of increasing importance in the coastal environment and can have dramatic effects on coastal benthic and epipelagic communities worldwide. Such blooms are often unpredictable, irregular or of short duration, and thus determining the underlying driving factors is problematic. The dinoflagellate Karenia mikimotoi is an HAB-forming species commonly found in the western English Channel and thought to be responsible for occasional mass finfish and benthic mortalities. We analysed a 19-year coastal time series of phytoplankton biomass to examine the seasonality and interannual variability of K. mikimotoi in the western English Channel and determine both the primary environmental drivers of these blooms as well as the effects on phytoplankton productivity and oxygen conditions. We observed high variability in timing and magnitude of K. mikimotoi blooms, with abundances reaching >1000 cells mL⁻¹ at 10 m depth, inducing up to a 12-fold increase in the phytoplankton carbon content of the water column. No long-term trends in the timing or magnitude of K. mikimotoi abundance were evident from the data. Key driving factors were identified as persistent summertime rainfall and the resultant input of low-salinity, high-nutrient river water. The largest bloom, in 2009, was associated with the highest annual primary production and led to considerable oxygen depletion at depth, most likely as a result of enhanced biological breakdown of bloom material; however, this oxygen depletion may not affect zooplankton. Our data suggest that K. mikimotoi blooms are not only a key and consistent feature of western English Channel productivity, but importantly can potentially be predicted from knowledge of rainfall or river discharge.
Schell, Lawrence M.; Ravenscroft, Julia; Cole, Maxine; Jacobs, Agnes; Newman, Joan
2005-01-01
In this article we describe a research partnership between the Akwesasne Mohawk Nation and scientists at the University at Albany, State University of New York, initiated to address community and scientific concerns regarding environmental contamination and its health consequences (thyroid hormone function, social adjustment, and school functioning). The investigation focuses on cultural inputs into health disparities. It employs a risk-focusing model of biocultural interaction: behaviors expressing cultural identity and values allocate or focus risk, in this instance the risk of toxicant exposure, which alters health status through the effects of toxicants. As culturally based behaviors and activities fulfill a key role in the model, accurate assessment of subtle cultural and behavioral variables is required and best accomplished through integration of local expert knowledge from the community. As a partnership project, the investigation recognizes the cultural and socioeconomic impacts of research in small communities beyond the production of scientific knowledge. The components of sustainable partnerships are discussed, including strategies that helped promote equity between the partners such as hiring community members as key personnel, integrating local expertise into research design, and developing a local Community Outreach and Education Program. Although challenges arose during the design and implementation of the research project, a collaborative approach has benefited the community and facilitated research. PMID:16330372
NASA Astrophysics Data System (ADS)
Ciriello, V.; Lauriola, I.; Bonvicini, S.; Cozzani, V.; Di Federico, V.; Tartakovsky, Daniel M.
2017-11-01
Ubiquitous hydrogeological uncertainty undermines the veracity of quantitative predictions of soil and groundwater contamination due to accidental hydrocarbon spills from onshore pipelines. Such predictions, therefore, must be accompanied by quantification of predictive uncertainty, especially when they are used for environmental risk assessment. We quantify the impact of parametric uncertainty on quantitative forecasting of temporal evolution of two key risk indices, volumes of unsaturated and saturated soil contaminated by a surface spill of light nonaqueous-phase liquids. This is accomplished by treating the relevant uncertain parameters as random variables and deploying two alternative probabilistic models to estimate their effect on predictive uncertainty. A physics-based model is solved with a stochastic collocation method and is supplemented by a global sensitivity analysis. A second model represents the quantities of interest as polynomials of random inputs and has a virtually negligible computational cost, which enables one to explore any number of risk-related contamination scenarios. For a typical oil-spill scenario, our method can be used to identify key flow and transport parameters affecting the risk indices, to elucidate texture-dependent behavior of different soils, and to evaluate, with a degree of confidence specified by the decision-maker, the extent of contamination and the correspondent remediation costs.
2013-06-01
18th ICCRTS: Using a Functional Simulation of Crisis Management to Test the C2 Agility Model Parameters on Key Performance Variables. …command in crisis management. Agility can be conceptualized at a number of different levels; for instance, at the team…
Leverrier, Anthony; Grangier, Philippe
2009-05-08
We present a continuous-variable quantum key distribution protocol combining a discrete modulation and reverse reconciliation. This protocol is proven unconditionally secure and allows the distribution of secret keys over long distances, thanks to a reverse reconciliation scheme efficient at very low signal-to-noise ratio.
Sedimentation in the chaparral: how do you handle unusual events?
Raymond M. Rice
1982-01-01
Abstract - Processes of erosion and sedimentation in steep chaparral drainage basins of southern California are described. The word "hyperschedastic" is coined to describe the sedimentation regime, which is highly variable because of the interaction of marginally stable drainage basins, great variability in storm inputs, and the random occurrence...
A Longitudinal Study of Occupational Aspirations and Attainments of Iowa Young Adults.
ERIC Educational Resources Information Center
Yoesting, Dean R.; And Others
The causal linkage between socioeconomic status, occupational and educational aspiration, and attainment was examined in this attempt to test an existing theoretical model which used socioeconomic status as a major input variable, with significant other influence as a crucial intervening variable between socioeconomic status and aspiration. The…
NASA Astrophysics Data System (ADS)
Fioretti, Guido
2007-02-01
The production function maps the inputs of a firm or a productive system onto its outputs. This article expounds generalizations of the production function that include state variables, organizational structures, and increasing returns to scale. These extensions are needed in order to explain the regularities of the empirical distributions of certain economic variables.
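As one concrete illustration (not drawn from the article itself), a Cobb-Douglas form shows how a state variable and increasing returns can enter such a generalization; here S_t is an assumed organizational state variable:

```latex
Y_t = A(S_t)\,K_t^{\alpha}\,L_t^{\beta}, \qquad \alpha + \beta > 1,
```

so that doubling capital K_t and labor L_t more than doubles output Y_t, while A(S_t) shifts productivity with the organizational state.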
Guo, Weixing; Langevin, C.D.
2002-01-01
This report documents a computer program (SEAWAT) that simulates variable-density, transient, ground-water flow in three dimensions. The source code for SEAWAT was developed by combining MODFLOW and MT3DMS into a single program that solves the coupled flow and solute-transport equations. The SEAWAT code follows a modular structure, and thus, new capabilities can be added with only minor modifications to the main program. SEAWAT reads and writes standard MODFLOW and MT3DMS data sets, although some extra input may be required for some SEAWAT simulations. This means that many of the existing pre- and post-processors can be used to create input data sets and analyze simulation results. Users familiar with MODFLOW and MT3DMS should have little difficulty applying SEAWAT to problems of variable-density ground-water flow.
Trends in Solidification Grain Size and Morphology for Additive Manufacturing of Ti-6Al-4V
NASA Astrophysics Data System (ADS)
Gockel, Joy; Sheridan, Luke; Narra, Sneha P.; Klingbeil, Nathan W.; Beuth, Jack
2017-12-01
Metal additive manufacturing (AM) is used for both prototyping and production of final parts. Therefore, there is a need to predict and control the microstructural size and morphology. Process mapping is an approach that represents AM process outcomes in terms of input variables. In this work, analytical, numerical, and experimental approaches are combined to provide a holistic view of trends in the solidification grain structure of Ti-6Al-4V across a wide range of AM process input variables. The thermal gradient is shown to vary significantly through the depth of the melt pool, which precludes development of fully equiaxed microstructure throughout the depth of the deposit within any practical range of AM process variables. A strategy for grain size control is demonstrated based on the relationship between melt pool size and grain size across multiple deposit geometries, and additional factors affecting grain size are discussed.
NASA Astrophysics Data System (ADS)
Herrera, J. L.; Rosón, G.; Varela, R. A.; Piedracoba, S.
2008-07-01
The key features of the western Galician shelf hydrography and dynamics are analyzed on a solid statistical and experimental basis. The results allowed us to gather together information dispersed in previous oceanographic works of the region. Empirical orthogonal function (EOF) analysis and canonical correlation analysis were applied to a high-resolution dataset collected from 47 surveys done at weekly frequency from May 2001 to May 2002. The main results of these analyses are summarized below. Salinity, temperature and the meridional component of the residual current are correlated with the relevant local forcings (the meridional coastal wind component and the continental run-off) and with a remote forcing (the meridional temperature gradient at latitude 37°N). About 80% of the salinity and temperature total variability over the shelf, and 37% of the residual meridional current total variability, are explained by two EOFs for each variable. Up to 22% of the temperature total variability and 14% of the residual meridional current total variability are devoted to the set-up of cross-shore gradients of the thermohaline properties caused by the wind-induced Ekman transport. Up to 11% and 10%, respectively, are related to the variability of the meridional temperature gradient at the Western Iberian Winter Front. About 30% of the temperature total variability can be explained by the development and erosion of the seasonal thermocline and by the seasonal variability of the thermohaline properties of the central waters. This thermocline presented unexpectedly low salinity values due to the trapping during spring and summer of the high continental inputs from the River Miño recorded in 2001. The low-salinity plumes can be traced on the Galician shelf during almost the entire annual cycle; they tend to be extended throughout the entire water column under downwelling conditions and concentrate in the surface layer when upwelling-favourable winds blow. Our evidence points to the meridional temperature gradient acting as an important controlling factor of the central waters' thermohaline properties and of the development and decay of the Iberian Poleward Current.
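The EOF computation itself is compact. A sketch via the SVD of the anomaly matrix, with synthetic data standing in for the 47 weekly surveys (dimensions and variable are placeholders):

```python
import numpy as np

# EOF analysis via SVD: rows = weekly surveys (time), columns = grid
# points of one variable (e.g., salinity). Synthetic stand-in data.
rng = np.random.default_rng(4)
field = rng.normal(size=(47, 300))

anom = field - field.mean(axis=0)      # remove the time mean
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
explained = s**2 / np.sum(s**2)        # variance fraction per EOF
eofs = Vt                              # spatial patterns
pcs = U * s                            # time-varying amplitudes (scores)

print("variance explained by first two EOFs:", explained[:2].round(2))
```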
NASA Astrophysics Data System (ADS)
Bauer, Johannes; Dávila-Chacón, Jorge; Wermter, Stefan
2015-10-01
Humans and other animals have been shown to perform near-optimally in multi-sensory integration tasks. Probabilistic population codes (PPCs) have been proposed as a mechanism by which optimal integration can be accomplished. Previous approaches have focussed on how neural networks might produce PPCs from sensory input or perform calculations using them, like combining multiple PPCs. Less attention has been given to the question of how the necessary organisation of neurons can arise and how the required knowledge about the input statistics can be learned. In this paper, we propose a model of learning multi-sensory integration based on an unsupervised learning algorithm in which an artificial neural network learns the noise characteristics of each of its sources of input. Our algorithm borrows from the self-organising map the ability to learn latent-variable models of the input and extends it to learning to produce a PPC approximating a probability density function over the latent variable behind its (noisy) input. The neurons in our network are only required to perform simple calculations and we make few assumptions about input noise properties and tuning functions. We report on a neurorobotic experiment in which we apply our algorithm to multi-sensory integration in a humanoid robot to demonstrate its effectiveness and compare it to human multi-sensory integration on the behavioural level. We also show in simulations that our algorithm performs near-optimally under certain plausible conditions, and that it reproduces important aspects of natural multi-sensory integration on the neural level.
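The standard self-organising-map update that the model builds on can be sketched in a few lines; the latent variable, noise level, and 1-D map topology below are illustrative assumptions, and the paper's PPC extension is omitted:

```python
import numpy as np

# Minimal SOM update rule: a 1-D map of units learns a latent variable
# (an angle) behind noisy 2-D inputs.
rng = np.random.default_rng(5)
n_units, lr, sigma = 20, 0.1, 2.0
W = rng.normal(size=(n_units, 2))            # unit weight vectors

for step in range(2000):
    theta = rng.uniform(0, 2 * np.pi)        # latent variable
    x = np.array([np.cos(theta), np.sin(theta)]) + rng.normal(0, 0.1, 2)
    bmu = np.argmin(np.linalg.norm(W - x, axis=1))   # best-matching unit
    h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * sigma**2))
    W += lr * h[:, None] * (x - W)           # neighbourhood-weighted update
```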
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ide, Toshiki; Hofmann, Holger F.; JST-CREST, Graduate School of Advanced Sciences of Matter, Hiroshima University, Kagamiyama 1-3-1, Higashi Hiroshima 739-8530
The information encoded in the polarization of a single photon can be transferred to a remote location by two-channel continuous-variable quantum teleportation. However, the finite entanglement used in the teleportation causes random changes in photon number. If more than one photon appears in the output, the continuous-variable teleportation accidentally produces clones of the original input photon. In this paper, we derive the polarization statistics of the N-photon output components and show that they can be decomposed into an optimal cloning term and completely unpolarized noise. We find that the accidental cloning of the input photon is nearly optimal at experimentally feasible squeezing levels, indicating that the loss of polarization information is partially compensated by the availability of clones.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Haixia; Zhang, Jing
We propose a scheme for continuous-variable quantum cloning of coherent states with phase-conjugate input modes using linear optics. The quantum cloning machine yields M identical optimal clones from N replicas of a coherent state and N replicas of its phase conjugate. This scheme can be straightforwardly implemented with the setups accessible at present since its optical implementation only employs simple linear optical elements and homodyne detection. Compared with the original scheme for continuous-variable quantum cloning with phase-conjugate input modes proposed by Cerf and Iblisdir [Phys. Rev. Lett. 87, 247903 (2001)], which utilized a nondegenerate optical parametric amplifier, our scheme loses the output of phase-conjugate clones and is regarded as irreversible quantum cloning.
Computer program for design analysis of radial-inflow turbines
NASA Technical Reports Server (NTRS)
Glassman, A. J.
1976-01-01
A computer program written in FORTRAN that may be used for the design analysis of radial-inflow turbines was documented. The following information is included: loss model (estimation of losses), the analysis equations, a description of the input and output data, the FORTRAN program listing and list of variables, and sample cases. The input design requirements include the power, mass flow rate, inlet temperature and pressure, and rotational speed. The program output data includes various diameters, efficiencies, temperatures, pressures, velocities, and flow angles for the appropriate calculation stations. The design variables include the stator-exit angle, rotor radius ratios, and rotor-exit tangential velocity distribution. The losses are determined by an internal loss model.
Gustafson, William Jr; Vogelmann, Andrew; Endo, Satoshi; Toto, Tami; Xiao, Heng; Li, Zhijin; Cheng, Xiaoping; Kim, Jinwon; Krishna, Bhargavi
2015-08-31
The Alpha 2 release is the second release from the LASSO Pilot Phase and builds upon the Alpha 1 release. Alpha 2 contains additional diagnostics in the data bundles and focuses on cases from spring-summer 2016. A data bundle is a unified package consisting of LASSO LES input and output, observations, evaluation diagnostics, and model skill scores. LES inputs include model configuration information and forcing data. LES outputs include profile statistics and full-domain fields of cloud and environmental variables. Model evaluation data consist of LES output and ARM observations co-registered on the same grid and sampling frequency. Model performance is quantified by skill scores and diagnostics in terms of cloud and environmental variables.
A data mining approach to optimize pellets manufacturing process based on a decision tree algorithm.
Ronowicz, Joanna; Thommes, Markus; Kleinebudde, Peter; Krysiński, Jerzy
2015-06-20
The present study is focused on the thorough analysis of cause-effect relationships between pellet formulation characteristics (pellet composition as well as process parameters) and the selected quality attribute of the final product. Pellet quality was expressed as shape, using the aspect ratio value. A data matrix for chemometric analysis consisted of 224 pellet formulations prepared with eight different active pharmaceutical ingredients and several excipients, using different extrusion/spheronization process conditions. The data set contained 14 input variables (both formulation and process variables) and one output variable (pellet aspect ratio). A tree regression algorithm consistent with the Quality by Design concept was applied to obtain deeper understanding and knowledge of the formulation and process parameters affecting final pellet sphericity. A clear, interpretable set of decision rules was generated. The spheronization speed, spheronization time, number of holes, and water content of the extrudate were recognized as the key factors influencing pellet aspect ratio. The most spherical pellets were achieved by using a large number of holes during extrusion, a high spheronizer speed, and a longer spheronization time. The described data mining approach enhances knowledge about the pelletization process and simultaneously facilitates searching for the optimal process conditions necessary to achieve ideally spherical pellets with good flow characteristics. This data mining approach can be used by industrial formulation scientists to support rational decision making in pellet technology. Copyright © 2015 Elsevier B.V. All rights reserved.
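A hedged sketch of the tree-regression step with scikit-learn, using synthetic data and placeholder variable names in place of the study's 14 inputs; export_text prints the kind of readable decision rules the study reports:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

# Hypothetical recreation of the setup: 14 formulation/process inputs,
# aspect ratio as the output. Synthetic data; column names assumed.
rng = np.random.default_rng(6)
X = rng.uniform(size=(224, 14))
names = [f"var{i}" for i in range(14)]
# Make a few columns matter, loosely mimicking "key factors" in the data.
y = 1.05 + 0.3 * X[:, 0] - 0.2 * X[:, 1] + 0.05 * rng.normal(size=224)

tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=10).fit(X, y)
print(export_text(tree, feature_names=names))   # readable decision rules
```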
Optimization of a GO2/GH2 Swirl Coaxial Injector Element
NASA Technical Reports Server (NTRS)
Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar
1999-01-01
An injector optimization methodology, method i, is used to investigate optimal design points for a gaseous oxygen/gaseous hydrogen (GO2/GH2) swirl coaxial injector element. The element is optimized in terms of design variables such as fuel pressure drop, ΔP_f, oxidizer pressure drop, ΔP_o, combustor length, L_comb, and full-cone swirl angle, θ, for a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency, ERE, wall heat flux, Q_w, injector heat flux, Q_inj, relative combustor weight, W_rel, and relative injector cost, C_rel, are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for 180 combinations of input variables. Method i is then used to generate response surfaces for each dependent variable. Desirability functions based on dependent-variable constraints are created and used to facilitate development of composite response surfaces representing some, or all, of the five dependent variables in terms of the input variables. Two examples illustrating the utility and flexibility of method i are discussed in detail. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after addition of each variable, and the effect each variable has on the design is shown. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Secondly, using the composite response surface that includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues such as component life and thrust-to-weight ratio.
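The response-surface and desirability mechanics can be sketched generically; the coded variables, thresholds, and single response below are illustrative, not the paper's values:

```python
import numpy as np

# Fit a quadratic response surface to one dependent variable, then score
# candidate designs with a Derringer-Suich-style desirability function.
rng = np.random.default_rng(7)
X = rng.uniform(-1, 1, size=(180, 2))              # two coded design vars
y = 0.9 - 0.2 * X[:, 0]**2 - 0.1 * X[:, 1]**2 + 0.02 * rng.normal(size=180)

def basis(X):
    # Quadratic basis: 1, x1, x2, x1*x2, x1^2, x2^2.
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)

def desirability(yhat, lo=0.6, target=0.95):
    # "Larger is better": 0 below lo, 1 at/above target, linear in between.
    return np.clip((yhat - lo) / (target - lo), 0.0, 1.0)

grid = np.array([(a, b) for a in np.linspace(-1, 1, 21)
                 for b in np.linspace(-1, 1, 21)])
d = desirability(basis(grid) @ coef)
print("best coded design point:", grid[np.argmax(d)].round(2))
```

With several responses, one desirability per response would be combined, for example by a geometric mean, into the kind of composite response surface the paper describes.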
The HTM Spatial Pooler-A Neocortical Algorithm for Online Sparse Distributed Coding.
Cui, Yuwei; Ahmad, Subutai; Hawkins, Jeff
2017-01-01
Hierarchical temporal memory (HTM) provides a theoretical framework that models several key computational principles of the neocortex. In this paper, we analyze an important component of HTM, the HTM spatial pooler (SP). The SP models how neurons learn feedforward connections and form efficient representations of the input. It converts arbitrary binary input patterns into sparse distributed representations (SDRs) using a combination of competitive Hebbian learning rules and homeostatic excitability control. We describe a number of key properties of the SP, including fast adaptation to changing input statistics, improved noise robustness through learning, efficient use of cells, and robustness to cell death. In order to quantify these properties we develop a set of metrics that can be directly computed from the SP outputs. We show how the properties are met using these metrics and targeted artificial simulations. We then demonstrate the value of the SP in a complete end-to-end real-world HTM system. We discuss the relationship with neuroscience and previous studies of sparse coding. The HTM spatial pooler represents a neurally inspired algorithm for learning sparse representations from noisy data streams in an online fashion.
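A minimal sketch of the SP's core loop under simplifying assumptions (no boosting, no topology; sizes, thresholds, and learning rates are arbitrary stand-ins):

```python
import numpy as np

# Spatial-pooler-style sparse coding: overlap scoring, top-k winner
# selection, and a Hebbian permanence update.
rng = np.random.default_rng(8)
n_in, n_cols, k = 100, 64, 4
perm = rng.uniform(0.3, 0.7, size=(n_cols, n_in))   # synapse permanences

def sp_step(x, thresh=0.5, inc=0.03, dec=0.015):
    connected = perm >= thresh                       # connected synapses
    overlap = connected @ x                          # per-column overlap
    winners = np.argsort(overlap)[-k:]               # sparse active set
    # Hebbian learning: strengthen synapses aligned with active inputs,
    # weaken the rest, then clip permanences to [0, 1].
    perm[winners] += np.where(x > 0, inc, -dec)
    np.clip(perm, 0.0, 1.0, out=perm)
    return winners

x = (rng.random(n_in) < 0.1).astype(float)           # sparse binary input
print(sorted(sp_step(x)))                            # active column SDR
```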
Pitot tube calculations with a TI-59
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, K.
Industrial plant and stack analysis dictates that flow measurements in ducts be accurate. This is usually accomplished by running a traverse with a pitot tube across the duct or flue. A traverse is a series of measurements taken at predetermined points across the duct. The values of these measurements are calculated into point flow rates and averaged. A program for the Texas Instruments TI-59 programmable calculator follows. The program will perform calculations for an infinite number of test points, both with the standard (combined impact type) pitot tube and the S-type (combined reverse type). The type of tube is selected by inputting an indicating value that triggers a flag in the program. To use the standard pitot tube, a 1 is input into key E. When the S-type is used, a zero is input into key E. The program output will note if the S-type has been used. Since most process systems are not at standard conditions (32 °F, 1 atm), the program will take this into account.
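The underlying point calculation is the standard pitot relation v = C·sqrt(2·Δp/ρ). The sketch below shows a traverse average under assumed values; the coefficients are typical textbook figures (about 1.0 for a standard tube, near 0.84 for a calibrated S-type), not the TI-59 program's constants:

```python
import math

def point_velocity(dp_pa, rho, c_tube=1.0):
    # v = C * sqrt(2 * dp / rho); dp in Pa, rho in kg/m^3, v in m/s.
    return c_tube * math.sqrt(2.0 * dp_pa / rho)

# Traverse: average point velocities across the duct, then flow rate.
readings_pa = [110.0, 125.0, 130.0, 122.0, 108.0]   # hypothetical points
rho_air = 1.2                                        # kg/m^3, approximate
v_avg = sum(point_velocity(dp, rho_air) for dp in readings_pa) / len(readings_pa)
duct_area = 0.8                                      # m^2, hypothetical
print(f"v_avg = {v_avg:.1f} m/s, Q = {v_avg * duct_area:.1f} m^3/s")
```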
Optimality of Gaussian attacks in continuous-variable quantum cryptography.
Navascués, Miguel; Grosshans, Frédéric; Acín, Antonio
2006-11-10
We analyze the asymptotic security of the family of Gaussian modulated quantum key distribution protocols for continuous-variable systems. We prove that the Gaussian unitary attack is optimal for all the considered bounds on the key rate when the first and second moments of the canonical variables involved are known by the honest parties.
Technology Benefit Estimator (T/BEST): User's Manual
NASA Technical Reports Server (NTRS)
Generazio, Edward R.; Chamis, Christos C.; Abumeri, Galib
1994-01-01
The Technology Benefit Estimator (T/BEST) system is a formal method to assess advanced technologies and quantify the benefit contributions for prioritization. T/BEST may be used to provide guidelines to identify and prioritize high-payoff research areas, help manage research and limited resources, show the link between advanced concepts and the bottom line, i.e., accrued benefit and value, and to communicate credibly the benefits of research. The T/BEST software is specifically designed to estimate benefits, and benefit sensitivities, of introducing new technologies into existing propulsion systems. Key engine cycle, structural, fluid, mission and cost analysis modules are used to provide a framework for interfacing with advanced technologies. An open-ended, modular approach is used to allow for modification and addition of both key and advanced technology modules. T/BEST has a hierarchical framework that yields varying levels of benefit estimation accuracy that are dependent on the degree of input detail available. This hierarchical feature permits rapid estimation of technology benefits even when the technology is at the conceptual stage. As knowledge of the technology details increases, the accuracy of the benefit analysis increases. Included in T/BEST's framework are correlations developed from a statistical data base that is relied upon if there is insufficient information given in a particular area, e.g., fuel capacity or aircraft landing weight. Statistical predictions are not required if these data are specified in the mission requirements. The engine cycle, structural, fluid, cost, noise, and emissions analyses interact with the default or user material and component libraries to yield estimates of specific global benefits: range, speed, thrust, capacity, component life, noise, emissions, specific fuel consumption, component and engine weights, pre-certification test, mission performance, engine cost, direct operating cost, life cycle cost, manufacturing cost, development cost, risk, and development time. Currently, T/BEST operates on stand-alone or networked workstations and uses a UNIX shell or script to control the operation of interfaced FORTRAN-based analyses. T/BEST's interface structure works equally well with non-FORTRAN or mixed software analyses. This interface structure is designed to maintain the integrity of the experts' analyses by interfacing with the experts' existing input and output files. Parameter input and output data (e.g., number of blades, hub diameters, etc.) are passed via T/BEST's neutral file, while copious data (e.g., finite element models, profiles, etc.) are passed via file pointers that point to the experts' analysis output files. In order to make communications between T/BEST's neutral file and attached analysis codes simple, only two software commands, PUT and GET, are required. This simplicity permits easy access to all input and output variables contained within the neutral file. Both public domain and proprietary analysis codes may be attached with a minimal amount of effort, while maintaining full data and analysis integrity, and security. T/BEST's software framework, status, beginner-to-expert operation, interface architecture, analysis module addition, and key analysis modules are discussed. Representative examples of T/BEST benefit analyses are shown.
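The PUT/GET exchange can be emulated with a toy key-value store; this is purely illustrative (the real neutral file is FORTRAN-era and file-based, and all names below are hypothetical):

```python
# Toy emulation of the neutral-file PUT/GET idea: analyses exchange
# scalar parameters through a shared store, while bulky data are passed
# by pointer (a path string), mirroring the file-pointer convention.
class NeutralFile:
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data[key]

nf = NeutralFile()
nf.put("number_of_blades", 24)
nf.put("hub_diameter_m", 0.42)
nf.put("fe_model_path", "/runs/case1/fe_model.dat")  # pointer, not data
print(nf.get("number_of_blades"), nf.get("fe_model_path"))
```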
Key management of the double random-phase-encoding method using public-key encryption
NASA Astrophysics Data System (ADS)
Saini, Nirmala; Sinha, Aloka
2010-03-01
Public-key encryption has been used to encode the keys of the encryption process. In the proposed technique, an input image has been encrypted using the double random-phase-encoding method with the extended fractional Fourier transform. The keys of the encryption process have been encoded using the Rivest-Shamir-Adleman (RSA) public-key encryption algorithm. The encoded keys have then been transmitted to the receiver side along with the encrypted image. In the decryption process, first the encoded keys have been decrypted using the secret key, and then the encrypted image has been decrypted using the retrieved key parameters. The proposed technique has an advantage over the double random-phase-encoding method because the problem associated with the transmission of the keys has been eliminated by using public-key encryption. Computer simulation has been carried out to validate the proposed technique.
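A minimal numerical sketch of the double random-phase-encoding step is given below, using ordinary FFTs in place of the extended fractional Fourier transform of the paper; the RSA encoding of the key parameters is indicated only by a comment, since any standard RSA implementation could serve there.

```python
import numpy as np

rng = np.random.default_rng(0)

def drpe_encrypt(img, phase1, phase2):
    """Double random-phase encoding (ordinary FFT shown; the paper uses an
    extended fractional Fourier transform)."""
    field = img * np.exp(2j * np.pi * phase1)                     # input-plane mask
    spectrum = np.fft.fft2(field) * np.exp(2j * np.pi * phase2)   # Fourier-plane mask
    return np.fft.ifft2(spectrum)

def drpe_decrypt(cipher, phase1, phase2):
    """Undo the Fourier-plane mask, then the input-plane mask."""
    spectrum = np.fft.fft2(cipher) * np.exp(-2j * np.pi * phase2)
    field = np.fft.ifft2(spectrum) * np.exp(-2j * np.pi * phase1)
    return np.abs(field)

img = rng.random((64, 64))                                # stand-in input image
key1, key2 = rng.random((64, 64)), rng.random((64, 64))   # key parameters to protect
cipher = drpe_encrypt(img, key1, key2)
# In the proposed scheme, key1/key2 (or the seeds generating them) would be
# RSA-encrypted with the receiver's public key before transmission.
recovered = drpe_decrypt(cipher, key1, key2)
assert np.allclose(recovered, img)
```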
Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.
2001-01-01
This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
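The first- and second-order moment matching at the heart of this approach can be illustrated compactly. The sketch below uses a simple stand-in response function in place of the quasi-1D Euler code and finite-difference derivatives in place of the code's sensitivity derivatives, then checks the matched moments against Monte Carlo.

```python
import numpy as np

# Moment matching for a scalar output f of independent, normally distributed
# inputs; f is a stand-in for the CFD response, not the Euler code itself.

def f(x):
    return x[0] ** 2 * np.sin(x[1]) + x[2]

mu = np.array([1.0, 0.8, 2.0])       # input means
sigma = np.array([0.05, 0.02, 0.1])  # input standard deviations

def grad(f, x, h=1e-6):
    """Central-difference first-order sensitivity derivatives."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def hess_diag(f, x, h=1e-4):
    """Central-difference diagonal second-order derivatives."""
    d = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        d[i] = (f(x + e) - 2 * f(x) + f(x - e)) / h ** 2
    return d

g, hd = grad(f, mu), hess_diag(f, mu)
mean_2nd = f(mu) + 0.5 * np.sum(hd * sigma ** 2)   # second-order mean estimate
var_1st = np.sum(g ** 2 * sigma ** 2)              # first-order variance estimate

rng = np.random.default_rng(1)
samples = rng.normal(mu, sigma, size=(100_000, 3))
mc = np.array([f(s) for s in samples])
print(mean_2nd, mc.mean())   # approximations should agree closely
print(var_1st, mc.var())
```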
NASA Technical Reports Server (NTRS)
He, Yuning
2015-01-01
Safety of unmanned aerial systems (UAS) is paramount, but the large number of dynamically changing controller parameters makes it hard to determine whether the system is currently stable, and the time before loss of control if it is not. We propose a hierarchical statistical model using Treed Gaussian Processes to predict (i) whether a flight will be stable (success) or become unstable (failure), (ii) the time-to-failure if unstable, and (iii) time series outputs for flight variables. We first classify the current flight input into success or failure types, and then use separate models for each class to predict the time-to-failure and time series outputs. As different inputs may cause failures at different times, we have to model variable-length output curves. We use a basis representation for curves and learn the mappings from input to basis coefficients. We demonstrate the effectiveness of our prediction methods on a NASA neuro-adaptive flight control system.
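The two-stage structure (classify the flight as stable or unstable, then predict time-to-failure with a class-specific model) can be sketched as follows, with ordinary scikit-learn Gaussian process models standing in for the Treed Gaussian Processes of the paper and synthetic placeholder data.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier, GaussianProcessRegressor

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(200, 3))            # controller-parameter inputs
fails = X[:, 0] + X[:, 1] > 1.2                 # synthetic failure labels
ttf = np.where(fails, 10.0 / (X[:, 0] + X[:, 1]), np.nan)  # synthetic time-to-failure

# Stage 1: classify success vs. failure; Stage 2: regress time-to-failure
# on the failure class only.
clf = GaussianProcessClassifier().fit(X, fails)
reg = GaussianProcessRegressor(normalize_y=True).fit(X[fails], ttf[fails])

x_new = np.array([[0.9, 0.8, 0.5]])
if clf.predict(x_new)[0]:
    print("predicted unstable; time-to-failure ~", reg.predict(x_new)[0])
else:
    print("predicted stable")
```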
Artificial neural network modeling of dissolved oxygen in the Heihe River, Northwestern China.
Wen, Xiaohu; Fang, Jing; Diao, Meina; Zhang, Chuanqi
2013-05-01
Identification and quantification of the dissolved oxygen (DO) profiles of a river is one of the primary concerns for water resources managers. In this research, an artificial neural network (ANN) was developed to simulate the DO concentrations in the Heihe River, Northwestern China. A three-layer back-propagation ANN was used with the Bayesian regularization training algorithm. The input variables of the neural network were pH, electrical conductivity, chloride (Cl(-)), calcium (Ca(2+)), total alkalinity, total hardness, nitrate nitrogen (NO3-N), and ammoniacal nitrogen (NH4-N). The ANN structure with 14 hidden neurons was selected as the best. Comparison of the ANN results with the measured data on the basis of the correlation coefficient (r) and the root mean square error (RMSE) showed a good fit to the DO values, indicating the effectiveness of the neural network model. The r values for the training, validation, and test sets were 0.9654, 0.9841, and 0.9680, respectively, and the corresponding RMSE values were 0.4272, 0.3667, and 0.4570. Sensitivity analysis was used to determine the influence of the input variables on the dependent variable. The most effective inputs were determined to be pH, NO3-N, NH4-N, and Ca(2+); Cl(-) was found to be the least effective variable in the proposed model. The identified ANN model can be used to simulate the water quality parameters.
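The network topology described (eight inputs, 14 hidden neurons, one output) can be sketched as below. Note that scikit-learn offers no Bayesian-regularization trainer, so L2 weight decay stands in for it here, and the data are random placeholders rather than the Heihe River measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
X = rng.random((300, 8))   # pH, EC, Cl-, Ca2+, alkalinity, hardness, NO3-N, NH4-N
y = rng.random(300)        # measured DO (placeholder values)

# Three-layer network: 8 inputs -> 14 hidden neurons -> 1 output.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(14,), alpha=1e-2, max_iter=5000, random_state=0),
)
model.fit(X, y)
pred = model.predict(X)
r = np.corrcoef(pred, y)[0, 1]                 # correlation coefficient r
rmse = np.sqrt(np.mean((pred - y) ** 2))       # root mean square error
print(f"r = {r:.4f}, RMSE = {rmse:.4f}")
```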
Unsteady Aerodynamic Testing Using the Dynamic Plunge Pitch and Roll Model Mount
NASA Technical Reports Server (NTRS)
Lutze, Frederick H.; Fan, Yigang
1999-01-01
A final report on the DyPPiR tests that were run is presented. Essentially, it consists of two parts: a description of the data reduction techniques and the results. Three data reduction methods were considered: 1) signal processing of wind-on minus wind-off data; 2) using wind-on data in conjunction with accelerometer measurements; and 3) using a dynamic model of the sting to predict the sting oscillations and determining the aerodynamic inputs using an optimization process. After trying all three, we ended up using method 1, mainly because of its simplicity and our confidence in its accuracy. The results section consists of time history plots of the input variables (angle of attack, roll angle, and/or plunge position) and the corresponding time histories of the output variables, C(sub L), C(sub D), C(sub Y), C(sub l), C(sub m), C(sub n). Also included are some phase plots of one or more of the output variables vs. an input variable. Typically of interest is the pitching-moment coefficient vs. angle of attack for an oscillatory motion, where the hysteresis loops can be observed. These plots are useful for determining the "more interesting" cases. Samples of the data as they appear on the disk are presented at the end of the report. The last maneuver, a rolling pull-up, is indicative of the unique capabilities of the DyPPiR, allowing combinations of motions to be exercised at the same time.
ERIC Educational Resources Information Center
Lopez-Catalan, Blanca; Bañuls, Victor A.
2017-01-01
Purpose: The purpose of this paper is to present the results of a national-level Delphi study carried out in Spain aimed at providing inputs for higher education administrators and decision makers about key e-learning trends for supporting postgraduate courses. Design/methodology/approach: The ranking of the e-learning trends is based on a…
Variability and Dynamics of the Yucatan Upwelling: High-Resolution Simulations
NASA Astrophysics Data System (ADS)
Jouanno, J.; Pallàs-Sanz, E.; Sheinbaum, J.
2018-02-01
The Yucatan shelf in the southern Gulf of Mexico is under the influence of an upwelling that uplifts cool, nutrient-rich waters over the continental shelf. The analysis of a set of high-resolution (Δx = Δy ≈ 2.8 km) simulations of the Gulf of Mexico shows two dominant modes of variability of the Yucatan upwelling system: (1) a low-frequency mode related to variations in the position and intensity of the Loop Current along the shelf, with upwelling intensified when the Loop Current is strong and approaches the Yucatan shelf break, and (2) a high-frequency mode, with peak frequency in the 6-10 day band, related to wind-forced coastal waves that force vertical velocities along the eastern Yucatan shelf break. To first order, the strength and position of the Loop Current are found to control the intensity of the upwelling, but we show that high-frequency winds also contribute (~17%) to a net input of cool waters (<22.5°C) onto the Yucatan shelf. Finally, although more observational studies are needed to corroborate the topographic character of the Yucatan upwelling system, this study reveals the key role played by a notch along the Yucatan shelf break: a sensitivity simulation without the notch shows a 55% reduction of the upwelling.
Gui, Guan; Chen, Zhang-xin; Xu, Li; Wan, Qun; Huang, Jiyan; Adachi, Fumiyuki
2014-01-01
The channel estimation problem is one of the key technical issues in sparse frequency-selective fading multiple-input multiple-output (MIMO) communication systems using the orthogonal frequency division multiplexing (OFDM) scheme. To estimate sparse MIMO channels, sparse invariable step-size normalized least mean square (ISS-NLMS) algorithms have been applied to adaptive sparse channel estimation (ASCE). It is well known that the step size is a critical parameter that controls three aspects: algorithm stability, estimation performance, and computational cost. However, traditional methods are prone to estimation performance loss because an ISS cannot balance the three aspects simultaneously. In this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS) algorithms to improve the accuracy of MIMO channel estimators. First, ASCE is formulated in MIMO-OFDM systems. Second, different sparse penalties are introduced into the VSS-NLMS algorithm for ASCE. In addition, the difference between sparse ISS-NLMS algorithms and sparse VSS-NLMS ones is explained and their lower bounds are derived. Finally, to verify the effectiveness of the proposed algorithms for ASCE, selected simulation results show that the proposed sparse VSS-NLMS algorithms can achieve better estimation performance than the conventional methods in terms of mean square error (MSE) and bit error rate (BER) metrics. PMID:25089286
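A sketch of a sparse VSS-NLMS update is shown below for a single time-domain channel. The particular variable-step-size rule and the zero-attracting (l1) penalty used here are common textbook choices, not necessarily the specific updates derived in the paper, and the MIMO-OFDM formulation is reduced to a single channel for brevity.

```python
import numpy as np

def za_vss_nlms(x, d, n_taps, rho=1e-4, alpha=0.97, gamma=1e-3,
                mu_min=0.05, mu_max=1.0, eps=1e-8):
    """Zero-attracting NLMS with a simple error-energy-driven variable step."""
    w = np.zeros(n_taps)
    mu = mu_max
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]      # regressor (most recent first)
        e = d[n] - w @ u                       # a-priori estimation error
        mu = np.clip(alpha * mu + gamma * e * e, mu_min, mu_max)  # VSS update
        w += mu * e * u / (u @ u + eps)        # normalized LMS step
        w -= rho * np.sign(w)                  # zero attractor (l1 penalty)
    return w

rng = np.random.default_rng(4)
h = np.zeros(32); h[[3, 10, 25]] = [1.0, -0.5, 0.3]     # sparse channel
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w = za_vss_nlms(x, d, 32)
print("MSE:", np.mean((w - h) ** 2))
```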
NASA Technical Reports Server (NTRS)
Shen, Suhung; Leptoukh, Gregory G.; Gerasimov, Irina
2010-01-01
Surface air temperature is a critical variable for describing the energy and water cycle of the Earth-atmosphere system and is a key input element for hydrology and land surface models. It is a very important variable in agricultural applications and climate change studies. This is a preliminary study to examine statistical relationships between ground meteorological station measurements of daily maximum/minimum surface air temperature and satellite remotely sensed land surface temperature from MODIS over the dry and semiarid regions of northern China. Studies were conducted for both MODIS-Terra and MODIS-Aqua using data from 2009. Results indicate that the relationships between surface air temperature and remotely sensed land surface temperature are statistically significant. The relationship between maximum air temperature and daytime land surface temperature depends significantly on land surface type and vegetation index, but minimum air temperature and nighttime land surface temperature have little dependence on surface conditions. Based on the linear regression relationship between surface air temperature and MODIS land surface temperature, surface maximum and minimum air temperatures are estimated from 1-km MODIS land surface temperature under clear-sky conditions. The statistical error (sigma) of the estimated daily maximum (minimum) air temperature is about 3.8 C (3.7 C).
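The regression step can be sketched as follows; the numbers are synthetic placeholders for the 2009 station/MODIS pairs, chosen so that the residual scatter is near the reported 3.8 C.

```python
import numpy as np

# Linear regression of station daily maximum air temperature against MODIS
# daytime land surface temperature (synthetic placeholder data).
rng = np.random.default_rng(5)
lst_day = rng.uniform(-10, 40, 365)                      # MODIS daytime LST, deg C
t_max = 0.8 * lst_day + 2.0 + rng.normal(0, 3.8, 365)    # station max air temp

slope, intercept = np.polyfit(lst_day, t_max, 1)         # fit the linear relation
t_max_est = slope * lst_day + intercept
sigma = np.std(t_max - t_max_est)                        # statistical error (sigma)
print(f"Tmax ~ {slope:.2f} * LST + {intercept:.2f}, sigma = {sigma:.1f} C")
```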
Mitigating Harmful Cyanobacterial Blooms in a Human- and Climatically-Impacted World
Paerl, Hans W.
2014-01-01
Bloom-forming harmful cyanobacteria (CyanoHABs) are harmful from environmental, ecological and human health perspectives by outcompeting beneficial phytoplankton, creating low oxygen conditions (hypoxia, anoxia), and by producing cyanotoxins. Cyanobacterial genera exhibit optimal growth rates and bloom potentials at relatively high water temperatures; hence, global warming plays a key role in their expansion and persistence. CyanoHABs are regulated by synergistic effects of nutrient (nitrogen:N and phosphorus:P) supplies, light, temperature, vertical stratification, water residence times, and biotic interactions. In most instances, nutrient control strategies should focus on reducing both N and P inputs. Strategies based on physical, chemical (nutrient) and biological manipulations can be effective in reducing CyanoHABs; however, these strategies are largely confined to relatively small systems, and some are prone to ecological and environmental drawbacks, including enhancing release of cyanotoxins, disruption of planktonic and benthic communities and fisheries habitat. All strategies should consider and be adaptive to climatic variability and change in order to be effective for long-term control of CyanoHABs. Rising temperatures and greater hydrologic variability will increase growth rates and alter critical nutrient thresholds for CyanoHAB development; thus, nutrient reductions for bloom control may need to be more aggressively pursued in response to climatic changes globally. PMID:25517134
The impact of 14nm photomask variability and uncertainty on computational lithography solutions
NASA Astrophysics Data System (ADS)
Sturtevant, John; Tejnil, Edita; Buck, Peter D.; Schulze, Steffen; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian
2013-09-01
Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. Many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine via simulation the impact of errors in the representation of photomask properties, including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to represent accurately in the model are cataloged. CD bias values are based on state-of-the-art mask manufacturing data, and changes in the other variables are postulated, highlighting the need for improved metrology and communication between mask and OPC model experts. The simulations ignore the wafer photoresist model and show the sensitivity of predictions to various model inputs associated with the mask. It is shown that the wafer simulations are very dependent upon the 1D/2D representation of the mask and, for 3D, that the mask sidewall angle is a very sensitive factor influencing simulated wafer CD results.