Estimating Solar PV Output Using Modern Space/Time Geostatistics (Presentation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S. J.; George, R.; Bush, B.
2009-04-29
This presentation describes a project that uses mapping techniques to predict solar output at subhourly resolution at any spatial point, to develop a methodology applicable to natural resources in general, and to demonstrate the capability of geostatistical techniques to predict the output of a potential solar plant.
Data-Based Predictive Control with Multirate Prediction Step
NASA Technical Reports Server (NTRS)
Barlow, Jonathan S.
2010-01-01
Data-based predictive control is an emerging control method that stems from Model Predictive Control (MPC). MPC computes the current control action based on a prediction of the system output a number of time steps into the future and is generally derived from a known model of the system. Data-based predictive control has the advantage of deriving predictive models and controller gains from input-output data. Thus, a controller can be designed from the outputs of complex simulation code or a physical system where no explicit model exists. If the output data happen to be corrupted by periodic disturbances, the designed controller will also have the built-in ability to reject these disturbances without those disturbances needing to be known explicitly. When data-based predictive control is implemented online, it becomes a version of adaptive control. One challenge of MPC is that its computational requirements grow with the length of the prediction horizon. This paper develops a closed-loop dynamic output feedback controller that minimizes a multi-step-ahead receding-horizon cost function with a multirate prediction step. One result is a reduced influence of the prediction horizon and the number of system outputs on the computational requirements of the controller. Another result is an emphasis on portions of the prediction window that are sampled more frequently. A third result is the ability to include more outputs in the feedback path than in the cost function.
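The core idea above, identifying a predictor directly from input-output data and then minimizing a receding-horizon cost, can be sketched in a few lines. This is a minimal one-step illustration with an assumed first-order system and a quadratic cost, not the paper's multirate formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate input-output data from an "unknown" first-order system:
#   y[k] = 0.8*y[k-1] + 0.5*u[k-1] + noise   (truth, hidden from the designer)
N = 500
u = rng.standard_normal(N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = 0.8 * y[k - 1] + 0.5 * u[k - 1] + 0.01 * rng.standard_normal()

# Identify an ARX one-step predictor directly from the data (no explicit model)
Phi = np.column_stack([y[:-1], u[:-1]])          # regressor matrix
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta

# One-step receding-horizon control: minimize (y[k+1] - r)^2 + lam*u^2.
# For this scalar case the minimizer has the closed form below.
def control(yk, r, lam=0.01):
    return b_hat * (r - a_hat * yk) / (b_hat**2 + lam)
```

Running the controller against the true dynamics drives the output close to the reference, which is the behavior a data-based predictive controller should recover without ever seeing the model.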
Optimal input selection for neural machine interfaces predicting multiple non-explicit outputs.
Krepkovich, Eileen T; Perreault, Eric J
2008-01-01
This study implemented a novel algorithm that optimally selects inputs for neural machine interface (NMI) devices intended to control multiple outputs and evaluated its performance on systems lacking explicit output. NMIs often incorporate signals from multiple physiological sources and provide predictions for multidimensional control, leading to multiple-input multiple-output systems. Further, NMIs often are used with subjects who have motor disabilities and thus lack explicit motor outputs. Our algorithm was tested on simulated multiple-input multiple-output systems and on electromyogram and kinematic data collected from healthy subjects performing arm reaches. Effects of output noise in simulated systems indicated that the algorithm could be useful for systems with poor estimates of the output states, as is true for systems lacking explicit motor output. To test efficacy on physiological data, selection was performed using inputs from one subject and outputs from a different subject. Selection was effective for these cases, again indicating that this algorithm will be useful for predictions where there is no motor output, as often is the case for disabled subjects. Further, prediction results generalized for different movement types not used for estimation. These results demonstrate the efficacy of this algorithm for the development of neural machine interfaces.
Dynamic Simulation of Human Gait Model With Predictive Capability.
Sun, Jinming; Wu, Shaoli; Voglewede, Philip A
2018-03-01
In this paper, it is proposed that the central nervous system (CNS) controls human gait using a predictive control approach in conjunction with classical feedback control, rather than exclusively classical feedback control, which acts on past error. To validate this proposition, a dynamic model of human gait is developed using a novel predictive approach to investigate the principles of the CNS. The model developed includes two parts: a plant model that represents the dynamics of human gait and a controller that represents the CNS. The plant model is a seven-segment, six-joint model that has nine degrees-of-freedom (DOF). The plant model is validated using data collected from able-bodied human subjects. The proposed controller utilizes model predictive control (MPC). MPC uses an internal model to predict the output in advance, compare the predicted output to the reference, and optimize the control input so that the predicted error is minimal. To decrease the complexity of the model, two joints are controlled using a proportional-derivative (PD) controller. The developed predictive human gait model is validated by simulating able-bodied human gait. The simulation results show that the developed model is able to simulate kinematic output that closely matches the experimental data.
Electric Vehicles Charging Scheduling Strategy Considering the Uncertainty of Photovoltaic Output
NASA Astrophysics Data System (ADS)
Wei, Xiangxiang; Su, Su; Yue, Yunli; Wang, Wei; He, Luobin; Li, Hao; Ota, Yutaka
2017-05-01
The rapid development of electric vehicles (EVs) and distributed generation brings new challenges to the secure and economic operation of the power system, so collaborative research on EVs and distributed generation has important significance for the distribution network. Against this background, an EVs charging scheduling strategy considering the uncertainty of photovoltaic (PV) output is proposed. The characteristics of EVs charging are analysed first, and a PV output prediction method based on a PV database is then proposed. On this basis, an EVs charging scheduling strategy is proposed with the goal of satisfying EV users' charging willingness and decreasing the power loss in the distribution network. The case study shows that the proposed PV output prediction method can predict the PV output accurately and that the EVs charging scheduling strategy can reduce the power loss and stabilize the fluctuation of the load in the distribution network.
Gaussian functional regression for output prediction: Model assimilation and experimental design
NASA Astrophysics Data System (ADS)
Nguyen, N. C.; Peraire, J.
2016-03-01
In this paper, we introduce a Gaussian functional regression (GFR) technique that integrates multi-fidelity models with model reduction to efficiently predict the input-output relationship of a high-fidelity model. The GFR method combines the high-fidelity model with a low-fidelity model to provide an estimate of the output of the high-fidelity model in the form of a posterior distribution that can characterize uncertainty in the prediction. A reduced basis approximation is constructed upon the low-fidelity model and incorporated into the GFR method to yield an inexpensive posterior distribution of the output estimate. As this posterior distribution depends crucially on a set of training inputs at which the high-fidelity models are simulated, we develop a greedy sampling algorithm to select the training inputs. Our approach results in an output prediction model that inherits the fidelity of the high-fidelity model and has the computational complexity of the reduced basis approximation. Numerical results are presented to demonstrate the proposed approach.
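The central construction above, correcting a cheap low-fidelity model with a Gaussian-process posterior trained on a few expensive high-fidelity runs, can be sketched as follows. The two models and the kernel parameters here are illustrative assumptions, not those of the paper:

```python
import numpy as np

def rbf(X1, X2, ell=0.3, s=1.0):
    """Squared-exponential covariance between two 1-D input sets."""
    d = X1[:, None] - X2[None, :]
    return s**2 * np.exp(-0.5 * (d / ell) ** 2)

# Hypothetical models (assumptions for illustration):
f_hi = lambda x: np.sin(8 * x) + x          # expensive "high-fidelity" model
f_lo = lambda x: np.sin(8 * x)              # cheap "low-fidelity" model

# A few expensive high-fidelity runs at training inputs
Xt = np.linspace(0, 1, 8)
residual = f_hi(Xt) - f_lo(Xt)              # discrepancy the GP must learn

# GP posterior on the discrepancy, then add the low-fidelity prediction back:
# the posterior mean is the prediction, the variance quantifies uncertainty.
Xs = np.linspace(0, 1, 50)
K = rbf(Xt, Xt) + 1e-8 * np.eye(len(Xt))    # jitter for numerical stability
Ks = rbf(Xs, Xt)
alpha = np.linalg.solve(K, residual)
mean = f_lo(Xs) + Ks @ alpha                # posterior mean prediction
var = np.diag(rbf(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T))  # predictive variance
```

The greedy training-input selection in the paper would choose where to add the next high-fidelity run, e.g. at the input with the largest predictive variance.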
NASA Astrophysics Data System (ADS)
Kim, Jae-Chang; Moon, Sung-Ki; Kwak, Sangshin
2018-04-01
This paper presents a direct model-based predictive control scheme for voltage source inverters (VSIs) with reduced common-mode voltages (CMVs). The developed method directly finds optimal vectors without using repetitive calculation of a cost function. To adjust output currents with the CMVs in the range of -Vdc/6 to +Vdc/6, the developed method uses voltage vectors, as finite control resources, excluding zero voltage vectors which produce the CMVs in the VSI within ±Vdc/2. In a model-based predictive control (MPC), not using zero voltage vectors increases the output current ripples and the current errors. To alleviate these problems, the developed method uses two non-zero voltage vectors in one sampling step. In addition, the voltage vectors scheduled to be used are directly selected at every sampling step once the developed method calculates the future reference voltage vector, saving the efforts of repeatedly calculating the cost function. And the two non-zero voltage vectors are optimally allocated to make the output current approach the reference current as close as possible. Thus, low CMV, rapid current-following capability and sufficient output current ripple performance are attained by the developed method. The results of a simulation and an experiment verify the effectiveness of the developed method.
Reduced order models for assessing CO2 impacts in shallow unconfined aquifers
Keating, Elizabeth H.; Harp, Dylan H.; Dai, Zhenxue; ...
2016-01-28
Risk assessment studies of potential CO2 sequestration projects consider many factors, including the possibility of brine and/or CO2 leakage from the storage reservoir. Detailed multiphase reactive transport simulations have been developed to predict the impact of such leaks on shallow groundwater quality; however, these simulations are computationally expensive and thus difficult to directly embed in a probabilistic risk assessment analysis. Here we present a process for developing computationally fast reduced-order models which emulate key features of the more detailed reactive transport simulations. A large ensemble of simulations that take into account uncertainty in aquifer characteristics and CO2/brine leakage scenarios were performed. Twelve simulation outputs of interest were used to develop response surfaces (RSs) using a MARS (multivariate adaptive regression splines) algorithm (Milborrow, 2015). A key part of this study is to compare different measures of ROM accuracy. We then show that for some computed outputs, MARS performs very well in matching the simulation data. The capability of the RS to predict simulation outputs for parameter combinations not used in RS development was tested using cross-validation. Again, for some outputs, these results were quite good. For other outputs, however, the method performs relatively poorly. Performance was best for predicting the volume of depressed-pH plumes, and was relatively poor for predicting organic and trace metal plume volumes. We believe several factors, including the non-linearity of the problem, complexity of the geochemistry, and granularity in the simulation results, contribute to this varied performance. The reduced order models were developed principally to be used in probabilistic performance analysis where a large range of scenarios are considered and ensemble performance is calculated. We demonstrate that they effectively predict the ensemble behavior.
However, the performance of the RSs is much less accurate when used to predict time-varying outputs from a single simulation. If an analysis requires only a small number of scenarios to be investigated, computationally expensive physics-based simulations would likely provide more reliable results. Finally, if the aggregate behavior of a large number of realizations is the focus, as will be the case in probabilistic quantitative risk assessment, the methodology presented here is relatively robust.
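The ROM workflow described above — fit a fast emulator to an ensemble of simulator runs, then cross-validate it on parameter combinations not used in its construction — can be sketched as below. A gradient-boosting regressor from scikit-learn stands in for the MARS algorithm, and the "simulator" is a synthetic function; both are assumptions for illustration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Stand-in "simulation ensemble": rows are sampled aquifer/leak parameters,
# the output is one scalar quantity of interest (e.g. a plume volume).
X = rng.uniform(0, 1, size=(300, 4))
y = np.sin(3 * X[:, 0]) + X[:, 1] * X[:, 2] + 0.05 * rng.standard_normal(300)

# Fit a fast reduced-order emulator of the expensive simulator
rom = GradientBoostingRegressor(random_state=0).fit(X, y)

# Cross-validation measures how well the ROM predicts parameter
# combinations not used in its development
scores = cross_val_score(GradientBoostingRegressor(random_state=0),
                         X, y, cv=5, scoring="r2")
```

As in the paper, a low cross-validated score for a particular output would flag that output as poorly emulated, so the full physics-based simulation should be used for it instead.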
A user-friendly model for spray drying to aid pharmaceutical product development.
Grasmeijer, Niels; de Waard, Hans; Hinrichs, Wouter L J; Frijlink, Henderik W
2013-01-01
The aim of this study was to develop a user-friendly model for spray drying that can aid in the development of a pharmaceutical product, by shifting from a trial-and-error towards a quality-by-design approach. To achieve this, a spray dryer model was developed in commercial and open source spreadsheet software. The output of the model was first fitted to the experimental output of a Büchi B-290 spray dryer and subsequently validated. The predicted outlet temperatures of the spray dryer model matched the experimental values very well over the entire range of spray dryer settings that were tested. Finally, the model was applied to produce glassy sugars by spray drying, an often used excipient in formulations of biopharmaceuticals. For the production of glassy sugars, the model was extended to predict the relative humidity at the outlet, which is not measured in the spray dryer by default. This extended model was then successfully used to predict whether specific settings were suitable for producing glassy trehalose and inulin by spray drying. In conclusion, a spray dryer model was developed that is able to predict the output parameters of the spray drying process. The model can aid the development of spray dried pharmaceutical products by shifting from a trial-and-error towards a quality-by-design approach.
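The heart of such a spreadsheet model is an energy balance over the drying air. Below is a minimal sketch, assuming all enthalpy lost by the air goes into evaporating the feed water; the numbers are illustrative, not Büchi B-290 specifications:

```python
# Minimal energy-balance sketch for spray-dryer outlet temperature.
# All values are illustrative assumptions for a generic lab-scale dryer.
cp_air = 1.01e3      # specific heat of drying air, J/(kg K)
dH_vap = 2.26e6      # latent heat of water evaporation, J/kg

def outlet_temperature(T_in, m_air, m_water):
    """Predict outlet temperature (deg C), assuming the enthalpy lost by the
    drying air equals the heat consumed by evaporating the feed water."""
    return T_in - (m_water * dH_vap) / (m_air * cp_air)

# Example: 150 C inlet, 0.01 kg/s drying air, 2e-4 kg/s water evaporated
T_out = outlet_temperature(150.0, 0.01, 2e-4)
```

A quality-by-design use of such a relation is to check, before a run, whether the predicted outlet conditions stay below the glass transition temperature of the sugar being dried.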
An analytical framework to assist decision makers in the use of forest ecosystem model predictions
Larocque, Guy R.; Bhatti, Jagtar S.; Ascough, J.C.; Liu, J.; Luckai, N.; Mailly, D.; Archambault, L.; Gordon, Andrew M.
2011-01-01
The predictions from most forest ecosystem models originate from deterministic simulations. However, few evaluation exercises for model outputs are performed by either model developers or users. This issue has important consequences for decision makers using these models to develop natural resource management policies, as they cannot evaluate the extent to which predictions stemming from the simulation of alternative management scenarios may result in significant environmental or economic differences. Various numerical methods, such as sensitivity/uncertainty analyses, or bootstrap methods, may be used to evaluate models and the errors associated with their outputs. However, the application of each of these methods carries unique challenges which decision makers do not necessarily understand; guidance is required when interpreting the output generated from each model. This paper proposes a decision flow chart in the form of an analytical framework to help decision makers apply, in an orderly fashion, different steps involved in examining the model outputs. The analytical framework is discussed with regard to the definition of problems and objectives and includes the following topics: model selection, identification of alternatives, modelling tasks and selecting alternatives for developing policy or implementing management scenarios. Its application is illustrated using an on-going exercise in developing silvicultural guidelines for a forest management enterprise in Ontario, Canada.
Interval Predictor Models with a Formal Characterization of Uncertainty and Reliability
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2014-01-01
This paper develops techniques for constructing empirical predictor models based on observations. By contrast to standard models, which yield a single predicted output at each value of the model's inputs, Interval Predictor Models (IPM) yield an interval into which the unobserved output is predicted to fall. The IPMs proposed prescribe the output as an interval-valued function of the model's inputs, and render a formal description of both the uncertainty in the model's parameters and of the spread in the predicted output. Uncertainty is prescribed as a hyper-rectangular set in the space of the model's parameters. The propagation of this set through the empirical model yields a range of outputs of minimal spread containing all (or, depending on the formulation, most) of the observations. Optimization-based strategies for calculating IPMs and eliminating the effects of outliers are proposed. Outliers are identified by evaluating the extent to which they degrade the tightness of the prediction. This evaluation can be carried out while the IPM is calculated. When the data satisfy mild stochastic assumptions, and the optimization program used for calculating the IPM is convex (or its solution coincides with the solution to an auxiliary convex program), the model's reliability (that is, the probability that a future observation would fall within the predicted range of outputs) can be bounded rigorously by a non-asymptotic formula.
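For a linear parameterization, an IPM of minimal spread can be computed as a small linear program: minimize the interval half-width subject to every observation lying inside the predicted interval. A sketch with synthetic data (the linear model structure and the data are assumptions for illustration):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 40)
y = 2.0 * x + 1.0 + rng.uniform(-0.2, 0.2, 40)   # data with bounded spread

# Decision variables: [c0, c1, w].
# The IPM predicts y in [c0 + c1*x - w, c0 + c1*x + w]; minimize the
# half-width w subject to every observation lying inside the interval.
A = np.vstack([
    np.column_stack([-np.ones(40), -x, -np.ones(40)]),   # y - c0 - c1*x <= w
    np.column_stack([ np.ones(40),  x, -np.ones(40)]),   # c0 + c1*x - y <= w
])
b = np.concatenate([-y, y])
res = linprog(c=[0, 0, 1], A_ub=A, b_ub=b,
              bounds=[(None, None), (None, None), (0, None)])
c0, c1, w = res.x
```

Because the program is a convex LP, this is exactly the setting in which the paper's non-asymptotic reliability bound applies; outlier handling would relax the containment constraints for the worst-fitting points.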
Nabavi-Pelesaraei, Ashkan; Rafiee, Shahin; Mohtasebi, Seyed Saeid; Hosseinzadeh-Bandbafha, Homa; Chau, Kwok-Wing
2018-08-01
Prediction of agricultural energy output and environmental impacts plays an important role in energy management and conservation of the environment, as it can help us to evaluate agricultural energy efficiency, conduct crop production system commissioning, and detect and diagnose faults in crop production systems. Agricultural energy output and environmental impacts can be readily predicted by artificial intelligence (AI), owing to its ease of use and adaptability in seeking optimal solutions rapidly, as well as its use of historical data to predict future agricultural energy use patterns under constraints. This paper conducts energy output and environmental impact prediction of paddy production in Guilan province, Iran based on two AI methods: artificial neural networks (ANNs) and the adaptive neuro-fuzzy inference system (ANFIS). The amounts of energy input and output are 51,585.61 MJ kg-1 and 66,112.94 MJ kg-1, respectively, in paddy production. Life Cycle Assessment (LCA) is used to evaluate environmental impacts of paddy production. Results show that, in paddy production, in-farm emission is a hotspot in the global warming, acidification and eutrophication impact categories. An ANN model with a 12-6-8-1 structure is selected as the best one for predicting energy output. The correlation coefficient (R) varies from 0.524 to 0.999 in training for energy input and environmental impacts in the ANN models. The ANFIS model is developed based on a hybrid learning algorithm, with R for predicting output energy being 0.860 and, for environmental impacts, varying from 0.944 to 0.997. Results indicate that the multi-level ANFIS is a useful tool for managers in large-scale planning for forecasting energy output and environmental indices of agricultural production systems, owing to its higher speed of computation compared to the ANN model, despite the ANN's higher accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zawisza, I; Yan, H; Yin, F
Purpose: To assure that tumor motion is within the radiation field during high-dose and high-precision radiosurgery, real-time imaging and surrogate monitoring are employed. These methods are useful in providing real-time tumor/surrogate motion, but no future information is available. In order to anticipate future tumor/surrogate motion and track target location precisely, an algorithm is developed and investigated for estimating surrogate motion multiple steps ahead. Methods: The study utilized a one-dimensional surrogate motion signal divided into three components: (a) a training component containing the primary data from the first frame to the beginning of the input subsequence; (b) an input subsequence component of the surrogate signal used as input to the prediction algorithm; (c) an output subsequence component, the remaining signal used as the known output of the prediction algorithm for validation. The prediction algorithm consists of three major steps: (1) extracting subsequences from the training component which best match the input subsequence according to a given criterion; (2) calculating weighting factors from these best-matched subsequences; (3) collecting the parts that follow these subsequences and combining them together with the assigned weighting factors to form the output. The prediction algorithm was examined for several patients, and its performance is assessed based on the correlation between prediction and known output. Results: Respiratory motion data were collected for 20 patients using the RPM system. The output subsequence is the last 50 samples (∼2 seconds) of a surrogate signal, and the input subsequence was the 100 frames (∼3 seconds) prior to the output subsequence. Based on the analysis of the correlation coefficient between predicted and known output subsequences, the average correlation is 0.9644±0.0394 and 0.9789±0.0239 for the equal-weighting and relative-weighting strategies, respectively.
Conclusion: Preliminary results indicate that the prediction algorithm is effective in estimating surrogate motion multiple steps in advance. The relative-weighting method shows better prediction accuracy than the equal-weighting method. More parameters of this algorithm are under investigation.
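The three steps of the algorithm — best-match subsequence search, weighting, and weighted combination of the segments that follow the matches — can be sketched as below, here exercised on a synthetic periodic signal rather than RPM respiratory data:

```python
import numpy as np

def predict_ahead(signal, query, horizon, k=5):
    """Multiple-step-ahead prediction by weighted combination of the
    successors of the k training subsequences best matching `query`."""
    m = len(query)
    # (1) all candidate windows whose successor of length `horizon` exists
    n = len(signal) - m - horizon + 1
    windows = np.array([signal[i:i + m] for i in range(n)])
    dists = np.linalg.norm(windows - query, axis=1)
    best = np.argsort(dists)[:k]
    # (2) inverse-distance ("relative") weighting of the best matches
    wts = 1.0 / (dists[best] + 1e-9)
    wts /= wts.sum()
    # (3) weighted sum of the successor segments forms the prediction
    succ = np.array([signal[i + m:i + m + horizon] for i in best])
    return wts @ succ

t = np.arange(2000)
breathing = np.sin(2 * np.pi * t / 100)            # surrogate-like periodic signal
train, query = breathing[:1800], breathing[1800:1900]
truth = breathing[1900:1950]                       # known output for validation
pred = predict_ahead(train, query, horizon=50)
```

Equal weighting would replace step (2) with uniform weights over the k matches; the abstract's results suggest relative weighting performs slightly better.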
NASA Technical Reports Server (NTRS)
Rockey, D. E.
1979-01-01
A general approach is developed for predicting the power output of a concentrator enhanced photovoltaic space array. A ray trace routine determines the concentrator intensity arriving at each solar cell. An iterative calculation determines the cell's operating temperature since cell temperature and cell efficiency are functions of one another. The end result of the iterative calculation is that the individual cell's power output is determined as a function of temperature and intensity. Circuit output is predicted by combining the individual cell outputs using the single diode model of a solar cell. Concentrated array characteristics such as uniformity of intensity and operating temperature at various points across the array are examined using computer modeling techniques. An illustrative example is given showing how the output of an array can be enhanced using solar concentration techniques.
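The single diode model mentioned above relates cell current to voltage through the photocurrent and the diode equation; sweeping the voltage locates the maximum power point, and circuit output follows by combining cells. The parameter values here are illustrative, not those of the paper's array:

```python
import numpy as np

# Single-diode solar-cell model (illustrative parameters, ideality n = 1,
# series/shunt resistances neglected for brevity):
q_kT = 1 / 0.02585                  # 1/(n*Vt) at roughly 300 K, in 1/V
I_ph, I_0 = 3.0, 1e-9               # photocurrent (A), diode saturation current (A)

def cell_current(v):
    """Cell current from the single-diode equation I = Iph - I0*(exp(V/Vt) - 1)."""
    return I_ph - I_0 * (np.exp(v * q_kT) - 1.0)

# Sweep voltage to find the maximum power point of one cell
v = np.linspace(0, 0.6, 601)
p = v * cell_current(v)
v_mpp, p_mpp = v[np.argmax(p)], p.max()
```

In a concentrator array the photocurrent I_ph scales roughly with the local intensity from the ray trace, while the temperature found by the iterative calculation shifts I_0 and hence the operating voltage.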
Stochastic Short-term High-resolution Prediction of Solar Irradiance and Photovoltaic Power Output
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melin, Alexander M.; Olama, Mohammed M.; Dong, Jin
The increased penetration of solar photovoltaic (PV) energy sources into electric grids has increased the need for accurate modeling and prediction of solar irradiance and power production. Existing modeling and prediction techniques focus on long-term low-resolution prediction over minutes to years. This paper examines the stochastic modeling and short-term high-resolution prediction of solar irradiance and PV power output. We propose a stochastic state-space model to characterize the behaviors of solar irradiance and PV power output. This prediction model is suitable for the development of optimal power controllers for PV sources. A filter-based expectation-maximization and Kalman filtering mechanism is employed to estimate the parameters and states in the state-space model. The mechanism results in a finite dimensional filter which only uses the first and second order statistics. The structure of the scheme contributes to a direct prediction of the solar irradiance and PV power output without any linearization process or simplifying assumptions of the signal's model. This enables the system to accurately predict small as well as large fluctuations of the solar signals. The mechanism is recursive, allowing the solar irradiance and PV power to be predicted online from measurements. The mechanism is tested using solar irradiance and PV power measurement data collected locally in our lab.
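A scalar Kalman filter illustrates the recursive predict/update structure behind this kind of online prediction. The state-space coefficients below are illustrative assumptions, and the sketch omits the paper's expectation-maximization parameter estimation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Scalar state-space stand-in for PV power fluctuations (illustrative values):
a, q, r = 0.95, 0.02, 0.1        # state transition, process var, measurement var
n = 500
x = np.zeros(n); y = np.zeros(n)
for k in range(1, n):
    x[k] = a * x[k - 1] + np.sqrt(q) * rng.standard_normal()
    y[k] = x[k] + np.sqrt(r) * rng.standard_normal()

# Kalman filter: recursive online one-step-ahead prediction from measurements
xhat, P = 0.0, 1.0
pred = np.zeros(n)
for k in range(1, n):
    # predict one step ahead (the quantity a power controller would use)
    xpred, Ppred = a * xhat, a**2 * P + q
    pred[k] = xpred
    # update the estimate with the new measurement
    K = Ppred / (Ppred + r)
    xhat = xpred + K * (y[k] - xpred)
    P = (1 - K) * Ppred

err = pred[50:] - x[50:]         # prediction error after the filter settles
```

Because the recursion uses only first and second order statistics, each new measurement costs a handful of arithmetic operations, which is what makes high-resolution online prediction practical.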
Assessment of the Uniqueness of Wind Tunnel Strain-Gage Balance Load Predictions
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2016-01-01
A new test was developed to assess the uniqueness of wind tunnel strain-gage balance load predictions that are obtained from regression models of calibration data. The test helps balance users to gain confidence in load predictions of non-traditional balance designs. It also makes it possible to better evaluate load predictions of traditional balances that are not used as originally intended. The test works for both the Iterative and Non-Iterative Methods that are used in the aerospace testing community for the prediction of balance loads. It is based on the hypothesis that the total number of independently applied balance load components must always match the total number of independently measured bridge outputs or bridge output combinations. This hypothesis is supported by a control volume analysis of the inputs and outputs of a strain-gage balance. It is concluded from the control volume analysis that the loads and bridge outputs of a balance calibration data set must separately be tested for linear independence because it cannot always be guaranteed that a linearly independent load component set will result in linearly independent bridge output measurements. Simple linear math models for the loads and bridge outputs in combination with the variance inflation factor are used to test for linear independence. A highly unique and reversible mapping between the applied load component set and the measured bridge output set is guaranteed to exist if the maximum variance inflation factor of both sets is less than the literature-recommended threshold of five. Data from the calibration of a six-component force balance is used to illustrate the application of the new test to real-world data.
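The variance inflation factor test described above is straightforward to compute: regress each component on the remaining ones and check that no VIF exceeds the threshold of five. A sketch with synthetic load columns (the data are assumptions for illustration):

```python
import numpy as np

def max_vif(X):
    """Maximum variance inflation factor over the columns of X.
    VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing
    column j on the remaining columns (with an intercept)."""
    vifs = []
    for j in range(X.shape[1]):
        xj = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([others, np.ones(len(xj))])
        beta, *_ = np.linalg.lstsq(A, xj, rcond=None)
        resid = xj - A @ beta
        r2 = 1 - resid.var() / xj.var()
        vifs.append(1.0 / (1.0 - r2))
    return max(vifs)

rng = np.random.default_rng(3)
indep = rng.standard_normal((100, 3))      # linearly independent "load" columns
# third column nearly duplicates the first -> near linear dependence
dep = np.column_stack([indep[:, :2],
                       indep[:, 0] + 0.01 * rng.standard_normal(100)])
```

Applying `max_vif` separately to the load set and the bridge-output set mirrors the paper's point that both sets must pass the threshold for a unique, reversible mapping.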
Dynamic Modeling and Very Short-term Prediction of Wind Power Output Using Box-Cox Transformation
NASA Astrophysics Data System (ADS)
Urata, Kengo; Inoue, Masaki; Murayama, Dai; Adachi, Shuichi
2016-09-01
We propose a statistical modeling method for wind power output for very short-term prediction. The nonlinear model has a cascade structure composed of two parts. One is a linear dynamic part that is driven by a Gaussian white noise and described by an autoregressive model. The other is a nonlinear static part that is driven by the output of the linear part. This nonlinear part is designed for output distribution matching: we shape the distribution of the model output to match that of the wind power output. The constructed model is utilized for one-step-ahead prediction of the wind power output. Furthermore, we study the relation between the prediction accuracy and the prediction horizon.
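A cascade of a least-squares AR(1) fit and a quantile-mapping static nonlinearity gives the flavor of this structure. The synthetic "power" series and the simple empirical mapping below are assumptions for illustration, not the paper's Box-Cox formulation:

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in wind-power series: an AR(1) Gaussian process squashed so the
# marginal distribution is non-Gaussian and bounded, like normalized power.
g = np.zeros(3000)
for k in range(1, 3000):
    g[k] = 0.9 * g[k - 1] + rng.standard_normal()
power = 1 / (1 + np.exp(-g))                 # observed output, in (0, 1)

# Static part: quantile mapping between the Gaussian model output and the
# observed power distribution (output distribution matching)
sorted_g = np.sort(g)
sorted_power = np.sort(power)
def to_power(z):
    rank = np.searchsorted(sorted_g, z) / len(sorted_g)
    return sorted_power[min(int(rank * len(sorted_power)), len(sorted_power) - 1)]

# Linear dynamic part: AR(1) coefficient by least squares on the Gaussian series
a = (g[:-1] @ g[1:]) / (g[:-1] @ g[:-1])

# One-step-ahead prediction: propagate the linear model, then pass the result
# through the static nonlinearity
pred = to_power(a * g[-1])
```

The static map guarantees the predicted values follow the observed power distribution, while the AR part alone carries the temporal dynamics.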
NASA Technical Reports Server (NTRS)
Ko, William L.; Fleischer, Van Tran
2011-01-01
The Ko displacement theory originally developed for shape predictions of straight beams is extended to shape predictions of curved beams. The surface strains needed for shape predictions were analytically generated from finite-element nodal stress outputs. With the aid of finite-element displacement outputs, mathematical functional forms for curvature-effect correction terms are established and incorporated into straight-beam deflection equations for shape predictions of both cantilever and two-point supported curved beams. The newly established deflection equations for cantilever curved beams could provide quite accurate shape predictions for different cantilever curved beams, including the quarter-circle cantilever beam. Furthermore, the newly formulated deflection equations for two-point supported curved beams could provide accurate shape predictions for a range of two-point supported curved beams, including the full-circular ring. Accuracy of the newly developed curved-beam deflection equations is validated through shape prediction analysis of curved beams embedded in the windward shallow spherical shell of a generic crew exploration vehicle. A single-point collocation method for optimization of shape predictions is discussed in detail.
An application of quantile random forests for predictive mapping of forest attributes
E.A. Freeman; G.G. Moisen
2015-01-01
Increasingly, random forest models are used in predictive mapping of forest attributes. Traditional random forests output the mean prediction from the random trees. Quantile regression forests (QRF) is an extension of random forests developed by Nicolai Meinshausen that provides non-parametric estimates of the median predicted value as well as prediction quantiles. It...
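Since the excerpt mentions quantile regression forests, here is one common way to approximate their behavior with an ordinary random forest: pool the training targets that share a leaf with the query point across all trees, then take empirical quantiles. This leaf-pooling shortcut and the synthetic data are assumptions for illustration — Meinshausen's QRF proper uses leaf-frequency weights:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
X = rng.uniform(0, 1, (500, 1))
y = X[:, 0] + rng.normal(0, 0.1, 500)        # stand-in "forest attribute"

rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

def predict_quantiles(x, qs=(0.05, 0.5, 0.95)):
    """QRF-style quantiles: pool the training targets falling in the same
    leaf as x, tree by tree, then take empirical quantiles of the pool."""
    train_leaves = rf.apply(X)                 # (n_samples, n_trees)
    x_leaves = rf.apply(x.reshape(1, -1))[0]   # leaf index of x in each tree
    pooled = np.concatenate([y[train_leaves[:, t] == x_leaves[t]]
                             for t in range(len(rf.estimators_))])
    return np.quantile(pooled, qs)

lo, med, hi = predict_quantiles(np.array([0.5]))
```

In predictive mapping, the spread `hi - lo` can be mapped alongside the median to show where the model's attribute predictions are least certain.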
An analytical framework to assist decision makers in the use of forest ecosystem model predictions
USDA-ARS?s Scientific Manuscript database
The predictions of most terrestrial ecosystem models originate from deterministic simulations. Relatively few uncertainty evaluation exercises in model outputs are performed by either model developers or users. This issue has important consequences for decision makers who rely on models to develop n...
Uncertainties in predicting solar panel power output
NASA Technical Reports Server (NTRS)
Anspaugh, B.
1974-01-01
The problem of calculating solar panel power output at launch and during a space mission is considered. The major sources of uncertainty and error in predicting the post launch electrical performance of the panel are considered. A general discussion of error analysis is given. Examples of uncertainty calculations are included. A general method of calculating the effect on the panel of various degrading environments is presented, with references supplied for specific methods. A technique for sizing a solar panel for a required mission power profile is developed.
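A standard element of such an error analysis is root-sum-square combination of independent fractional uncertainties in the predicted panel power. The contribution values below are purely illustrative, not the report's numbers:

```python
import math

# Root-sum-square combination of independent fractional (1-sigma)
# uncertainties in predicted panel power (illustrative contributions):
uncertainties = {
    "cell measurement": 0.02,
    "radiation degradation": 0.03,
    "temperature model": 0.015,
    "UV/micrometeoroid losses": 0.01,
}
total = math.sqrt(sum(u**2 for u in uncertainties.values()))
```

For panel sizing, the combined fraction would be applied as a margin on the end-of-life power so that the mission power profile is met at the chosen confidence level.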
Jet Measurements for Development of Jet Noise Prediction Tools
NASA Technical Reports Server (NTRS)
Bridges, James E.
2006-01-01
The primary focus of my presentation is the development of the jet noise prediction code JeNo with most examples coming from the experimental work that drove the theoretical development and validation. JeNo is a statistical jet noise prediction code, based upon the Lilley acoustic analogy. Our approach uses time-average 2-D or 3-D mean and turbulent statistics of the flow as input. The output is source distributions and spectral directivity.
Holbek, Bo Laksáfoss; Petersen, René Horsleben; Kehlet, Henrik
2017-01-01
The objective of this study was to evaluate the potential of predicting the pleural fluid output in patients after video-assisted thoracoscopic lobectomy of the lung. Detailed measurements of continuous fluid output were obtained prospectively using an electronic thoracic drainage device (Thopaz+™, Medela AG, Switzerland). Patients were divided into high (≥500 mL) and low (<500 mL) 24-hour fluid output, and detailed flow curves were plotted graphically to identify arithmetic patterns predicting fluid output in the early (≤24 hours) and later (24–48 hours) post-operative phase. Furthermore, multiple logistic regression analysis was used to predict high 24-hour fluid output using baseline data. Data were obtained from 50 patients, where 52% had a fluid output of <500 mL/24 hours. From visual assessment of flow curves, patients were grouped according to fluid output 6 hours postoperatively. An output ≥200 mL/6 hours was predictive of ‘high 24-hour fluid output’ (P<0.0001). However, 33% of patients with <200 mL/6 hours ended with a ‘high 24-hour fluid output’. Baseline data showed no predictive value of fluid production, and 24-hour fluid output had no predictive value of fluid output between 24 and 48 hours. Assessment of initial fluid production may predict high 24-hour fluid output (≥500 mL) but seems to lack clinical value in drain removal criteria. PMID:28840021
Application of Interval Predictor Models to Space Radiation Shielding
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.; Norman, Ryan B.; Blattnig, Steve R.
2016-01-01
This paper develops techniques for predicting the uncertainty range of an output variable given input-output data. These models are called Interval Predictor Models (IPM) because they yield an interval-valued function of the input. This paper develops IPMs having a radial basis structure. This structure enables the formal description of (i) the uncertainty in the model's parameters, (ii) the predicted output interval, and (iii) the probability that a future observation would fall in such an interval. In contrast to other metamodeling techniques, this probabilistic certificate of correctness does not require making any assumptions on the structure of the mechanism from which data are drawn. Optimization-based strategies for calculating IPMs having minimal spread while containing all the data are developed. Constraints for bounding the minimum interval spread over the continuum of inputs, regulating the IPM's variation/oscillation, and centering its spread about a target point are used to prevent data overfitting. Furthermore, we develop an approach for using expert opinion during extrapolation. This metamodeling technique is illustrated using a radiation shielding application for space exploration. In this application, we use IPMs to describe the error incurred in predicting the flux of particles resulting from the interaction between a high-energy incident beam and a target.
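The minimal-spread idea can be sketched as a small linear program: fix the interval to a constant half-width w around a radial-basis fit and minimise w subject to every data point lying inside the band. This is a simplification of the paper's IPMs (which allow input-dependent spread and give probabilistic certificates); the Gaussian-basis choice and all names below are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def fit_interval_model(x, y, centers, width=1.0):
    """Fixed-spread interval predictor: y(x) in [f(x) - w, f(x) + w], where
    f is a linear combination of Gaussian radial basis functions.  The
    half-spread w is minimised subject to containing every data point,
    which is a linear program."""
    def features(xq):
        phi = np.exp(-((xq[:, None] - centers[None, :]) / width) ** 2)
        return np.hstack([phi, np.ones((len(xq), 1))])   # RBFs + bias term
    phi = features(x)
    m = phi.shape[1]
    cost = np.zeros(m + 1); cost[-1] = 1.0               # minimise w only
    ones = np.ones((len(x), 1))
    A_ub = np.vstack([np.hstack([phi, -ones]),            # f(x_i) - y_i <= w
                      np.hstack([-phi, -ones])])          # y_i - f(x_i) <= w
    b_ub = np.concatenate([y, -y])
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * m + [(0, None)])
    coef, w = res.x[:m], res.x[-1]
    return (lambda xq: features(xq) @ coef), w
```

By construction every observation lies inside the returned band, which mirrors the paper's "minimal spread while containing all the data" requirement.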
Active optimal control strategies for increasing the efficiency of photovoltaic cells
NASA Astrophysics Data System (ADS)
Aljoaba, Sharif Zidan Ahmad
Energy consumption has increased drastically during the last century. Currently, worldwide energy consumption is about 17.4 TW and is predicted to reach 25 TW by 2035. Solar energy has emerged as one of the potential renewable energy sources. Since its first physical recognition by Adams and Day in 1887, research in solar energy has developed continuously, leading to many achievements and milestones that have established it as one of the most reliable and sustainable energy sources. Recently, the International Energy Agency declared that solar energy is predicted to be one of the major electricity production energy sources by 2035. Enhancing the efficiency and lifecycle of photovoltaic (PV) modules leads to significant cost reduction. Reducing the temperature of the PV module improves its efficiency and enhances its lifecycle. To better understand PV module performance, it is important to study the interaction between the output power and the temperature. A model that is capable of predicting the PV module temperature and its effects on the output power, considering the individual contribution of the solar spectrum wavelengths, significantly advances PV module designs toward higher efficiency. In this work, a thermoelectrical model is developed to predict the effects of the solar spectrum wavelengths on PV module performance. The model is characterized and validated under real meteorological conditions, where measured PV module temperature and output power are shown to agree with the predicted results. The model is used to validate the concept of active optical filtering. Since this model is wavelength-based, it is used to design an active optical filter for PV applications. Applying this filter to the PV module is expected to increase the output power of the module by filtering the spectrum wavelengths.
The active filter performance is optimized, where different cutoff wavelengths are used to maximize the module output power. If the optimized active optical filter is applied to the PV module, the module efficiency is predicted to increase by about 1%. Different technologies are considered for physical implementation of the active optical filter.
Application of a neural network as a potential aid in predicting NTF pump failure
NASA Technical Reports Server (NTRS)
Rogers, James L.; Hill, Jeffrey S.; Lamarsh, William J., II; Bradley, David E.
1993-01-01
The National Transonic Facility has three centrifugal multi-stage pumps to supply liquid nitrogen to the wind tunnel. Pump reliability is critical to facility operation and test capability. A highly desirable goal is to be able to detect a pump rotating component problem as early as possible during normal operation and avoid serious damage to other pump components. If a problem is detected before serious damage occurs, the repair cost and downtime could be reduced significantly. A neural network-based tool was developed for monitoring pump performance and aiding in predicting pump failure. Once trained, neural networks can rapidly process many combinations of input values other than those used for training to approximate previously unknown output values. This neural network was applied to establish relationships among the critical frequencies and aid in predicting failures. Training pairs were developed from frequency scans from typical tunnel operations. After training, various combinations of critical pump frequencies were propagated through the neural network. The approximated output was used to create a contour plot depicting the relationships of the input frequencies to the output pump frequency.
Multi input single output model predictive control of non-linear bio-polymerization process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arumugasamy, Senthil Kumar; Ahmad, Z.
This paper focuses on Multi Input Single Output (MISO) Model Predictive Control of a bio-polymerization process, in which a mechanistic model is developed and linked with a feedforward neural network model to obtain a hybrid model (Mechanistic-FANN) of lipase-catalyzed ring-opening polymerization of ε-caprolactone (ε-CL) for poly(ε-caprolactone) production. In this research, a state-space model was used, in which the inputs to the model were the reactor temperatures and reactor impeller speeds, and the outputs were the molecular weight of the polymer (Mn) and the polymer polydispersity index. The state-space model for the MISO system was created using the System Identification Toolbox of Matlab™ and is used in the MISO MPC. Model predictive control (MPC) has been applied to predict the molecular weight of the biopolymer and consequently control it. The results show that the MPC is able to track the reference trajectory and give optimum movement of the manipulated variables.
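The receding-horizon computation at the heart of such a controller can be sketched as an unconstrained linear state-space MPC: stack the output predictions over the horizon and solve the tracking least-squares problem in closed form. The bio-polymerization model, constraints, and the hybrid Mechanistic-FANN component are beyond this sketch, and the toy system in the test is invented.

```python
import numpy as np

def mpc_step(A, B, C, x0, r, horizon=10, lam=0.01):
    """One receding-horizon step of unconstrained linear MPC for a
    state-space model x+ = A x + B u, y = C x.  Stacks the predictions
    Y = F x0 + Phi U over the horizon and solves
    min_U ||Y - r||^2 + lam ||U||^2 in closed form."""
    n, m = B.shape
    p = C.shape[0]
    F = np.vstack([C @ np.linalg.matrix_power(A, k + 1) for k in range(horizon)])
    Phi = np.zeros((horizon * p, horizon * m))
    for i in range(horizon):            # y_{i+1} depends on u_0 .. u_i
        for j in range(i + 1):
            Phi[i*p:(i+1)*p, j*m:(j+1)*m] = C @ np.linalg.matrix_power(A, i - j) @ B
    ref = np.tile(r, horizon)
    U = np.linalg.solve(Phi.T @ Phi + lam * np.eye(horizon * m),
                        Phi.T @ (ref - F @ x0))
    return U[:m]                        # apply only the first move
```

Re-solving at every sampling instant with the newly measured state gives the receding-horizon behaviour.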
NASA Technical Reports Server (NTRS)
Egolf, T. Alan; Anderson, Olof L.; Edwards, David E.; Landgrebe, Anton J.
1988-01-01
A user's manual is presented for the computer program developed for the prediction of propeller-nacelle aerodynamic performance reported in 'An Analysis for High Speed Propeller-Nacelle Aerodynamic Performance Prediction: Volume 1 -- Theory and Application'. The manual describes the computer program's mode of operation, requirements, input structure, input data requirements, and the program output. In addition, it provides the user with documentation of the internal program structure and the software used in the computer program as it relates to the theory presented in Volume 1. Sample input data setups are provided along with selected printout of the program output for one of the sample setups.
Peak expiratory flow profiles delivered by pump systems. Limitations due to wave action.
Miller, M R; Jones, B; Xu, Y; Pedersen, O F; Quanjer, P H
2000-06-01
Pump systems are currently used to test the performance of both spirometers and peak expiratory flow (PEF) meters, but for certain flow profiles the input signal (i.e., requested profile) and the output profile can differ. We developed a mathematical model of wave action within a pump and compared the recorded flow profiles with both the input profiles and the output predicted by the model. Three American Thoracic Society (ATS) flow profiles and four artificial flow-versus-time profiles were delivered by a pump, first to a pneumotachograph (PT) on its own, then to the PT with a 32-cm upstream extension tube (which would favor wave action), and lastly with the PT in series with and immediately downstream to a mini-Wright peak flow meter. With the PT on its own, recorded flow for the seven profiles was 2.4 +/- 1.9% (mean +/- SD) higher than the pump's input flow, and similarly was 2.3 +/- 2.3% higher than the pump's output flow as predicted by the model. With the extension tube in place, the recorded flow was 6.6 +/- 6.4% higher than the input flow (range: 0.1 to 18.4%), but was only 1.2 +/- 2.5% higher than the output flow predicted by the model (range: -0.8 to 5.2%). With the mini-Wright meter in series, the flow recorded by the PT was on average 6.1 +/- 9.1% below the input flow (range: -23.8 to 2.5%), but was only 0.6 +/- 3.3% above the pump's output flow predicted by the model (range: -5.5 to 3.9%). The mini-Wright meter's reading (corrected for its nonlinearity) was on average 1.3 +/- 3.6% below the model's predicted output flow (range: -9.0 to 1.5%). The mini-Wright meter would be deemed outside ATS limits for accuracy for three of the seven profiles when compared with the pump's input PEF, but this would be true for only one profile when compared with the pump's output PEF as predicted by the model. Our study shows that the output flow from pump systems can differ from the input waveform depending on the operating configuration.
This effect can be predicted with reasonable accuracy using a model based on nonsteady flow analysis that takes account of pressure wave reflections within pump systems.
Roche, Daniel B; Buenavista, Maria T; Tetchner, Stuart J; McGuffin, Liam J
2011-07-01
The IntFOLD server is a novel independent server that integrates several cutting edge methods for the prediction of structure and function from sequence. Our guiding principles behind the server development were as follows: (i) to provide a simple unified resource that makes our prediction software accessible to all and (ii) to produce integrated output for predictions that can be easily interpreted. The output for predictions is presented as a simple table that summarizes all results graphically via plots and annotated 3D models. The raw machine readable data files for each set of predictions are also provided for developers, which comply with the Critical Assessment of Methods for Protein Structure Prediction (CASP) data standards. The server comprises an integrated suite of five novel methods: nFOLD4, for tertiary structure prediction; ModFOLD 3.0, for model quality assessment; DISOclust 2.0, for disorder prediction; DomFOLD 2.0, for domain prediction; and FunFOLD 1.0, for ligand binding site prediction. Predictions from the IntFOLD server were found to be competitive in several categories in the recent CASP9 experiment. The IntFOLD server is available at the following web site: http://www.reading.ac.uk/bioinf/IntFOLD/.
Cunha, B C N; Belk, K E; Scanga, J A; LeValley, S B; Tatum, J D; Smith, G C
2004-07-01
This study was performed to validate previous equations and to develop and evaluate new regression equations for predicting lamb carcass fabrication yields using outputs from a lamb vision system-hot carcass component (LVS-HCC) and the lamb vision system-chilled carcass LM imaging component (LVS-CCC). Lamb carcasses (n = 149) were selected after slaughter, imaged hot using the LVS-HCC, and chilled for 24 to 48 h at -3 to 1 degrees C. Chilled carcass yield grades (YG) were assigned on-line by USDA graders and by expert USDA grading supervisors with unlimited time and access to the carcasses. Before fabrication, carcasses were ribbed between the 12th and 13th ribs and imaged using the LVS-CCC. Carcasses were fabricated into bone-in subprimal/primal cuts. Yields calculated included 1) saleable meat yield (SMY); 2) subprimal yield (SPY); and 3) fat yield (FY). On-line (whole-number) USDA YG accounted for 59, 58, and 64%; expert (whole-number) USDA YG explained 59, 59, and 65%; and expert (nearest-tenth) USDA YG accounted for 60, 60, and 67% of the observed variation in SMY, SPY, and FY, respectively. The best prediction equation developed in this trial using LVS-HCC output and hot carcass weight as independent variables explained 68, 62, and 74% of the variation in SMY, SPY, and FY, respectively. Addition of output from LVS-CCC improved predictive accuracy of the equations; the combined output equations explained 72 and 66% of the variability in SMY and SPY, respectively. Accuracy and repeatability of measurement of LM area made with the LVS-CCC also were assessed, and results suggested that use of LVS-CCC provided reasonably accurate (R2 = 0.59) and highly repeatable (repeatability = 0.98) measurements of LM area.
Compared with USDA YG, use of the dual-component lamb vision system to predict cut yields of lamb carcasses improved accuracy and precision, suggesting that this system could have an application as an objective means for pricing carcasses in a value-based marketing system.
An application of hybrid downscaling model to forecast summer precipitation at stations in China
NASA Astrophysics Data System (ADS)
Liu, Ying; Fan, Ke
2014-06-01
A pattern prediction hybrid downscaling method was applied to predict summer (June-July-August) precipitation at 160 stations in China. The predicted precipitation from the downscaling scheme is available one month in advance. Four predictors were chosen to establish the hybrid downscaling scheme. The 500-hPa geopotential height (GH5) and 850-hPa specific humidity (q85) were from the skillful predicted output of three DEMETER (Development of a European Multi-model Ensemble System for Seasonal to Interannual Prediction) general circulation models (GCMs). The 700-hPa geopotential height (GH7) and sea level pressure (SLP) were from reanalysis datasets. Although both downscaling schemes improved the seasonal prediction of summer rainfall in comparison with the original output of the DEMETER GCMs, the hybrid downscaling scheme (HD-4P) had better prediction skill than a conventional statistical downscaling model (SD-2P), which contains two predictors derived from the output of GCMs. In particular, HD-4P downscaling predictions showed lower root mean square errors than those based on the SD-2P model, and the HD-4P downscaling model reproduced the anomaly centers of China's summer precipitation in 1998 more accurately than the SD-2P model. A hybrid downscaling prediction should therefore be effective in improving the prediction skill of summer rainfall at stations in China.
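The statistical core of such a scheme, regressing station observations on large-scale predictor fields over a training period and then applying the fitted relation to new model output, can be sketched as ordinary least squares. This omits the pattern-prediction and predictor-screening steps of the actual HD-4P scheme; the synthetic data in the test are illustrative.

```python
import numpy as np

def downscale(train_predictors, train_obs, new_predictors):
    """Statistical downscaling by ordinary least squares: regress station
    observations on large-scale predictor values (plus an intercept) over
    a training period, then apply the fit to new predictor values."""
    X = np.hstack([train_predictors, np.ones((len(train_predictors), 1))])
    coef, *_ = np.linalg.lstsq(X, train_obs, rcond=None)
    Xn = np.hstack([new_predictors, np.ones((len(new_predictors), 1))])
    return Xn @ coef
```

In practice each station gets its own regression, and the predictors (GH5, q85, GH7, SLP) would first be reduced, e.g. to principal components.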
Shape Optimization by Bayesian-Validated Computer-Simulation Surrogates
NASA Technical Reports Server (NTRS)
Patera, Anthony T.
1997-01-01
A nonparametric-validated, surrogate approach to optimization has been applied to the computational optimization of eddy-promoter heat exchangers and to the experimental optimization of a multielement airfoil. In addition to the baseline surrogate framework, a surrogate-Pareto framework has been applied to the two-criteria, eddy-promoter design problem. The Pareto analysis improves the predictability of the surrogate results, preserves generality, and provides a means to rapidly determine design trade-offs. Significant contributions have been made in the geometric description used for the eddy-promoter inclusions as well as to the surrogate framework itself. A level-set based, geometric description has been developed to define the shape of the eddy-promoter inclusions. The level-set technique allows for topology changes (from single-body, eddy-promoter configurations to two-body configurations) without requiring any additional logic. The continuity of the output responses for input variations that cross the boundary between topologies has been demonstrated. Input-output continuity is required for the straightforward application of surrogate techniques in which simplified, interpolative models are fitted through a construction set of data. The surrogate framework developed previously has been extended in a number of ways. First, the formulation for a general, two-output, two-performance metric problem is presented. Surrogates are constructed and validated for the outputs. The performance metrics can be functions of both outputs, as well as explicitly of the inputs, and serve to characterize the design preferences. By segregating the outputs and the performance metrics, an additional level of flexibility is provided to the designer. The validated outputs can be used in future design studies and the error estimates provided by the output validation step still apply, and require no additional appeals to the expensive analysis.
Second, a candidate-based a posteriori error analysis capability has been developed which provides probabilistic error estimates on the true performance for a design randomly selected near the surrogate-predicted optimal design.
Li, Kai; Poirier, Dale J
2003-11-30
The goal of this study is to address directly the predictive value of birth inputs and outputs, particularly birth weight, for measures of early childhood development in a simultaneous equations modelling framework. Strikingly, birth outputs have virtually no structural/causal effects on early childhood developmental outcomes, and only maternal smoking and drinking during pregnancy have some effects on child height. Not surprisingly, family child-rearing environment has sizeable negative and positive effects on a behavioural problems index and a mathematics/reading test score, respectively, and a mildly surprising negative effect on child height. Despite little evidence of a structural/causal effect of birth weight on early childhood developmental outcomes, our results demonstrate that birth weight nonetheless has strong predictive effects on early childhood outcomes. Furthermore, these effects are largely invariant to whether family child-rearing environment is taken into account. Family child-rearing environment has both structural and predictive effects on early childhood outcomes, but they are largely orthogonal and in addition to the effects of birth weight. Copyright 2003 John Wiley & Sons, Ltd.
Prediction of properties of wheat dough using intelligent deep belief networks
NASA Astrophysics Data System (ADS)
Guha, Paramita; Bhatnagar, Taru; Pal, Ishan; Kamboj, Uma; Mishra, Sunita
2017-11-01
In this paper, the rheological and chemical properties of wheat dough are predicted using deep belief networks. Wheat grains are stored at controlled environmental conditions. The internal parameters of the grains, viz. protein, fat, carbohydrates, moisture, and ash, are determined using standard chemical analysis, and the viscosity of the dough is measured using a rheometer. Here, fat, carbohydrates, moisture, ash, and temperature are considered as inputs, whereas protein and viscosity are chosen as outputs. The prediction algorithm is developed using a deep neural network where each layer is trained greedily using restricted Boltzmann machine (RBM) networks. The overall network is finally fine-tuned using a standard neural network technique. In most of the literature, fine-tuning is done using the back-propagation technique. In this paper, a new algorithm is proposed in which each layer is tuned using an RBM and the final network is fine-tuned using a deep neural network (DNN). It has been observed that with the proposed algorithm, errors between the actual and predicted outputs are lower than with the conventional algorithm. Hence, the given network can be considered beneficial, as it predicts the outputs more accurately. Numerical results along with discussions are presented.
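Greedy layer-wise training rests on the restricted Boltzmann machine. A minimal numpy RBM trained with one-step contrastive divergence (CD-1) might look as follows; this is a generic sketch, not the authors' network, and the layer size, learning rate, and toy patterns in the test are assumptions.

```python
import numpy as np

def train_rbm(V, n_hidden, lr=0.1, epochs=1000, seed=0):
    """Minimal restricted Boltzmann machine trained with one-step
    contrastive divergence (CD-1).  V holds visible vectors in [0, 1]."""
    rng = np.random.default_rng(seed)
    n_vis = V.shape[1]
    W = 0.01 * rng.normal(size=(n_vis, n_hidden))
    a = np.zeros(n_vis)           # visible biases
    b = np.zeros(n_hidden)        # hidden biases
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        ph = sig(V @ W + b)                          # positive hidden probabilities
        h = (rng.random(ph.shape) < ph).astype(float)  # sampled hidden states
        pv = sig(h @ W.T + a)                        # reconstructed visibles
        ph2 = sig(pv @ W + b)                        # negative hidden probabilities
        W += lr * (V.T @ ph - pv.T @ ph2) / len(V)
        a += lr * (V - pv).mean(axis=0)
        b += lr * (ph - ph2).mean(axis=0)
    return W, a, b
```

In a deep belief network, the hidden activations of one trained RBM become the visible data for the next, and the stacked weights initialise the DNN that is then fine-tuned.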
González-Domínguez, Elisa; Armengol, Josep; Rossi, Vittorio
2014-01-01
A mechanistic, dynamic model was developed to predict infection of loquat fruit by conidia of Fusicladium eriobotryae, the causal agent of loquat scab. The model simulates scab infection periods and their severity through the sub-processes of spore dispersal, infection, and latency (i.e., the state variables); change from one state to the following one depends on environmental conditions and on processes described by mathematical equations. Equations were developed using published data on F. eriobotryae mycelium growth, conidial germination, infection, and conidial dispersion pattern. The model was then validated by comparing model output with three independent data sets. The model accurately predicts the occurrence and severity of infection periods as well as the progress of loquat scab incidence on fruit (with concordance correlation coefficients >0.95). Model output agreed with expert assessment of the disease severity in seven loquat-growing seasons. Use of the model for scheduling fungicide applications in loquat orchards may help optimise scab management and reduce fungicide applications. PMID:25233340
Development of metamodels for predicting aerosol dispersion in ventilated spaces
NASA Astrophysics Data System (ADS)
Hoque, Shamia; Farouk, Bakhtier; Haas, Charles N.
2011-04-01
Artificial neural network (ANN) based metamodels were developed to describe the relationship between the design variables and their effects on the dispersion of aerosols in a ventilated space. A Hammersley sequence sampling (HSS) technique was employed to efficiently explore the multi-parameter design space and to build numerical simulation scenarios. A detailed computational fluid dynamics (CFD) model was applied to simulate these scenarios. The results derived from the CFD simulations were used to train and test the metamodels. Feedforward ANNs were developed to map the relationship between the inputs and the outputs. The predictive ability of the neural network based metamodels was compared to that of linear and quadratic metamodels derived from the same CFD simulation results. The ANN based metamodels performed well in predicting the independent data sets, including data generated at the boundaries. Sensitivity analysis showed that the ratio of particle tracking time to residence time and the location of the inlet and outlet relative to the height of the room had more impact than the other dimensionless groups on particle behavior.
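Hammersley sequence sampling fills a design space more uniformly than random sampling. A minimal implementation of the standard construction generates the first coordinate as i/n and the remaining coordinates as radical inverses in successive prime bases; this is generic and independent of the paper's specific design variables.

```python
import numpy as np

def radical_inverse(i, base):
    """Van der Corput radical inverse of the integer i in the given base."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += f * (i % base)
        i //= base
        f /= base
    return inv

def hammersley(n_points, dim):
    """Hammersley point set in [0, 1]^dim: first coordinate i/n, remaining
    coordinates are radical inverses in the first dim-1 prime bases."""
    primes = [2, 3, 5, 7, 11, 13]
    pts = np.empty((n_points, dim))
    for i in range(n_points):
        pts[i, 0] = i / n_points
        for d in range(1, dim):
            pts[i, d] = radical_inverse(i, primes[d - 1])
    return pts
```

Each sampled point would then be scaled to the physical ranges of the design variables before running a CFD scenario.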
Modelling of Two-Stage Methane Digestion With Pretreatment of Biomass
NASA Astrophysics Data System (ADS)
Dychko, A.; Remez, N.; Opolinskyi, I.; Kraychuk, S.; Ostapchuk, N.; Yevtieieva, L.
2018-04-01
Systems of anaerobic digestion should be used for processing organic waste. Managing the process of anaerobic recycling of organic waste requires reliable prediction of biogas production. Development of a mathematical model of the organic waste digestion process allows the rate of biogas output to be determined for the two-stage anaerobic digestion process, taking the first stage into account. Verification of Konto's model, based on the studied anaerobic processing of organic waste, is implemented. The dependences of biogas output and its rate on time are established and may be used to predict the process of anaerobic processing of organic waste.
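As a point of reference for predicting biogas output and its rate over time, a generic first-order kinetic model is often used as a baseline; the sketch below implements that baseline, not the paper's two-stage model, and the parameter values in the test are invented.

```python
import numpy as np

def biogas_output(t, b_max, k):
    """Cumulative biogas yield under simple first-order kinetics:
    B(t) = B_max * (1 - exp(-k t)), with b_max the ultimate yield and
    k the first-order rate constant (a generic baseline model)."""
    return b_max * (1.0 - np.exp(-k * t))

def biogas_rate(t, b_max, k):
    """Instantaneous production rate dB/dt = B_max * k * exp(-k t)."""
    return b_max * k * np.exp(-k * t)
```

A two-stage description would chain two such stages (hydrolysis/acidogenesis feeding methanogenesis), each with its own rate constant.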
A phenomenological model of muscle fatigue and the power-endurance relationship.
James, A; Green, S
2012-11-01
The relationship between power output and the time that it can be sustained during exercise (i.e., endurance) at high intensities is curvilinear. Although fatigue is implicit in this relationship, there is little evidence pertaining to it. To address this, we developed a phenomenological model that predicts the temporal response of muscle power during submaximal and maximal exercise and which was based on the type, contractile properties (e.g., fatiguability), and recruitment of motor units (MUs) during exercise. The model was first used to predict power outputs during all-out exercise when fatigue is clearly manifest and for several distributions of MU type. The model was then used to predict times that different submaximal power outputs could be sustained for several MU distributions, from which several power-endurance curves were obtained. The model was simultaneously fitted to two sets of human data pertaining to all-out exercise (power-time profile) and submaximal exercise (power-endurance relationship), yielding a high goodness of fit (R(2) = 0.96-0.97). This suggested that this simple model provides an accurate description of human power output during submaximal and maximal exercise and that fatigue-related processes inherent in it account for the curvilinearity of the power-endurance relationship.
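For context, the curvilinear power-endurance relationship is classically described by the hyperbolic critical-power relation t = W' / (P - CP). The sketch below implements that phenomenological relation, not the paper's motor-unit-based model, and the parameter values in the test are illustrative.

```python
def endurance_time(power, critical_power, w_prime):
    """Classic hyperbolic power-duration relation: t = W' / (P - CP).
    Above critical power (CP), a finite work capacity W' (in J) is
    depleted at rate P - CP; at or below CP, endurance is unbounded
    in this idealisation."""
    if power <= critical_power:
        return float("inf")
    return w_prime / (power - critical_power)
```

The paper's contribution is to recover this kind of curve from fatigue processes at the motor-unit level rather than assuming the hyperbola outright.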
Ławryńczuk, Maciej
2017-03-01
This paper details development of a Model Predictive Control (MPC) algorithm for a boiler-turbine unit, which is a nonlinear multiple-input multiple-output process. The control objective is to follow set-point changes imposed on two state (output) variables and to satisfy constraints imposed on three inputs and one output. In order to obtain a computationally efficient control scheme, the state-space model is successively linearised on-line for the current operating point and used for prediction. In consequence, the future control policy is easily calculated from a quadratic optimisation problem. For state estimation the extended Kalman filter is used. It is demonstrated that the MPC strategy based on constant linear models does not work satisfactorily for the boiler-turbine unit whereas the discussed algorithm with on-line successive model linearisation gives practically the same trajectories as the truly nonlinear MPC controller with nonlinear optimisation repeated at each sampling instant. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
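Successive on-line linearisation replaces the nonlinear model with local Jacobians about the current operating point, which is what makes the quadratic-program formulation possible. A generic numerical-differencing sketch (not the boiler-turbine model itself; the test function is invented) is:

```python
import numpy as np

def linearise(f, x0, u0, eps=1e-6):
    """One successive-linearisation step: forward-difference Jacobians
    A = df/dx and B = df/du of the nonlinear update x+ = f(x, u),
    evaluated at the current operating point (x0, u0)."""
    n, m = len(x0), len(u0)
    fx0 = f(x0, u0)
    A = np.empty((n, n))
    B = np.empty((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - fx0) / eps
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - fx0) / eps
    return A, B
```

The resulting (A, B) pair feeds a linear MPC prediction, and the linearisation is repeated at every sampling instant as the operating point moves.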
Predictive control and estimation algorithms for the NASA/JPL 70-meter antennas
NASA Technical Reports Server (NTRS)
Gawronski, W.
1991-01-01
A modified output prediction procedure and a new controller design based on the predictive control law are presented. Also, a new predictive estimator is developed to complement the controller and to enhance system performance. The predictive controller is designed and applied to the tracking control of the Deep Space Network 70-m antennas. Simulation results show significant improvement in tracking performance over the linear quadratic controller and estimator presently in use.
A spectral method for spatial downscaling
Complex computer models play a crucial role in air quality research. These models are used to evaluate potential regulatory impacts of emission control strategies and to estimate air quality in areas without monitoring data. For both of these purposes, it is important to calibrate model output with monitoring data to adjust for model biases and improve spatial prediction. In this paper, we propose a new spectral method to study and exploit complex relationships between model output and monitoring data. Spectral methods allow us to estimate the relationship between model output and monitoring data separately at different spatial scales, and to use model output for prediction only at the appropriate scales. The proposed method is computationally efficient and can be implemented using standard software. We apply the method to compare Community Multiscale Air Quality (CMAQ) model output with ozone measurements in the United States in July 2005. We find that CMAQ captures large-scale spatial trends, but has low correlation with the monitoring data at small spatial scales. The National Exposure Research Laboratory's (NERL's) Atmospheric Modeling Division (AMAD) conducts research in support of EPA's mission to protect human health and the environment. AMAD's research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting the Nation's air quality and for assessing ch...
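The idea of relating model output to observations separately at different spatial scales can be sketched in one dimension with a Fourier low-pass split; the cutoff and fields below are illustrative, and the paper's actual method is a full spatial spectral model fitted to irregular monitoring sites.

```python
import numpy as np

def scale_split(field, cutoff):
    """Split a 1-D field into large-scale and small-scale parts by zeroing
    all Fourier modes at or above the given wavenumber cutoff."""
    F = np.fft.rfft(field)
    low = F.copy()
    low[cutoff:] = 0                        # keep only low wavenumbers
    large = np.fft.irfft(low, n=len(field))
    return large, field - large
```

Once split, the large-scale part of the model output can be regressed against (similarly filtered) observations, while the poorly correlated small-scale part is down-weighted or discarded for prediction.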
Bit selection using field drilling data and mathematical investigation
NASA Astrophysics Data System (ADS)
Momeni, M. S.; Ridha, S.; Hosseini, S. J.; Meyghani, B.; Emamian, S. S.
2018-03-01
A drilling process cannot be completed without a drill bit, so bit selection is considered an important task in the drilling optimization process and an important issue in planning and designing a well, simply because the cost of the drill bit is a large share of the total cost. To perform this task, a back-propagation ANN model is developed by training the model with drilling bit records from several offset wells. In this project, two models are developed using the ANN: one to predict the IADC bit code and one to predict the ROP. Stage 1 finds the IADC bit code using all the given field data; the output is the targeted IADC bit code. Stage 2 finds the predicted ROP values using the IADC bit code obtained in Stage 1. In Stage 3, the predicted ROP value is fed back into the data set to obtain the predicted IADC bit code. Thus, at the end, there are two models that give the predicted ROP values and the predicted IADC bit code values.
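A one-hidden-layer network trained by plain batch back-propagation is enough to illustrate the kind of model used in both stages; the architecture, learning rate, and synthetic data below are assumptions, not the paper's actual drilling records.

```python
import numpy as np

def train_ann(X, y, n_hidden=8, lr=0.1, epochs=5000, seed=0):
    """One-hidden-layer feedforward network (tanh hidden layer, linear
    output) trained by full-batch back-propagation on 0.5*MSE loss.
    Returns a prediction function."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=n_hidden)
    b2 = 0.0
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)
        err = H @ W2 + b2 - y               # dLoss/dprediction
        gW2 = H.T @ err / len(X)
        gb2 = err.mean()
        dH = np.outer(err, W2) * (1 - H ** 2)  # back-propagated to hidden layer
        gW1 = X.T @ dH / len(X)
        gb1 = dH.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xn: np.tanh(Xn @ W1 + b1) @ W2 + b2
```

In the two-stage setup, one such network maps field data to a bit-code score and a second maps the bit code (plus field data) to ROP.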
Sang-Kyun Han; Han-Sup Han; William J. Elliot; Edward M. Bilek
2017-01-01
We developed a spreadsheet-based model, named ThinTool, to evaluate the cost of mechanical fuel reduction thinning including biomass removal, to predict net energy output, and to assess nutrient impacts from thinning treatments in northern California and southern Oregon. A combination of literature reviews, field-based studies, and contractor surveys was used to...
Gao, Xiang-Ming; Yang, Shi-Feng; Pan, San-Bo
2017-01-01
To predict the output power of a photovoltaic system, which exhibits nonstationarity and randomness, an output power prediction model for grid-connected PV systems is proposed based on empirical mode decomposition (EMD) and a support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast data sets on the prediction date, the time series data of output power on a similar day with 15-minute intervals are built. Second, the time series data of the output power are decomposed into a series of components, including some intrinsic mode components IMFn and a trend component Res, at different scales using EMD. The corresponding SVM prediction model is established for each IMF component and the trend component, and the SVM model parameters are optimized with the artificial bee colony algorithm. Finally, the prediction results of each model are reconstructed, and the predicted values of the output power of the grid-connected PV system can be obtained. The prediction model is tested with actual data, and the results show that the power prediction model based on the EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than do the single SVM prediction model and the EMD-SVM prediction model without optimization.
PMID:28912803
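The decompose-then-predict-then-recombine pipeline can be sketched with lightweight stand-ins: a moving-average split in place of EMD and Gaussian-kernel ridge regression in place of the ABC-tuned SVMs. Everything below (window, lag count, kernel parameters, the synthetic series) is illustrative, not the paper's configuration.

```python
import numpy as np

def decompose(series, window=8):
    """Crude two-scale split standing in for EMD: a moving-average trend
    plus the residual oscillation.  (Real EMD sifts out several IMFs.)"""
    pad = np.pad(series, (window // 2, window - window // 2 - 1), mode="edge")
    trend = np.convolve(pad, np.ones(window) / window, mode="valid")
    return series - trend, trend

def kernel_ridge(X, y, gamma=0.5, lam=1e-3):
    """Gaussian-kernel ridge regression, a lightweight stand-in for the
    SVM regressors (ABC hyperparameter tuning is omitted)."""
    k = lambda A, B: np.exp(-gamma * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))
    alpha = np.linalg.solve(k(X, X) + lam * np.eye(len(X)), y)
    return lambda Xn: k(Xn, X) @ alpha

def forecast_next(series, n_lags=4):
    """Model each component from its lagged values, forecast one step
    ahead per component, then sum the component forecasts."""
    total = 0.0
    for comp in decompose(series):
        X = np.column_stack([comp[i:len(comp) - n_lags + i] for i in range(n_lags)])
        y = comp[n_lags:]
        model = kernel_ridge(X, y)
        total += model(comp[-n_lags:][None, :])[0]
    return total
```

The point of the decomposition is that each component is smoother and easier to model than the raw series; the component forecasts are then recombined exactly as in the paper's reconstruction step.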
Satellite Remote Sensing is Key to Water Cycle Integrator
NASA Astrophysics Data System (ADS)
Koike, T.
2016-12-01
To promote effective multi-sectoral, interdisciplinary collaboration based on coordinated and integrated efforts, the Global Earth Observation System of Systems (GEOSS) is now developing a "GEOSS Water Cycle Integrator (WCI)", which integrates "Earth observations", "modeling", "data and information", "management systems" and "education systems". GEOSS/WCI sets up "work benches" by which partners can share data, information and applications in an interoperable way, exchange knowledge and experiences, deepen mutual understanding and work together effectively to respond to issues of both mitigation and adaptation. (A work bench is a virtual geographical or phenomenological space where experts and managers collaborate to use information to address a problem within that space.) GEOSS/WCI enhances the coordination of efforts to strengthen individual, institutional and infrastructure capacities, especially for effective interdisciplinary coordination and integration. GEOSS/WCI archives various satellite data to provide hydrological information such as cloud cover, rainfall, soil moisture and land-surface snow. These satellite products were validated against in-situ land observations. Water cycle models can be developed by coupling in-situ and satellite data; river flows and other hydrological parameters can then be simulated and validated against in-situ data. Model outputs from weather-prediction, seasonal-prediction and climate-prediction models are archived, some online and others, e.g., climate-prediction outputs, offline. After the models are evaluated and their biases corrected, the outputs can be used as inputs to the hydrological models for predicting hydrological parameters. Additionally, we have already developed a data-assimilation system combining satellite data and the models, which improves our capability to predict hydrological phenomena.
The WCI can provide better predictions of the hydrological parameters for integrated water resources management (IWRM) and also assess the impact of climate change and calculate adaptation needs.
Baysal, Ayse; Saşmazel, Ahmet; Yildirim, Ayse; Ozyaprak, Buket; Gundogus, Narin; Kocak, Tuncer
2014-01-01
In children with pulmonary hypertension undergoing congenital heart surgery, plasma brain natriuretic peptide levels may play a role in the development of low cardiac output syndrome (LCOS), defined as a combination of clinical findings and interventions to augment cardiac output. In a prospective observational study, fifty-one children undergoing congenital heart surgery whose preoperative echocardiographic studies showed pulmonary hypertension were enrolled. Plasma brain natriuretic peptide levels were collected before operation and 12, 24 and 48 h after operation. The patients were divided into two groups based on: (1) development of LCOS, defined as a combination of clinical findings or interventions to augment cardiac output postoperatively; and (2) the preoperative brain natriuretic peptide cut-off value for LCOS determined by receiver operating characteristic curve analysis. The secondary end points were: (1) duration of mechanical ventilation ≥72 h, (2) intensive care unit stay >7 days, and (3) mortality. Preoperative and postoperative brain natriuretic peptide levels of patients with and without LCOS (n=35 and n=16, respectively) differed significantly across the repeated measurement time points (p=0.0001). A preoperative brain natriuretic peptide cut-off value of 125.5 pg·mL⁻¹ had the highest sensitivity (88.9%) and specificity (96.9%) in predicting LCOS in patients with pulmonary hypertension. A good correlation was found between preoperative plasma brain natriuretic peptide level and duration of mechanical ventilation (r=0.67, p=0.0001). In patients with pulmonary hypertension undergoing congenital heart surgery, 91% of patients with preoperative plasma brain natriuretic peptide levels above 125.5 pg·mL⁻¹ are at risk of developing low cardiac output syndrome, an important postoperative outcome.
Copyright © 2013 Sociedade Brasileira de Anestesiologia. Published by Elsevier Editora Ltda. All rights reserved.
Forecasting hotspots using predictive visual analytics approach
Maciejewski, Ross; Hafen, Ryan; Rudolph, Stephen; Cleveland, William; Ebert, David
2014-12-30
A method for forecasting hotspots is provided. The method may include the steps of receiving input data at an input of the computational device, generating a temporal prediction based on the input data, generating a geospatial prediction based on the input data, and generating output data based on the temporal and geospatial predictions. The output data may be configured to display at least one user interface at an output of the computational device.
NASA Technical Reports Server (NTRS)
Jumper, S. J.
1979-01-01
A method was developed for predicting the potential flow velocity field at the plane of a propeller operating under the influence of a wing-fuselage-cowl or nacelle combination. A computer program was written which predicts the three dimensional potential flow field. The contents of the program, its input data, and its output results are described.
A Generalized Mixture Framework for Multi-label Classification
Hong, Charmgil; Batal, Iyad; Hauskrecht, Milos
2015-01-01
We develop a novel probabilistic ensemble framework for multi-label classification that is based on the mixtures-of-experts architecture. In this framework, we combine multi-label classification models in the classifier chains family that decompose the class posterior distribution P(Y1, …, Yd|X) using a product of posterior distributions over components of the output space. Our approach captures different input–output and output–output relations that tend to change across data. As a result, we can recover a rich set of dependency relations among inputs and outputs that a single multi-label classification model cannot capture due to its modeling simplifications. We develop and present algorithms for learning the mixtures-of-experts models from data and for performing multi-label predictions on unseen data instances. Experiments on multiple benchmark datasets demonstrate that our approach achieves highly competitive results and outperforms the existing state-of-the-art multi-label classification methods. PMID:26613069
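The product decomposition over the output space that classifier chains use can be illustrated with a small NumPy sketch: each label's classifier receives the inputs plus all earlier labels, so the chain models P(Y1, …, Yd|X) as a product of per-label conditionals. The logistic-regression links and the synthetic data below are illustrative assumptions, not the paper's mixtures-of-experts models.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, steps=500, lr=0.5):
    """Logistic regression by gradient descent (one link in the chain)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

def fit_chain(X, Y):
    """Classifier chain: label j is modelled as P(Yj | X, Y1..Yj-1)."""
    models, Xa = [], np.hstack([X, np.ones((len(X), 1))])
    for j in range(Y.shape[1]):
        models.append(fit_logistic(Xa, Y[:, j]))
        Xa = np.hstack([Xa, Y[:, [j]]])     # append true label as next input
    return models

def predict_chain(models, X):
    Xa = np.hstack([X, np.ones((len(X), 1))])
    preds = []
    for w in models:
        p = (sigmoid(Xa @ w) > 0.5).astype(float)
        preds.append(p)
        Xa = np.hstack([Xa, p[:, None]])    # feed predictions down the chain
    return np.column_stack(preds)

# Synthetic data where the second label depends on the first,
# so the chain's output-output link carries real information.
rng = np.random.default_rng(1)
X = rng.standard_normal((300, 3))
y1 = (X[:, 0] > 0).astype(float)
y2 = ((X[:, 1] + 2 * y1 - 1) > 0).astype(float)
Y = np.column_stack([y1, y2])

models = fit_chain(X, Y)
acc = (predict_chain(models, X) == Y).mean()
```

A mixtures-of-experts extension, as in the paper, would train several such chains (with different label orders or on different data regions) and average their posteriors.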
Computer program to predict aircraft noise levels
NASA Technical Reports Server (NTRS)
Clark, B. J.
1981-01-01
Methods developed at the NASA Lewis Research Center for predicting the noise contributions from various aircraft noise sources were programmed to predict aircraft noise levels either in flight or in ground tests. The noise sources include fan inlet and exhaust, jet, flap (for powered lift), core (combustor), turbine, and airframe. Noise propagation corrections are available for atmospheric attenuation, ground reflections, extra ground attenuation, and shielding. Outputs can include spectra, overall sound pressure level, perceived noise level, tone-weighted perceived noise level, and effective perceived noise level at locations specified by the user. Footprint contour coordinates and approximate footprint areas can also be calculated. Inputs and outputs can be in either System International or U.S. customary units. The subroutines for each noise source and propagation correction are described. A complete listing is given.
Dynamic analysis of a buckled asymmetric piezoelectric beam for energy harvesting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Blarigan, Louis, E-mail: louis01@umail.ucsb.edu; Moehlis, Jeff
2016-03-15
A model of a buckled beam energy harvester is analyzed to determine the phenomena behind the transition between high and low power output levels. It is shown that the presence of a chaotic attractor is a sufficient condition to predict high power output, though there are relatively small areas where high output is achieved without a chaotic attractor. The chaotic attractor appears as a product of a period doubling cascade or a boundary crisis. Bifurcation diagrams provide insight into the development of the chaotic region as the input power level is varied, as well as the intermixed periodic windows.
RNA Polymerase II cluster dynamics predict mRNA output in living cells
Cho, Won-Ki; Jayanth, Namrata; English, Brian P; Inoue, Takuma; Andrews, J Owen; Conway, William; Grimm, Jonathan B; Spille, Jan-Hendrik; Lavis, Luke D; Lionnet, Timothée; Cisse, Ibrahim I
2016-01-01
Protein clustering is a hallmark of genome regulation in mammalian cells. However, the dynamic molecular processes involved make it difficult to correlate clustering with functional consequences in vivo. We developed a live-cell super-resolution approach to uncover the correlation between mRNA synthesis and the dynamics of RNA Polymerase II (Pol II) clusters at a gene locus. For endogenous β-actin genes in mouse embryonic fibroblasts, we observe that short-lived (~8 s) Pol II clusters correlate with basal mRNA output. During serum stimulation, a stereotyped increase in Pol II cluster lifetime correlates with a proportionate increase in the number of mRNAs synthesized. Our findings suggest that transient clustering of Pol II may constitute a pre-transcriptional regulatory event that predictably modulates nascent mRNA output. DOI: http://dx.doi.org/10.7554/eLife.13617.001 PMID:27138339
Berger, Theodore W.; Song, Dong; Chan, Rosa H. M.; Marmarelis, Vasilis Z.; LaCoss, Jeff; Wills, Jack; Hampson, Robert E.; Deadwyler, Sam A.; Granacki, John J.
2012-01-01
This paper describes the development of a cognitive prosthesis designed to restore the ability to form new long-term memories typically lost after damage to the hippocampus. The animal model used is delayed nonmatch-to-sample (DNMS) behavior in the rat, and the “core” of the prosthesis is a biomimetic multi-input/multi-output (MIMO) nonlinear model that provides the capability for predicting spatio-temporal spike train output of hippocampus (CA1) based on spatio-temporal spike train inputs recorded presynaptically to CA1 (e.g., CA3). We demonstrate the capability of the MIMO model for highly accurate predictions of CA1 coded memories that can be made on a single-trial basis and in real-time. When hippocampal CA1 function is blocked and long-term memory formation is lost, successful DNMS behavior also is abolished. However, when MIMO model predictions are used to reinstate CA1 memory-related activity by driving spatio-temporal electrical stimulation of hippocampal output to mimic the patterns of activity observed in control conditions, successful DNMS behavior is restored. We also outline the design in very-large-scale integration for a hardware implementation of a 16-input, 16-output MIMO model, along with spike sorting, amplification, and other functions necessary for a total system, when coupled together with electrode arrays to record extracellularly from populations of hippocampal neurons, that can serve as a cognitive prosthesis in behaving animals. PMID:22438335
Revisiting the effect of colonial institutions on comparative economic development
Regele, Matthew
2017-01-01
European settler mortality has been proposed as an instrument to predict the causal effect of colonial institutions on differences in economic development. We examine the relationship between mortality, temperature, and economic development in former European colonies in Asia, Africa, and the Americas. We find that (i) European settler mortality rates increased with regional temperatures and (ii) economic output decreased with regional temperatures. Conditioning on the continent of settlement and accounting for colonies that were not independent as of 1900 undermines the causal effect of colonial institutions on comparative economic development. Our findings run counter to the institutions hypothesis of economic development, showing instead that geography affected both historic mortality rates and present-day economic output. PMID:28481920
Shot-by-shot Spectrum Model for Rod-pinch, Pulsed Radiography Machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, William Monford
2018-02-07
A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. The resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. "Goodness of fit" is compared with output from the LSP particle-in-cell code, as well as the Monte Carlo N-Particle eXtended ("MCNPX") code, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. In conclusion, improvements to the model, specifically for application to other geometries, are discussed.
Shot-by-shot spectrum model for rod-pinch, pulsed radiography machines
NASA Astrophysics Data System (ADS)
Wood, Wm M.
2018-02-01
A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. The resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. "Goodness of fit" is compared with output from the LSP particle-in-cell code, as well as the Monte Carlo N-Particle eXtended ("MCNPX") code, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. Improvements to the model, specifically for application to other geometries, are discussed.
Integrating predictive information into an agro-economic model to guide agricultural management
NASA Astrophysics Data System (ADS)
Zhang, Y.; Block, P.
2016-12-01
Skillful season-ahead climate predictions linked with responsive agricultural planning and management have the potential to reduce losses, if adopted by farmers, particularly for rainfed-dominated agriculture such as in Ethiopia. Precipitation predictions during the growing season in major agricultural regions of Ethiopia are used to generate predicted climate yield factors, which reflect the influence of precipitation amounts on crop yields and serve as inputs into an agro-economic model. The adapted model, originally developed by the International Food Policy Research Institute, produces outputs of economic indices (GDP, poverty rates, etc.) at zonal and national levels. Forecast-based approaches, in which farmers' actions are in response to forecasted conditions, are compared with no-forecast approaches in which farmers follow business as usual practices, expecting "average" climate conditions. The effects of farmer adoption rates, including the potential for reduced uptake due to poor predictions, and increasing forecast lead-time on economic outputs are also explored. Preliminary results indicate superior gains under forecast-based approaches.
Predicting High-Power Performance in Professional Cyclists.
Sanders, Dajo; Heijboer, Mathieu; Akubat, Ibrahim; Meijer, Kenneth; Hesselink, Matthijs K
2017-03-01
To assess if short-duration (5 to ~300 s) high-power performance can accurately be predicted using the anaerobic power reserve (APR) model in professional cyclists. Data from 4 professional cyclists from a World Tour cycling team were used. Using the maximal aerobic power, sprint peak power output, and an exponential constant describing the decrement in power over time, a power-duration relationship was established for each participant. To test the predictive accuracy of the model, several all-out field trials of different durations were performed by each cyclist. The power output achieved during the all-out trials was compared with the predicted power output by the APR model. The power output predicted by the model showed very large to nearly perfect correlations to the actual power output obtained during the all-out trials for each cyclist (r = .88 ± .21, .92 ± .17, .95 ± .13, and .97 ± .09). Power output during the all-out trials remained within an average of 6.6% (53 W) of the predicted power output by the model. This preliminary pilot study presents 4 case studies on the applicability of the APR model in professional cyclists using a field-based approach. The decrement in all-out performance during high-intensity exercise seems to conform to a general relationship with a single exponential-decay model describing the decrement in power vs increasing duration. These results are in line with previous studies using the APR model to predict performance during brief all-out trials. Future research should evaluate the APR model with a larger sample size of elite cyclists.
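One common algebraic form of the APR power-duration relationship (maximal aerobic power plus an exponentially decaying share of the anaerobic power reserve) can be written out directly. The rider values and decay constant below are illustrative assumptions, not data from the study.

```python
import numpy as np

def apr_power(t, map_w, sprint_peak_w, k):
    """Power (W) predicted to be sustainable for a t-second all-out effort:
    maximal aerobic power plus an exponentially decaying share of the
    anaerobic power reserve (sprint peak power minus maximal aerobic power)."""
    return map_w + (sprint_peak_w - map_w) * np.exp(-k * t)

# Illustrative rider: MAP 450 W, sprint peak power output 1200 W.
durations = np.array([5.0, 15.0, 60.0, 180.0, 300.0])   # seconds
predicted = apr_power(durations, map_w=450.0, sprint_peak_w=1200.0, k=0.025)
```

At t = 0 the model returns the sprint peak power, and as duration grows the prediction decays toward the aerobic ceiling, matching the single-exponential-decay behaviour the abstract describes.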
NASA Technical Reports Server (NTRS)
Bole, Brian; Goebel, Kai; Vachtsevanos, George
2012-01-01
This paper introduces a novel Markov process formulation of stochastic fault growth modeling, in order to facilitate the development and analysis of prognostics-based control adaptation. A metric representing the relative deviation between the nominal output of a system and the net output that is actually enacted by an implemented prognostics-based control routine will be used to define the action space of the formulated Markov process. The state space of the Markov process will be defined in terms of an abstracted metric representing the relative health remaining in each of the system's components. The proposed formulation of component fault dynamics conveniently relates feasible system output performance modifications to predictions of future component health deterioration.
The Association Between Urine Output, Creatinine Elevation, and Death.
Engoren, Milo; Maile, Michael D; Heung, Michael; Jewell, Elizabeth S; Vahabzadeh, Christie; Haft, Jonathan W; Kheterpal, Sachin
2017-04-01
Acute kidney injury can be defined by a fall in urine output, and urine output criteria may be more sensitive in identifying acute kidney injury than traditional serum creatinine criteria. However, as pointed out in the Kidney Disease Improving Global Outcome guidelines, the association of urine output with subsequent creatinine elevations and death is poorly characterized. The purpose of this study was to determine what degrees of reduced urine output are associated with subsequent creatinine elevation and death. This was a retrospective cohort study of adult patients (age ≥18 years) cared for in a cardiovascular intensive care unit after undergoing cardiac operations in a tertiary care university medical center. All adult patients who underwent cardiac operations and were not receiving dialysis preoperatively were studied. The development of acute kidney injury was defined as an increase in creatinine of more than 0.3 mg/dL or by more than 50% above baseline by postoperative day 3. Acute kidney injury developed in 1,061 of 4,195 patients (25%). Urine output had moderate discrimination in predicting subsequent acute kidney injury (C statistic = .637 ± .054). Lower urine output and longer duration of low urine output were associated with greater odds of developing acute kidney injury and death. We found that there is similar accuracy in using urine output corrected for actual, ideal, or adjusted weight to discriminate future acute kidney injury by creatinine elevation and recommend using actual weight for its simplicity. We also found that low urine output is associated with subsequent acute kidney injury and that the association is greater for lower urine output and for low urine output of longer durations. Low urine output (<0.2 mL·kg⁻¹·h⁻¹), even in the absence of acute kidney injury by creatinine elevation, is independently associated with mortality. Copyright © 2017 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
Two models for identification and predicting behaviour of an induction motor system
NASA Astrophysics Data System (ADS)
Kuo, Chien-Hsun
2018-01-01
System identification or modelling is the process of building mathematical models of dynamical systems based on the available input and output data from the systems. This paper introduces system identification by using ARX (Auto Regressive with eXogeneous input) and ARMAX (Auto Regressive Moving Average with eXogeneous input) models. Through the identified system model, the predicted output could be compared with the measured one to help prevent the motor faults from developing into a catastrophic machine failure and avoid unnecessary costs and delays caused by the need to carry out unscheduled repairs. The induction motor system is illustrated as an example. Numerical and experimental results are shown for the identified induction motor system.
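An ARX fit is a linear least-squares problem once the regressor matrix of lagged outputs and inputs is assembled. The sketch below identifies a known second-order system from synthetic input-output data; it is a minimal illustration, not the paper's identification code (ARMAX, which adds a moving-average noise model, requires iterative estimation and is omitted).

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Least-squares fit of an ARX(na, nb) model:
    y[t] = sum_i a_i * y[t-i] + sum_j b_j * u[t-j] + e[t]."""
    n0 = max(na, nb)
    rows = [np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]])
            for t in range(n0, len(y))]
    theta, *_ = np.linalg.lstsq(np.array(rows), y[n0:], rcond=None)
    return theta

def simulate_arx(theta, u, y0, na=2, nb=2):
    """Predicted output from the identified model, seeded with y0."""
    y = list(y0)
    for t in range(len(y0), len(u)):
        phi = np.concatenate([np.array(y[t - na:t])[::-1], u[t - nb:t][::-1]])
        y.append(float(phi @ theta))
    return np.array(y)

# Generate input-output data from a known stable second-order system.
rng = np.random.default_rng(2)
u = rng.standard_normal(500)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 1.2 * y[t - 1] - 0.4 * y[t - 2] + 0.8 * u[t - 1] + 0.1 * u[t - 2]

theta = fit_arx(u, y)            # should recover [1.2, -0.4, 0.8, 0.1]
yhat = simulate_arx(theta, u, y[:2])
```

Comparing `yhat` with the measured `y`, as the paper does for the induction motor, is what reveals a developing fault: a growing discrepancy means the physical system has drifted from the identified model.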
NASA Technical Reports Server (NTRS)
Whitney, W. J.; Behning, F. P.; Moffitt, T. P.; Hotz, G. M.
1977-01-01
The turbine developed design specific work output at design speed at a total pressure ratio of 6.745 with a corresponding efficiency of 0.855. The efficiency (0.855) was 3.1 points lower than the estimated efficiency quoted by the contractor in the design report and 0.7 of a point lower than that determined by a reference prediction method. The performance of the turbine, which was a forced vortex design, agreed with the performance determined by the prediction method to about the same extent as did the performance of three reference high stage loading factor turbines, which were free vortex designs.
Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection
NASA Technical Reports Server (NTRS)
Kumar, Sricharan; Srivistava, Ashok N.
2012-01-01
Prediction intervals provide a measure of the probable interval in which the outputs of a regression model can be expected to occur. Subsequently, these prediction intervals can be used to determine if the observed output is anomalous or not, conditioned on the input. In this paper, a procedure for determining prediction intervals for outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow for a non-parametric approach to computing prediction intervals with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is theoretically proved. Subsequently, the validity of the bootstrap based prediction intervals is illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
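A residual-bootstrap prediction interval around a nonparametric smoother can be sketched as follows; the Nadaraya-Watson smoother, bandwidth, and anomaly threshold are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def kernel_smooth(x_train, y_train, x_eval, h=0.3):
    """Nadaraya-Watson regression with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
    return (w @ y_train) / w.sum(axis=1)

def bootstrap_interval(x, y, x_eval, n_boot=300, alpha=0.05, seed=3):
    """Residual-bootstrap prediction interval for a nonparametric fit.
    Each replicate refits the smoother on resampled residuals and adds a
    fresh noise draw, so the interval covers future observations."""
    rng = np.random.default_rng(seed)
    fit = kernel_smooth(x, y, x)
    resid = y - fit
    sims = []
    for _ in range(n_boot):
        y_star = fit + rng.choice(resid, size=len(resid), replace=True)
        sims.append(kernel_smooth(x, y_star, x_eval)
                    + rng.choice(resid, size=len(x_eval), replace=True))
    lo, hi = np.quantile(sims, [alpha / 2, 1 - alpha / 2], axis=0)
    return lo, hi

# Noisy nonlinear data with no parametric model assumed.
rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 3, 200))
y = np.sin(2 * x) + 0.2 * rng.standard_normal(200)
x_eval = np.array([0.5, 1.5, 2.5])
lo, hi = bootstrap_interval(x, y, x_eval)

# Anomaly detection: flag an observation outside its prediction interval.
anomalous = not (lo[1] <= 5.0 <= hi[1])
```

The anomaly test at the end mirrors the paper's use case: an observed output far outside the bootstrap interval, conditioned on the input, is declared anomalous.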
Hunter Ball, B; Pitães, Margarida; Brewer, Gene A
2018-02-07
Output monitoring refers to memory for one's previously completed actions. In the context of prospective memory (PM) (e.g., remembering to take medication), failures of output monitoring can result in repetitions and omissions of planned actions (e.g., over- or under-medication). To be successful in output monitoring paradigms, participants must flexibly control attention to detect PM cues as well as engage controlled retrieval of previous actions whenever a particular cue is encountered. The current study examined individual differences in output monitoring abilities in a group of younger adults differing in attention control (AC) and episodic memory (EM) abilities. The results showed that AC ability uniquely predicted successful cue detection on the first presentation, whereas EM ability uniquely predicted successful output monitoring on the second presentation. The current study highlights the importance of examining external correlates of PM abilities and contributes to the growing body of research on individual differences in PM.
Simulation of medical Q-switch flash-pumped Er:YAG laser
NASA Astrophysics Data System (ADS)
Wang, Yan-lin; Huang, Chuyun; Yao, Yucheng; Zou, Xiaolin
2011-01-01
The Er:YAG laser, whose wavelength is 2940 nm, is strongly absorbed by water; the absorption coefficient is as high as 13,000 cm⁻¹. Because of this strong water absorption, the erbium laser achieves shallow penetration depth and smaller surrounding tissue injury in most soft and hard tissues. At the same time, the interaction between 2940 nm radiation and water-saturated biological tissue is equivalent to instantaneous heating within a limited volume, resulting in micro-explosions that remove tissue. Different parameters can be set to cut enamel, dentin, caries and soft tissue. For the development and optimization of laser systems, laser modeling is a practical way to predict the influence of various parameters on laser performance. Addressing the low output power of current erbium lasers, the performance of a flash-pumped Er:YAG laser was simulated to obtain the optical output theoretically. A rate-equation model was derived and used to predict the evolution of population densities in the various manifolds, and Q-switched laser output was simulated for different design parameters. The results showed that the Er:YAG laser can achieve a maximum average output power of 9.8 W under the given parameters. The model can be used to identify potential laser systems that meet application requirements.
Predictive sensor method and apparatus
NASA Technical Reports Server (NTRS)
Cambridge, Vivien J.; Koger, Thomas L.
1993-01-01
A microprocessor and electronics package employing predictive methodology was developed to accelerate the response time of slowly responding hydrogen sensors. The system developed improved sensor response time from approximately 90 seconds to 8.5 seconds. The microprocessor works in real time, providing accurate hydrogen concentration corrected for fluctuations in sensor output resulting from changes in atmospheric pressure and temperature. Following the successful development of the hydrogen sensor system, the system and predictive methodology were adapted to a commercial medical thermometer probe. Results of the experiment indicate that, with some customization of hardware and software, response time improvements are possible for medical thermometers as well as other slowly responding sensors.
Evaluation of Data-Driven Models for Predicting Solar Photovoltaics Power Output
Moslehi, Salim; Reddy, T. Agami; Katipamula, Srinivas
2017-09-10
This research was undertaken to evaluate different inverse models for predicting power output of solar photovoltaic (PV) systems under different practical scenarios. In particular, we have investigated whether PV power output prediction accuracy can be improved if module/cell temperature was measured in addition to climatic variables, and also the extent to which prediction accuracy degrades if solar irradiation is not measured on the plane of array but only on a horizontal surface. We have also investigated the significance of different independent or regressor variables, such as wind velocity and incident angle modifier, in predicting PV power output and cell temperature. The inverse regression model forms have been evaluated both in terms of their goodness-of-fit, and their accuracy and robustness in terms of their predictive performance. Given the accuracy of the measurements, expected CV-RMSE of hourly power output prediction over the year varies between 3.2% and 8.6% when only climatic data are used. Depending on what type of measured climatic and PV performance data is available, different scenarios have been identified and the corresponding appropriate modeling pathways have been proposed. The corresponding models are to be implemented on a controller platform for optimum operational planning of microgrids and integrated energy systems.
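The CV-RMSE figure of merit quoted above is straightforward to compute. The sketch below fits a toy inverse model of PV power on synthetic irradiance and temperature data; the regressor choice, coefficients, and noise level are illustrative assumptions, not the paper's models or results.

```python
import numpy as np

def cv_rmse(actual, predicted):
    """Coefficient of variation of the RMSE, as a fraction of mean output."""
    rmse = np.sqrt(np.mean((actual - predicted) ** 2))
    return rmse / np.mean(actual)

# Synthetic hourly data: power as a function of plane-of-array irradiance G
# and ambient temperature T, with measurement noise (illustrative only).
rng = np.random.default_rng(5)
G = rng.uniform(100, 1000, 500)          # W/m^2
T = rng.uniform(5, 35, 500)              # deg C
power = 0.18 * G * (1 - 0.004 * (T - 25)) + 5 * rng.standard_normal(500)

# Inverse regression model: power regressed on G and the G*T interaction.
X = np.column_stack([np.ones_like(G), G, G * T])
coef, *_ = np.linalg.lstsq(X, power, rcond=None)
pred = X @ coef
score = cv_rmse(power, pred)
```

Because the true generating function here is exactly linear in G and G·T, the fitted CV-RMSE reflects only the injected noise, illustrating how the metric isolates model error from measurement error.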
Predicting Item Difficulty in a Reading Comprehension Test with an Artificial Neural Network.
ERIC Educational Resources Information Center
Perkins, Kyle; And Others
This paper reports the results of using a three-layer backpropagation artificial neural network to predict item difficulty in a reading comprehension test. Two network structures were developed, one with and one without a sigmoid function in the output processing unit. The data set, which consisted of a table of coded test items and corresponding…
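A three-layer backpropagation network with a sigmoid output unit, the variant described above, can be sketched in NumPy. The item features and difficulty targets below are synthetic stand-ins for the study's coded test items, and the architecture details are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, hidden=6, epochs=4000, lr=0.8, seed=6):
    """Three-layer backpropagation network with a sigmoid output unit,
    trained by batch gradient descent on squared error."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=hidden)
    for _ in range(epochs):
        H = sigmoid(X @ W1)                      # hidden activations
        out = sigmoid(H @ W2)                    # predicted difficulty in (0, 1)
        d_out = (out - y) * out * (1 - out)      # delta at the output unit
        grad_W1 = X.T @ (np.outer(d_out, W2) * H * (1 - H)) / len(y)
        W2 -= lr * H.T @ d_out / len(y)
        W1 -= lr * grad_W1
    return W1, W2

# Synthetic "coded items": two features in [0, 1] (say, word count and
# vocabulary level, rescaled) plus a bias column; difficulty rises with both.
rng = np.random.default_rng(7)
feats = rng.uniform(0, 1, size=(120, 2))
X = np.column_stack([feats, np.ones(120)])
y = sigmoid(3 * feats[:, 0] + 2 * feats[:, 1] - 2.5)

W1, W2 = train(X, y)
pred = sigmoid(sigmoid(X @ W1) @ W2)
mae = np.abs(pred - y).mean()
```

The sigmoid output unit keeps predictions in (0, 1), which is why the paper compared structures with and without it: without the squashing function the network can emit difficulty estimates outside the valid proportion range.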
Salmon, P; Williamson, A; Lenné, M; Mitsopoulos-Rubens, E; Rudin-Brown, C M
2010-08-01
Safety-compromising accidents occur regularly in the led outdoor activity domain. Formal accident analysis is an accepted means of understanding such events and improving safety. Despite this, there remains no universally accepted framework for collecting and analysing accident data in the led outdoor activity domain. This article presents an application of Rasmussen's risk management framework to the analysis of the Lyme Bay sea canoeing incident. This involved the development of an Accimap, the outputs of which were used to evaluate seven predictions made by the framework. The Accimap output was also compared to an analysis using an existing model from the led outdoor activity domain. In conclusion, the Accimap output was found to be more comprehensive and supported all seven of the risk management framework's predictions, suggesting that it shows promise as a theoretically underpinned approach for analysing, and learning from, accidents in the led outdoor activity domain. STATEMENT OF RELEVANCE: Accidents represent a significant problem within the led outdoor activity domain. This article presents an evaluation of a risk management framework that can be used to understand such accidents and to inform the development of accident countermeasures and mitigation strategies for the led outdoor activity domain.
Effect of accuracy of wind power prediction on power system operator
NASA Technical Reports Server (NTRS)
Schlueter, R. A.; Sigari, G.; Costi, T.
1985-01-01
This research project proposed a modified unit commitment that schedules connection and disconnection of generating units in response to load. A modified generation control is also proposed that controls steam units under automatic generation control; fast-responding diesels, gas turbines, and hydro units under feedforward control; and wind turbine array output under closed-loop array control. This modified generation control and unit commitment require prediction of the trend wind power variation one hour ahead and prediction of the error in this trend wind power prediction one half hour ahead. An improved meter for predicting trend wind speed variation was developed. Methods for accurately simulating the wind array power from a limited number of wind speed prediction records were developed. Finally, two methods for predicting the error in the trend wind power prediction were developed. This research provides a foundation for testing and evaluating the modified unit commitment and generation control developed to maintain operating reliability at a greatly reduced overall production cost for utilities with wind generation capacity.
Overview of Photovoltaic Calibration and Measurement Standards at GRC
NASA Technical Reports Server (NTRS)
Baraona, Cosmo; Snyder, David; Brinker, David; Bailey, Sheila; Curtis, Henry; Scheiman, David; Jenkins, Phillip
2002-01-01
Photovoltaic (PV) systems (cells and arrays) for spacecraft power have become an international market. This market demands accurate prediction of the solar array power output in space throughout the mission life of the spacecraft. Since the beginning of space flight, space-faring nations have independently developed methods to calibrate solar cells for power output in low Earth orbit (LEO). These methods rely on terrestrial, laboratory, or extraterrestrial light sources to simulate or approximate the air mass zero (AM0) solar intensity and spectrum.
Radiation Hardened Electronics for Space Environments (RHESE)
NASA Technical Reports Server (NTRS)
Keys, Andrew S.; Adams, James H.; Frazier, Donald O.; Patrick, Marshall C.; Watson, Michael D.; Johnson, Michael A.; Cressler, John D.; Kolawa, Elizabeth A.
2007-01-01
Radiation environment modeling is crucial to properly predicting the response of electronics to the radiation environment. When compared to on-orbit data, CREME96 has been shown to be inaccurate in predicting the radiation environment, and the NEDD bases much of its radiation environment data on CREME96 output. Close coordination and partnership with DoD radiation-hardening efforts will result in leveraged, rather than duplicated or independently developed, technology capabilities of: a) radiation-hardened, reconfigurable FPGA-based electronics; and b) high-performance processors.
Correlate Life Predictions and Condition Indicators in Helicopter Tail Gearbox Bearings
NASA Technical Reports Server (NTRS)
Dempsey, Paula J.; Bolander, Nathan; Haynes, Chris; Branning, Jeremy; Wade, Daniel R.
2010-01-01
Research to correlate bearing remaining useful life (RUL) predictions with Helicopter Health Usage Monitoring Systems (HUMS) condition indicators (CI) to indicate the damage state of a transmission component has been developed. Condition indicators were monitored and recorded on UH-60M (Black Hawk) tail gearbox output shaft thrust bearings, which had been removed from helicopters and installed in a bearing spall propagation test rig. Condition indicators monitoring the tail gearbox output shaft thrust bearings in UH-60M helicopters were also recorded from an on-board HUMS. The spall-propagation data collected in the test rig were used to generate condition indicators for bearing fault detection. A damage progression model was also developed from these data. Determining the RUL of this component in a helicopter requires the CI response to be mapped to the damage state. The data from helicopters and a test rig were analyzed to determine if bearing remaining useful life predictions could be correlated with HUMS condition indicators. Results indicate data fusion analysis techniques can be used to map the CI response to the damage levels.
A 30 MW, 200 MHz Inductive Output Tube for RF Accelerators
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. Lawrence Ives; Michael Read
2008-06-19
This program investigated development of a multiple beam inductive output tube (IOT) to produce 30 MW pulses at 200 MHz. The program was successful in demonstrating feasibility of developing the source to achieve the desired power in microsecond pulses with 70% efficiency. The predicted gain of the device is 24 dB. Consequently, a 200 kW driver would be required for the RF input. Estimated cost of this driver is approximately $1.25 M. Given the estimated development cost of the IOT of approximately $750K and the requirements for a test set that would significantly increase the cost, it was determined that development could not be achieved within the funding constraints of a Phase II program.
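As a quick arithmetic check on the driver requirement, gain in decibels relates output to input power as gain_dB = 10·log10(P_out/P_in); at 24 dB the minimum drive for a 30 MW output works out to roughly 119 kW, so the quoted 200 kW driver presumably includes headroom:

```python
# Required driver (input) power for a 30 MW output at 24 dB gain:
# gain_dB = 10 * log10(P_out / P_in)  =>  P_in = P_out / 10**(gain_dB / 10)
P_out = 30e6          # W
gain_dB = 24.0
P_in = P_out / 10 ** (gain_dB / 10)
print(f"minimum driver power: {P_in / 1e3:.0f} kW")  # ~119 kW before margin
```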
NASA Astrophysics Data System (ADS)
Obukhov, S. G.; Plotnikov, I. A.; Surzhikova, O. A.; Savkin, K. D.
2017-04-01
Solar photovoltaic technology is one of the most rapidly growing renewable sources of electricity that has practical application in various fields of human activity due to its high availability, huge potential and environmental compatibility. The original simulation model of the photovoltaic power plant has been developed to simulate and investigate the plant operating modes under actual operating conditions. The proposed model considers the impact of the external climatic factors on the solar panel energy characteristics that improves accuracy in the power output prediction. The data obtained through the photovoltaic power plant operation simulation enable a well-reasoned choice of the required capacity for storage devices and determination of the rational algorithms to control the energy complex.
Power output measurement during treadmill cycling.
Coleman, D A; Wiles, J D; Davison, R C R; Smith, M F; Swaine, I L
2007-06-01
The study aim was to consider the use of a motorised treadmill as a cycling ergometry system by assessing predicted and recorded power output values during treadmill cycling. Fourteen male cyclists completed repeated cycling trials on a motorised treadmill whilst riding their own bicycle fitted with a mobile ergometer. The speed, gradient and loading via an external pulley system were recorded during 20-s constant speed trials and used to estimate power output with an assumption about the contribution of rolling resistance. These values were then compared with mobile ergometer measurements. To assess the reliability of measured power output values, four repeated trials were conducted on each cyclist. During level cycling, the recorded power output was 257.2 +/- 99.3 W compared to the predicted power output of 258.2 +/- 99.9 W (p > 0.05). For graded cycling, there was no significant difference between measured and predicted power output, 268.8 +/- 109.8 W vs. 270.1 +/- 111.7 W, p > 0.05, SEE 1.2 %. The coefficient of variation for mobile ergometer power output measurements during repeated trials ranged from 1.5 % (95 % CI 1.2 - 2.0 %) to 1.8 % (95 % CI 1.5 - 2.4 %). These results indicate that treadmill cycling can be used as an ergometry system to assess power output in cyclists with acceptable accuracy.
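The prediction of power from speed and gradient can be sketched from first principles; the rolling-resistance coefficient below is an assumed illustrative value, not the study's, and the sketch omits the study's external pulley loading:

```python
import math

def predicted_power(speed_ms, grade_pct, total_mass_kg, crr=0.004, g=9.81):
    """Power (W) to overcome gravity and rolling resistance at constant speed.
    Aerodynamic drag is negligible on a treadmill (no airflow over the rider);
    crr is an assumed rolling-resistance coefficient, not the study's value."""
    theta = math.atan(grade_pct / 100.0)
    f_gravity = total_mass_kg * g * math.sin(theta)
    f_rolling = crr * total_mass_kg * g * math.cos(theta)
    return (f_gravity + f_rolling) * speed_ms

# e.g. an 85 kg rider-plus-bicycle at 5 m/s on an 8% grade
print(f"{predicted_power(5.0, 8.0, 85.0):.0f} W")
```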
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferguson, S; Ahmad, S; Chen, Y
2016-06-15
Purpose: To commission and investigate the accuracy of an output (cGy/MU) prediction model for a compact passively scattered proton therapy system. Methods: A previously published output prediction model (Sahoo et al, Med Phys, 35, 5088-5097, 2008) was commissioned for our Mevion S250 proton therapy system. This is a correction-based model that multiplies correction factors (D/MUwnc = ROF x SOBPF x RSF x SOBPOCF x OCR x FSF x ISF). These factors account for changes in output due to options (12 large, 5 deep, and 7 small), modulation width M, range R, off-center position, off-axis position, field size, and off-isocenter position. In this study, the model was modified to ROF x SOBPF x RSF x OCR x FSF x ISF-OCF x GACF by merging SOBPOCF and ISF for simplicity and introducing a gantry angle correction factor (GACF). To commission the model, over 1,000 output data points were taken at the time of system commissioning. The output was predicted by interpolation (1D for SOBPF, FSF, and GACF; 2D for RSF and OCR) with an inverse-square calculation (ISF-OCF). The outputs of 273 combinations of R and M, covering all 24 options, were measured to test the model. To minimize fluence perturbation, scattered dose from the range compensator and patient was not considered. The percent differences between the predicted (P) and measured (M) outputs were calculated to test the prediction accuracy ([P-M]/M x 100%). Results: GACF was required because of up to 3.5% output variation with gantry angle. A 2D interpolation was required for OCR because the dose distribution was not radially symmetric, especially for the deep options. The average percent difference was −0.03 ± 0.98% (mean ± SD), and the differences for all measurements fell within ±3%. Conclusion: The model can be used clinically for the compact passively scattered proton therapy system. However, great care should be taken when the field size is less than 5 × 5 cm², where a direct output measurement is required due to substantial output changes caused by irregular block shapes.
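The multiplicative structure of such a correction-based model, with 1-D interpolation of tabulated factors and an inverse-square term, can be sketched as follows; every table value, argument, and distance here is illustrative, not commissioning data:

```python
import numpy as np

# Hypothetical commissioning tables (factor names follow the abstract; the
# numbers are invented for illustration, not measured data).
ranges = np.array([10.0, 15.0, 20.0, 25.0])          # proton range R, cm
rsf_table = np.array([1.00, 0.98, 0.95, 0.91])       # range-shifter factor vs R
gantry = np.array([0.0, 90.0, 180.0, 270.0])         # gantry angle, deg
gacf_table = np.array([1.000, 1.015, 1.035, 1.012])  # up to ~3.5% variation

def output_cgy_per_mu(rof, sobpf, fsf, range_cm, gantry_deg, ssd_cm, ssd_ref=227.0):
    """Output as a product of correction factors, in the spirit of the model."""
    rsf = np.interp(range_cm, ranges, rsf_table)     # 1-D interpolation
    gacf = np.interp(gantry_deg, gantry, gacf_table)
    isf = (ssd_ref / ssd_cm) ** 2                    # inverse-square factor
    return rof * sobpf * fsf * rsf * gacf * isf

val = output_cgy_per_mu(1.0, 0.97, 0.99, 17.5, 90.0, 230.0)
print(round(val, 4))
```

The 2-D interpolation the study needed for OCR would replace `np.interp` with a bivariate table lookup (e.g. `scipy.interpolate.RegularGridInterpolator`).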
SCOUT: A Fast Monte-Carlo Modeling Tool of Scintillation Camera Output
Hunter, William C. J.; Barrett, Harrison H.; Lewellen, Thomas K.; Miyaoka, Robert S.; Muzi, John P.; Li, Xiaoli; McDougald, Wendy; MacDonald, Lawrence R.
2011-01-01
We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:22072297
Predictive sensor method and apparatus
NASA Technical Reports Server (NTRS)
Nail, William L. (Inventor); Koger, Thomas L. (Inventor); Cambridge, Vivien (Inventor)
1990-01-01
A predictive algorithm is used to determine, in near real time, the steady state response of a slow responding sensor such as hydrogen gas sensor of the type which produces an output current proportional to the partial pressure of the hydrogen present. A microprocessor connected to the sensor samples the sensor output at small regular time intervals and predicts the steady state response of the sensor in response to a perturbation in the parameter being sensed, based on the beginning and end samples of the sensor output for the current sample time interval.
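One common way to extrapolate the steady state of a first-order (exponential) sensor response from a few equally spaced early samples is shown below; this is a generic sketch, not necessarily the exact algorithm of the patented apparatus:

```python
import math

def predict_steady_state(y0, y1, y2):
    """Predict the steady-state value of a first-order (exponential) sensor
    response from three equally spaced samples: successive differences of an
    exponential form a geometric progression, which fixes the asymptote."""
    denom = y0 + y2 - 2.0 * y1
    if abs(denom) < 1e-12:          # already settled (or purely linear drift)
        return y2
    return (y0 * y2 - y1 * y1) / denom

# Simulated slow hydrogen sensor: true steady state 4.0 uA, time constant 30 s,
# sampled every 2 s starting from 1.0 uA.
tau, y_ss, y_init, dt = 30.0, 4.0, 1.0, 2.0
samples = [y_ss + (y_init - y_ss) * math.exp(-t / tau) for t in (0, dt, 2 * dt)]
print(round(predict_steady_state(*samples), 3))  # recovers ~4.0 long before settling
```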
NASA Astrophysics Data System (ADS)
Obara, Shin'ya
A micro-grid with the capacity for sustainable energy is expected to be a distributed energy system with quite a small environmental impact. In an independent micro-grid, "green energy," which is typically thought of as unstable, can be utilized effectively by introducing a battery. In a past study, a production-of-electricity prediction algorithm (PAS) for the solar cell was developed. In PAS, a layered neural network is trained on past weather data, and the operation plan of a compound system of a solar cell and other energy systems was examined using this prediction algorithm. In this paper, a dynamic operational scheduling algorithm is developed using a neural network (PAS) and a genetic algorithm (GA) to provide predictions for solar cell power output. We also present a case study in which this algorithm is used to plan the operation of a system that connects nine houses in Sapporo to a micro-grid composed of power equipment and a polycrystalline silicon solar cell. In this work, the relationship between the accuracy of the solar cell output prediction and the operation plan of the micro-grid was clarified. Moreover, we found that operating the micro-grid according to the plan derived with PAS was far superior, in terms of equipment hours of operation, to operating it using past average weather data.
NASA Astrophysics Data System (ADS)
Asanuma, H.; Sakamoto, K.; Komatsuzaki, T.; Iwata, Y.
2018-07-01
To increase output power for piezoelectric vibration energy harvesters, considerable attention has recently been focused on a self-powered synchronized switch harvesting on inductor (SSHI) technique employing an electrical and mechanical switch. However, there are two technical issues: in a medium or highly coupled harvester, the piezoelectric coupling force, which increases as the SSHI's voltage increases, will reduce the harvester's displacement and the resulting output power, and there are few reports comparing the performance of electrical switch SSHI (ESS) and mechanical switch SSHI (MSS) that include consideration of the piezoelectric coupling force. We developed a simulation technique that allows us to evaluate the output power considering the piezoelectric coupling force, and investigated the performance of stopper-based MSS and ESS, both numerically and experimentally. The numerical investigation predicted the following: (1) the output power for the ESS is lower than that for the MSS at accelerations lower than 3.5 m s^-2, and (2) intriguingly, the output power for the MSS continues to increase, whereas the peak-peak displacement remains constant. The experimental results showed behaviour similar to that of the numerical predictions. The results are attributed to the different switching strategies: the MSS turns on only when the harvester's displacement exceeds the gap distance, while the ESS turns on at every maximum/minimum displacement.
Quantification of downscaled precipitation uncertainties via Bayesian inference
NASA Astrophysics Data System (ADS)
Nury, A. H.; Sharma, A.; Marshall, L. A.
2017-12-01
Prediction of precipitation from global climate model (GCM) outputs remains critical to decision-making in water-stressed regions. In this regard, downscaling of GCM output has been a useful tool for analysing future hydro-climatological states. Several approaches have been developed for precipitation downscaling, using either dynamical or statistical methods. Frequently, outputs from dynamical downscaling are not readily transferable across regions owing to significant methodological and computational difficulties. Statistical downscaling approaches provide a flexible and efficient alternative, producing hydro-climatological outputs across multiple temporal and spatial scales in many locations. However, these approaches are subject to significant uncertainty, arising from uncertainty in the downscaled model parameters and from the use of different reanalysis products for inferring appropriate model parameters. Consequently, these uncertainties affect simulation performance at the catchment scale. This study develops a Bayesian framework for modelling downscaled daily precipitation from GCM outputs, characterizing downscaling uncertainties by evaluating reanalysis datasets against observational rainfall data over Australia. A consistent technique for quantifying downscaling uncertainties by means of this Bayesian downscaling framework is proposed. The results suggest that there are differences in downscaled precipitation occurrences and extremes.
Bayesian Processor of Output for Probabilistic Quantitative Precipitation Forecasting
NASA Astrophysics Data System (ADS)
Krzysztofowicz, R.; Maranzano, C. J.
2006-05-01
The Bayesian Processor of Output (BPO) is a new, theoretically-based technique for probabilistic forecasting of weather variates. It processes output from a numerical weather prediction (NWP) model and optimally fuses it with climatic data in order to quantify uncertainty about a predictand. The BPO is being tested by producing Probabilistic Quantitative Precipitation Forecasts (PQPFs) for a set of climatically diverse stations in the contiguous U.S. For each station, the PQPFs are produced for the same 6-h, 12-h, and 24-h periods up to 84-h ahead for which operational forecasts are produced by the AVN-MOS (Model Output Statistics technique applied to output fields from the Global Spectral Model run under the code name AVN). The inputs into the BPO are estimated as follows. The prior distribution is estimated from a (relatively long) climatic sample of the predictand; this sample is retrieved from the archives of the National Climatic Data Center. The family of the likelihood functions is estimated from a (relatively short) joint sample of the predictor vector and the predictand; this sample is retrieved from the same archive that the Meteorological Development Laboratory of the National Weather Service utilized to develop the AVN-MOS system. This talk gives a tutorial introduction to the principles and procedures behind the BPO, and highlights some results from the testing: a numerical example of the estimation of the BPO, and a comparative verification of the BPO forecasts and the MOS forecasts. It concludes with a list of demonstrated attributes of the BPO (vis-à-vis the MOS): more parsimonious definitions of predictors, more efficient extraction of predictive information, better representation of the distribution function of the predictand, and equal or better performance (in terms of calibration and informativeness).
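The core fusion step of a Bayesian processor, combining a climatic prior with a likelihood linking model output to the predictand, can be illustrated with a conjugate normal-normal toy example (the actual BPO handles non-Gaussian precipitation via normal-quantile transforms; all numbers here are invented):

```python
import math

# Prior: climatic distribution of the predictand (long climatic sample).
prior_mean, prior_var = 10.0, 9.0
# Likelihood: NWP output = predictand + Gaussian noise, with noise variance
# estimated from the short joint sample of predictor and predictand.
obs, obs_var = 14.0, 4.0

# Conjugate normal-normal update: precisions add, means are precision-weighted.
post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
print(f"posterior: mean={post_mean:.2f}, sd={math.sqrt(post_var):.2f}")
```

The posterior mean lands between the climatic prior and the model output, weighted by their precisions, which is the sense in which the fusion is "optimal."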
NASA Astrophysics Data System (ADS)
Sarkar, A.; Chakravartty, J. K.
2013-10-01
A model is developed to predict the constitutive flow behavior of cadmium during compression testing using an artificial neural network (ANN). The inputs of the neural network are strain, strain rate, and temperature, and flow stress is the output. Experimental data obtained from compression tests in the temperature range -30 to 70 °C, strain range 0.1 to 0.6, and strain rate range 10^-3 to 1 s^-1 are employed to develop the model. A three-layer feed-forward ANN is trained with the Levenberg-Marquardt training algorithm. It has been shown that the developed ANN model can efficiently and accurately predict the deformation behavior of cadmium. This trained network could predict the flow stress better than a conventional constitutive equation.
A novel artificial neural network method for biomedical prediction based on matrix pseudo-inversion.
Cai, Binghuang; Jiang, Xia
2014-04-01
Biomedical prediction based on clinical and genome-wide data has become increasingly important in disease diagnosis and classification. To solve the prediction problem in an effective manner for the improvement of clinical care, we develop a novel Artificial Neural Network (ANN) method based on Matrix Pseudo-Inversion (MPI) for use in biomedical applications. The MPI-ANN is constructed as a three-layer (i.e., input, hidden, and output layers) feed-forward neural network, and the weights connecting the hidden and output layers are directly determined based on MPI without a lengthy learning iteration. The LASSO (Least Absolute Shrinkage and Selection Operator) method is also presented for comparative purposes. Single Nucleotide Polymorphism (SNP) simulated data and real breast cancer data are employed to validate the performance of the MPI-ANN method via 5-fold cross validation. Experimental results demonstrate the efficacy of the developed MPI-ANN for disease classification and prediction, in view of the significantly superior accuracy (i.e., the rate of correct predictions), as compared with LASSO. The results based on the real breast cancer data also show that the MPI-ANN has better performance than other machine learning methods (including support vector machine (SVM), logistic regression (LR), and an iterative ANN). In addition, experiments demonstrate that our MPI-ANN could be used for bio-marker selection as well. Copyright © 2013 Elsevier Inc. All rights reserved.
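The direct, iteration-free determination of output-layer weights via the Moore-Penrose pseudo-inverse can be sketched as follows; the random-feature hidden layer and the synthetic binary labels are illustrative assumptions, not the paper's SNP or breast cancer data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: 10 features, binary label from a threshold rule.
X = rng.random((200, 10))
y = (X.sum(axis=1) > 5.0).astype(float).reshape(-1, 1)

# Fix the input->hidden weights, then solve the hidden->output weights in one
# step with the pseudo-inverse instead of a lengthy learning iteration.
W_in = rng.normal(size=(10, 25))
b = rng.normal(size=25)
H = np.tanh(X @ W_in + b)              # hidden-layer activations
W_out = np.linalg.pinv(H) @ y          # direct least-squares solution

pred = (H @ W_out > 0.5).astype(float)
acc = float((pred == y).mean())
print(f"training accuracy: {acc:.2f}")
```

`np.linalg.pinv(H) @ y` is the minimum-norm least-squares fit of the output weights, so no gradient descent is needed for that layer.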
Application of a stochastic snowmelt model for probabilistic decisionmaking
NASA Technical Reports Server (NTRS)
Mccuen, R. H.
1983-01-01
A stochastic form of the snowmelt runoff model that can be used for probabilistic decision-making was developed. The use of probabilistic streamflow predictions instead of single valued deterministic predictions leads to greater accuracy in decisions. While the accuracy of the output function is important in decisionmaking, it is also important to understand the relative importance of the coefficients. Therefore, a sensitivity analysis was made for each of the coefficients.
ERIC Educational Resources Information Center
Zhu, Zheng; Chen, Peijie; Zhuang, Jie
2013-01-01
Purpose: The purpose of this study was to develop and cross-validate an equation based on ActiGraph accelerometer GT3X output to predict children and youth's energy expenditure (EE) of physical activity (PA). Method: Participants were 367 Chinese children and youth (179 boys and 188 girls, aged 9 to 17 years old) who wore 1 ActiGraph GT3X…
NASA Astrophysics Data System (ADS)
Sun, Fengru
2018-01-01
This paper analyzes the characteristics of agricultural products in the Lanzhou area from the perspectives of agricultural production, farmers' income, adjustment of agricultural structure, and environmental improvement. Through data mining and empirical analysis, a grey-system forecasting model for regional agriculture with dynamic data processing was applied; combined with lily output data from 2003-2004, the yield prediction showed a good fit with small error. Finally, drawing on the characteristics of the local agricultural industry, recommendations are made to move beyond production-centered thinking and to combine characteristic agriculture organically with agricultural industrialization.
Muniz-Pumares, Daniel; Pedlar, Charles; Godfrey, Richard; Glaister, Mark
2017-01-01
The aim of this study was to investigate the relationship between oxygen uptake (V̇O2) and power output at intensities below and above the lactate threshold (LT) in cyclists; and to determine the reliability of supramaximal power outputs linearly projected from these relationships. Nine male cyclists (mean±standard deviation age: 41±8 years; mass: 77±6 kg, height: 1.79±0.05 m and V̇O2max: 54±7 mL∙kg-1∙min-1) completed two cycling trials each consisting of a step test (10×3 min stages at submaximal incremental intensities) followed by a maximal test to exhaustion. The lines of best fit for V̇O2 and power output were determined for: the entire step test; stages below and above the LT, and from rolling clusters of five consecutive stages. Lines were projected to determine a power output predicted to elicit 110% peak V̇O2. There were strong linear correlations (r≥0.953; P<0.01) between V̇O2 and power output using the three approaches; with the slope, intercept, and projected values of these lines unaffected (P≥0.05) by intensity. The coefficient of variation of the predicted power output at 110% V̇O2max was 6.7% when using all ten submaximal stages. Cyclists exhibit a linear V̇O2 and power output relationship when determined using 3 min stages, which allows for prediction of a supramaximal intensity with acceptable reliability.
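The projection of a supramaximal power output from a linear V̇O2-power regression can be sketched as follows; the step-test values and V̇O2max below are hypothetical illustrations, not the study's measurements (real stage data would also carry noise):

```python
import numpy as np

# Hypothetical step-test data: 10 x 3 min submaximal stages, 25 W increments.
power = np.arange(100, 350, 25, dtype=float)   # W
vo2 = 500.0 + 11.5 * power                     # mL/min, perfectly linear toy data

slope, intercept = np.polyfit(power, vo2, 1)   # line of best fit
vo2max = 4500.0                                # mL/min, assumed peak V̇O2
target_vo2 = 1.10 * vo2max                     # 110% of peak V̇O2
supramax_power = (target_vo2 - intercept) / slope   # project the line upward
print(f"power at 110% VO2max: {supramax_power:.0f} W")
```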
NASA Astrophysics Data System (ADS)
Çelik, Emre; Uzun, Yunus; Kurt, Erol; Öztürk, Nihat; Topaloğlu, Nurettin
2018-01-01
An application of an artificial neural network (ANN) has been implemented in this article to model the nonlinear relationship of the harvested electrical power of a recently developed piezoelectric pendulum with respect to its resistive load R_L and magnetic excitation frequency f. Prediction of harvested power over a wide range is a difficult task, because the power increases dramatically when f gets close to the natural frequency f_0 of the system. The neural model of the system is designed on the basis of a standard multi-layer network with a back-propagation learning algorithm. Input data (input patterns) presented to the network and the respective output data (output patterns) describing the desired network output were carefully collected from the experiment under several conditions in order to train the developed network accurately. Results indicate that the designed ANN is an effective means for predicting the harvested power of the piezoelectric harvester as a function of R_L and f, with a root mean square error of 6.65 × 10^-3 for training and 1.40 for different test conditions. Using the proposed approach, the harvested power can be estimated reasonably well without tackling the difficulty of experimental studies or the complexity of analytical formulas representing the system.
Accelerating Vaccine Formulation Development Using Design of Experiment Stability Studies.
Ahl, Patrick L; Mensch, Christopher; Hu, Binghua; Pixley, Heidi; Zhang, Lan; Dieter, Lance; Russell, Ryann; Smith, William J; Przysiecki, Craig; Kosinski, Mike; Blue, Jeffrey T
2016-10-01
Vaccine drug product thermal stability often depends on formulation input factors and how they interact. Scientific understanding and professional experience typically allow vaccine formulators to accurately predict the thermal stability output based on formulation input factors such as pH, ionic strength, and excipients. Thermal stability predictions, however, are not enough for regulators. Stability claims must be supported by experimental data. The Quality by Design approach of Design of Experiment (DoE) is well suited to describe formulation outputs such as thermal stability in terms of formulation input factors. A DoE approach, particularly at elevated temperatures that induce accelerated degradation, can provide empirical understanding of how vaccine formulation input factors and interactions affect vaccine stability output performance. This is possible even when clear scientific understanding of particular formulation stability mechanisms is lacking. A DoE approach was used in an accelerated 37 °C stability study of an aluminum-adjuvanted Neisseria meningitidis serogroup B vaccine. Formulation stability differences were identified only 15 days into the study. We believe this study demonstrates the power of combining DoE methodology with accelerated stress stability studies to accelerate and improve vaccine formulation development programs, particularly during the preformulation stage. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Recursive Deadbeat Controller Design
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Phan, Minh Q.
1997-01-01
This paper presents a recursive algorithm for a deadbeat predictive controller design. The method combines together the concepts of system identification and deadbeat controller designs. It starts with the multi-step output prediction equation and derives the control force in terms of past input and output time histories. The formulation thus derived satisfies simultaneously system identification and deadbeat controller design requirements. As soon as the coefficient matrices are identified satisfying the output prediction equation, no further work is required to compute the deadbeat control gain matrices. The method can be implemented recursively just as any typical recursive system identification techniques.
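The regression at the heart of such data-based designs, identifying output-prediction coefficients from past input-output time histories, can be sketched with ordinary least squares; the second-order test system below is invented for illustration, and the paper's recursive, multi-step deadbeat formulation builds on the same regression structure:

```python
import numpy as np

rng = np.random.default_rng(2)

# Identify y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2] from data.
N, p = 400, 2                       # samples, model order
u = rng.normal(size=N)              # persistently exciting input
y = np.zeros(N)
for k in range(2, N):               # "unknown" true system generating the data
    y[k] = 1.5 * y[k-1] - 0.7 * y[k-2] + 1.0 * u[k-1] + 0.5 * u[k-2]

# Stack regressors of past outputs/inputs and solve by least squares.
rows = [np.r_[y[k-1], y[k-2], u[k-1], u[k-2]] for k in range(p, N)]
Phi, target = np.array(rows), y[p:]
theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
print(np.round(theta, 3))           # recovers [1.5, -0.7, 1.0, 0.5]
```

Once such coefficients are identified, the deadbeat control gains follow directly from the multi-step prediction equation, which is the "no further work" property the abstract highlights.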
Translational medicine: science or wishful thinking?
Wehling, Martin
2008-01-01
"Translational medicine" as a fashionable term is being increasingly used to describe the wish of biomedical researchers to ultimately help patients. Despite increased efforts and investments into R&D, the output of novel medicines has been declining dramatically over the past years. Improved translation is thought to offer a remedy, as one of the reasons for this widening gap between input and output is the difficult transition between the preclinical ("basic") and clinical stages of the R&D process. Animal experiments, test tube analyses and early human trials simply do not reflect the patient situation well enough to reliably predict the efficacy and safety of a novel compound or device. This goal, however, can only be achieved if the translational processes are scientifically backed up by robust methods, some of which still need to be developed. This mainly relates to biomarker development and predictivity assessment, biostatistical methods, smart and accelerated early human study designs, and decision algorithms, among other features. It is therefore claimed that a new science needs to be developed, called 'translational science in medicine'. PMID:18559092
London, Michael; Larkum, Matthew E; Häusser, Michael
2008-11-01
Synaptic information efficacy (SIE) is a statistical measure to quantify the efficacy of a synapse. It measures how much information is gained, on the average, about the output spike train of a postsynaptic neuron if the input spike train is known. It is a particularly appropriate measure for assessing the input-output relationship of neurons receiving dynamic stimuli. Here, we compare the SIE of simulated synaptic inputs measured experimentally in layer 5 cortical pyramidal neurons in vitro with the SIE computed from a minimal model constructed to fit the recorded data. We show that even with a simple model that is far from perfect in predicting the precise timing of the output spikes of the real neuron, the SIE can still be accurately predicted. This arises from the ability of the model to predict output spikes influenced by the input more accurately than those driven by the background current. This indicates that in this context, some spikes may be more important than others. Lastly we demonstrate another aspect where using mutual information could be beneficial in evaluating the quality of a model, by measuring the mutual information between the model's output and the neuron's output. The SIE, thus, could be a useful tool for assessing the quality of models of single neurons in preserving input-output relationship, a property that becomes crucial when we start connecting these reduced models to construct complex realistic neuronal networks.
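A plug-in estimate of mutual information between two binned binary sequences, a much cruder estimator than the SIE machinery but the same underlying quantity, can be sketched as:

```python
import numpy as np

def mutual_information_bits(a, b):
    """Plug-in MI estimate between two binary sequences (e.g. binned spike
    trains), in bits. The SIE work uses more careful spike-train estimators."""
    a, b = np.asarray(a), np.asarray(b)
    mi = 0.0
    for va in (0, 1):
        for vb in (0, 1):
            p_ab = np.mean((a == va) & (b == vb))
            p_a, p_b = np.mean(a == va), np.mean(b == vb)
            if p_ab > 0:
                mi += p_ab * np.log2(p_ab / (p_a * p_b))
    return mi

rng = np.random.default_rng(3)
x = rng.integers(0, 2, 10000)                     # "input" binary train
y = np.where(rng.random(10000) < 0.9, x, 1 - x)   # "output" copies x 90% of the time
print(f"{mutual_information_bits(x, y):.3f} bits")  # near 1 - H(0.1) ~ 0.53 bits
```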
National Centers for Environmental Prediction
NDAS output fields (contents, format, grid specifications, output frequency, archive): the NWP model, the horizontal output grid, the vertical grid, and access to fields via anonymous FTP or the permanent tape archive.
Influence of Climate Variability on Brown Planthopper Population Dynamics and Development Time
NASA Astrophysics Data System (ADS)
Romadhon, S.; Koesmaryono, Y.; Hidayati, R.
2017-03-01
Brown planthopper, Nilaparvata lugens (BPH), is one of the major rice pests in Indonesia. BPH can cause extensive damage and appears in almost every planting season; frequent outbreaks result in very high economic losses. Outbreaks of BPH often occurred in paddy fields in Indramayu regency and several endemic regencies on Java island, where rice is cultivated two to three times a year in both rainy and dry cropping seasons. The simulation output shows the BPH population starting to increase from December to February (rainy season) and from June to August (dry season). This result followed a pattern similar to the light-trap observation data, but overestimated the BPH population. The simulation output pattern was, however, adequately close when compared to observation data on the area attacked by BPH. The development time of the different BPH stages varied with temperature. The simulated BPH development times at the egg and adult stages matched the real life stages, but at the nymph stage the result differed from the accepted development time.
Development and Validation of the Texas Best Management Practice Evaluation Tool (TBET)
USDA-ARS?s Scientific Manuscript database
Conservation planners need simple yet accurate tools to predict sediment and nutrient losses from agricultural fields to guide conservation practice implementation and increase cost-effectiveness. The Texas Best management practice Evaluation Tool (TBET), which serves as an input/output interpreter...
Control theory-based regulation of hippocampal CA1 nonlinear dynamics.
Hsiao, Min-Chi; Song, Dong; Berger, Theodore W
2008-01-01
We are developing a biomimetic electronic neural prosthesis to replace regions of the hippocampal brain area that have been damaged by disease or insult. Our previous study has shown that the VLSI implementation of a CA3 nonlinear dynamic model can functionally replace the CA3 subregion of the hippocampal slice. As a result, the propagation of temporal patterns of activity from DG-->VLSI-->CA1 reproduces the activity observed experimentally in the biological DG-->CA3-->CA1 circuit. In this project, we incorporate an open-loop controller to optimize the output (CA1) response. Specifically, we seek to optimize the stimulation signal to CA1 using a predictive dentate gyrus (DG)-CA1 nonlinear model (i.e., DG-CA1 trajectory model) and a CA1 input-output model (i.e., CA1 plant model), such that the ultimate CA1 response (i.e., desired output) can be first predicted by the DG-CA1 trajectory model and then transformed to the desired stimulation through the inverse CA1 plant model. Lastly, the desired CA1 output is evoked by the estimated optimal stimulation. This study will be the first stage of formulating an integrated modeling-control strategy for the hippocampal neural prosthetic system.
NASA Technical Reports Server (NTRS)
Caviness, V. S. Jr; Goto, T.; Tarui, T.; Takahashi, T.; Bhide, P. G.; Nowakowski, R. S.
2003-01-01
The neurons of the neocortex are generated over a 6 day neuronogenetic interval that comprises 11 cell cycles. During these 11 cell cycles, the length of the cell cycle increases and the proportion of cells that exits (Q) versus re-enters (P) the cell cycle changes systematically. At the same time, the fate of the neurons produced at each of the 11 cell cycles appears to be specified at least in terms of their laminar destination. As a first step towards determining the causal interrelationships of the proliferative process with the process of laminar specification, we present a two-pronged approach. This consists of (i) a mathematical model that integrates the output of the proliferative process with the laminar fate of the output and predicts the effects of induced changes in Q and P during the neuronogenetic interval on the developing and mature cortex and (ii) an experimental system that allows the manipulation of Q and P in vivo. Here we show that the predictions of the model and the results of the experiments agree. The results indicate that events affecting the output of the proliferative population affect both the number of neurons produced and their specification with regard to their laminar fate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moslehi, Salim; Reddy, T. Agami; Katipamula, Srinivas
This research was undertaken to evaluate different inverse models for predicting power output of solar photovoltaic (PV) systems under different practical scenarios. In particular, we have investigated whether PV power output prediction accuracy can be improved if module/cell temperature was measured in addition to climatic variables, and also the extent to which prediction accuracy degrades if solar irradiation is not measured on the plane of array but only on a horizontal surface. We have also investigated the significance of different independent or regressor variables, such as wind velocity and incident angle modifier in predicting PV power output and cell temperature. The inverse regression model forms have been evaluated both in terms of their goodness-of-fit, and their accuracy and robustness in terms of their predictive performance. Given the accuracy of the measurements, expected CV-RMSE of hourly power output prediction over the year varies between 3.2% and 8.6% when only climatic data are used. Depending on what type of measured climatic and PV performance data is available, different scenarios have been identified and the corresponding appropriate modeling pathways have been proposed. The corresponding models are to be implemented on a controller platform for optimum operational planning of microgrids and integrated energy systems.
Multi-level emulation of complex climate model responses to boundary forcing data
NASA Astrophysics Data System (ADS)
Tran, Giang T.; Oliver, Kevin I. C.; Holden, Philip B.; Edwards, Neil R.; Sóbester, András; Challenor, Peter
2018-04-01
Climate model components involve both high-dimensional input and output fields. It is desirable to efficiently generate spatio-temporal outputs of these models for applications in integrated assessment modelling, or to assess the statistical relationship between such sets of inputs and outputs, for example, in uncertainty analysis. However, the need for efficiency often compromises the fidelity of output through the use of low complexity models. Here, we develop a technique which combines statistical emulation with a dimensionality reduction technique to emulate a wide range of outputs from an atmospheric general circulation model, PLASIM, as functions of the boundary forcing prescribed by the ocean component of a lower complexity climate model, GENIE-1. Although accurate and detailed spatial information on atmospheric variables such as precipitation and wind speed is well beyond the capability of GENIE-1's energy-moisture balance model of the atmosphere, this study demonstrates that the output of this model is useful in predicting PLASIM's spatio-temporal fields through multi-level emulation. Meaningful information from the fast model, GENIE-1, was extracted by utilising the correlation between variables of the same type in the two models and between variables of different types in PLASIM. We present here the construction and validation of several PLASIM variable emulators and discuss their potential use in developing a hybrid model with statistical components.
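A common way to emulate a high-dimensional output field, in the spirit described above, is to reduce the fields to a few principal components and regress the component scores on the inputs. The following sketch uses entirely synthetic fields and invented basis patterns (not PLASIM or GENIE-1 data) to show the mechanics:

```python
import numpy as np

# Synthetic ensemble: a scalar boundary forcing drives a 500-point
# output field through two spatial patterns plus small noise.
rng = np.random.default_rng(4)
n_runs, n_grid = 60, 500
forcing = rng.uniform(-1, 1, n_runs)
pattern1 = np.sin(np.linspace(0, 3, n_grid))
pattern2 = np.cos(np.linspace(0, 7, n_grid))
fields = (np.outer(forcing, pattern1) + np.outer(forcing ** 2, pattern2)
          + 0.01 * rng.normal(size=(n_runs, n_grid)))

# Dimensionality reduction: principal components of the centered fields.
mean = fields.mean(axis=0)
U, S, Vt = np.linalg.svd(fields - mean, full_matrices=False)
k = 2
scores = U[:, :k] * S[:k]            # per-run component amplitudes

# Cheap emulator: polynomial regression from forcing to the PC scores.
X = np.column_stack([forcing, forcing ** 2, np.ones(n_runs)])
B, *_ = np.linalg.lstsq(X, scores, rcond=None)

def emulate(f):
    """Predict the full field for a new forcing value."""
    x = np.array([f, f ** 2, 1.0])
    return mean + (x @ B) @ Vt[:k]
```

The real study replaces the polynomial fit with Gaussian process emulators and uses multi-level information from the fast model, but the reduce-then-regress structure is the same.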
The Effect of Nondeterministic Parameters on Shock-Associated Noise Prediction Modeling
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Khavaran, Abbas
2010-01-01
Engineering applications for aircraft noise prediction contain models for physical phenomenon that enable solutions to be computed quickly. These models contain parameters that have an uncertainty not accounted for in the solution. To include uncertainty in the solution, nondeterministic computational methods are applied. Using prediction models for supersonic jet broadband shock-associated noise, fixed model parameters are replaced by probability distributions to illustrate one of these methods. The results show the impact of using nondeterministic parameters both on estimating the model output uncertainty and on the model spectral level prediction. In addition, a global sensitivity analysis is used to determine the influence of the model parameters on the output, and to identify the parameters with the least influence on model output.
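The nondeterministic approach described here amounts to propagating parameter distributions through the model by Monte Carlo sampling and attributing output variance to each parameter. A sketch with an invented two-parameter toy model (the real broadband shock-associated noise model is far more involved):

```python
import numpy as np

# Toy stand-in model: output depends on two uncertain parameters a, b.
def model(a, b, x):
    return a * np.log1p(x) + b

rng = np.random.default_rng(1)
n = 10_000
a = rng.normal(2.0, 0.2, n)    # fixed constant replaced by a distribution
b = rng.normal(1.0, 0.05, n)
y = model(a, b, 3.0)

# Output uncertainty implied by the parameter distributions
lo, hi = np.percentile(y, [2.5, 97.5])

# Crude variance-based sensitivity: fraction of output variance
# recovered when only one parameter is allowed to vary.
sens_a = np.var(model(a, b.mean(), 3.0)) / np.var(y)
sens_b = np.var(model(a.mean(), b, 3.0)) / np.var(y)
```

A global sensitivity analysis as in the paper would use more careful indices (e.g. Sobol'), but the ranking idea, identifying which parameters barely influence the output, is the same.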
Method and system for monitoring and displaying engine performance parameters
NASA Technical Reports Server (NTRS)
Abbott, Terence S. (Inventor); Person, Jr., Lee H. (Inventor)
1991-01-01
The invention is a method and system for monitoring and directly displaying the actual thrust produced by a jet aircraft engine under determined operating conditions and the available thrust and predicted (commanded) thrust of a functional model of an ideal engine under the same conditions. A first set of actual-value output signals, representing a plurality of actual performance parameters of the engine under the determined operating conditions, is generated and compared with a second set of predicted-value output signals, representing the predicted values of the corresponding performance parameters of a functional model of the engine under the same conditions, to produce a third set of difference-value output signals within normal, caution, or warning limit ranges. A thrust indicator displays when any one of the actual-value output signals is in the warning range, while shaping-function means shape each of the difference output signals as it approaches the limits of the respective normal, caution, and warning ranges.
Fast metabolite identification with Input Output Kernel Regression.
Brouard, Céline; Shen, Huibin; Dührkop, Kai; d'Alché-Buc, Florence; Böcker, Sebastian; Rousu, Juho
2016-06-15
An important problem in metabolomics is the identification of metabolites from tandem mass spectrometry data. Machine learning methods have recently been proposed to solve this problem by predicting molecular fingerprint vectors and matching these fingerprints against existing molecular structure databases. In this work we propose to address the metabolite identification problem using a structured output prediction approach. This type of approach is not limited to vector output spaces and can handle structured output spaces such as the space of molecules. We use the Input Output Kernel Regression method to learn the mapping between tandem mass spectra and molecular structures. The principle of this method is to encode the similarities in the input (spectra) space and the similarities in the output (molecule) space using two kernel functions. The method approximates the spectra-molecule mapping in two phases. The first phase is a regression problem from the input space to the feature space associated with the output kernel. The second phase is a preimage problem, which maps the predicted output feature vectors back to the molecule space. We show that our approach achieves state-of-the-art accuracy in metabolite identification. Moreover, our method decreases the running times of the training and test steps by several orders of magnitude compared with the preceding methods. celine.brouard@aalto.fi Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
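The two-phase scheme can be sketched with generic Gaussian kernels on toy vectors; real IOKR uses mass-spectral and molecular-fingerprint kernels, so everything below (data, kernel choice, regularization) is illustrative:

```python
import numpy as np

def gauss_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Toy training data: "spectra" X and "molecule" vectors Y linked by a
# hidden mapping (all synthetic).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
Y = X @ rng.normal(size=(4, 3))

# Phase 1: kernel ridge regression from inputs into the output
# feature space induced by the output kernel.
lam = 1e-3
Kx = gauss_kernel(X, X)
A = np.linalg.solve(Kx + lam * np.eye(len(X)), np.eye(len(X)))

def identify(x, candidates):
    """Phase 2 (preimage): score each candidate structure against the
    predicted output feature vector and keep the best one."""
    kx = gauss_kernel(x[None, :], X)        # (1, n) input similarities
    Ky = gauss_kernel(Y, candidates)        # (n, m) output similarities
    scores = kx @ A @ Ky
    return candidates[np.argmax(scores)]

true_y = Y[7]
candidates = np.vstack([rng.normal(size=(20, 3)), true_y])
best = identify(X[7], candidates)           # should recover true_y
```

In the real setting the candidate set is a molecular structure database, which is what makes the preimage step tractable.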
Improvement of short-term numerical wind predictions
NASA Astrophysics Data System (ADS)
Bedard, Joel
Geophysical Model Output Statistics (GMOS) are developed to optimize the use of NWP for complex sites. GMOS differs from other MOS widely used by meteorological centers in the following respects: it takes into account surrounding geophysical parameters such as surface roughness and terrain height, along with wind direction, and it can be applied directly without any training, although training will further improve the results. GMOS was applied to improve the Environment Canada GEM-LAM 2.5 km forecasts at North Cape (PEI, Canada): it improves the prediction RMSE by 25-30% for all time horizons and almost all meteorological conditions; the topographic signature of the forecast error due to insufficient grid refinement is eliminated; and the NWP combined with GMOS outperforms persistence from a 2 h horizon, instead of 4 h without GMOS. Finally, GMOS was applied at another site (Bouctouche, NB, Canada): similar improvements were observed, showing its general applicability. Keywords: wind energy, wind power forecast, numerical weather prediction, complex sites, model output statistics
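The essence of any MOS correction, including GMOS, is a statistical fit from raw NWP output and auxiliary predictors to observations. A sketch with synthetic data and an invented per-sector roughness predictor (the real GMOS predictors are built from actual geophysical fields):

```python
import numpy as np

# Synthetic verification sample: raw NWP wind forecasts, a
# direction-dependent roughness proxy, and "observed" winds.
rng = np.random.default_rng(2)
n = 500
nwp = rng.uniform(2, 14, n)               # raw forecast wind speed (m/s)
sector_rough = rng.uniform(0.01, 0.5, n)  # roughness proxy by wind sector
obs = 0.8 * nwp - 2.0 * sector_rough + 1.0 + rng.normal(0, 0.3, n)

# MOS step: linear fit of observations on the forecast plus the
# geophysical predictor, then apply the correction.
X = np.column_stack([nwp, sector_rough, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, obs, rcond=None)
corrected = X @ coef

rmse_raw = np.sqrt(np.mean((obs - nwp) ** 2))
rmse_mos = np.sqrt(np.mean((obs - corrected) ** 2))
```

The training-free mode mentioned in the abstract would replace the fitted coefficients with ones derived directly from the geophysical parameters.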
NASA Technical Reports Server (NTRS)
He, Yuning
2015-01-01
Safety of unmanned aerial systems (UAS) is paramount, but the large number of dynamically changing controller parameters makes it hard to determine if the system is currently stable, and the time before loss of control if not. We propose a hierarchical statistical model using Treed Gaussian Processes to predict (i) whether a flight will be stable (success) or become unstable (failure), (ii) the time-to-failure if unstable, and (iii) time series outputs for flight variables. We first classify the current flight input into success or failure types, and then use separate models for each class to predict the time-to-failure and time series outputs. As different inputs may cause failures at different times, we have to model variable length output curves. We use a basis representation for curves and learn the mappings from input to basis coefficients. We demonstrate the effectiveness of our prediction methods on a NASA neuro-adaptive flight control system.
OceanNOMADS: Real-time and retrospective access to operational U.S. ocean prediction products
NASA Astrophysics Data System (ADS)
Harding, J. M.; Cross, S. L.; Bub, F.; Ji, M.
2011-12-01
The National Oceanic and Atmospheric Administration (NOAA) National Operational Model Archive and Distribution System (NOMADS) provides both real-time and archived atmospheric model output from servers at the National Centers for Environmental Prediction (NCEP) and National Climatic Data Center (NCDC) respectively (http://nomads.ncep.noaa.gov/txt_descriptions/marRutledge-1.pdf). The NOAA National Ocean Data Center (NODC) with NCEP is developing a complementary capability called OceanNOMADS for operational ocean prediction models. An NCEP ftp server currently provides real-time ocean forecast output (http://www.opc.ncep.noaa.gov/newNCOM/NCOM_currents.shtml) with retrospective access through NODC. A joint effort between the Northern Gulf Institute (NGI; a NOAA Cooperative Institute) and the NOAA National Coastal Data Development Center (NCDDC; a division of NODC) created the developmental version of the retrospective OceanNOMADS capability (http://www.northerngulfinstitute.org/edac/ocean_nomads.php) under the NGI Ecosystem Data Assembly Center (EDAC) project (http://www.northerngulfinstitute.org/edac/). Complementary funding support for the developmental OceanNOMADS from U.S. Integrated Ocean Observing System (IOOS) through the Southeastern University Research Association (SURA) Model Testbed (http://testbed.sura.org/) this past year provided NODC the analogue that facilitated the creation of an NCDDC production version of OceanNOMADS (http://www.ncddc.noaa.gov/ocean-nomads/). Access tool development and storage of initial archival data sets occur on the NGI/NCDDC developmental servers with transition to NODC/NCCDC production servers as the model archives mature and operational space and distribution capability grow. Navy operational global ocean forecast subsets for U.S waters comprise the initial ocean prediction fields resident on the NCDDC production server. 
The NGI/NCDDC developmental server currently includes the Naval Research Laboratory Inter-America Seas Nowcast/Forecast System over the Gulf of Mexico from 2004-Mar 2011, the operational Naval Oceanographic Office (NAVOCEANO) regional USEast ocean nowcast/forecast system from early 2009 to present, and the NAVOCEANO operational regional AMSEAS (Gulf of Mexico/Caribbean) ocean nowcast/forecast system from its inception 25 June 2010 to present. AMSEAS provided one of the real-time ocean forecast products accessed by NOAA's Office of Response and Restoration from the NGI/NCDDC developmental OceanNOMADS during the Deepwater Horizon oil spill last year. The developmental server also includes archived, real-time Navy coastal forecast products off coastal Japan in support of U.S./Japanese joint efforts following the 2011 tsunami. Real-time NAVOCEANO output from regional prediction systems off Southern California and around Hawaii, currently available on the NCEP ftp server, are scheduled for archival on the developmental OceanNOMADS by late 2011 along with the next generation Navy/NOAA global ocean prediction output. Accession and archival of additional regions is planned as server capacities increase.
Consequences of genetic change in farm animals on food intake and feeding behaviour.
Emmans, G; Kyriazakis, I
2001-02-01
Selection in commercial populations on aspects of output, such as growth rate in poultry, against fatness and for growth rate in pigs, and for milk yield in cows, has had very large effects on such outputs over the past 50 years. Partly because of the cost of recording intake, there has been little or no selection for food intake or feeding behaviour. In order to predict the effects of such past, and future, selection on intake it is necessary to have some suitable theoretical framework. Intake needs to be predicted in order to make rational feeding and environmental decisions. The idea that an animal will eat 'to meet its requirements' has proved useful and continues to be fruitful. An important part of the idea is that the animal (genotype) can be described in a way that is sufficient for the accurate prediction of its outputs over time. Such descriptions can be combined with a set of nutritional constants to calculate requirements. There appears to have been no change in the nutritional constants under selection for output. Under such selection it is simplest to assume that changes in intake follow from the changes in output rates, so that intake changes become entirely predictable. It is suggested that other ways that have been proposed for predicting intake cannot be successful in predicting the effects of selection. Feeding behaviour is seen as being the means that the animal uses to attain its intake rather than being the means by which that intake can be predicted. Thus, the organisation of feeding behaviour can be used to predict neither intake nor the effects of selection on it.
The Multiplier Effect of the Development of Forest Park Tourism on Employment Creation in China
ERIC Educational Resources Information Center
Shuifa, Ke; Chenguang, Pan; Jiahua, Pan; Yan, Zheng; Ying, Zhang
2011-01-01
The focus of this article was employment creation by developing forest park tourism industries in China. Analysis of the statistical data and an input-output approach showed that 1 direct job opportunity in tourism industries created 1.15 other job opportunities. In the high, middle, and low scenarios, the total predicted employment in forest park…
Absolute, SI-traceable lunar irradiance tie-points for the USGS Lunar Model
NASA Astrophysics Data System (ADS)
Brown, Steven W.; Eplee, Robert E.; Xiong, Xiaoxiong J.
2017-10-01
The United States Geological Survey (USGS) has developed an empirical model, known as the Robotic Lunar Observatory (ROLO) Model, that predicts the reflectance of the Moon for any Sun-sensor-Moon configuration over the spectral range from 350 nm to 2500 nm. The lunar irradiance can be predicted from the modeled lunar reflectance using a spectrum of the incident solar irradiance. While extremely successful as a relative exo-atmospheric calibration target, the ROLO Model is not SI-traceable and has estimated uncertainties too large for the Moon to be used as an absolute celestial calibration target. In this work, two recent absolute, low uncertainty, SI-traceable top-of-the-atmosphere (TOA) lunar irradiances, measured over the spectral range from 380 nm to 1040 nm at lunar phase angles of 6.6° and 16.9°, are used as tie-points to the output of the ROLO Model. Combined with empirically derived phase and libration corrections to the output of the ROLO Model and uncertainty estimates in those corrections, the measurements enable development of a corrected TOA lunar irradiance model and its uncertainty budget for phase angles between +/-80° and libration angles from 7° to 51°. The uncertainties in the empirically corrected output from the ROLO model are approximately 1% from 440 nm to 865 nm and increase to almost 3% at 412 nm. The dominant components in the uncertainty budget are the uncertainty in the absolute TOA lunar irradiance and the uncertainty in the fit to the phase correction from the output of the ROLO model.
Modeling the expenditure and reconstitution of work capacity above critical power.
Skiba, Philip Friere; Chidnok, Weerapong; Vanhatalo, Anni; Jones, Andrew M
2012-08-01
The critical power (CP) model includes two constants: the CP and the W' [P = (W' / t) + CP]. The W' is the finite work capacity available above CP. Power output above CP depletes the W'; complete depletion of the W' results in exhaustion. Monitoring the W' may be valuable to athletes during training and competition. Our purpose was to develop a function describing the dynamic state of the W' during intermittent exercise. After determination of V˙O(2max), CP, and W', seven subjects completed four separate exercise tests on a cycle ergometer on different days. Each protocol comprised a set of intervals: 60 s at a severe power output, followed by 30-s recovery at a lower prescribed power output. The intervals were repeated until exhaustion. These data were entered into a continuous equation predicting the balance of W' remaining, assuming exponential reconstitution of the W'. The time constant was varied by an iterative process until the remaining modeled W' = 0 at the point of exhaustion. The time constants of W' recharge were negatively correlated with the difference between sub-CP recovery power and CP. The relationship was best fit by an exponential (r = 0.77). The model-predicted W' balance correlated with the temporal course of the rise in V˙O(2) (r = 0.82-0.96). The model accurately predicted exhaustion of the W' in a competitive cyclist during a road race. We have developed a function to track the dynamic state of the W' during intermittent exercise. This may have important implications for the planning and real-time monitoring of athletic performance.
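The bookkeeping described here can be sketched as follows, assuming linear expenditure above CP and exponential reconstitution below it. This is a simplified differential form of the published integral model, and the constants (CP, W', tau) are illustrative; in the study tau itself depends on the recovery power relative to CP:

```python
import math

def wprime_balance(power, dt, cp, w0, tau):
    """Track the W' balance during intermittent exercise: work above CP
    is debited directly; below CP the deficit recovers exponentially
    toward the full W' with time constant tau (seconds)."""
    wbal, trace = float(w0), []
    for p in power:
        if p > cp:
            wbal -= (p - cp) * dt                          # joules spent above CP
        else:
            wbal = w0 - (w0 - wbal) * math.exp(-dt / tau)  # exponential recharge
        trace.append(wbal)
    return trace

# 4 x (60 s at 350 W work / 30 s at 150 W recovery), CP = 250 W, W' = 20 kJ
power = ([350] * 60 + [150] * 30) * 4
trace = wprime_balance(power, dt=1.0, cp=250, w0=20_000, tau=300)
# a crossing below zero predicts exhaustion within that interval
```

With these illustrative numbers each work bout spends 6 kJ while each 30 s recovery restores only a fraction of the deficit, so the predicted balance falls with every interval.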
Summary of the key features of seven biomathematical models of human fatigue and performance.
Mallis, Melissa M; Mejdal, Sig; Nguyen, Tammy T; Dinges, David F
2004-03-01
Biomathematical models that quantify the effects of circadian and sleep/wake processes on the regulation of alertness and performance have been developed in an effort to predict the magnitude and timing of fatigue-related responses in a variety of contexts (e.g., transmeridian travel, sustained operations, shift work). This paper summarizes key features of seven biomathematical models reviewed as part of the Fatigue and Performance Modeling Workshop held in Seattle, WA, on June 13-14, 2002. The Workshop was jointly sponsored by the National Aeronautics and Space Administration, U.S. Department of Defense, U.S. Army Medical Research and Materiel Command, Office of Naval Research, Air Force Office of Scientific Research, and U.S. Department of Transportation. An invitation was sent to developers of seven biomathematical models that were commonly cited in scientific literature and/or supported by government funding. On acceptance of the invitation to attend the Workshop, developers were asked to complete a survey of the goals, capabilities, inputs, and outputs of their biomathematical models of alertness and performance. Data from the completed surveys were summarized and juxtaposed to provide a framework for comparing features of the seven models. Survey responses revealed that models varied greatly relative to their reported goals and capabilities. While all modelers reported that circadian factors were key components of their capabilities, they differed markedly with regard to the roles of sleep and work times as input factors for prediction: four of the seven models had work time as their sole input variable(s), while the other three models relied on various aspects of sleep timing for model input. Models also differed relative to outputs: five sought to predict results from laboratory experiments, field, and operational data, while two models were developed without regard to predicting laboratory experimental results. 
All modelers provided published papers describing their models, with three of the models being proprietary. Although all models appear to have been fundamentally influenced by the two-process model of sleep regulation by Borbély, there is considerable diversity among them in the number and type of input and output variables, and their stated goals and capabilities.
Modeling the Afferent Dynamics of the Baroreflex Control System
Mahdi, Adam; Sturdy, Jacob; Ottesen, Johnny T.; Olufsen, Mette S.
2013-01-01
In this study we develop a modeling framework for predicting baroreceptor (BR) firing rate as a function of blood pressure. We test models within this framework both quantitatively and qualitatively using data from rats. The models describe three components: arterial wall deformation, stimulation of mechanoreceptors located in the BR nerve endings, and modulation of the action potential frequency. The three sub-systems are modeled individually following well-established biological principles. The first submodel, predicting arterial wall deformation, uses blood pressure as an input and outputs circumferential strain. The mechanoreceptor stimulation model uses circumferential strain as an input, predicting receptor deformation as an output. Finally, the neural model takes receptor deformation as an input, predicting the BR firing rate as an output. Our results show that the nonlinear dependence of firing rate on pressure can be accounted for by taking into account the nonlinear elastic properties of the artery wall. This was observed when testing the models using multiple experiments with a single set of parameters. We find that to model the response to a square pressure stimulus, giving rise to post-excitatory depression, it is necessary to include an integrate-and-fire model, which allows the firing rate to cease when the stimulus falls below a given threshold. We show that our modeling framework in combination with sensitivity analysis and parameter estimation can be used to test and compare models. Finally, we demonstrate that our preferred model can exhibit all known dynamics and that it is advantageous to combine qualitative and quantitative analysis methods. PMID:24348231
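The three-submodel chain, with a firing threshold that lets the rate cease after a downward pressure step (post-excitatory depression), might be sketched as below. The functional forms and constants are entirely illustrative, not the paper's; the point is the pipeline pressure -> strain -> receptor deformation -> thresholded rate:

```python
# Hypothetical three-stage afferent model (illustrative forms/constants):
# pressure -> wall strain (saturating elastic wall),
# strain -> receptor deformation (first-order relaxation),
# drive -> firing rate with a threshold so firing can cease entirely.
def firing_rate(pressures, dt=0.01, tau=0.5, k0=20.0, k1=300.0):
    eps = None                      # receptor deformation state
    rates = []
    for p in pressures:
        strain = p / (p + 60.0)     # saturating wall strain vs pressure
        if eps is None:
            eps = strain            # start at steady state
        eps += (dt / tau) * (strain - eps)       # relax toward wall strain
        drive = k0 + k1 * (strain - eps)         # tonic + rate-sensitive drive
        rates.append(max(drive, 0.0))            # threshold: rate cannot go negative
    return rates

# Square pressure stimulus: 80 -> 120 -> 80 mmHg
pressures = [80.0] * 100 + [120.0] * 100 + [80.0] * 100
r = firing_rate(pressures)
# burst at the upward step, silence just after the downward step,
# then gradual recovery of firing
```

The threshold is what produces the silent period after the step down, the qualitative behavior the abstract says requires an integrate-and-fire element.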
Radial basis function network learns ceramic processing and predicts related strength and density
NASA Technical Reports Server (NTRS)
Cios, Krzysztof J.; Baaklini, George Y.; Vary, Alex; Tjia, Robert E.
1993-01-01
Radial basis function (RBF) neural networks were trained using the data from 273 Si3N4 modulus of rupture (MOR) bars which were tested at room temperature and 135 MOR bars which were tested at 1370 C. Milling time, sintering time, and sintering gas pressure were the processing parameters used as the input features. Flexural strength and density were the outputs by which the RBF networks were assessed. The 'nodes-at-data-points' method was used to set the hidden layer centers and output layer training used the gradient descent method. The RBF network predicted strength with an average error of less than 12 percent and density with an average error of less than 2 percent. Further, the RBF network demonstrated a potential for optimizing and accelerating the development and processing of ceramic materials.
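A minimal nodes-at-data-points RBF network can be sketched as below. For brevity the output weights are fit here by least squares rather than the gradient descent used in the study, and the data are synthetic stand-ins for the processing parameters (milling time, sintering time, gas pressure):

```python
import numpy as np

class RBFNet:
    """RBF network with one Gaussian hidden unit per training point."""

    def __init__(self, width=1.0):
        self.width = width

    def _phi(self, X):
        # Hidden-layer activations: Gaussian of distance to each center.
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * self.width ** 2))

    def fit(self, X, y):
        self.centers = X                       # "nodes at data points"
        Phi = self._phi(X)
        self.w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return self

    def predict(self, X):
        return self._phi(X) @ self.w

# Synthetic 3-input, 1-output data standing in for process -> strength.
rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (40, 3))
y = np.sin(X.sum(axis=1) * 2)
net = RBFNet(width=0.5).fit(X, y)
train_err = np.abs(net.predict(X) - y).mean()
```

With centers at the data points the network interpolates the training set almost exactly; generalization then hinges on the basis width, which would be tuned on held-out bars in practice.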
In silico prediction of splice-altering single nucleotide variants in the human genome.
Jian, Xueqiu; Boerwinkle, Eric; Liu, Xiaoming
2014-12-16
In silico tools have been developed to predict variants that may have an impact on pre-mRNA splicing. The major limitation of the application of these tools to basic research and clinical practice is the difficulty in interpreting the output. Most tools only predict potential splice sites given a DNA sequence without measuring splicing signal changes caused by a variant. Another limitation is the lack of large-scale evaluation studies of these tools. We compared eight in silico tools on 2959 single nucleotide variants within splicing consensus regions (scSNVs) using receiver operating characteristic analysis. The Position Weight Matrix model and MaxEntScan outperformed other methods. Two ensemble learning methods, adaptive boosting and random forests, were used to construct models that take advantage of individual methods. Both models further improved prediction, with outputs of directly interpretable prediction scores. We applied our ensemble scores to scSNVs from the Catalogue of Somatic Mutations in Cancer database. Analysis showed that predicted splice-altering scSNVs are enriched in recurrent scSNVs and known cancer genes. We pre-computed our ensemble scores for all potential scSNVs across the human genome, providing a whole genome level resource for identifying splice-altering scSNVs discovered from large-scale sequencing studies.
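The comparison above rests on ROC analysis and on ensembling individual tool scores. A minimal rank-based AUC plus a simple averaged two-tool ensemble (toy scores from hypothetical tools, not the actual eight methods) might look like:

```python
def roc_auc(scores, labels):
    """Rank-based AUC: the probability that a randomly chosen positive
    example outscores a randomly chosen negative one (Mann-Whitney)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy scores from two hypothetical splice-prediction tools on the same
# six variants (1 = splice-altering); the ensemble averages the scores.
labels = [1, 1, 1, 0, 0, 0]
tool_a = [0.9, 0.6, 0.4, 0.5, 0.2, 0.1]
tool_b = [0.7, 0.8, 0.6, 0.3, 0.6, 0.2]
ensemble = [(a + b) / 2 for a, b in zip(tool_a, tool_b)]
auc = {name: roc_auc(s, labels)
       for name, s in [("A", tool_a), ("B", tool_b), ("ens", ensemble)]}
```

The paper's ensembles (adaptive boosting and random forests) learn weighted, nonlinear combinations rather than a plain average, but the evaluation logic is the same.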
Application of higher harmonic blade feathering for helicopter vibration reduction
NASA Technical Reports Server (NTRS)
Powers, R. W.
1978-01-01
Higher harmonic blade feathering for helicopter vibration reduction is considered. Recent wind tunnel tests confirmed the effectiveness of higher harmonic control in reducing articulated rotor vibratory hub loads. Several predictive analyses developed in support of the NASA program were shown to be capable of calculating single harmonic control inputs required to minimize a single 4P hub response. In addition, a multiple-input, multiple-output harmonic control predictive analysis was developed. All techniques developed thus far obtain a solution by extracting empirical transfer functions from sampled data. Algorithm data sampling and processing requirements are minimal to encourage adaptive control system application of such techniques in a flight environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edelen, J. P.; Sun, Y.; Harris, J. R.
In this paper we derive analytical expressions for the output current of an un-gated thermionic cathode RF gun in the presence of back-bombardment heating. We provide a brief overview of back-bombardment theory and discuss comparisons between the analytical back-bombardment predictions and simulation models. We then derive an expression for the output current as a function of the RF repetition rate and discuss relationships between back-bombardment, field enhancement, and output current. We discuss in detail the relevant approximations and then provide predictions about how the output current should vary as a function of repetition rate for some given system configurations.
Modeling of a multileaf collimator
NASA Astrophysics Data System (ADS)
Kim, Siyong
A comprehensive physics model of a multileaf collimator (MLC) field for treatment planning was developed. Specifically, an MLC user interface module that includes a geometric optimization tool and a general method of in-air output factor calculation were developed. An automatic tool for optimization of MLC conformation is needed to realize the potential benefits of MLC. It is also necessary that a radiation therapy treatment planning (RTTP) system be capable of modeling the MLC completely. An MLC geometric optimization and user interface module was developed, and planning time has been reduced significantly by incorporating the MLC module into the main RTTP system, the Radiation Oncology Computer System (ROCS). The dosimetric parameter that has the most profound effect on the accuracy of the dose delivered with an MLC is the change in the in-air output factor that occurs with field shaping. It has been reported that the conventional method of calculating an in-air output factor cannot accurately be applied to MLC-shaped fields. Therefore, it is necessary to develop algorithms that allow accurate calculation of the in-air output factor. A generalized solution for in-air output factor calculation was developed. Three major contributors of scatter to the in-air output (flattening filter, wedge, and tertiary collimator) were considered separately. By virtue of a field mapping method, in which a source-plane field determined by the detector's eye view is mapped into a detector-plane field, no dosimetric data acquisition other than the standard data set for a range of square fields is required for the calculation of head scatter. Comparisons of in-air output factors between calculated and measured values show good agreement for both open and wedge fields. For rectangular fields, a simple equivalent square formula was derived based on the configuration of a linear accelerator treatment head. This method predicts in-air output to within 1% accuracy.
A two-effective-source algorithm was developed to account for the effect of source-to-detector distance on in-air output for wedge fields. Two effective sources, one for head scatter and the other for wedge scatter, were dealt with independently. Calculated in-air output factors differed from measurements by less than 1%. This approach offers the best comprehensive accuracy in radiation delivery with field shapes defined using the MLC. The generalized model works equally well with fields shaped by any type of tertiary collimator and has the necessary framework to extend its application to intensity modulated radiation therapy.
Systems and methods for predicting materials properties
Ceder, Gerbrand; Fischer, Chris; Tibbetts, Kevin; Morgan, Dane; Curtarolo, Stefano
2007-11-06
Systems and methods for predicting features of materials of interest. Reference data are analyzed to deduce relationships between the input data sets and output data sets. Reference data includes measured values and/or computed values. The deduced relationships can be specified as equations, correspondences, and/or algorithmic processes that produce appropriate output data when suitable input data is used. In some instances, the output data set is a subset of the input data set, and computational results may be refined by optionally iterating the computational procedure. To deduce features of a new material of interest, a computed or measured input property of the material is provided to an equation, correspondence, or algorithmic procedure previously deduced, and an output is obtained. In some instances, the output is iteratively refined. In some instances, new features deduced for the material of interest are added to a database of input and output data for known materials.
A software tool for determination of breast cancer treatment methods using data mining approach.
Cakır, Abdülkadir; Demirel, Burçin
2011-12-01
In this work, breast cancer treatment methods are determined using data mining. For this purpose, software was developed to help the oncology doctor suggest treatment methods for breast cancer patients. Data from 462 breast cancer patients, obtained from Ankara Oncology Hospital, are used to determine treatment methods for new patients. The dataset is processed with the Weka data mining tool: classification algorithms are applied one by one and the results are compared to find the proper treatment method. The developed software, called "Treatment Assistant" and built with a Java NetBeans interface, applies different algorithms (IB1, Multilayer Perceptron, and Decision Table) to find out which gives the better result for each attribute to be predicted. Treatment methods for the post-surgical care of breast cancer patients are determined using this tool. At the modeling step of the data mining process, different Weka algorithms are used for the output attributes: IB1 shows the best accuracy for the hormonotherapy output, Multilayer Perceptron for the tamoxifen and radiotherapy outputs, and the Decision Table algorithm for the chemotherapy output. In conclusion, this work shows that the data mining approach can be a useful tool for medical applications, particularly at the treatment decision step, and helps the doctor decide in a short time.
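IB1, one of the algorithms compared above, is a one-nearest-neighbour learner, so its core can be sketched in a few lines; the records and labels below are hypothetical toy values, not the hospital data:

```python
def ib1_predict(train_X, train_y, x):
    # IB1: copy the label of the nearest stored case (squared Euclidean distance)
    d = [sum((a - b) ** 2 for a, b in zip(row, x)) for row in train_X]
    return train_y[d.index(min(d))]

# hypothetical normalized records: [age, tumour size] -> treatment flag (0/1)
X = [[0.3, 0.2], [0.4, 0.3], [0.8, 0.7], [0.9, 0.8]]
y = [0, 0, 1, 1]

# leave-one-out accuracy, as a classifier comparison would compute it
hits = sum(
    ib1_predict(X[:i] + X[i + 1:], y[:i] + y[i + 1:], X[i]) == y[i]
    for i in range(len(X))
)
accuracy = hits / len(X)
```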
USDA-ARS's Scientific Manuscript database
Despite increased interest in watershed scale model simulations, literature lacks application of long-term data in fuzzy logic simulations and comparing outputs with physically based models such as APEX (Agricultural Policy Environmental eXtender). The objective of this study was to develop a fuzzy...
Prediction of Radial Vibration in Switched Reluctance Machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, CJ; Fahimi, B
2013-12-01
Origins of vibration in switched reluctance machines (SRMs) are investigated. Accordingly, an input-output model based on the mechanical impulse response of the SRM is developed. The proposed model is derived using an experimental approach. Using the proposed approach, vibration of the stator frame is captured and experimentally verified.
Frequency domain model for analysis of paralleled, series-output-connected Mapham inverters
NASA Technical Reports Server (NTRS)
Brush, Andrew S.; Sundberg, Richard C.; Button, Robert M.
1989-01-01
The Mapham resonant inverter is characterized as a two-port network driven by a selected periodic voltage. The two-port model is then used to model a pair of Mapham inverters connected in series and employing phasor voltage regulation. It is shown that the model is useful for predicting power output in paralleled inverter units, and for predicting harmonic current output of inverter pairs, using standard power flow techniques. Some sample results are compared to data obtained from testing hardware inverters.
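The standard power-flow relation that underlies such predictions (power transferred between two phasor-regulated sources through a linking reactance) can be sketched as follows; the voltages, phase shift, and reactance are illustrative assumptions, not values from the tested hardware:

```python
import math

def transferred_power(v1, v2, delta_rad, x_link):
    # classic power-flow relation P = V1*V2*sin(delta)/X between two sources
    return v1 * v2 * math.sin(delta_rad) / x_link

# hypothetical: two 120-V inverter outputs, 10-degree phase shift, 2-ohm reactance
p = transferred_power(120.0, 120.0, math.radians(10.0), 2.0)
```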
Design and experiment of vehicular charger AC/DC system based on predictive control algorithm
NASA Astrophysics Data System (ADS)
He, Guangbi; Quan, Shuhai; Lu, Yuzhang
2018-06-01
Because the front-end rectifier stage of a vehicle charger is uncontrolled, this paper proposes a predictive control algorithm for the DC/DC converter stage. A prediction model is established by the state-space averaging method, the optimal control law is obtained by mathematical derivation, and the prediction algorithm is analysed by Simulink simulation. The charger structure is designed to meet the rated output power and provide an adjustable output voltage: the first stage is a three-phase uncontrolled rectifier whose DC voltage Ud passes through a filter capacitor, followed by a two-phase interleaved buck-boost circuit that delivers the required wide output-voltage range; its working principle is analysed and the circuit parameters and components are designed and selected. Analysis of the current ripple shows that the two-phase interleaved parallel connection reduces both the output current ripple and the losses. Simulation of the complete charging circuit meets the design requirements of the system. Finally, the software and hardware circuit are combined to implement charging as required; an experimental platform demonstrates the feasibility and effectiveness of the proposed predictive control algorithm for the vehicle charger, consistent with the simulation results.
Mathematical modelling and numerical simulation of forces in milling process
NASA Astrophysics Data System (ADS)
Turai, Bhanu Murthy; Satish, Cherukuvada; Prakash Marimuthu, K.
2018-04-01
Machining of material by milling induces forces that act on the workpiece and, in turn, on the machining tool. The forces involved in the milling process can be quantified, and mathematical models help to predict them. A lot of research has been carried out in this area in the past few decades. The current research aims at developing a mathematical model to predict the forces that arise at different levels during machining of Aluminium 6061 alloy. Finite element analysis was used to develop an FE model to predict the cutting forces, and simulations were run for varying cutting conditions. Experiments were designed using the Taguchi method: an L9 orthogonal array was constructed and the output was measured for the different experiments. The same data were used to develop the mathematical model.
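The L9 array mentioned above is the standard nine-run Taguchi design for four three-level factors; a minimal sketch that also verifies its defining property (every ordered pair of levels occurs exactly once in every pair of columns):

```python
from itertools import combinations

# standard Taguchi L9(3^4) array: 9 runs, four factors at levels 1..3
L9 = [
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
]

def is_orthogonal(array):
    # each ordered pair of levels must occur exactly once for every column pair
    for c1, c2 in combinations(range(len(array[0])), 2):
        pairs = [(row[c1], row[c2]) for row in array]
        if sorted(pairs) != sorted((i, j) for i in (1, 2, 3) for j in (1, 2, 3)):
            return False
    return True

orthogonal = is_orthogonal(L9)
```

Mapping the cutting conditions (e.g. speed, feed, depth of cut) onto the columns is study-specific; the array itself is fixed.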
Akbaş, Halil; Bilgen, Bilge; Turhan, Aykut Melih
2015-11-01
This study proposes an integrated prediction and optimization model by using multi-layer perceptron neural network and particle swarm optimization techniques. Three different objective functions are formulated. The first is the maximization of methane percentage with a single output. The second is the maximization of biogas production with a single output. The last is the maximization of biogas quality and biogas production with two outputs. The percentages of methane, carbon dioxide, and other contents are used as the biogas quality criteria. Based on the formulated models and data from a wastewater treatment facility, the optimal values of the input variables and their corresponding maximum output values are determined for each model. It is expected that the application of the integrated prediction and optimization models increases the biogas production and biogas quality, and contributes to the quantity of electricity production at the wastewater treatment facility. Copyright © 2015 Elsevier Ltd. All rights reserved.
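A hedged sketch of the optimization half of such a pipeline: a plain particle swarm searching for the inputs that maximize a surrogate model. The toy quadratic below stands in for the trained MLP, and the swarm hyperparameters are illustrative assumptions:

```python
import random

def pso_maximize(f, bounds, n_particles=20, iters=200, seed=1):
    # plain PSO: inertia 0.7, cognitive/social weights 1.5 (assumed values)
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = pbest[max(range(n_particles), key=pbest_val.__getitem__)][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (g[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v > pbest_val[i]:
                pbest_val[i], pbest[i] = v, pos[i][:]
                if v > f(g):
                    g = pos[i][:]
    return g

# toy surrogate standing in for the trained biogas model (assumed optimum)
surrogate = lambda x: -(x[0] - 0.6) ** 2 - (x[1] - 0.3) ** 2
x_best = pso_maximize(surrogate, [(0.0, 1.0), (0.0, 1.0)])
```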
Flyback CCM inverter for AC module applications: iterative learning control and convergence analysis
NASA Astrophysics Data System (ADS)
Lee, Sung-Ho; Kim, Minsung
2017-12-01
This paper presents an iterative learning controller (ILC) for an interleaved flyback inverter operating in continuous conduction mode (CCM). The flyback CCM inverter features small output ripple current, high efficiency, and low cost, and hence is well suited for photovoltaic power applications. However, it exhibits non-minimum-phase behaviour, because its transfer function from control duty to output current has a right-half-plane (RHP) zero. Moreover, the flyback CCM inverter suffers from the time-varying grid voltage disturbance. Thus, a conventional control scheme results in inaccurate output tracking. To overcome these problems, the ILC is first developed and applied to the flyback inverter operating in CCM. The ILC makes use of both predictive and current learning terms, which help the system output converge to the reference trajectory. We take the nonlinear averaged model into account and use it to construct the proposed controller. It is proven that the system output globally converges to the reference trajectory in the absence of state disturbances, output noises, or initial state errors. Numerical simulations are performed to validate the proposed control scheme, and experiments using a 400-W AC module prototype are carried out to demonstrate its practical feasibility.
Short Term Single Station GNSS TEC Prediction Using Radial Basis Function Neural Network
NASA Astrophysics Data System (ADS)
Muslim, Buldan; Husin, Asnawi; Efendy, Joni
2018-04-01
A model for predicting TEC 24 hours ahead has been developed from JOG2 GPS TEC data recorded during 2016. Eleven months of TEC data were used to train a radial basis function neural network (RBFNN), and the last month of data (December 2016) was used for testing. The RBFNN inputs are the previous 24 hours of TEC data and the minimum of the Dst index during the previous 24 hours; the outputs are the TEC predictions for the next 24 hours. Comparison shows that the RBFNN model predicts the next 24 hours of TEC more accurately than the GIM TEC model.
Deadbeat Predictive Controllers
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Phan, Minh
1997-01-01
Several new computational algorithms are presented to compute the deadbeat predictive control law. The first algorithm makes use of a multi-step-ahead output prediction to compute the control law without explicitly calculating the controllability matrix. The system identification must be performed first and then the predictive control law is designed. The second algorithm uses the input and output data directly to compute the feedback law, combining the system identification and the predictive control law into one formulation. The third algorithm uses an observable-canonical form realization to design the predictive controller. The relationship between all three algorithms is established through the use of the state-space representation. All algorithms are applicable to multi-input, multi-output systems with disturbance inputs. In addition to the feedback terms, feedforward terms may also be added for disturbance inputs if they are measurable. Although the feedforward terms do not influence the stability of the closed-loop feedback law, they enhance the performance of the controlled system.
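The idea behind the second algorithm (identifying the predictive model and the control law directly from input-output data) can be sketched for a first-order, single-input single-output plant; the plant and its parameters are a toy stand-in, not the paper's multi-input multi-output formulation:

```python
import numpy as np

# simulate a first-order plant y(k+1) = a*y(k) + b*u(k) to generate I/O data
a_true, b_true = 0.8, 0.5
rng = np.random.default_rng(0)
u = rng.uniform(-1.0, 1.0, 50)
y = np.zeros(51)
for k in range(50):
    y[k + 1] = a_true * y[k] + b_true * u[k]

# identify the one-step predictive model directly from the data (least squares)
Phi = np.column_stack([y[:-1], u])
a_hat, b_hat = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]

# deadbeat predictive law: pick u so the predicted next output equals r
r, y_k = 1.0, y[-1]
u_db = (r - a_hat * y_k) / b_hat
y_next = a_true * y_k + b_true * u_db    # apply the control to the true plant
```

With noise-free data the identified model is exact, so the output reaches the reference in a single step, which is the deadbeat property.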
Yakimov, Eugene B
2016-06-01
An approach for predicting the output parameters of a (63)Ni-based betavoltaic battery is described. It consists of multilayer Monte Carlo simulation to obtain the depth dependence of the excess carrier generation rate inside the semiconductor converter, a determination of collection probability based on electron beam induced current measurements, a calculation of the current induced in the semiconductor converter by beta-radiation, and SEM measurements of output parameters using the calculated induced current value. This approach makes it possible to predict the betavoltaic battery parameters and optimize the converter design for any real semiconductor structure and any thickness and specific activity of the beta-radiation source. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sharudin, R. W.; AbdulBari Ali, S.; Zulkarnain, M.; Shukri, M. A.
2018-05-01
This study reports on the integration of Artificial Neural Networks (ANNs) with experimental data to predict the solubility of carbon dioxide (CO2) blowing agent in SEBS, aiming at the highest possible regression coefficient (R2). Foaming of a thermoplastic elastomer with CO2 is strongly affected by the CO2 solubility. The ability of the ANN to predict interpolated CO2 solubility data was investigated by comparing training results across different network training methods. The final CO2 solubility trend predicted by the ANN corroborated the experimental results. Comparison of the training methods showed that Gradient Descent with Momentum and Adaptive LR (traingdx) required a longer training time and more accurate input to produce a better output, with a final regression value of 0.88, whereas the Levenberg-Marquardt technique (trainlm) produced a better output in a shorter training time, with a final regression value of 0.91.
Xiao, WenBo; Nazario, Gina; Wu, HuaMing; Zhang, HuaMing; Cheng, Feng
2017-01-01
In this article, we introduce an artificial neural network (ANN) based computational model to predict the output power of three types of photovoltaic cells: mono-crystalline (mono-), multi-crystalline (multi-), and amorphous (amor-) crystalline. The prediction results are very close to the experimental data and are also influenced by the number of hidden neurons. Ranked from least to most influenced by external conditions, the predicted power outputs are: multi-, mono-, and amorphous-crystalline silicon cells. In addition, the dependence of the power prediction on the number of hidden neurons was studied. For the multi- and amorphous-crystalline cells, three or four hidden-layer units gave high correlation coefficients and low MSEs; for the mono-crystalline cell, the best results were achieved with eight hidden-layer units.
Bahreyni Toossi, M T; Moradi, H; Zare, H
2008-01-01
In this work, the general purpose Monte Carlo N-particle radiation transport computer code (MCNP-4C) was used for the simulation of X-ray spectra in diagnostic radiology. The electron's path in the target was followed until its energy was reduced to 10 keV. A user-friendly interface named 'diagnostic X-ray spectra by Monte Carlo simulation (DXRaySMCS)' was developed to facilitate the application of the MCNP-4C code to diagnostic radiology spectrum prediction. The program provides a user-friendly interface for: (i) modifying the MCNP input file, (ii) launching the MCNP program to simulate electron and photon transport and (iii) processing the MCNP output file to yield a summary of the results (relative photon number per energy bin). In this article, the development and characteristics of DXRaySMCS are outlined. As part of the validation process, output spectra for 46 diagnostic radiology system settings produced by DXRaySMCS were compared with the corresponding IPEM Report 78 spectra. Generally, there is good agreement between the two sets of spectra. No statistically significant differences have been observed between the IPEM Report 78 spectra and the simulated spectra generated in this study.
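The post-processing step described in (iii), reducing a list of simulated photon energies to relative photon number per energy bin, can be sketched as follows; the energies and bin edges are toy values, not MCNP output:

```python
def relative_spectrum(energies_kev, edges_kev):
    # histogram the photon energies, then normalise to relative photon number
    counts = [0] * (len(edges_kev) - 1)
    for e in energies_kev:
        for i in range(len(edges_kev) - 1):
            if edges_kev[i] <= e < edges_kev[i + 1]:
                counts[i] += 1
                break
    total = sum(counts)
    return [c / total for c in counts]

spectrum = relative_spectrum([15, 22, 28, 35, 35, 48], [10, 20, 30, 40, 50])
```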
NASA Astrophysics Data System (ADS)
Hosseiny, S. M. H.; Zarzar, C.; Gomez, M.; Siddique, R.; Smith, V.; Mejia, A.; Demir, I.
2016-12-01
The National Water Model (NWM) provides a platform for operationalizing nationwide flood inundation forecasting and mapping. The ability to model flood inundation on a national scale will provide invaluable information to decision makers and local emergency officials. Often, forecast products use deterministic model output to provide a visual representation of a single inundation scenario, which is subject to uncertainty from various sources. While this provides a straightforward representation of the potential inundation, the inherent uncertainty associated with the model output should be considered to optimize this tool for decision making support. The goal of this study is to produce ensembles of future flood inundation conditions (i.e. extent, depth, and velocity) to spatially quantify and visually assess uncertainties associated with the predicted flood inundation maps. The study setting is a highly urbanized watershed along Darby Creek in Pennsylvania. A forecasting framework coupling the NWM with multiple hydraulic models was developed to produce a suite of ensembles of future flood inundation predictions. Time-lagged ensembles from the NWM short-range forecasts were used to account for uncertainty associated with the hydrologic forecasts. The forecasts from the NWM were input to the iRIC and HEC-RAS two-dimensional software packages, from which water extent, depth, and flow velocity were output. Quantifying the agreement between output ensembles for each forecast grid provided the uncertainty metrics for predicted flood water inundation extent, depth, and flow velocity. For visualization, a series of flood maps that display flood extent, water depth, and flow velocity along with the underlying uncertainty associated with each of the forecasted variables were produced. The results from this study demonstrate the potential to incorporate and visualize model uncertainties in flood inundation maps in order to identify the high flood risk zones.
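Quantifying agreement between ensemble members per forecast grid cell reduces to a per-cell fraction; a minimal sketch with a hypothetical 3-member binary-extent ensemble on a 2x3 grid (not NWM output):

```python
import numpy as np

# binary inundation extents from three hypothetical ensemble members
members = np.array([
    [[1, 1, 0], [0, 0, 0]],
    [[1, 1, 1], [1, 0, 0]],
    [[1, 0, 1], [0, 0, 0]],
])

agreement = members.mean(axis=0)                # fraction of members flooding each cell
certain = (agreement == 0) | (agreement == 1)   # cells with no ensemble spread
```

Depth and velocity ensembles would use spread statistics (e.g. per-cell standard deviation) instead of a binary agreement fraction.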
Machine Learning and Deep Learning Models to Predict Runoff Water Quantity and Quality
NASA Astrophysics Data System (ADS)
Bradford, S. A.; Liang, J.; Li, W.; Murata, T.; Simunek, J.
2017-12-01
Contaminants can be rapidly transported at the soil surface by runoff to surface water bodies. Physically-based models, which are based on the mathematical description of main hydrological processes, are key tools for predicting surface water impairment. Along with physically-based models, data-driven models are becoming increasingly popular for describing the behavior of hydrological and water resources systems, since these models can be used to complement or even replace physically-based models. In this presentation we propose a new data-driven model as an alternative to a physically-based overland flow and transport model. First, we developed a physically-based numerical model to simulate overland flow and contaminant transport (the HYDRUS-1D overland flow module). A large number of numerical simulations were carried out to develop a database containing information about the impact of various input parameters (weather patterns, surface topography, vegetation, soil conditions, contaminants, and best management practices) on runoff water quantity and quality outputs. This database was used to train data-driven models. Three different methods (Neural Networks, Support Vector Machines, and Recurrent Neural Networks) were explored to prepare input-output functional relations. Results demonstrate the ability and limitations of machine learning and deep learning models to predict runoff water quantity and quality.
Jørgensen, Søren; Dau, Torsten
2011-09-01
A model for predicting the intelligibility of processed noisy speech is proposed. The speech-based envelope power spectrum model has a similar structure to the model of Ewert and Dau [(2000). J. Acoust. Soc. Am. 108, 1181-1196], developed to account for modulation detection and masking data. The model estimates the speech-to-noise envelope power ratio, SNR(env), at the output of a modulation filterbank and relates this metric to speech intelligibility using the concept of an ideal observer. Predictions were compared to data on the intelligibility of speech presented in stationary speech-shaped noise. The model was further tested in conditions with noisy speech subjected to reverberation and spectral subtraction. Good agreement between predictions and data was found in all cases. For spectral subtraction, an analysis of the model's internal representation of the stimuli revealed that the predicted decrease of intelligibility was caused by the estimated noise envelope power exceeding that of the speech. The classical concept of the speech transmission index fails in this condition. The results strongly suggest that the signal-to-noise ratio at the output of a modulation frequency selective process provides a key measure of speech intelligibility. © 2011 Acoustical Society of America
NASA Astrophysics Data System (ADS)
Beriro, D. J.; Abrahart, R. J.; Nathanail, C. P.
2012-04-01
Data-driven modelling is most commonly used to develop predictive models that will simulate natural processes. This paper, in contrast, uses Gene Expression Programming (GEP) to construct two alternative models of different pan evaporation estimations by means of symbolic regression: a simulator, a model of a real-world process developed on observed records, and an emulator, an imitator of some other model developed on predicted outputs calculated by that source model. The solutions are compared and contrasted for the purposes of determining whether any substantial differences exist between either option. This analysis will address recent arguments over the impact of using downloaded hydrological modelling datasets originating from different initial sources, i.e. observed or calculated. These differences can easily be overlooked by modellers, resulting in a model of a model developed on estimations derived from deterministic empirical equations and producing exceptionally high goodness-of-fit. This paper uses different lines of evidence to evaluate model output and in so doing paves the way for a new protocol in machine learning applications. Transparent modelling tools such as symbolic regression offer huge potential for explaining stochastic processes; however, the basic tenets of data quality and recourse to first principles with regard to problem understanding should not be trivialised. GEP is found to be an effective tool for the prediction of observed and calculated pan evaporation, with results supported by an understanding of the records, and of the natural processes concerned, evaluated using one-at-a-time response function sensitivity analysis. The results show that both architectures and response functions are very similar, implying that previously observed differences in goodness-of-fit can be explained by whether models are applied to observed or calculated data.
Robust Online Monitoring for Calibration Assessment of Transmitters and Instrumentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramuhalli, Pradeep; Coble, Jamie B.; Shumaker, Brent
Robust online monitoring (OLM) technologies are expected to enable the extension or elimination of periodic sensor calibration intervals in operating and new reactors. These advances in OLM technologies will improve the safety and reliability of current and planned nuclear power systems through improved accuracy and increased reliability of sensors used to monitor key parameters. In this article, we discuss an overview of research being performed within the Nuclear Energy Enabling Technologies (NEET)/Advanced Sensors and Instrumentation (ASI) program, for the development of OLM algorithms to use sensor outputs and, in combination with other available information, 1) determine whether one or more sensors are out of calibration or failing and 2) replace a failing sensor with reliable, accurate sensor outputs. Algorithm development is focused on the following OLM functions: signal validation, virtual sensing, and sensor response-time assessment. These algorithms incorporate, at their base, a Gaussian Process-based uncertainty quantification (UQ) method. Various plant models (using kernel regression, GP, or hierarchical models) may be used to predict sensor responses under various plant conditions. These predicted responses can then be applied in fault detection (sensor output and response time) and in computing the correct value (virtual sensing) of a failing physical sensor. The methods being evaluated in this work can compute confidence levels along with the predicted sensor responses, and as a result, may have the potential for compensating for sensor drift in real-time (online recalibration). Evaluation was conducted using data from multiple sources (laboratory flow loops and plant data).
Ongoing research in this project is focused on further evaluation of the algorithms, optimization for accuracy and computational efficiency, and integration into a suite of tools for robust OLM that are applicable to monitoring sensor calibration state in nuclear power plants.
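The fault-detection step above, comparing sensor output against the model's predicted response together with its quantified uncertainty, can be sketched as a simple residual test; the readings and predictive standard deviations are hypothetical, not plant data:

```python
def drift_flags(measured, predicted, sigma, k=3.0):
    # flag samples whose residual exceeds k predictive standard deviations
    return [abs(m - p) > k * s for m, p, s in zip(measured, predicted, sigma)]

measured  = [100.1, 100.3, 101.9]    # hypothetical sensor readings
predicted = [100.0, 100.2, 100.2]    # model-predicted responses
sigma     = [0.2, 0.2, 0.2]          # assumed GP predictive std per sample
flags = drift_flags(measured, predicted, sigma)
```

In a full OLM pipeline, a flagged sensor's output would then be replaced by the model prediction (virtual sensing) until recalibration.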
Digital soil mapping in assessment of land suitability for organic farming
NASA Astrophysics Data System (ADS)
Ghambashidze, Giorgi; Kentchiashvili, Naira; Tarkhnishvili, Maia; Jolokhava, Tamar; Meskhi, Tea
2017-04-01
Digital soil mapping (DSM) is a fast-developing subdiscipline of soil science which gains importance along with the increased availability of spatial data. DSM is based on three main components: the input in the form of field and laboratory observational methods, the process used in terms of spatial and non-spatial soil inference systems, and the output in the form of spatial soil information systems, which includes rasters of prediction along with the uncertainty of prediction. Georgia is among the countries still developing a spatial data infrastructure, which includes soil-related spatial data. Therefore, it is important to demonstrate the capacity of DSM techniques for the planning and decision-making process, in which assessment of land suitability is of major interest to those willing to grow agricultural crops. In this context, land suitability assessment for establishing organic farms is in high demand, as the market for organically produced commodities is still increasing. This is the first attempt in Georgia to use DSM to predict areas with potential for organic farming development. The current approach is based on risk assessment of soil pollution with toxic elements (As, Hg, Pb, Cd, Cr) and prediction of the bio-availability of those elements to plants, using the example of a region of Western Georgia where a detailed soil survey was conducted and a spatial soil database was created. The results of the study show the advantages of DSM for early-stage assessment; depending on the availability and quality of the input data, it can achieve acceptable accuracy.
Webb, Samuel J; Hanser, Thierry; Howlin, Brendan; Krause, Paul; Vessey, Jonathan D
2014-03-25
A new algorithm has been developed to enable the interpretation of black-box models. The algorithm is agnostic to the learning algorithm and open to all structure-based descriptors such as fragments, keys, and hashed fingerprints. It has provided meaningful interpretation of Ames mutagenicity predictions from both random forest and support vector machine models built on a variety of structural fingerprints. A fragmentation algorithm is utilised to investigate the model's behaviour on specific substructures present in the query, and an output is formulated summarising causes of activation and deactivation. The algorithm is able to identify multiple causes of activation or deactivation, as well as localised deactivations even where the prediction for the query is active overall. No loss in performance is incurred because the prediction itself is unchanged; the interpretation is produced directly from the model's behaviour for the specific query. Models were built using multiple learning algorithms, including support vector machines and random forests, on public Ames mutagenicity data with a variety of fingerprint descriptors. These models performed well in both internal and external validation, with accuracies around 82%, and were used to evaluate the interpretation algorithm. The interpretation revealed links that correspond closely to understood mechanisms of Ames mutagenicity. This methodology allows greater utilisation of the predictions made by black-box models and can expedite further study based on the output of a (quantitative) structure-activity model. Additionally, the algorithm could be utilised for chemical dataset investigation and knowledge extraction/human SAR development.
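The masking strategy at the heart of such interpretation algorithms can be sketched in a few lines. The toy black-box model, the fragment names, and the alert set below are illustrative assumptions, not the authors' implementation:

```python
def predict_active(fragments):
    # Toy black-box model: predicts Ames-active if any alert fragment is present.
    ALERTS = {"nitro", "aromatic_amine"}
    return bool(ALERTS & set(fragments))

def interpret(fragments):
    """Assess each fragment's contribution by masking it out and
    re-querying the black-box model, mirroring the fragmentation step."""
    baseline = predict_active(fragments)
    causes = {}
    for f in set(fragments):
        masked = [g for g in fragments if g != f]
        if predict_active(masked) != baseline:
            causes[f] = "activating" if baseline else "deactivating"
    return baseline, causes
```

For a query represented as `["nitro", "alkyl"]`, masking `"nitro"` flips the prediction, so it is reported as the activating cause, while `"alkyl"` has no effect.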
ERIC Educational Resources Information Center
Delmendo, Xeres; Borrero, John C.; Beauchamp, Kenneth L.; Francisco, Monica T.
2009-01-01
We conducted preference assessments with 4 typically developing children to identify potential reinforcers and assessed the reinforcing efficacy of those stimuli. Next, we tested two predictions of economic theory: that overall consumption (reinforcers obtained) would decrease as the unit price (response requirement per reinforcer) increased and…
Evaluation of In-Structure Shock Prediction Techniques for Buried Structures
1991-10-01
process of modeling this problem necessitated the inclusion of structure-media interaction (SMI) for the development of loads for the structural...shears, moments, and strains are also output. 5.2.1 Free-Field Load Generation The equations used in ISSV3 to characterize the free-field environment are
Optimum Design of Aerospace Structural Components Using Neural Networks
NASA Technical Reports Server (NTRS)
Berke, L.; Patnaik, S. N.; Murthy, P. L. N.
1993-01-01
The application of artificial neural networks to capture structural design expertise is demonstrated. The principal advantage of a trained neural network is that it requires a trivial computational effort to produce an acceptable new design. For the class of problems addressed, the development of a conventional expert system would be extremely difficult. In the present effort, a structural optimization code with multiple nonlinear programming algorithms and an artificial neural network code NETS were used. A set of optimum designs for a ring and two aircraft wings for static and dynamic constraints were generated using the optimization codes. The optimum design data were processed to obtain input and output pairs, which were used to develop a trained artificial neural network using the code NETS. Optimum designs for new design conditions were predicted using the trained network. Neural net prediction of optimum designs was found to be satisfactory for the majority of the output design parameters. However, results from the present study indicate that caution must be exercised to ensure that all design variables are within selected error bounds.
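The surrogate idea can be sketched with a linear least-squares fit standing in for the NETS neural network, together with a crude check that a query lies inside the training envelope (echoing the caution about error bounds). The design data, the one-variable model, and the trust rule are illustrative assumptions:

```python
def fit_linear(xs, ys):
    # Least-squares fit y ~ a*x + b (stand-in for training the network).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Optimum designs produced by an optimizer: (load, optimal member area).
# Numbers are made up for illustration.
data = [(10.0, 2.1), (20.0, 4.0), (30.0, 6.2), (40.0, 7.9)]
a, b = fit_linear([d[0] for d in data], [d[1] for d in data])

def predict(load, envelope=10.0):
    """Surrogate prediction plus a trust flag: queries far from the
    training data fall outside the selected error bounds."""
    area = a * load + b
    nearest = min(data, key=lambda d: abs(d[0] - load))
    return area, abs(load - nearest[0]) <= envelope
```

A query inside the training range returns a trusted prediction; an extrapolated query is flagged, which is the kind of guard the abstract recommends.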
NASA Technical Reports Server (NTRS)
Mukkamala, R.; Cohen, R. J.; Mark, R. G.
2002-01-01
Guyton developed a popular approach for understanding the factors responsible for cardiac output (CO) regulation in which 1) the heart-lung unit and systemic circulation are independently characterized via CO and venous return (VR) curves, and 2) average CO and right atrial pressure (RAP) of the intact circulation are predicted by graphically intersecting the curves. However, this approach is virtually impossible to verify experimentally. We theoretically evaluated the approach with respect to a nonlinear, computational model of the pulsatile heart and circulation. We developed two sets of open circulation models to generate CO and VR curves, differing by the manner in which average RAP was varied. One set applied constant RAPs, while the other set applied pulsatile RAPs. Accurate prediction of intact, average CO and RAP was achieved only by intersecting the CO and VR curves generated with pulsatile RAPs because of the pulsatility and nonlinearity (e.g., systemic venous collapse) of the intact model. The CO and VR curves generated with pulsatile RAPs were also practically independent. This theoretical study therefore supports the validity of Guyton's graphical analysis.
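The graphical intersection step can be reproduced numerically. A minimal sketch with assumed, illustrative curve shapes and parameter values (not the pulsatile model of the study):

```python
def co_curve(rap):
    # Cardiac output rises with right atrial pressure (Frank-Starling),
    # saturating at high RAP; illustrative saturating form, L/min.
    return 10.0 * rap / (2.0 + rap) if rap > 0 else 0.0

def vr_curve(rap, msfp=7.0, rvr=1.0):
    # Venous return falls linearly with RAP from the mean systemic
    # filling pressure; clipped at zero (venous collapse).
    return max((msfp - rap) / rvr, 0.0)

def intersect(lo=0.0, hi=7.0, tol=1e-9):
    # Bisection on CO(RAP) - VR(RAP): the equilibrium point of the
    # intact circulation predicted by Guyton's graphical analysis.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if co_curve(mid) < vr_curve(mid):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With these parameters the curves intersect at RAP = 2 and CO = 5, the operating point of the intact circulation.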
Baseline and Target Values for PV Forecasts: Toward Improved Solar Power Forecasting: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jie; Hodge, Bri-Mathias; Lu, Siyuan
2015-08-05
Accurate solar power forecasting allows utilities to get the most out of the solar resources on their systems. To truly measure the improvements that any new solar forecasting methods can provide, it is important to first develop (or determine) baseline and target solar forecasting metrics at different spatial and temporal scales. This paper aims to develop baseline and target values for solar forecasting metrics, informed by close collaboration with utility and independent system operator partners. The baseline values are established based on state-of-the-art numerical weather prediction models and persistence models. The target values are determined based on the reduction in the amount of reserves that must be held to accommodate the uncertainty of solar power output.
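The persistence model named as one baseline can be sketched as follows; the clear-sky normalization typically applied in practice is omitted, and the function names are illustrative:

```python
def persistence_forecast(history, horizon):
    # Persistence baseline: the next `horizon` values are assumed
    # to equal the most recent observation.
    return [history[-1]] * horizon

def rmse(pred, actual):
    # Root-mean-square error, a common solar forecasting metric.
    return (sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(pred)) ** 0.5
```

Any candidate forecasting method should at least beat this baseline's RMSE before it can claim an improvement.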
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mudunuru, Maruti Kumar; Karra, Satish; Harp, Dylan Robert
Reduced-order modeling is a promising approach, as many phenomena can be described by a few parameters/mechanisms. An attractive aspect of a reduced-order model is that it is computationally inexpensive to evaluate compared to running a high-fidelity numerical simulation: a reduced-order model takes a couple of seconds to run on a laptop, while a high-fidelity simulation may take a couple of hours on a high-performance computing cluster. The goal of this paper is to assess the utility of regression-based reduced-order models (ROMs), developed from high-fidelity numerical simulations, for predicting transient thermal power output for an enhanced geothermal reservoir while explicitly accounting for uncertainties in the subsurface system and site-specific details. Numerical simulations are performed at equally spaced values over the specified range of model parameters. The key sensitive parameters identified from these simulations are fracture zone permeability, well/skin factor, bottom-hole pressure, and injection flow rate; fracture zone permeability is the most sensitive. Fracture zone permeability, along with time, is used to build regression-based ROMs for the thermal power output. The ROMs are trained and validated using detailed physics-based numerical simulations, and their predictions are then compared with field data. We propose three ROMs with different levels of model parsimony, each describing the key and essential features of the power production curves. The coefficients in the proposed regression-based ROMs are obtained by minimizing a nonlinear least-squares misfit function, based on the difference between the numerical simulation data and the reduced-order model, using the Levenberg–Marquardt algorithm. ROM-1 is constructed from polynomials up to fourth order.
ROM-1 accurately reproduces the power output of numerical simulations for low permeabilities and certain features of the field-scale data. ROM-2 uses a richer set of analytical functions, consisting of polynomials up to order eight, exponential functions, and smooth approximations of Heaviside functions, and accurately describes the field data. At higher permeabilities, ROM-2 reproduces numerical results better than ROM-1; however, it deviates considerably from numerical results at low fracture zone permeabilities. ROM-3 consists of polynomials up to order ten and is developed by combining the best aspects of ROM-1 and ROM-2. ROM-1 is more parsimonious than ROM-2 and ROM-3, while ROM-2 overfits the data; ROM-3 provides a middle ground for model parsimony. Based on R²-values for the training, validation, and prediction data sets, ROM-3 is a better model than ROM-2 and ROM-1. For predicting thermal drawdown in EGS applications, where high fracture zone permeabilities (typically greater than 10⁻¹⁵ m²) are desired, ROM-2 and ROM-3 outperform ROM-1. In terms of computational time, all the ROMs are about 10⁴ times faster than running a high-fidelity numerical simulation. This makes the proposed regression-based ROMs attractive for real-time EGS applications, because they are fast and provide reasonably good predictions of thermal power output.
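The regression step for a polynomial ROM such as ROM-1 can be sketched with an ordinary least-squares fit; because a polynomial is linear in its coefficients, a normal-equations solve stands in here for the Levenberg–Marquardt machinery needed for the full nonlinear ROMs. The data and degree are illustrative:

```python
def fit_poly(ts, ys, deg):
    # Least-squares polynomial ROM: minimize sum over samples of
    # (poly(t) - y)^2 via the normal equations.
    n = deg + 1
    A = [[sum(t ** (i + j) for t in ts) for j in range(n)] for i in range(n)]
    b = [sum(y * t ** i for t, y in zip(ts, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j]
                                for j in range(i + 1, n))) / A[i][i]
    return coeffs  # c0 + c1*t + c2*t^2 + ...

def rom(coeffs, t):
    # Evaluate the fitted ROM at time t (seconds on a laptop, as claimed).
    return sum(c * t ** k for k, c in enumerate(coeffs))
```

Once the coefficients are fitted to simulation output, `rom` can be evaluated essentially for free, which is the source of the quoted 10⁴ speed-up.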
Mudunuru, Maruti Kumar; Karra, Satish; Harp, Dylan Robert; ...
2017-07-10
The ergonomics of vertical turret lathe operation.
Pratt, F M; Corlett, E N
1970-12-01
A study of the work load of 14 vertical turret lathe operators engaged on different work tasks in two factories is reported. For eight of these workers continuous heart rate recordings were made throughout the day. It was shown that in four cases improved technology was unlikely to lead to higher output and certain aspects of posture and equipment manipulation were major contributors to the limitations on increased output. The role of the work-rest schedule in increasing work loads was also demonstrated. Improvements in technology and methods to reduce the extent of certain work loads to enable heavy work to be done in shorter periods followed by light work or rest periods are given as means to modify and improve the output of these machines. Finally, the direction for the development of a predictive model for man-machine matching is introduced.
Pasotti, Lorenzo; Bellato, Massimo; Casanova, Michela; Zucca, Susanna; Cusella De Angelis, Maria Gabriella; Magni, Paolo
2017-01-01
The study of simplified, ad hoc constructed model systems can help to elucidate whether quantitatively characterized biological parts can be effectively re-used in composite circuits to yield predictable functions. Synthetic systems designed from the bottom up can enable the building of complex interconnected devices via a rational approach, supported by mathematical modelling. However, this process is affected by different, usually non-modelled, sources of unpredictability, such as cell burden. Here, we analyzed a set of synthetic transcriptional cascades in Escherichia coli. We aimed to test the predictive power of a simple Hill function activation/repression model (no-burden model, NBM) and of a recently proposed model including Hill functions and the modulation of protein expression by cell load (burden model, BM). To test the bottom-up approach, the circuit collection was divided into training and test sets, used to learn individual component functions and to test the predicted output of interconnected circuits, respectively. Among the constructed configurations, two test-set circuits showed unexpected logic behaviour. Both the NBM and the BM were able to predict the quantitative output of interconnected devices with expected behaviour, but only the BM was also able to predict the output of one circuit with unexpected behaviour. Moreover, considering training and test set data together, the BM captures circuit output with higher accuracy than the NBM, which fails to capture the experimental output of some circuits even qualitatively. Finally, resource usage parameters estimated via the BM guided the successful construction of new, corrected variants of the two circuits showing unexpected behaviour. Superior descriptive and predictive capabilities were achieved when resource limitation was modelled, but further efforts are needed to improve the accuracy of models for biological engineering.
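The contrast between the two model classes can be sketched with a Hill activation function, optionally scaled by a resource-load term. The functional form of the burden correction and all parameter values below are assumptions for illustration, not the paper's fitted model:

```python
def hill_activation(inducer, vmax=100.0, k=10.0, n=2.0):
    # No-burden model (NBM): standard Hill activation of output expression.
    return vmax * inducer ** n / (k ** n + inducer ** n)

def hill_with_burden(inducer, load, vmax=100.0, k=10.0, n=2.0, j=0.5):
    # Burden model (BM): the same Hill term, scaled down as shared cellular
    # resources are consumed by the rest of the circuit (load in [0, 1];
    # j is an assumed resource-usage parameter).
    return hill_activation(inducer, vmax, k, n) / (1.0 + j * load)
```

With no load the two models coincide; as the rest of the circuit draws resources, the BM output drops below the NBM prediction, which is the discrepancy the study exploits.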
Wang, Tong; Gao, Huijun; Qiu, Jianbin
2016-02-01
This paper investigates the multirate networked industrial process control problem in double-layer architecture. First, the output tracking problem for sampled-data nonlinear plant at device layer with sampling period T(d) is investigated using adaptive neural network (NN) control, and it is shown that the outputs of subsystems at device layer can track the decomposed setpoints. Then, the outputs and inputs of the device layer subsystems are sampled with sampling period T(u) at operation layer to form the index prediction, which is used to predict the overall performance index at lower frequency. Radial basis function NN is utilized as the prediction function due to its approximation ability. Then, considering the dynamics of the overall closed-loop system, nonlinear model predictive control method is proposed to guarantee the system stability and compensate the network-induced delays and packet dropouts. Finally, a continuous stirred tank reactor system is given in the simulation part to demonstrate the effectiveness of the proposed method.
Thorenz, Ute R; Kundel, Michael; Müller, Lars; Hoffmann, Thorsten
2012-11-01
In this work, we describe a simple diffusion capillary device for the generation of various organic test gases. Using a set of basic equations the output rate of the test gas devices can easily be predicted only based on the molecular formula and the boiling point of the compounds of interest. Since these parameters are easily accessible for a large number of potential analytes, even for those compounds which are typically not listed in physico-chemical handbooks or internet databases, the adjustment of the test gas source to the concentration range required for the individual analytical application is straightforward. The agreement of the predicted and measured values is shown to be valid for different groups of chemicals, such as halocarbons, alkanes, alkenes, and aromatic compounds and for different dimensions of the diffusion capillaries. The limits of the predictability of the output rates are explored and observed to result in an underprediction of the output rates when very thin capillaries are used. It is demonstrated that pressure variations are responsible for the observed deviation of the output rates. To overcome the influence of pressure variations and at the same time to establish a suitable test gas source for highly volatile compounds, also the usability of permeation sources is explored, for example for the generation of molecular bromine test gases.
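A sketch of such a prediction, assuming a Stefan-tube diffusion equation for the capillary and a vapor pressure estimated from the boiling point via Trouton's rule and the Clausius–Clapeyron relation. This is a plausible reconstruction from standard physical chemistry, not necessarily the authors' exact set of equations, and the example inputs are illustrative:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def vapor_pressure(tb_K, T=298.15):
    # Trouton's rule: dHvap ~ 88 J mol^-1 K^-1 * Tb, then Clausius-Clapeyron
    # referenced to 1 atm at the boiling point. Returns Pa.
    dHvap = 88.0 * tb_K
    return 101325.0 * math.exp(-dHvap / R * (1.0 / T - 1.0 / tb_K))

def output_rate(tb_K, M, D, area, length, T=298.15, P=101325.0):
    # Steady-state diffusion through a capillary (Stefan tube):
    #   Q = D*M*P*A / (R*T*L) * ln(P / (P - p_vap)), in g/s.
    # M: molar mass (g/mol), D: diffusion coefficient (m^2/s),
    # area/length: capillary cross-section (m^2) and length (m).
    p = vapor_pressure(tb_K, T)
    return D * M * P * area / (R * T * length) * math.log(P / (P - p))
```

For a benzene-like compound (Tb ≈ 353 K, M ≈ 78 g/mol) the estimated vapor pressure is on the order of 10 kPa at room temperature, and the output rate scales with capillary cross-section and inversely with length, matching the tuning knobs described in the abstract.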
Infused cardioplegia index: A new tool to improve myocardial protection. A cohort study.
Jiménez Rivera, J J; Llanos Jorge, C; Iribarren Sarrías, J L; Brouard Martín, M; Lacalzada Almeida, J; Pérez Vela, J L; Avalos Pinto, R; Pérez Hernández, R; Ramos de la Rosa, S; Yanes Bowden, G; Martínez Sanz, R
2018-05-19
Strategies for cardio-protection are essential in coronary artery bypass graft surgery. The authors explored the relationship between cardioplegia volume, left ventricular mass index and ischemia time by means of the infused cardioplegia index (ICI), and its relationship with post-operative low cardiac output syndrome. All patients undergoing coronary artery bypass graft surgery between January 2013 and December 2015 were included. Low cardiac output syndrome was defined according to the criteria of the SEMICYUC consensus document. The perioperative factors associated with low cardiac output syndrome were estimated and, using a ROC curve, the optimum cut-off point for the infused cardioplegia index to predict the absence of low cardiac output syndrome was calculated. Of the 360 patients included, 116 (32%) developed low cardiac output syndrome. The independent risk predictors were: New York Heart Association functional class (OR 1.8 [95% CI=1.18-2.55]), left ventricle ejection fraction (OR 0.95 [95% CI=0.93-0.98]), ICI (OR 0.99 [95% CI=0.991-0.996]) and retrograde cardioplegia (OR 1.2 [95% CI=1.03-1.50]). The infused cardioplegia index showed an area under the ROC curve of 0.77 (0.70-0.83; P<.001) for the absence of postoperative low cardiac output syndrome, with an optimum cut-off point of 23.6 ml·min⁻¹·(100 g/m² of LV)⁻¹. The infused cardioplegia index presents an inverse relationship with the development of post-operative low cardiac output syndrome. This index could form part of new strategies aimed at optimising cardio-protection. The total volume of intermittent cardioplegia, especially that of maintenance, should probably be individualised, adjusting for ischemia time and left ventricle mass index.
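The cut-off selection step can be sketched with Youden's J statistic, a common criterion for picking an optimal ROC cut-off. The abstract does not state which criterion the authors used, so Youden's J and the toy data below are assumptions for illustration:

```python
def roc_optimal_cutoff(scores, labels):
    # Youden's J = sensitivity + specificity - 1, maximized over candidate
    # cut-offs. labels: 1 = outcome of interest present (here, the absence
    # of low cardiac output syndrome), 0 otherwise.
    best_j, best_cut = -1.0, None
    for cut in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= cut and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < cut and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < cut and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= cut and y == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j
```

Applied to ICI values and outcome labels, this returns the threshold that best separates patients with and without the syndrome.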
GPC-Based Stable Reconfigurable Control
NASA Technical Reports Server (NTRS)
Soloway, Don; Shi, Jian-Jun; Kelkar, Atul
2004-01-01
This paper presents the development of a multi-input multi-output (MIMO) Generalized Predictive Control (GPC) law and its application to reconfigurable control design in the event of actuator saturation. A Controlled Auto-Regressive Integrated Moving Average (CARIMA) model is used to describe the plant dynamics. The control law is derived using an input-output description of the system and is also related to the state-space form of the model. The stability of the GPC control law without reconfiguration is first established using a Riccati-based approach and a state-space formulation. A novel reconfiguration strategy is developed for systems that have actuator redundancy and face actuator-saturation-type failures. An elegant reconfigurable control design is presented with a stability proof. Several numerical examples demonstrate the application of the various results.
Development and Production of a 201 MHz, 5.0 MW Peak Power Klystron
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aymar, Galen; Eisen, Edward; Stockwell, Brad
2016-01-01
Communications & Power Industries LLC has designed and manufactured the VKP-8201A, a high peak power, high gain, VHF band klystron. The klystron operates at 201.25 MHz, with 5.0 MW peak output power, 34 kW average output power, and a gain of 36 dB. The klystron is designed to operate between 1.0 MW and 4.5 MW in the linear range of the transfer curve. The klystron utilizes a unique magnetic field which enables the use of a proven electron gun design with a larger electron beam requirement. Experimental and predicted performance data are compared.
NASA Technical Reports Server (NTRS)
Meegan, C. A.; Fountain, W. F.; Berry, F. A., Jr.
1987-01-01
A system to rapidly digitize data from showers in nuclear emulsions is described. A TV camera views the emulsions through a microscope, and the TV output is superimposed on the monitor of a minicomputer. The operator uses the computer's graphics capability to mark the positions of particle tracks, and the coordinates of each track are stored on disk. The computer then predicts the coordinates of each track through successive layers of emulsion. The operator, guided by the predictions, thus tracks and stores the development of the shower. The system provides a significant improvement over purely manual methods of recording shower development in nuclear emulsion stacks.
Adaptive Data-based Predictive Control for Short Take-off and Landing (STOL) Aircraft
NASA Technical Reports Server (NTRS)
Barlow, Jonathan Spencer; Acosta, Diana Michelle; Phan, Minh Q.
2010-01-01
Data-based Predictive Control is an emerging control method that stems from Model Predictive Control (MPC). MPC computes current control action based on a prediction of the system output a number of time steps into the future and is generally derived from a known model of the system. Data-based predictive control has the advantage of deriving predictive models and controller gains from input-output data. Thus, a controller can be designed from the outputs of complex simulation code or a physical system where no explicit model exists. If the output data happens to be corrupted by periodic disturbances, the designed controller will also have the built-in ability to reject these disturbances without the need to know them. When data-based predictive control is implemented online, it becomes a version of adaptive control. The characteristics of adaptive data-based predictive control are particularly appropriate for the control of nonlinear and time-varying systems, such as Short Take-off and Landing (STOL) aircraft. STOL is a capability of interest to NASA because conceptual Cruise Efficient Short Take-off and Landing (CESTOL) transport aircraft offer the ability to reduce congestion in the terminal area by utilizing existing shorter runways at airports, as well as to lower community noise by flying steep approach and climb-out patterns that reduce the noise footprint of the aircraft. In this study, adaptive data-based predictive control is implemented as an integrated flight-propulsion controller for the outer-loop control of a CESTOL-type aircraft. Results show that the controller successfully tracks velocity while attempting to maintain a constant flight path angle, using longitudinal command, thrust and flap setting as the control inputs.
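The data-based step can be sketched with a first-order ARX model identified by least squares from input-output records; the model order and data are illustrative assumptions, far simpler than the multirate MIMO formulation used for the CESTOL controller:

```python
def identify_arx(us, ys):
    # Fit y[k] = a*y[k-1] + b*u[k-1] by least squares on input-output data:
    # the predictive model comes from data alone, with no explicit plant model.
    Syy = Suu = Syu = Sy1 = Su1 = 0.0
    for k in range(1, len(ys)):
        y1, u1, y = ys[k - 1], us[k - 1], ys[k]
        Syy += y1 * y1; Suu += u1 * u1; Syu += y1 * u1
        Sy1 += y1 * y;  Su1 += u1 * y
    det = Syy * Suu - Syu * Syu
    a = (Sy1 * Suu - Su1 * Syu) / det
    b = (Su1 * Syy - Sy1 * Syu) / det
    return a, b

def predict(a, b, y0, u_seq):
    # Multi-step-ahead prediction over the horizon, as MPC requires.
    y, out = y0, []
    for u in u_seq:
        y = a * y + b * u
        out.append(y)
    return out
```

On noise-free data generated by a true system with a = 0.5, b = 1.0, the identification recovers the parameters exactly, and `predict` then supplies the horizon of future outputs the controller optimizes over.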
Development of an X-Band 50 MW Multiple Beam Klystron
NASA Astrophysics Data System (ADS)
Song, Liqun; Ferguson, Patrick; Ives, R. Lawrence; Miram, George; Marsden, David; Mizuhara, Max
2003-12-01
Calabazas Creek Research, Inc. is developing an X-band 50 MW multiple beam klystron (MBK) under a DOE SBIR Phase II grant. The electrical design and preliminary mechanical design were completed in Phase I. The MBK consists of eight discrete klystron circuits driven by eight electron beams located symmetrically on a circle of radius 6.3 cm. Each beam operates at 190 kV and 66 A. The eight-beam electron gun is in development under a DOE SBIR Phase II grant. Each circuit consists of an input cavity, two gain cavities, three penultimate cavities, and a three-cavity output circuit operating in the π/2 mode. Ring resonators were initially proposed for the complete circuit; however, low beam-wave interaction made it necessary to use discrete cavities for all eight circuits. The input cavities are coupled via hybrid waveguides to ensure constant drive power amplitude and phase. The output circuits can either be combined using compact waveguide twists driving a TE01 high-power window, or combined into a TM04 mode converter driving the same TE01 window. The gain and efficiency for a single circuit have been optimized using KLSC, a 2 1/2-D large-signal klystron code. Simulations for a single circuit predict an efficiency of 53% for a single output cavity and 55% for the three-cavity output resonator. The total RF output power for this MBK is 55 MW. During Phase II, emphasis will be given to cost-reduction techniques, with the goal of a robust, highly efficient, long-life high-power amplifier.
Predictive and Neural Predictive Control of Uncertain Systems
NASA Technical Reports Server (NTRS)
Kelkar, Atul G.
2000-01-01
Accomplishments and future work are: (1) Stability analysis: the completed work includes characterization of the stability of receding-horizon-based MPC in the LQ setting. Work in progress includes analyzing local as well as global stability of the closed-loop system under various nonlinearities, for example actuator nonlinearities, sensor nonlinearities, and other plant nonlinearities. Actuator nonlinearities include three major types: saturation, dead-zone, and (0, ∞) sector. (2) Robustness analysis: it is shown that receding-horizon parameters such as the input and output horizon lengths have a direct effect on the robustness of the system. (3) Code development: a MATLAB code has been developed that can simulate various MPC formulations; the current effort is to generalize the code to handle all plant types and all MPC types. (4) Improved predictors: it is shown that MPC design can be improved by using predictors that minimize prediction errors. It is shown analytically and numerically that a Smith predictor can provide closed-loop stability under GPC operation for plants with dead times where the standard optimal predictor fails. (5) Neural network predictors: when a neural network is used as the predictor, it can be shown that the network predicts the plant output within some finite error bound under certain conditions. Our preliminary study shows that with a proper choice of update laws and network architectures such a bound can be obtained; however, much work remains to obtain a similar result in the general case.
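The Smith predictor idea in item (4) can be illustrated numerically: for a plant with input dead time and a perfect model, the predictor's delay-free model output, corrected by the measured mismatch, anticipates the plant output exactly d steps ahead. The first-order plant and all parameter values below are assumptions for illustration:

```python
from collections import deque

def simulate_smith(a, b, d, u_seq):
    # Plant with d-step input dead time: y[k+1] = a*y[k] + b*u[k-d].
    # The Smith predictor runs the plant model twice, with and without the
    # dead time; the delay-free output plus the measured mismatch (y - ym_delay)
    # predicts y exactly d steps ahead when the model is perfect.
    buf = deque([0.0] * d)      # dead-time buffer of the true plant
    buf_m = deque([0.0] * d)    # dead-time buffer of the model copy
    y = ym_free = ym_delay = 0.0
    preds, actuals = [], []
    for u in u_seq:
        preds.append(ym_free + (y - ym_delay))          # prediction of y[k+d]
        y = a * y + b * buf.popleft();                  buf.append(u)
        ym_delay = a * ym_delay + b * buf_m.popleft();  buf_m.append(u)
        ym_free = a * ym_free + b * u
        actuals.append(y)
    return preds, actuals
```

With a perfect model the mismatch term vanishes and the prediction at step k coincides with the plant output d steps later, which is what lets GPC remain stable despite the dead time.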
A Predictive Model of Daily Seismic Activity Induced by Mining, Developed with Data Mining Methods
NASA Astrophysics Data System (ADS)
Jakubowski, Jacek
2014-12-01
The article presents the development and evaluation of a predictive classification model of daily seismic energy emissions induced by longwall mining in sector XVI of the Piast coal mine in Poland. The model uses data on tremor energy, basic characteristics of the longwall face, and mined output in this sector over the period from July 1987 to March 2011. The predicted binary variable is the occurrence of a daily sum of tremor seismic energies in a longwall greater than or equal to the threshold value of 10⁵ J. Three data-mining analytical methods were applied: logistic regression, neural networks, and stochastic gradient boosted trees. The boosted trees model was chosen as the best for the purposes of the prediction. The validation sample results showed its good predictive capability, taking the complex nature of the phenomenon into account. This may indicate the applied model's suitability for sequential, short-term prediction of mining-induced seismic activity.
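Logistic regression, the first of the three methods, can be sketched in pure Python for the binary exceedance target; the features, data, and hyperparameters below are illustrative, not the mine's records:

```python
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    # Logistic regression fitted by stochastic gradient descent:
    # estimates P(daily tremor energy sum >= 1e5 J) from longwall features.
    w = [0.0] * (len(X[0]) + 1)           # bias followed by feature weights
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi                    # gradient of the log-loss
            w[0] -= lr * g
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * g * xj
    return w

def predict_exceedance(w, x):
    # Probability that the 1e5 J threshold is exceeded on a given day.
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], x))
    return 1.0 / (1.0 + math.exp(-z))
```

The boosted-trees model the authors ultimately preferred replaces this linear score with an ensemble of shallow trees, but the binary exceedance framing is the same.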
ERIC Educational Resources Information Center
Jou, Jerwen
2008-01-01
Recall latency, recall accuracy rate, and recall confidence were examined in free recall as a function of recall output serial position using a modified Deese-Roediger-McDermott paradigm to test a strength-based theory against the dual-retrieval process theory of recall output sequence. The strength theory predicts the item output sequence to be…
Thompson, Bryony A.; Greenblatt, Marc S.; Vallee, Maxime P.; Herkert, Johanna C.; Tessereau, Chloe; Young, Erin L.; Adzhubey, Ivan A.; Li, Biao; Bell, Russell; Feng, Bingjian; Mooney, Sean D.; Radivojac, Predrag; Sunyaev, Shamil R.; Frebourg, Thierry; Hofstra, Robert M.W.; Sijmons, Rolf H.; Boucher, Ken; Thomas, Alun; Goldgar, David E.; Spurdle, Amanda B.; Tavtigian, Sean V.
2015-01-01
Classification of rare missense substitutions observed during genetic testing for patient management is a considerable problem in clinical genetics. The Bayesian integrated evaluation of unclassified variants is a solution originally developed for BRCA1/2. Here, we take a step toward an analogous system for the mismatch repair (MMR) genes (MLH1, MSH2, MSH6, and PMS2) that confer colon cancer susceptibility in Lynch syndrome by calibrating in silico tools to estimate prior probabilities of pathogenicity for MMR gene missense substitutions. A qualitative five-class classification system was developed and applied to 143 MMR missense variants. This identified 74 missense substitutions suitable for calibration. These substitutions were scored using six different in silico tools (Align-Grantham Variation Grantham Deviation, multivariate analysis of protein polymorphisms [MAPP], Mut-Pred, PolyPhen-2.1, Sorting Intolerant From Tolerant, and Xvar), using curated MMR multiple sequence alignments where possible. The output from each tool was calibrated by regression against the classifications of the 74 missense substitutions; these calibrated outputs are interpretable as prior probabilities of pathogenicity. MAPP was the most accurate tool and MAPP + PolyPhen-2.1 provided the best-combined model (R2 = 0.62 and area under receiver operating characteristic = 0.93). The MAPP + PolyPhen-2.1 output is sufficiently predictive to feed as a continuous variable into the quantitative Bayesian integrated evaluation for clinical classification of MMR gene missense substitutions. PMID:22949387
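The calibration-by-regression idea, mapping a tool's raw score onto a probability of pathogenicity, can be sketched with a logistic fit; the scores and labels below are synthetic assumptions, not the study's variant data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
scores = rng.uniform(-5, 5, size=(200, 1))       # stand-in for one tool's raw output
p_true = 1 / (1 + np.exp(-scores[:, 0]))         # assumed true pathogenicity curve
labels = (rng.uniform(size=200) < p_true).astype(int)  # synthetic classifications

# Regress classifications against scores; the fitted output is interpretable
# as a prior probability of pathogenicity for any new score
cal = LogisticRegression().fit(scores, labels)
prior = cal.predict_proba(np.array([[2.0]]))[0, 1]
```

A score-to-probability map like this is what lets a continuous in silico output feed into a Bayesian integrated evaluation.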
Super short term forecasting of photovoltaic power generation output in micro grid
NASA Astrophysics Data System (ADS)
Gong, Cheng; Ma, Longfei; Chi, Zhongjun; Zhang, Baoqun; Jiao, Ran; Yang, Bing; Chen, Jianshu; Zeng, Shuang
2017-01-01
A prediction model combining data mining and support vector machines (SVM) was built. It provides information on photovoltaic (PV) power generation output for the economic operation and optimal control of a microgrid, and it reduces the influence of PV fluctuation on the power system. Because PV output depends on radiation intensity, ambient temperature, cloudiness, etc., data mining was brought in: this technology can process large amounts of historical data and eliminate superfluous data using a fuzzy classifier of daily type and grey relational degree. An SVM model was then built that accepts the information produced by the data mining stage. The prediction model was tested on measured data from a small PV station. The numerical example shows that the prediction model is fast and accurate.
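The SVM regression stage can be sketched as below; this is a minimal sketch, not the authors' full data-mining pipeline, and the irradiance-temperature-power relationship is a synthetic assumption.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
irradiance = rng.uniform(0, 1000, 300)           # W/m^2 (synthetic)
temperature = rng.uniform(0, 35, 300)            # deg C (synthetic)
# Assumed PV response: roughly linear in irradiance, derated with temperature
power = 0.004 * irradiance - 0.01 * (temperature - 25) + rng.normal(0, 0.05, 300)

X = np.column_stack([irradiance, temperature])
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X[:250], power[:250])
r2 = model.score(X[250:], power[250:])           # R^2 on held-out samples
```

Scaling the inputs before the RBF kernel is a standard choice when features have very different ranges, as irradiance and temperature do here.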
PPCM: Combining multiple classifiers to improve protein-protein interaction prediction
Yao, Jianzhuang; Guo, Hong; Yang, Xiaohan
2015-08-01
Determining protein-protein interaction (PPI) in biological systems is of considerable importance, and prediction of PPI has become a popular research area. Although different classifiers have been developed for PPI prediction, no single classifier seems to be able to predict PPI with high confidence. We postulated that by combining individual classifiers the accuracy of PPI prediction could be improved. We developed a method called protein-protein interaction prediction classifiers merger (PPCM), which combines output from two PPI prediction tools, GO2PPI and Phyloprof, using the Random Forests algorithm. The performance of PPCM was tested by area under the curve (AUC) using an assembled Gold Standard database that contains both positive and negative PPI pairs. Our AUC test showed that PPCM significantly improved the PPI prediction accuracy over the corresponding individual classifiers. We found that additional classifiers incorporated into PPCM could lead to further improvement in the PPI prediction accuracy. Furthermore, cross-species PPCM could achieve competitive and even better prediction accuracy compared to the single-species PPCM. This study established a robust pipeline for PPI prediction by integrating multiple classifiers using the Random Forests algorithm. Ultimately, this pipeline will be useful for predicting PPI in non-model species.
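The merger idea, feeding the scores of upstream predictors into a Random Forests meta-classifier, can be sketched as follows; the two synthetic "tool scores" below merely stand in for GO2PPI and Phyloprof outputs.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
truth = rng.integers(0, 2, n)                    # gold-standard PPI labels (synthetic)
tool_a = 0.6 * truth + rng.normal(0.2, 0.25, n)  # noisy score, stand-in for GO2PPI
tool_b = 0.5 * truth + rng.normal(0.25, 0.3, n)  # noisy score, stand-in for Phyloprof

# Meta-classifier: learn how to weigh the two tools' scores jointly
X = np.column_stack([tool_a, tool_b])
merger = RandomForestClassifier(n_estimators=200, random_state=0)
merger.fit(X[:400], truth[:400])
acc = merger.score(X[400:], truth[400:])
```

Because the two tools make partly independent errors, the combined model can outperform either score used alone.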
NASA Astrophysics Data System (ADS)
Adabanija, M. A.; Omidiora, E. O.; Olayinka, A. I.
2008-05-01
A linguistic fuzzy logic system (LFLS)-based expert system model has been developed for the assessment of aquifers for the location of productive water boreholes in a crystalline basement complex. The model design employed a multiple input/single output (MISO) approach with geoelectrical parameters and topographic features as input variables and control crisp value as the output. The application of the method to the data acquired in Khondalitic terrain, a basement complex in Vizianagaram District, south India, shows that potential groundwater resource zones that have control output values in the range 0.3295-0.3484 have a yield greater than 6,000 liters per hour (LPH). The range 0.3174-0.3226 gives a yield less than 4,000 LPH. The validation of the control crisp value using data acquired from Oban Massif, a basement complex in southeastern Nigeria, indicates a yield less than 3,000 LPH for control output values in the range 0.2938-0.3065. This validation corroborates the ability of control output values to predict a yield, thereby vindicating the applicability of linguistic fuzzy logic system in siting productive water boreholes in a basement complex.
Optimizing Wind Power Generation while Minimizing Wildlife Impacts in an Urban Area
Bohrer, Gil; Zhu, Kunpeng; Jones, Robert L.; Curtis, Peter S.
2013-01-01
The location of a wind turbine is critical to its power output, which is strongly affected by the local wind field. Turbine operators typically seek locations with the best wind at the lowest level above ground since turbine height affects installation costs. In many urban applications, such as small-scale turbines owned by local communities or organizations, turbine placement is challenging because of limited available space and because the turbine often must be added without removing existing infrastructure, including buildings and trees. The need to minimize turbine hazard to wildlife compounds the challenge. We used an exclusion zone approach for turbine-placement optimization that incorporates spatially detailed maps of wind distribution and wildlife densities with power output predictions for the Ohio State University campus. We processed public GIS records and airborne lidar point-cloud data to develop a 3D map of all campus buildings and trees. High resolution large-eddy simulations and long-term wind climatology were combined to provide land-surface-affected 3D wind fields and the corresponding wind-power generation potential. This power prediction map was then combined with bird survey data. Our assessment predicts that exclusion of areas where bird numbers are highest will have modest effects on the availability of locations for power generation. The exclusion zone approach allows the incorporation of wildlife hazard in wind turbine siting and power output considerations in complex urban environments even when the quantitative interaction between wildlife behavior and turbine activity is unknown. PMID:23409117
MEDSLIK oil spill model recent developments
NASA Astrophysics Data System (ADS)
Lardner, Robin; Zodiatis, George
2016-04-01
MEDSLIK is a well-established 3D oil spill model that predicts the transport, fate, and weathering of oil spills and is used by several response agencies and institutions around the Mediterranean and Black seas and worldwide. MEDSLIK has been used operationally for real oil spill accidents and for preparedness in contingency planning within the framework of pilot projects with REMPEC (Regional Marine Pollution Emergency Response Centre for the Mediterranean Sea) and EMSA (European Maritime Safety Agency). MEDSLIK has been implemented in many EU-funded projects on oil spill prediction using operational ocean forecasts, for example ECOOP, NEREIDs, RAOP-Med, and the EMODNET MedSea Check Point. Within the frame of the MEDESS4MS project, MEDSLIK is at the heart of the MEDESS4MS multi-model oil spill prediction system. The MEDSLIK oil spill model contains, among others, the following features: a built-in database with the characteristics of 240 different oil types; assimilation of oil slick observations from in-situ or aerial surveys to correct the predictions; virtual deployment of oil booms and/or oil skimmers/dispersants; continuous or instantaneous oil spills from moving or drifting ships, whose merging slicks can be modelled together; multiple oil spill predictions from different locations; backward simulations for tracking the source of oil spill pollution; integration with AIS data where available; sub-surface oil spills at any given water depth; and coupling with SAR satellite data. MEDSLIK can be used for operational intervention in any user-selected region of the world if the appropriate coastline, bathymetry, and meteo-ocean forecast files are provided. The MEDSLIK oil spill model has been extensively validated in the Mediterranean Sea, both in real oil spill incidents (e.g., during the Lebanese oil pollution crisis in summer 2006, the biggest oil pollution event in the Eastern Mediterranean so far) and through inter-comparison with drifters. The quality of the MEDSLIK oil spill predictions depends on the quality of the meteo-ocean forecasting data used. The guidelines set by the MEDESS4MS project to harmonize the input/output formats of the meteo-ocean, oil spill, and trajectory models are implemented in MEDSLIK to suit operational oil spill prediction. The output results of the trajectory predictions may be made available in the MEDESS4MS output standard (XML) and in ASCII, while images may be provided in BMP, PNG, TIF, GIF, or JPG format, or in KML (Google Earth).
Ching, Travers; Zhu, Xun; Garmire, Lana X
2018-04-01
Artificial neural networks (ANN) are computing architectures with many interconnections of simple neural-inspired computing elements, and have been applied to biomedical fields such as imaging analysis and diagnosis. We have developed a new ANN framework called Cox-nnet to predict patient prognosis from high-throughput transcriptomics data. In 10 TCGA RNA-Seq data sets, Cox-nnet achieves the same or better predictive accuracy compared to other methods, including Cox proportional hazards regression (with LASSO, ridge, and minimax concave penalty), Random Forests Survival, and CoxBoost. Cox-nnet also reveals richer biological information, at both the pathway and gene levels. The outputs from the hidden layer nodes provide an alternative approach for survival-sensitive dimension reduction. In summary, we have developed a new method for accurate and efficient prognosis prediction on high-throughput data, with functional biological insights. The source code is freely available at https://github.com/lanagarmire/cox-nnet.
Artificial neural network intelligent method for prediction
NASA Astrophysics Data System (ADS)
Trifonov, Roumen; Yoshinov, Radoslav; Pavlova, Galya; Tsochev, Georgi
2017-09-01
Accounting and financial classification and prediction problems are highly challenging, and researchers use different methods to solve them. Methods and instruments for short-term prediction of financial operations using artificial neural networks are considered. The paper describes the methods used for prediction of financial data as well as the forecasting system developed around a neural network. The architecture of a neural network that uses four different technical indicators, based on the raw data, together with the current day of the week is presented. The network is used for forecasting the movement of stock prices one day ahead and consists of an input layer, one hidden layer, and an output layer. The training method is the backpropagation-of-error algorithm. The main advantage of the developed system is self-determination of the optimal topology of the neural network, which makes it flexible and more precise. The proposed system is universal and can be applied to various financial instruments using only basic technical indicators as input data.
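A minimal sketch of this kind of network follows, under stated assumptions: the indicator data are synthetic, and the topology is fixed rather than self-determined as in the paper's system.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 600
# Synthetic inputs: 4 technical indicators plus an encoded day of week
X = rng.normal(size=(n, 5))
# Synthetic binary target: "price moves up tomorrow"
y = (X[:, 0] - 0.5 * X[:, 2] > 0).astype(int)

# One hidden layer, trained by backpropagation of the error
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X[:500], y[:500])
acc = net.score(X[500:], y[500:])   # next-day direction accuracy on held-out data
```

On real market data accuracy would be far lower than on this separable synthetic example; the sketch only shows the input/hidden/output structure described above.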
NASA Astrophysics Data System (ADS)
Byrd, K. B.; Kreitler, J.; Labiosa, W.
2010-12-01
A scenario represents an account of a plausible future given logical assumptions about how conditions change over discrete bounds of space and time. Development of multiple scenarios provides a means to identify alternative directions of urban growth that account for a range of uncertainty in human behavior. Interactions between human and natural processes may be studied by coupling urban growth scenario outputs with biophysical change models; if growth scenarios encompass a sufficient range of alternative futures, scenario assumptions serve to constrain the uncertainty of biophysical models. Spatially explicit urban growth models (map-based) produce output such as distributions and densities of residential or commercial development in a GIS format that can serve as input to other models. Successful fusion of growth model outputs with other model inputs requires that both models strategically address questions of interest, incorporate ecological feedbacks, and minimize error. The U.S. Geological Survey (USGS) Puget Sound Ecosystem Portfolio Model (PSEPM) is a decision-support tool that supports land use and restoration planning in Puget Sound, Washington, a 35,500 sq. km region. The PSEPM couples future scenarios of urban growth with statistical, process-based and rule-based models of nearshore biophysical changes and ecosystem services. By using a multi-criteria approach, the PSEPM identifies cross-system and cumulative threats to the nearshore environment plus opportunities for conservation and restoration. Sub-models that predict changes in nearshore biophysical condition were developed and existing models were integrated to evaluate three growth scenarios: 1) Status Quo, 2) Managed Growth, and 3) Unconstrained Growth. These decadal scenarios were developed and projected out to 2060 at Oregon State University using the GIS-based ENVISION model. 
Given land management decisions and policies under each growth scenario, the sub-models predicted changes in 1) fecal coliform in shellfish growing areas, 2) sediment supply to beaches, 3) State beach recreational visits, 4) eelgrass habitat suitability, 5) forage fish habitat suitability, and 6) nutrient loadings. In some cases thousands of shoreline units were evaluated with multiple predictive models, creating a need for streamlined and consistent database development and data processing. Model development over multiple disciplines demonstrated the challenge of merging data types from multiple sources that were inconsistent in spatial and temporal resolution, classification schemes, and topology. Misalignment of data in space and time created potential for error and misinterpretation of results. This effort revealed that the fusion of growth scenarios and biophysical models requires an up-front iterative adjustment of both scenarios and models so that growth model outputs provide the needed input data in the correct format. Successful design of data flow across models that includes feedbacks between human and ecological systems was found to enhance the use of the final data product for decision making.
Somatosensory responses in a human motor cortex
Donoghue, John P.; Hochberg, Leigh R.
2013-01-01
Somatic sensory signals provide a major source of feedback to motor cortex. Changes in somatosensory systems after stroke or injury could profoundly influence brain computer interfaces (BCI) being developed to create new output signals from motor cortex activity patterns. We had the unique opportunity to study the responses of hand/arm area neurons in primary motor cortex to passive joint manipulation in a person with a long-standing brain stem stroke but intact sensory pathways. Neurons responded to passive manipulation of the contralateral shoulder, elbow, or wrist as predicted from prior studies of intact primates. Thus fundamental properties and organization were preserved despite arm/hand paralysis and damage to cortical outputs. The same neurons were engaged by attempted arm actions. These results indicate that intact sensory pathways retain the potential to influence primary motor cortex firing rates years after cortical outputs are interrupted and may contribute to online decoding of motor intentions for BCI applications. PMID:23343902
Multi-MW K-Band Harmonic Multiplier: RF Source For High-Gradient Accelerator R & D
NASA Astrophysics Data System (ADS)
Solyak, N. A.; Yakovlev, V. P.; Kazakov, S. Yu.; Hirshfield, J. L.
2009-01-01
A preliminary design is presented for a two-cavity harmonic multiplier, intended as a high-power RF source for use in experiments aimed at developing high-gradient structures for a future collider. The harmonic multiplier is to produce power at selected frequencies in K-band (18-26.5 GHz) using as an RF driver an XK-5 S-band klystron (2.856 GHz). The device is to be built with a TE111 rotating mode input cavity and interchangeable output cavities running in the TEn11 rotating mode, with n = 7,8,9 at 19.992, 22.848, and 25.704 GHz. An example for a 7th harmonic multiplier is described, using a 250 kV, 20 A injected laminar electron beam; with 10 MW of S-band drive power, 4.7 MW of 20-GHz output power is predicted. Details are described of the magnetic circuit, cavities, and output coupler.
Lallart, Mickaël; Garbuio, Lauric; Petit, Lionel; Richard, Claude; Guyomar, Daniel
2008-10-01
This paper presents a new technique for optimized energy harvesting using piezoelectric microgenerators called double synchronized switch harvesting (DSSH). This technique consists of a nonlinear treatment of the output voltage of the piezoelectric element. It also integrates an intermediate switching stage that ensures an optimal harvested power whatever the load connected to the microgenerator. Theoretical developments are presented considering either constant vibration magnitude, constant driving force, or independent extraction. Then experimental measurements are carried out to validate the theoretical predictions. This technique exhibits a constant output power for a wide range of load connected to the microgenerator. In addition, the extracted power obtained using such a technique allows a gain up to 500% in terms of maximal power output compared with the standard energy harvesting method. It is also shown that such a technique allows a fine-tuning of the trade-off between vibration damping and energy harvesting.
Prediction model of sinoatrial node field potential using high order partial least squares.
Feng, Yu; Cao, Hui; Zhang, Yanbin
2015-01-01
High order partial least squares (HOPLS) is a novel data processing method. It is highly suitable for building prediction models that have tensor inputs and outputs. The objective of this study is to build a prediction model of the relationship between the sinoatrial node field potential and high glucose using HOPLS. The three sub-signals of the sinoatrial node field potential made up the model's input. The concentration and the actuation duration of high glucose made up the model's output. The results showed that, when predicting two-dimensional output variables, HOPLS had the same predictive ability as partial least squares (PLS) with a lower dispersion degree.
Fuzzy rule based estimation of agricultural diffuse pollution concentration in streams.
Singh, Raj Mohan
2008-04-01
Outflow from agricultural fields carries diffuse pollutants such as nutrients, pesticides, and herbicides and transports them into nearby streams. This is a matter of serious concern for water managers and environmental researchers. The application of chemicals in agricultural fields and the transport of these chemicals into streams are uncertain, which complicates reliable stream quality prediction. The chemical characteristics of the applied chemical and the percentage of area under chemical application are some of the main inputs that determine the pollution concentration in streams as the output. Each of these inputs and outputs may contain measurement errors. A fuzzy rule-based model built on fuzzy sets is well suited to addressing uncertainties in the inputs by incorporating overlapping membership functions for each input, even in situations of limited data availability. In this study, the ability of fuzzy sets to address uncertainty in the input-output relationship is used to estimate the concentration of a herbicide, atrazine, in a stream. Data from the White River basin, part of the Mississippi River system, are used to develop the fuzzy rule-based models. The performance of the developed methodology is found to be encouraging.
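A minimal Sugeno-style fuzzy rule sketch of the idea follows; the membership functions, rule base, and consequent values are assumptions for illustration, not the authors' calibrated model.

```python
def trimf(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def estimate(rate, area_frac):
    """Crisp concentration estimate from two fuzzy inputs in [0, 1]."""
    # Overlapping fuzzy sets for each input: "low" and "high"
    rate_low, rate_high = trimf(rate, -1, 0, 1), trimf(rate, 0, 1, 2)
    area_low, area_high = trimf(area_frac, -1, 0, 1), trimf(area_frac, 0, 1, 2)
    # Rules: firing strength (min of antecedents) -> assumed crisp consequent (ppb)
    rules = [
        (min(rate_low, area_low), 0.1),
        (min(rate_low, area_high), 0.5),
        (min(rate_high, area_low), 0.8),
        (min(rate_high, area_high), 2.0),
    ]
    num = sum(w * c for w, c in rules)   # weighted-average defuzzification
    den = sum(w for w, c in rules)
    return num / den if den else 0.0

low_case = estimate(0.1, 0.1)    # low application over a small area
high_case = estimate(0.9, 0.9)   # heavy application over a large area
```

Because each input belongs partially to both "low" and "high" sets, small measurement errors shift the estimate smoothly rather than flipping it between rules.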
Drain data to predict clinically relevant pancreatic fistula
Moskovic, Daniel J; Hodges, Sally E; Wu, Meng-Fen; Brunicardi, F Charles; Hilsenbeck, Susan G; Fisher, William E
2010-01-01
Background Post-operative pancreatic fistula (POPF) is a common and potentially devastating complication of pancreas resection. Management of this complication is important to the pancreas surgeon. Objective The aim of the present study was to evaluate whether drain data accurately predict clinically significant POPF. Methods A prospectively maintained database with daily drain amylase concentrations and output volumes from 177 consecutive pancreatic resections was analysed. Drain data, demographic and operative data were correlated with POPF (ISGPF Grade: A, clinically silent; B, clinically evident; C, severe) to determine predictive factors. Results Twenty-six (46.4%) out of 56 patients who underwent distal pancreatectomy and 52 (43.0%) out of 121 patients who underwent a Whipple procedure developed a POPF (Grade A-C). POPFs were classified as A (24, 42.9%) and C (2, 3.6%) after distal pancreatectomy, whereas they were graded as A (35, 28.9%), B (15, 12.4%) and C (2, 1.7%) after Whipple procedures. Drain data analysis was limited to Whipple procedures because only two patients developed a clinically significant leak after distal pancreatectomy. The daily total drain output did not differ between patients with a clinical leak (Grades B/C) and patients without a clinical leak (no leak and Grade A) on post-operative day (POD) 1 to 7. Although the median amylase concentration was significantly higher in patients with a clinical leak on POD 1-6, there was no day on which amylase concentration predicted a clinical leak better than simply classifying all patients as 'no leak' (maximum accuracy = 86.1% on POD 1, expected accuracy by chance = 85.6%, kappa = 10.2%). Conclusion Drain amylase data in the early post-operative period are not a sensitive or specific predictor of which patients will develop clinically significant POPF after pancreas resection. PMID:20815856
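Cohen's kappa, the chance-corrected agreement statistic quoted above, can be computed from a confusion matrix as follows; the example matrix is illustrative, not the study's data.

```python
def cohens_kappa(tp, fp, fn, tn):
    """Chance-corrected agreement between predicted and true binary labels."""
    n = tp + fp + fn + tn
    observed = (tp + tn) / n
    # Agreement expected by chance from the marginal rates alone
    expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    return (observed - expected) / (1 - expected)

# A highly imbalanced illustration: most patients have no clinical leak,
# so raw accuracy is inflated while kappa stays low
kappa = cohens_kappa(tp=4, fp=6, fn=10, tn=80)
```

This is why an accuracy of 86.1% against an 85.6% chance baseline translates into a kappa far below what the raw accuracy suggests.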
NASA Astrophysics Data System (ADS)
Sinner, K.; Teasley, R. L.
2016-12-01
Groundwater models serve as integral tools for understanding flow processes and informing stakeholders and policy makers in management decisions. Historically, these models tended towards a deterministic nature, relying on historical data to predict and inform future decisions based on model outputs. This research works towards developing a stochastic method of modeling recharge inputs from pipe main break predictions in an existing groundwater model, which subsequently generates desired outputs incorporating future uncertainty rather than deterministic data. The case study for this research is the Barton Springs segment of the Edwards Aquifer near Austin, Texas. Researchers and water resource professionals have modeled the Edwards Aquifer for decades due to its high water quality, fragile ecosystem, and stakeholder interest. The original case study and model that this research builds upon was developed as a co-design problem with regional stakeholders, and the model outcomes are generated specifically for communication with policy makers and managers. Recently, research in the Barton Springs segment demonstrated a significant contribution of urban, or anthropogenic, recharge to the aquifer, particularly during dry periods, using deterministic data sets. Because of the social and ecological importance of urban water loss to recharge, this study develops an evaluation method to predict pipe breaks and their related recharge contribution within the Barton Springs segment of the Edwards Aquifer. To benefit groundwater management decision processes, the performance measures captured in the model results, such as springflow, head levels, storage, and others, were determined by previous work in elicitation of problem framing to identify stakeholder interests and concerns. The results of the previous deterministic model and the stochastic model are compared to determine gains to stakeholder knowledge through the additional modeling.
Minimizing the total harmonic distortion for a 3 kW, 20 kHz ac to dc converter using SPICE
NASA Technical Reports Server (NTRS)
Lollar, Louis F.; Kapustka, Robert E.
1988-01-01
This paper describes the SPICE model of a transformer-rectifier-filter (TRF) circuit and the Micro-CAP (Microcomputer Circuit Analysis Program) model and their application. The models were used to develop an actual circuit with reduced input current THD. The SPICE analysis consistently predicted the THD improvements in actual circuits as various designs were attempted. In the effort to predict and verify load regulation, incorporating saturable inductor models significantly improved the fidelity of the predicted TRF circuit output voltage.
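The quantity being minimized, total harmonic distortion (THD), can be computed from an FFT of the current waveform; the 20 kHz synthetic waveform and its harmonic amplitudes below are assumptions for illustration.

```python
import numpy as np

fs = 2_000_000                                   # sample rate, Hz
f0 = 20_000                                      # fundamental, Hz
t = np.arange(0, 0.001, 1 / fs)                  # 1 ms window = 20 full cycles
# Fundamental plus assumed 3rd and 5th harmonic content
current = (np.sin(2 * np.pi * f0 * t)
           + 0.10 * np.sin(2 * np.pi * 3 * f0 * t)
           + 0.05 * np.sin(2 * np.pi * 5 * f0 * t))

spec = np.abs(np.fft.rfft(current)) / len(t) * 2  # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(len(t), 1 / fs)
fund = spec[np.argmin(np.abs(freqs - f0))]
harm = [spec[np.argmin(np.abs(freqs - k * f0))] for k in range(2, 10)]
thd = np.sqrt(sum(h ** 2 for h in harm)) / fund   # THD = sqrt(sum A_k^2) / A_1
```

Choosing a window that holds an integer number of fundamental cycles avoids spectral leakage, so the harmonic bins read off the amplitudes directly.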
Polyadenylation site prediction using PolyA-iEP method.
Kavakiotis, Ioannis; Tzanis, George; Vlahavas, Ioannis
2014-01-01
This chapter presents a method called PolyA-iEP that has been developed for the prediction of polyadenylation sites. More precisely, PolyA-iEP is a method that recognizes mRNA 3'ends which contain polyadenylation sites. It is a modular system which consists of two main components. The first exploits the advantages of emerging patterns and the second is a distance-based scoring method. The outputs of the two components are finally combined by a classifier. The final results reach very high scores of sensitivity and specificity.
Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Clifford W.; Martin, Curtis E.
2015-08-01
We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct, and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current, and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of First Solar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. We found that uncertainty in the models for POA irradiance and effective irradiance were the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
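The propagation scheme, sampling each model's empirical residual distribution and pushing the samples through the model chain, can be sketched as follows; the toy three-step chain and residual spreads are assumptions, not the reported values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for each model's empirical residuals (relative errors, in percent)
resid_poa = rng.normal(0, 2.0, 1000)       # plane-of-array irradiance model
resid_eff = rng.normal(0, 1.0, 1000)       # effective irradiance model
resid_inv = rng.normal(0, 0.5, 1000)       # DC-to-AC conversion model

def sample_output(n_draws, base_energy=1000.0):
    """Propagate uncertainty by resampling each model's residual distribution."""
    draws = np.empty(n_draws)
    for i in range(n_draws):
        e = base_energy
        for resid in (resid_poa, resid_eff, resid_inv):
            e *= 1 + rng.choice(resid) / 100   # apply one sampled relative error
        draws[i] = e
    return draws

out = sample_output(5000)
spread = out.std() / out.mean() * 100           # relative uncertainty of daily energy, %
```

As in the reported analysis, the step with the widest residual distribution (here the POA model) dominates the spread of the final empirical output distribution.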
Factors leading to different viability predictions for a grizzly bear data set
Mills, L.S.; Hayes, S.G.; Wisdom, M.J.; Citta, J.; Mattson, D.J.; Murphy, K.
1996-01-01
Population viability analysis programs are being used increasingly in research and management applications, but there has not been a systematic study of the congruence of different program predictions based on a single data set. We performed such an analysis using four population viability analysis computer programs: GAPPS, INMAT, RAMAS/AGE, and VORTEX. The standardized demographic rates used in all programs were generalized from hypothetical increasing and decreasing grizzly bear (Ursus arctos horribilis) populations. Idiosyncrasies of input format for each program led to minor differences in intrinsic growth rates that translated into striking differences in estimates of extinction rates and expected population size. In contrast, the addition of demographic stochasticity, environmental stochasticity, and inbreeding costs caused only a small divergence in viability predictions. However, the addition of density dependence caused large deviations between the programs despite our best attempts to use the same density-dependent functions. Population viability programs differ in how density dependence is incorporated, and the necessary functions are difficult to parameterize accurately. Thus, we recommend that unless data clearly suggest a particular density-dependent model, predictions based on population viability analysis should include at least one scenario without density dependence. Further, we describe output metrics that may differ between programs; development of future software could benefit from standardized input and output formats across different programs.
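The sensitivity to density dependence can be illustrated with a toy stochastic projection; the growth rates, ceiling model, and thresholds below are assumptions for illustration, not grizzly bear parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
# Annual growth multipliers for 2000 replicate 50-year trajectories
growth = rng.normal(1.0, 0.2, size=(2000, 50))

def extinction_rate(growth, ceiling=None, n0=50.0, threshold=10.0):
    """Fraction of trajectories falling below a quasi-extinction threshold."""
    extinct = 0
    for run in growth:
        n = n0
        for r in run:
            n *= r
            if ceiling is not None:
                n = min(n, ceiling)   # simplest density dependence: a population ceiling
            if n < threshold:
                extinct += 1
                break
    return extinct / len(growth)

p_no_dd = extinction_rate(growth)                # no density dependence
p_dd = extinction_rate(growth, ceiling=60.0)     # ceiling-type density dependence
```

Even this crude ceiling changes the extinction estimate, which is the point of the recommendation above: report at least one scenario without density dependence.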
NASA Astrophysics Data System (ADS)
Del Giudice, Dario; Löwe, Roland; Madsen, Henrik; Mikkelsen, Peter Steen; Rieckermann, Jörg
2015-07-01
In urban rainfall-runoff, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two approaches which use stochastic processes to describe systematic deviations and to discuss their advantages and drawbacks for urban drainage modeling. The two methodologies are an external bias description (EBD) and an internal noise description (IND, also known as stochastic gray-box modeling). They emerge from different fields and have not yet been compared in environmental modeling. To compare the two approaches, we develop a unifying terminology, evaluate them theoretically, and apply them to conceptual rainfall-runoff modeling in the same drainage system. Our results show that both approaches can provide probabilistic predictions of wastewater discharge in a similarly reliable way, both for periods ranging from a few hours up to more than 1 week ahead of time. The EBD produces more accurate predictions on long horizons but relies on computationally heavy MCMC routines for parameter inferences. These properties make it more suitable for off-line applications. The IND can help in diagnosing the causes of output errors and is computationally inexpensive. It produces best results on short forecast horizons that are typical for online applications.
NASA Technical Reports Server (NTRS)
Schifer, Nicholas A.; Briggs, Maxwell H.
2012-01-01
The U.S. Department of Energy (DOE) and Lockheed Martin Space Systems Company (LMSSC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. This generator would use two high-efficiency Advanced Stirling Convertors (ASCs), developed by Sunpower Inc. and NASA Glenn Research Center (GRC). As part of ground testing of these ASCs, different operating conditions are used to simulate expected mission conditions. These conditions require achieving a specified electrical power output for a given net heat input. While electrical power output can be precisely quantified, thermal power input to the Stirling cycle cannot be directly measured. In an effort to improve net heat input predictions, the Mock Heater Head was developed with the same relative thermal paths as a convertor using a conducting rod to represent the Stirling cycle and tested to provide a direct comparison to numerical and empirical models used to predict convertor net heat input. The Mock Heater Head also served as the pathfinder for a higher fidelity version of validation test hardware, known as the Thermal Standard. This paper describes how the Mock Heater Head was tested and utilized to validate a process for the Thermal Standard.
A Dynamic Calibration Method for Experimental and Analytical Hub Load Comparison
NASA Technical Reports Server (NTRS)
Kreshock, Andrew R.; Thornburgh, Robert P.; Wilbur, Matthew L.
2017-01-01
This paper presents the results from an ongoing effort to produce improved correlation between analytical hub force and moment prediction and those measured during wind-tunnel testing on the Aeroelastic Rotor Experimental System (ARES), a conventional rotor testbed commonly used at the Langley Transonic Dynamics Tunnel (TDT). A frequency-dependent transformation between loads at the rotor hub and outputs of the testbed balance is produced from frequency response functions measured during vibration testing of the system. The resulting transformation is used as a dynamic calibration of the balance to transform hub loads predicted by comprehensive analysis into predicted balance outputs. In addition to detailing the transformation process, this paper also presents a set of wind-tunnel test cases, with comparisons between the measured balance outputs and transformed predictions from the comprehensive analysis code CAMRAD II. The modal response of the testbed is discussed and compared to a detailed finite-element model. Results reveal that the modal response of the testbed exhibits a number of characteristics that make accurate dynamic balance predictions challenging, even with the use of the balance transformation.
NASA Astrophysics Data System (ADS)
Pohjoranta, Antti; Halinen, Matias; Pennanen, Jari; Kiviaho, Jari
2015-03-01
Generalized predictive control (GPC) is applied to control the maximum temperature in a solid oxide fuel cell (SOFC) stack and the temperature difference over the stack. GPC is a model predictive control method, and the models utilized in this work are ARX-type (autoregressive with extra input), multiple-input multiple-output polynomial models that were identified from experimental data from a complete SOFC system. The proposed control is evaluated by simulation with various input-output combinations, with and without constraints. A comparison with conventional proportional-integral-derivative (PID) control is also made. It is shown that if only the stack maximum temperature is controlled, a standard PID controller can be used to obtain output performance comparable to that obtained with the significantly more complex model predictive controller. However, in order to control the temperature difference over the stack, both the stack minimum and the maximum temperature need to be controlled, and this cannot be done with a single PID controller. In such a case the model predictive controller provides a feasible and effective solution.
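A minimal sketch of the ARX identification and multi-step prediction that underlie GPC. The first-order single-input process, its coefficients, and the noise level are all assumed for illustration; the models in the paper are multiple-input multiple-output and identified from SOFC system experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data from a hypothetical first-order ARX process:
# y[k] = a*y[k-1] + b*u[k-1] + noise (coefficients invented).
a_true, b_true = 0.9, 0.5
u = rng.uniform(-1, 1, 200)
y = np.zeros(200)
for k in range(1, 200):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.01 * rng.normal()

# Identify the ARX coefficients by least squares.
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta

def predict(y0, u_seq):
    """Multi-step-ahead prediction with the identified model, the core
    ingredient of a GPC cost function over the prediction horizon."""
    yk, out = y0, []
    for uk in u_seq:
        yk = a_hat * yk + b_hat * uk
        out.append(yk)
    return out
```

GPC then chooses the input sequence minimizing a quadratic cost over such predictions, subject to any input or output constraints.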
Poloczek, Sebastian; Büttner, Gerhard; Hasselhorn, Marcus
2014-02-01
There is mounting evidence that children and adolescents with intellectual disabilities (ID) of nonspecific aetiology perform more poorly on phonological short-term memory tasks than children matched for mental age, indicating a structural deficit in a process contributing to short-term recall of verbal material. One explanation is that children with ID of nonspecific aetiology do not activate subvocal rehearsal to refresh degrading memory traces. However, existing research concerning this explanation is inconclusive, since studies focussing on the word length effect (WLE) as an indicator of rehearsal have revealed inconsistent results for samples with ID, and because in several existing studies it is unclear whether the WLE was caused by rehearsal or merely appeared during output of the responses. We assumed that in children with ID only output delays produce a small WLE, while in typically developing 6- to 8-year-olds rehearsal and output contribute to the WLE. From this assumption we derived several predictions that were tested in an experiment including 34 children with mild or borderline ID and 34 typically developing children matched for mental age (MA). As predicted, results revealed a small but significant WLE for children with ID that was significantly smaller than the WLE in the control group. Additionally, for children with ID, a WLE was not found for the first word of each trial but emerged only in later serial positions. The findings corroborate the notion that in children with ID subvocal rehearsal does not develop in line with their mental age and provide a potential explanation for the inconsistent results on the WLE in children with ID.
Evaluation and statistical inference for human connectomes.
Pestilli, Franco; Yeatman, Jason D; Rokem, Ariel; Kay, Kendrick N; Wandell, Brian A
2014-10-01
Diffusion-weighted imaging coupled with tractography is currently the only method for in vivo mapping of human white-matter fascicles. Tractography takes diffusion measurements as input and produces the connectome, a large collection of white-matter fascicles, as output. We introduce a method to evaluate the evidence supporting connectomes. Linear fascicle evaluation (LiFE) takes any connectome as input and predicts diffusion measurements as output, using the difference between the measured and predicted diffusion signals to quantify the prediction error. We use the prediction error to evaluate the evidence that supports the properties of the connectome, to compare tractography algorithms and to test hypotheses about tracts and connections.
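The evaluation idea can be sketched as follows: express the measured signal as a weighted sum of per-fascicle signal contributions and score the candidate connectome by its prediction error. The matrices below are synthetic stand-ins, and plain least squares replaces the non-negative fit used by LiFE.

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic stand-in: each column of A is one fascicle's contribution to
# the diffusion measurements; w_true marks which fascicles are supported.
n_meas, n_fasc = 60, 5
A = np.abs(rng.normal(size=(n_meas, n_fasc)))
w_true = np.array([1.0, 0.5, 0.0, 2.0, 0.0])
signal = A @ w_true + 0.05 * rng.normal(size=n_meas)

# Fit fascicle weights and score the connectome by prediction error
# (LiFE constrains the weights to be non-negative; omitted here).
w, *_ = np.linalg.lstsq(A, signal, rcond=None)
rmse = np.sqrt(((A @ w - signal) ** 2).mean())
```

Comparing this error between two candidate connectomes (or between a connectome and one with a tract removed) is the basis for the statistical inference described above.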
He, Dan; Kuhn, David; Parida, Laxmi
2016-06-15
Given a set of biallelic molecular markers, such as SNPs, with genotype values encoded numerically on a collection of plant, animal or human samples, the goal of genetic trait prediction is to predict the quantitative trait values by simultaneously modeling all marker effects. Genetic trait prediction is usually represented as linear regression models. In many cases, for the same set of samples and markers, multiple traits are observed. Some of these traits might be correlated with each other. Therefore, modeling all the multiple traits together may improve the prediction accuracy. In this work, we view the multitrait prediction problem from a machine learning angle: as either a multitask learning problem or a multiple output regression problem, depending on whether different traits share the same genotype matrix or not. We then adapted multitask learning algorithms and multiple output regression algorithms to solve the multitrait prediction problem. We proposed several strategies to reduce the least-squares prediction error of these algorithms. Our experiments show that modeling multiple traits together can improve the prediction accuracy for correlated traits. The programs we used are either public or directly from the referred authors, such as the MALSAR (http://www.public.asu.edu/~jye02/Software/MALSAR/) package. The Avocado data set has not been published yet and is available upon request.
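A minimal sketch of joint multitrait prediction as multiple-output ridge regression, the simplest member of the model families discussed above. The genotype matrix, marker effects, and trait correlation are simulated; the single ridge penalty stands in for the adapted multitask algorithms.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical genotype matrix (samples x markers, coded 0/1/2) and two
# correlated traits generated from shared marker effects.
X = rng.integers(0, 3, size=(100, 50)).astype(float)
beta = rng.normal(0.0, 1.0, size=(50, 1))
Y = X @ beta @ np.array([[1.0, 0.8]]) + 0.5 * rng.normal(size=(100, 2))

# Multiple-output ridge regression: one closed-form solve shared by
# both traits.
lam = 1.0
B = np.linalg.solve(X.T @ X + lam * np.eye(50), X.T @ Y)
Y_hat = X @ B
```

Multitask learners go further by coupling the columns of `B` (e.g., via shared sparsity), which is where the accuracy gains for correlated traits come from.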
Predicting ecological roles in the rhizosphere using metabolome and transportome modeling
Larsen, Peter E.; Collart, Frank R.; Dai, Yang; ...
2015-09-02
The ability to obtain complete genome sequences from bacteria in environmental samples, such as soil samples from the rhizosphere, has highlighted the microbial diversity and complexity of environmental communities. New algorithms to analyze genome sequence information in the context of community structure are needed to enhance our understanding of the specific ecological roles of these organisms in soil environments. We present a machine learning approach using sequenced Pseudomonad genomes coupled with outputs of metabolic and transportomic computational models for identifying the most predictive molecular mechanisms indicative of a Pseudomonad's ecological role in the rhizosphere: a biofilm, biocontrol agent, promoter of plant growth, or plant pathogen. Computational predictions of ecological niche were highly accurate overall, with models trained on transportomic model output being the most accurate (Leave One Out Validation F-scores between 0.82 and 0.89). The strongest predictive molecular mechanism features for rhizosphere ecological niche overlap with many previously reported analyses of Pseudomonad interactions in the rhizosphere, suggesting that this approach successfully informs a system-scale level understanding of how Pseudomonads sense and interact with their environments. The observation that an organism's transportome is highly predictive of its ecological niche is a novel discovery and may have implications for our understanding of microbial ecology. The framework developed here can be generalized to the analysis of any bacteria across a wide range of environments and ecological niches, making this approach a powerful tool for providing insights into functional predictions from bacterial genomic data.
The light output and the detection efficiency of the liquid scintillator EJ-309.
Pino, F; Stevanato, L; Cester, D; Nebbia, G; Sajo-Bohus, L; Viesti, G
2014-07-01
The light output response and the neutron and gamma-ray detection efficiency are determined for liquid scintillator EJ-309. The light output function is compared to those of previous studies. Experimental efficiency results are compared to predictions from GEANT4, MCNPX and PENELOPE Monte Carlo simulations. The differences associated with the use of different light output functions are discussed.
The Czech Hydrometeorological Institute's severe storm nowcasting system
NASA Astrophysics Data System (ADS)
Novak, Petr
2007-02-01
To satisfy requirements for operational severe weather monitoring and prediction, the Czech Hydrometeorological Institute (CHMI) has developed a severe storm nowcasting system which uses weather radar data as its primary data source. Previous CHMI studies identified two methods of radar echo prediction, which were then implemented during 2003 into the Czech weather radar network operational weather processor. The applications put into operations were the Continuity Tracking Radar Echoes by Correlation (COTREC) algorithm, and an application that predicts future radar fields using the wind field derived from the geopotential at 700 hPa calculated from a local numerical weather prediction model (ALADIN). To ensure timely delivery of the prediction products to the users, the forecasts are implemented into a web-based viewer (JSMeteoView) that has been developed by the CHMI Radar Department. At present, this viewer is used by all CHMI forecast offices for versatile visualization of radar and other meteorological data (Meteosat, lightning detection, NWP LAM output, SYNOP data) in the Internet/Intranet environment, and the viewer has detailed geographical navigation capabilities.
Validation of the thermal challenge problem using Bayesian Belief Networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McFarland, John; Swiler, Laura Painton
The thermal challenge problem has been developed at Sandia National Laboratories as a testbed for demonstrating various types of validation approaches and prediction methods. This report discusses one particular methodology to assess the validity of a computational model given experimental data. This methodology is based on Bayesian Belief Networks (BBNs) and can incorporate uncertainty in experimental measurements, in physical quantities, and model uncertainties. The approach uses the prior and posterior distributions of model output to compute a validation metric based on Bayesian hypothesis testing (a Bayes' factor). This report discusses various aspects of the BBN, specifically in the context of the thermal challenge problem. A BBN is developed for a given set of experimental data in a particular experimental configuration. The development of the BBN and the method for "solving" the BBN to develop the posterior distribution of model output through Markov chain Monte Carlo sampling is discussed in detail. The use of the BBN to compute a Bayes' factor is demonstrated.
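A toy version of the Bayes-factor computation: with a conjugate normal prior on a scalar model bias and known noise variance, the factor in favor of "zero bias" can be taken as the ratio of posterior to prior density at zero (a Savage-Dickey ratio). All numbers are invented; the report's BBN develops the posterior by MCMC rather than a closed form.

```python
import math

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

prior_mu, prior_var = 0.0, 4.0        # prior on the model bias (assumed)
data = [0.3, -0.1, 0.2, 0.0, 0.1]     # observed prediction errors (invented)
noise_var = 0.25                      # known measurement noise variance

# Conjugate update for a normal mean with known noise variance.
n = len(data)
post_var = 1.0 / (1.0 / prior_var + n / noise_var)
post_mu = post_var * (prior_mu / prior_var + sum(data) / noise_var)

# Bayes factor for the hypothesis "bias = 0": posterior/prior density at 0.
bayes_factor = normal_pdf(0.0, post_mu, post_var) / normal_pdf(0.0, prior_mu, prior_var)
```

A factor above 1 means the data sharpened the support for an unbiased model, the same direction of evidence the validation metric reports.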
NASA Technical Reports Server (NTRS)
Slaby, J. G.
1986-01-01
Free-piston Stirling technology is applicable to both solar and nuclear powered systems. As such, the Lewis Research Center serves as the project office to manage the newly initiated SP-100 Advanced Technology Program. This five-year program provides the technology push for providing significant component and subsystem options for increased efficiency, reliability and survivability, and power output growth at reduced specific mass. One of the major elements of the program is the development of advanced power conversion concepts, of which the Stirling cycle is a viable candidate. Under this program the research findings of the 25 kWe opposed-piston Space Power Demonstrator Engine (SPDE) are presented. Included in the SPDE discussions are initial differences between predicted and experimental power outputs and the influence of regenerator variations on power output. Projections are made for future space power requirements over the next few decades, and a cursory comparison is presented showing the mass benefits that a Stirling system has over a Brayton system for the same peak temperature and output power.
The effects of load on system and lower-body joint kinetics during jump squats.
Moir, Gavin L; Gollie, Jared M; Davis, Shala E; Guers, John J; Witmer, Chad A
2012-11-01
To investigate the effects of different loads on system and lower-body kinetics during jump squats, 12 resistance-trained men performed jumps under different loading conditions: 0%, 12%, 27%, 42%, 56%, 71%, and 85% of 1-repetition maximum (1-RM). System power output was calculated as the product of the vertical component of the ground reaction force and the vertical velocity of the bar during its ascent. Joint power output was calculated during bar ascent for the hip, knee, and ankle joints, and was also summed across the joints. System power output and joint power at the knee and ankle joints were maximized at 0% 1-RM (p < 0.001) and followed linear trends (p < 0.001), with power output decreasing as the load increased. Power output at the hip was maximized at 42% 1-RM (p = 0.016) and followed a quadratic trend (p = 0.030). Summed joint power could be predicted from system power (p < 0.05), while system power could predict power at the knee and ankle joints under some of the loading conditions. Power at the hip could not be predicted from system power. System power during loaded jumps reflects the power at the knee and ankle, while power at the hip does not correspond to system power.
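The system power calculation described above is a pointwise product of vertical ground reaction force and vertical bar velocity during the ascent, sketched here with invented samples.

```python
# Minimal sketch of the system power calculation: power at each sample is
# vertical GRF times vertical bar velocity. All numbers are illustrative,
# not data from the study.
dt = 0.01                        # sampling interval, s (assumed)
grf = [1800.0, 2000.0, 2100.0]   # vertical ground reaction force, N
vel = [0.5, 1.0, 1.5]            # vertical bar velocity, m/s

power = [f * v for f, v in zip(grf, vel)]   # instantaneous power, W
peak_power = max(power)
```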
Eloqayli, Haytham; Al-Yousef, Ali; Jaradat, Raid
2018-02-15
Despite the high prevalence of chronic neck pain, there is limited consensus about the primary etiology, risk factors, diagnostic criteria and therapeutic outcome. Here, we aimed to determine whether ferritin and vitamin D are modifiable risk factors for chronic neck pain using standard statistics and an artificial neural network (ANN). Fifty-four patients with chronic neck pain treated between February 2016 and August 2016 in King Abdullah University Hospital and 54 age-matched controls undergoing outpatient or minor procedures were enrolled. Demographic parameters, height, weight and a single measurement of serum vitamin D, vitamin B12, ferritin, calcium, phosphorus and zinc were obtained for patients and controls. An ANN prediction model was developed. The statistical analysis reveals that patients with chronic neck pain have significantly lower serum vitamin D and ferritin (p-value <.05). Ninety percent of patients with chronic neck pain were female. A Multilayer Feed Forward Neural Network with Back Propagation (MFFNN) prediction model was developed and designed with vitamin D and ferritin as input variables and chronic neck pain as output. The ANN model results show that 92 out of 108 samples were correctly classified, an 85% classification accuracy. Although iron and vitamin D deficiency cannot be isolated as the sole risk factors for chronic neck pain, they should be considered as two modifiable risk factors. The high prevalence of chronic neck pain, hypovitaminosis D and low ferritin amongst women is of concern. Bioinformatics prediction with artificial neural networks can be of future benefit in classification and prediction models for chronic neck pain. We hope this initial work will encourage a future larger cohort study addressing vitamin D and iron correction as modifiable factors and the application of artificial intelligence models in clinical practice.
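A sketch of the MFFNN setup, a one-hidden-layer feed-forward network trained by backpropagation on two inputs and a binary output. The data, the decision rule, and the architecture details are all assumed for illustration and are not the study's clinical data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in data: two standardized inputs (think vitamin D and
# ferritin) and a binary label; the decision rule is assumed, not clinical.
X = rng.normal(size=(108, 2))
y = (X[:, 0] + X[:, 1] < 0).astype(float)

# One hidden layer, trained by backpropagation on binary cross-entropy.
W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(3000):
    h = np.tanh(X @ W1 + b1)              # hidden activations
    p = sig(h @ W2 + b2).ravel()          # predicted probabilities
    dz2 = (p - y)[:, None] / len(y)       # output-layer error
    dW2, db2 = h.T @ dz2, dz2.sum(0)
    dh = (dz2 @ W2.T) * (1.0 - h ** 2)    # backprop through tanh
    dW1, db1 = X.T @ dh, dh.sum(0)
    for P, G in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        P -= 0.5 * G                      # gradient step

p = sig(np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
accuracy = ((p > 0.5) == (y > 0.5)).mean()
```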
Study of CNG/diesel dual fuel engine's emissions by means of RBF neural network.
Liu, Zhen-tao; Fei, Shao-mei
2004-08-01
Great efforts have been made to resolve the serious environmental pollution and inevitable decline of energy resources. A review of Chinese fuel reserves and engine technology showed that the compressed natural gas (CNG)/diesel dual fuel engine (DFE) was one of the best solutions to the above problems at present. In order to study and improve the emission performance of the CNG/diesel DFE, an emission model for the DFE based on a radial basis function (RBF) neural network was developed; this is a black-box model trained on input-output data and requires no a priori knowledge. The RBF centers and the connection weights could be selected automatically according to the distribution of the training data in input-output space and the given approximation error. Studies showed that the predicted results accorded well with the experimental data over a large range of operating conditions from low load to high load. The developed emissions model based on the RBF neural network could thus be used to predict and optimize the emissions performance of the DFE. The effects of the DFE's main performance parameters, such as rotation speed, load, pilot quantity and injection timing, were also predicted by means of this model. In summary, an emission prediction model for the CNG/diesel DFE based on an RBF neural network was built for analyzing the effect of the main performance parameters on the CO and NOx emissions of the DFE. The predicted results agreed quite well with the traditional emissions model, which indicates that the model has application value, although it still has some limitations because of its high dependence on the quantity of experimental sample data.
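A minimal RBF-network sketch in the spirit of the emissions model: Gaussian basis functions centered on training points, with linear output weights fitted by least squares. The input-output mapping and all constants are invented stand-ins, not engine data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented stand-in for the engine-emissions mapping: inputs are
# (normalized speed, load); the output is a smooth NOx-like quantity.
X = rng.uniform(0, 1, size=(80, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2

# RBF network: centers taken from the training data, fixed width,
# linear output weights fitted by least squares.
centers = X[::8]                   # 10 centers drawn from the data
width = 0.3

def design(Xin):
    """Gaussian RBF design matrix for a batch of inputs."""
    d2 = ((Xin[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

w, *_ = np.linalg.lstsq(design(X), y, rcond=None)
y_hat = design(X) @ w
rmse = np.sqrt(((y_hat - y) ** 2).mean())
```

Center and width selection driven by the data distribution and a target approximation error, as the abstract describes, would replace the fixed choices made here.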
A Spectral Method for Spatial Downscaling
Reich, Brian J.; Chang, Howard H.; Foley, Kristen M.
2014-01-01
Complex computer models play a crucial role in air quality research. These models are used to evaluate potential regulatory impacts of emission control strategies and to estimate air quality in areas without monitoring data. For both of these purposes, it is important to calibrate model output with monitoring data to adjust for model biases and improve spatial prediction. In this article, we propose a new spectral method to study and exploit complex relationships between model output and monitoring data. Spectral methods allow us to estimate the relationship between model output and monitoring data separately at different spatial scales, and to use model output for prediction only at the appropriate scales. The proposed method is computationally efficient and can be implemented using standard software. We apply the method to compare Community Multiscale Air Quality (CMAQ) model output with ozone measurements in the United States in July 2005. We find that CMAQ captures large-scale spatial trends, but has low correlation with the monitoring data at small spatial scales.
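A one-dimensional illustration of the spectral idea: band-limit both series with the FFT and estimate the calibration coefficient separately per frequency band, so model output is used only at the scales where it is informative. The signals, amplitudes, and noise levels are all invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# The "model" tracks the truth at large scales (with a bias) but is pure
# noise at small scales, mimicking the CMAQ finding described above.
n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
truth = np.sin(x) + 0.3 * np.sin(20 * x)
model = 1.2 * np.sin(x) + 0.3 * rng.normal(size=n)

def band(signal, lo, hi):
    """Keep only Fourier coefficients with |frequency index| in [lo, hi)."""
    F = np.fft.fft(signal)
    k = np.abs(np.fft.fftfreq(n, d=1 / n))
    F[(k < lo) | (k >= hi)] = 0
    return np.fft.ifft(F).real

def band_coef(lo, hi):
    """Regression coefficient of truth on model output within one band."""
    t, m = band(truth, lo, hi), band(model, lo, hi)
    return (t @ m) / (m @ m)

large = band_coef(0, 5)   # should recover roughly 1/1.2, the bias correction
```

A global regression would mix the informative large-scale band with the uninformative small-scale one; the per-band coefficients keep them apart.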
SCOUT: a fast Monte-Carlo modeling tool of scintillation camera output
Hunter, William C J; Barrett, Harrison H.; Muzi, John P.; McDougald, Wendy; MacDonald, Lawrence R.; Miyaoka, Robert S.; Lewellen, Thomas K.
2013-01-01
We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout of a scintillation camera. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools.
Adjusting Quality index Log Values to Represent Local and Regional Commercial Sawlog Product Values
Orris D. McCauley; Joseph J. Mendel
1969-01-01
The primary purpose of this paper is not only to report the results of a comparative analysis of how well the Q.I. method predicts log product values when compared to commercial sawmill log output values, but also to develop a methodology that will facilitate the comparison and provide the adjustments needed by the sawmill operator.
Forecasting Electric Power Generation of Photovoltaic Power System for Energy Network
NASA Astrophysics Data System (ADS)
Kudo, Mitsuru; Takeuchi, Akira; Nozaki, Yousuke; Endo, Hisahito; Sumita, Jiro
Recently, there has been an increase in concern about the global environment. Interest is growing in developing an energy network by which new energy systems such as photovoltaics and fuel cells generate power locally and electric power and heat are controlled with a communications network. We developed a power generation forecast method for photovoltaic power systems in an energy network. The method makes use of weather information and regression analysis. We carried out forecasts of the power output of the photovoltaic power system installed at Expo 2005, Aichi, Japan. Comparing measurements with predicted values, the average prediction error per day was about 26% of the measured power.
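The forecasting approach, a regression of measured output on weather variables, can be sketched as follows with invented daily records standing in for the weather-service inputs.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical daily records: forecast sunshine fraction and temperature
# as regressors, measured PV energy (kWh) as the response. All values and
# coefficients are invented for the sketch.
sun = rng.uniform(0, 1, 60)
temp = rng.uniform(5, 30, 60)
energy = 40 * sun - 0.2 * temp + 5 + rng.normal(0, 2, 60)

# Ordinary least-squares regression, the forecasting tool named above.
A = np.column_stack([sun, temp, np.ones(60)])
coef, *_ = np.linalg.lstsq(A, energy, rcond=None)

def forecast(sun_f, temp_f):
    """Predict PV energy from tomorrow's weather forecast."""
    return coef[0] * sun_f + coef[1] * temp_f + coef[2]
```

The daily prediction error quoted in the abstract is then just the mean relative difference between `forecast(...)` and the measured energy.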
ICAN/PART: Particulate composite analyzer, user's manual and verification studies
NASA Technical Reports Server (NTRS)
Goldberg, Robert K.; Murthy, Pappu L. N.; Mital, Subodh K.
1996-01-01
A methodology for predicting the equivalent properties and constituent microstresses for particulate matrix composites, based on the micromechanics approach, is developed. These equations are integrated into a computer code developed to predict the equivalent properties and microstresses of fiber reinforced polymer matrix composites to form a new computer code, ICAN/PART. Details of the flowchart, input and output for ICAN/PART are described, along with examples of the input and output. Only the differences between ICAN/PART and the original ICAN code are described in detail, and the user is assumed to be familiar with the structure and usage of the original ICAN code. Detailed verification studies, utilizing finite element and boundary element analyses, are conducted in order to verify that the micromechanics methodology accurately models the mechanics of particulate matrix composites. The equivalent properties computed by ICAN/PART fall within bounds established by the finite element and boundary element results. Furthermore, constituent microstresses computed by ICAN/PART agree in an average sense with results computed using the finite element method. The verification studies indicate that the micromechanics programmed into ICAN/PART do indeed accurately model the mechanics of particulate matrix composites.
Development of a solid state laser of Nd:YLF
NASA Astrophysics Data System (ADS)
Doamaralneto, R.
CW laser action was obtained at room temperature from a Nd:YLF crystal in an astigmatically compensated cavity, pumped by an argon laser. This laser was completely designed, constructed and characterized in our laboratories. It initiates a broader project on laser development with several applications, such as nuclear fusion, industry, medicine and telemetry. Through the study of the optical properties of the Nd:YLF crystal, laser operation was predicted using a small-volume gain medium in the mentioned cavity, pumped by the Ar 514.5 nm laser line. To obtain laser action at the σ (1.053 μm) and π (1.047 μm) polarizations, an active medium was prepared as a crystalline plate with a convenient crystallographic orientation. The laser characterization is in reasonable agreement with the initial predictions. For a 3.5% output mirror transmission, the oscillation threshold is about 0.15 W incident on the crystal, depending upon the sample used. For 1 W of incident pump light, the output power is estimated to be 12 mW, which corresponds to almost 1.5% slope efficiency. The versatile arrangement is applicable to almost all optically pumped solid-state laser materials.
Random Predictor Models for Rigorous Uncertainty Quantification: Part 1
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean and the variance of the model's parameters, thus of the predicted output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, can be bounded tightly and rigorously.
Performance and Simulation of a Stand-alone Parabolic Trough Solar Thermal Power Plant
NASA Astrophysics Data System (ADS)
Mohammad, S. T.; Al-Kayiem, H. H.; Assadi, M. K.; Gilani, S. I. U. H.; Khlief, A. K.
2018-05-01
In this paper, a Simulink® Thermolib model is established to simulate and evaluate the performance of a stand-alone parabolic trough solar thermal power plant at Universiti Teknologi PETRONAS, Malaysia. The paper proposes a design for a 1.2 kW parabolic trough power plant. The model is capable of predicting temperatures at any system outlet in the plant, as well as the power output produced. The inputs to the model are the local solar radiation and ambient temperatures, which were measured throughout the year, together with the collector sizes and the location in terms of latitude and altitude. Lastly, the results are presented graphically to describe the variations of the solar field outputs and to help predict the performance of the plant. The developed model allows an initial evaluation of the viability and technical feasibility of any similar solar thermal power plant.
Jet impingement heat transfer enhancement for the GPU-3 Stirling engine
NASA Technical Reports Server (NTRS)
Johnson, D. C.; Congdon, C. W.; Begg, L. L.; Britt, E. J.; Thieme, L. G.
1981-01-01
A computer model of the combustion-gas-side heat transfer was developed to predict the effects of a jet impingement system and the possible range of improvements available. Using low temperature (315 C (600 F)) pretest data in an updated model, a high temperature silicon carbide jet impingement heat transfer system was designed and fabricated. The system model predicted that at the theoretical maximum limit, jet impingement enhanced heat transfer can: (1) reduce the flame temperature by 275 C (500 F); (2) reduce the exhaust temperature by 110 C (200 F); and (3) increase the overall heat into the working fluid by 10%, all for an increase in required pumping power of less than 0.5% of the engine power output. Initial tests on the GPU-3 Stirling engine at NASA-Lewis demonstrated that the jet impingement system increased the engine output power and efficiency by 5% - 8% with no measurable increase in pumping power. The overall heat transfer coefficient was increased by 65% for the maximum power point of the tests.
Rules and mechanisms for efficient two-stage learning in neural circuits.
Teşileanu, Tiberiu; Ölveczky, Bence; Balasubramanian, Vijay
2017-04-04
Trial-and-error learning requires evaluating variable actions and reinforcing successful variants. In songbirds, vocal exploration is induced by LMAN, the output of a basal ganglia-related circuit that also contributes a corrective bias to the vocal output. This bias is gradually consolidated in RA, a motor cortex analogue downstream of LMAN. We develop a new model of such two-stage learning. Using stochastic gradient descent, we derive how the activity in 'tutor' circuits (e.g., LMAN) should match plasticity mechanisms in 'student' circuits (e.g., RA) to achieve efficient learning. We further describe a reinforcement learning framework through which the tutor can build its teaching signal. We show that mismatches between the tutor signal and the plasticity mechanism can impair learning. Applied to birdsong, our results predict the temporal structure of the corrective bias from LMAN given a plasticity rule in RA. Our framework can be applied predictively to other paired brain areas showing two-stage learning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weimar, Mark R.; Daly, Don S.; Wood, Thomas W.
Both nuclear power and nuclear weapons programs should have (related) economic signatures which are detectable at some scale. We evaluated this premise in a series of studies using national economic input/output (IO) data. Statistical discrimination models using economic IO tables predict with high probability whether a country with an unknown predilection for nuclear weapons proliferation is in fact engaged in nuclear power development or nuclear weapons proliferation. We analyzed 93 IO tables, spanning the years 1993 to 2005, for 37 countries that are either members or associates of the Organization for Economic Cooperation and Development (OECD). The 2009 OECD input/output tables featured 48 industrial sectors based on International Standard Industrial Classification (ISIC) Revision 3, and described the respective economies in current country-of-origin currency. We converted and transformed these reported values to US 2005 dollars using appropriate exchange rates and implicit price deflators, and addressed discrepancies in reported industrial sectors across tables. We then classified countries with Random Forest using either the adjusted or industry-normalized values. Random Forest, a classification tree technique, separates and categorizes countries using a very small, select subset of the 2304 individual cells in the IO table. A nation's efforts in nuclear power, be it for electricity or nuclear weapons, are an enterprise with a large economic footprint: an effort so large that it should discernibly perturb coarse country-level economic data such as that found in yearly input-output economic tables. The neoclassical economic input-output model describes a country's or region's economy in terms of the requirements of industries to produce the current level of economic output.
An IO table row shows the distribution of an industry's output to the industrial sectors, while a table column shows the input required of each industrial sector by a given industry.
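One preprocessing step named above, industry normalization, can be sketched directly: each input column of the IO table is scaled by that industry's total output, giving size-independent technical coefficients before classification. The toy three-sector table below is invented for illustration.

```python
import numpy as np

def normalize_io_table(io, total_output):
    """Scale each industry's input column by its total output, giving
    technical coefficients comparable across economies of different sizes
    (a standard step before feeding IO tables to a classifier)."""
    io = np.asarray(io, dtype=float)
    total_output = np.asarray(total_output, dtype=float)
    return io / total_output[np.newaxis, :]   # column j scaled by industry j's output

# Toy 3-sector economy (rows: supplying sector, columns: using industry).
io = np.array([[10.0, 20.0,  0.0],
               [ 5.0, 40.0,  5.0],
               [ 0.0, 10.0, 45.0]])
total_output = np.array([50.0, 100.0, 50.0])
coeffs = normalize_io_table(io, total_output)
print(coeffs[0, 1])  # 20/100 = 0.2
```

The normalized cells, rather than raw currency values, are then the candidate features from which a tree ensemble selects its small discriminating subset.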
NASA Astrophysics Data System (ADS)
Ahn, J. B.; Hur, J.
2015-12-01
Seasonal predictions of both surface air temperature and the first-flowering date (FFD) over South Korea are produced using dynamical downscaling (Hur and Ahn, 2015). Dynamical downscaling is performed using the Weather Research and Forecasting (WRF) model v3.0 with lateral forcing from hourly outputs of the Pusan National University (PNU) coupled general circulation model (CGCM) v1.1. Gridded surface air temperature data with high spatial (3 km) and temporal (daily) resolution are obtained using the physically based dynamical models. To reduce systematic bias, a simple statistical correction method is then applied to the model output. The FFDs of cherry, peach and pear in South Korea are predicted for the decade 1999-2008 by applying the corrected daily temperature predictions to a phenological thermal-time model. The WRF v3.0 results reflect the detailed topographical effect, despite having cold and warm biases for the warm and cold seasons, respectively. After applying the correction, the mean temperature for early spring (February to April) represents the general pattern of observations well, while preserving the advantages of dynamical downscaling. The FFD predictabilities for the three species of trees are evaluated in terms of qualitative, quantitative and categorical estimations. Although FFDs derived from the corrected WRF results predict the spatial distribution and variation of observations well, the prediction performance lacks statistical significance and adequate predictability. The approach used in the study may nevertheless be helpful in obtaining detailed and useful information about FFD and regional temperature by accounting for physically based atmospheric dynamics, although the seasonal predictability of flowering phenology is not yet high enough. Acknowledgements This work was carried out with the support of the Rural Development Administration Cooperative Research Program for Agriculture Science and Technology Development under Grant Project No. 
PJ009953 and Project No. PJ009353, Republic of Korea. Reference Hur, J., J.-B. Ahn, 2015. Seasonal Prediction of Regional Surface Air Temperature and First-flowering Date over South Korea, Int. J. Climatol., DOI: 10.1002/joc.4323.
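The phenological thermal-time model mentioned above can be sketched as a degree-day accumulation: daily heat above a base temperature is summed, and flowering is predicted on the first day the heat sum reaches a species threshold. The base temperature and threshold below are illustrative placeholders, not the study's calibrated values.

```python
import numpy as np

def first_flowering_day(daily_temp, base_temp=5.0, threshold=120.0):
    """Thermal-time (degree-day) sketch: accumulate daily mean temperature
    above base_temp; flowering is predicted on the first day the heat sum
    reaches threshold. Parameters are illustrative, not the paper's."""
    heat = np.maximum(np.asarray(daily_temp, dtype=float) - base_temp, 0.0)
    cumulative = np.cumsum(heat)
    days = np.nonzero(cumulative >= threshold)[0]
    return int(days[0]) + 1 if days.size else None   # 1-based day of series

# Idealized warming spring: daily mean rises 0.2 C per day from 2 C.
temps = 2.0 + 0.2 * np.arange(120)
print(first_flowering_day(temps))
```

In the study the corrected downscaled daily temperatures play the role of `temps`, so biases in the temperature field propagate directly into the predicted FFD.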
NASA Astrophysics Data System (ADS)
Bennett, J.; David, R. E.; Wang, Q.; Li, M.; Shrestha, D. L.
2016-12-01
Flood forecasting in Australia has historically relied on deterministic forecasting models run only when floods are imminent, with considerable forecaster input and interpretation. These now co-exist with a continually available 7-day streamflow forecasting service (also deterministic) aimed at operational water management applications such as environmental flow releases. The 7-day service is not optimised for flood prediction. We describe progress on developing a system for ensemble streamflow forecasting that is suitable for both flood prediction and water management applications. Precipitation uncertainty is handled through post-processing of Numerical Weather Prediction (NWP) output with a Bayesian rainfall post-processor (RPP). The RPP corrects biases, downscales NWP output, and produces reliable ensemble spread. Ensemble precipitation forecasts are used to force a semi-distributed conceptual rainfall-runoff model. Uncertainty in precipitation forecasts is insufficient to reliably describe streamflow forecast uncertainty, particularly at shorter lead times. We therefore characterise hydrological prediction uncertainty separately with a 4-stage error model. The error model relies on data transformation to ensure residuals are homoscedastic and symmetrically distributed. To ensure streamflow forecasts are accurate and reliable, the residuals are modelled using a mixture-Gaussian distribution with distinct parameters for the rising and falling limbs of the forecast hydrograph. In a case study of the Murray River in south-eastern Australia, we show that ensemble predictions of floods generally have lower errors than deterministic forecasting methods. We also discuss some of the challenges in operationalising short-term ensemble streamflow forecasts in Australia, including meeting the need for accurate predictions across all flow ranges and comparing forecasts generated by event and continuous hydrological models.
Probabilistic Predictions of PM2.5 Using a Novel Ensemble Design for the NAQFC
NASA Astrophysics Data System (ADS)
Kumar, R.; Lee, J. A.; Delle Monache, L.; Alessandrini, S.; Lee, P.
2017-12-01
Poor air quality (AQ) in the U.S. is estimated to cause about 60,000 premature deaths, with associated costs of $100B-$150B annually. To reduce such losses, the National AQ Forecasting Capability (NAQFC) at the National Oceanic and Atmospheric Administration (NOAA) produces forecasts of ozone, particulate matter less than 2.5 μm in diameter (PM2.5), and other pollutants so that advance notice and warnings can be issued to help individuals and communities limit exposure and reduce air pollution-caused health problems. The current NAQFC, based on the U.S. Environmental Protection Agency Community Multi-scale AQ (CMAQ) modeling system, provides only deterministic AQ forecasts and does not quantify the uncertainty associated with the predictions, which could be large due to the chaotic nature of the atmosphere and the nonlinearity of atmospheric chemistry. This project aims to take the NAQFC a step further in the direction of probabilistic AQ prediction by exploring and quantifying the potential value of ensemble predictions of PM2.5, perturbing three key aspects of PM2.5 modeling: the meteorology, the emissions, and the CMAQ secondary organic aerosol formulation. This presentation focuses on the impact of meteorological variability, which is represented by three members of NOAA's Short-Range Ensemble Forecast (SREF) system that were down-selected by hierarchical cluster analysis. These three SREF members provide the physics configurations and initial/boundary conditions for the Weather Research and Forecasting (WRF) model runs that generate the output variables required to drive CMAQ but missing from operational SREF output. We conducted WRF runs for January, April, July, and October 2016 to capture seasonal changes in meteorology. Emissions of trace gases and aerosols were estimated from the WRF output via the Sparse Matrix Operator Kernel Emissions (SMOKE) system. The WRF and SMOKE output drive a 3-member CMAQ mini-ensemble of once-daily, 48-h PM2.5 forecasts for the same four months. 
The CMAQ mini-ensemble is evaluated against both observations and the current operational deterministic NAQFC products, and analyzed to assess the impact of meteorological biases on PM2.5 variability. Quantifying the PM2.5 prediction uncertainty is expected to be a key factor in supporting cost-effective decision-making while protecting public health.
NASA Astrophysics Data System (ADS)
Bellos, V.; Mahmoodian, M.; Leopold, U.; Torres-Matallana, J. A.; Schutz, G.; Clemens, F.
2017-12-01
Surrogate models help to decrease the run-time of computationally expensive, detailed models. Recent studies show that Gaussian Process Emulators (GPE) are promising techniques in the field of urban drainage modelling. However, this study focuses on developing a GPE-based surrogate model for later application in Real Time Control (RTC), using input and output time series of a complex simulator. The case study is an urban drainage catchment in Luxembourg. A detailed simulator, implemented in InfoWorks ICM, is used to generate 120 input-output ensembles, of which 100 are used for training the emulator and 20 for validation of the results. An ensemble of historical rainfall events with 2-hour duration and 10-minute time steps is used as the input data. Two example outputs are selected: wastewater volume and total COD concentration in a storage tank in the network. The results of the emulator are tested with unseen random rainfall events from the ensemble dataset. The emulator is approximately 1000 times faster than the original simulator for this small case study. Whereas the overall patterns of the simulator are matched by the emulator, in some cases the emulator deviates from the simulator. To quantify the accuracy of the emulator relative to the original simulator, the Nash-Sutcliffe efficiency (NSE) between emulator and simulator is calculated for the unseen rainfall scenarios. The NSE for tank volume ranges from 0.88 to 0.99 with a mean of 0.95, whereas for COD it ranges from 0.71 to 0.99 with a mean of 0.92. The emulator predicts the tank volume with higher accuracy because the relationship between rainfall intensity and tank volume is linear. For COD, which behaves non-linearly, the predictions are less accurate and more uncertain, in particular when rainfall intensity increases. These predictions were improved by including a larger amount of training data for the higher rainfall intensities.
It was observed that the accuracy of the emulator predictions depends on the design of the ensemble training dataset and the amount of data supplied. Finally, more investigation is required to test the applicability of this type of fast emulator to model-based RTC applications, in which a limited number of inputs and outputs are considered over a short prediction horizon.
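The NSE criterion used to score the emulator against the simulator is straightforward to compute; a minimal sketch with invented series values:

```python
import numpy as np

def nash_sutcliffe(simulated, observed):
    """Nash-Sutcliffe efficiency: 1 minus the ratio of model error variance
    to the variance of the observations. 1 is a perfect match; 0 means the
    model is no better than predicting the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((simulated - observed) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(nash_sutcliffe(obs, obs))        # 1.0 for a perfect emulator
print(nash_sutcliffe(obs + 0.5, obs))  # 0.875: penalized by a constant bias
```

Here the "observed" series plays the role of the detailed simulator output and the "simulated" series the emulator output, matching how the study applies NSE.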
Predictive Measures of Locomotor Performance on an Unstable Walking Surface
NASA Technical Reports Server (NTRS)
Bloomberg, J. J.; Peters, B. T.; Mulavara, A. P.; Caldwell, E. E.; Batson, C. D.; De Dios, Y. E.; Gadd, N. E.; Goel, R.; Wood, S. J.; Cohen, H. S.;
2016-01-01
Locomotion requires integration of visual, vestibular, and somatosensory information to produce the appropriate motor output to control movement. The degree to which these sensory inputs are weighted and reorganized in discordant sensory environments varies by individual and may be predictive of the ability to adapt to novel environments. The goals of this project are to: 1) develop a set of predictive measures capable of identifying individual differences in sensorimotor adaptability, and 2) use this information to inform the design of training countermeasures designed to enhance the ability of astronauts to adapt to gravitational transitions improving balance and locomotor performance after a Mars landing and enhancing egress capability after a landing on Earth.
Modelling the nitrogen loadings from large yellow croaker (Larimichthys crocea) cage aquaculture.
Cai, Huiwen; Ross, Lindsay G; Telfer, Trevor C; Wu, Changwen; Zhu, Aiyi; Zhao, Sheng; Xu, Meiying
2016-04-01
Large yellow croaker (LYC) cage farming is a rapidly developing industry in the coastal areas of the East China Sea. However, little is known about the environmental nutrient loadings resulting from the current aquaculture practices for this species. In this study, a nitrogenous waste model was developed for LYC based on thermal growth and bioenergetic theories. The growth model produced a good fit with the measured data of the growth trajectory of the fish. The total, dissolved and particulate nitrogen outputs were estimated to be 133, 51 and 82 kg N tonne(-1) of fish production, respectively, with daily dissolved and particulate nitrogen outputs varying from 69 to 104 and 106 to 181 mg N fish(-1), respectively, during the 2012 operational cycle. Greater than 80 % of the nitrogen input from feed was predicted to be lost to the environment, resulting in low nitrogen retention (<20 %) in the fish tissues. Ammonia contributed the greatest proportion (>85 %) of the dissolved nitrogen generated from cage farming. This nitrogen loading assessment model is the first to address nitrogenous output from LYC farming and could be a valuable tool to examine the effects of management and feeding practices on waste from cage farming. The application of this model could help improve the scientific understanding of offshore fish farming systems. Furthermore, the model predicts that a 63 % reduction in nitrogenous waste production could be achieved by switching from the use of trash fish for feed to the use of pelleted feed.
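The reported loadings imply a simple nitrogen mass balance: whatever feed nitrogen is not lost as dissolved or particulate waste is retained in fish tissue. The sketch below reproduces the total-output figure and the <20 % retention; the feed nitrogen input of 160 kg N per tonne is an assumed round number, not a value from the paper.

```python
def nitrogen_budget(feed_n_input, dissolved_out, particulate_out):
    """Simple nitrogen mass balance for a cage farm (kg N per tonne of fish):
    nitrogen not lost as dissolved or particulate waste is assumed retained."""
    total_out = dissolved_out + particulate_out
    retention = (feed_n_input - total_out) / feed_n_input
    return total_out, retention

# Reported loadings: 51 kg dissolved and 82 kg particulate N per tonne.
# Feed input of 160 kg N per tonne is an illustrative assumption.
total, retention = nitrogen_budget(160.0, 51.0, 82.0)
print(total)             # 133.0 kg N per tonne, matching the reported total
print(retention < 0.20)  # True: consistent with <20 % retention
```

Under this assumed feed input, more than 80 % of the nitrogen is lost to the environment, in line with the abstract's figures.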
Adam-Poupart, Ariane; Brand, Allan; Fournier, Michel; Jerrett, Michael; Smargiassi, Audrey
2014-09-01
Ambient air ozone (O3) is a pulmonary irritant that has been associated with respiratory health effects including increased lung inflammation and permeability, airway hyperreactivity, respiratory symptoms, and decreased lung function. Estimation of O3 exposure is a complex task because the pollutant exhibits complex spatiotemporal patterns. To refine the quality of exposure estimation, various spatiotemporal methods have been developed worldwide. We sought to compare the accuracy of three spatiotemporal models to predict summer ground-level O3 in Quebec, Canada. We developed a land-use mixed-effects regression (LUR) model based on readily available data (air quality and meteorological monitoring data, road networks information, latitude), a Bayesian maximum entropy (BME) model incorporating both O3 monitoring station data and the land-use mixed model outputs (BME-LUR), and a kriging method model based only on available O3 monitoring station data (BME kriging). We performed leave-one-station-out cross-validation and visually assessed the predictive capability of each model by examining the mean temporal and spatial distributions of the average estimated errors. The BME-LUR was the best predictive model (R2 = 0.653) with the lowest root mean-square error (RMSE = 7.06 ppb), followed by the LUR model (R2 = 0.466, RMSE = 8.747) and the BME kriging model (R2 = 0.414, RMSE = 9.164). Our findings suggest that errors of estimation in the interpolation of O3 concentrations with BME can be greatly reduced by incorporating outputs from a LUR model developed with readily available data.
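Leave-one-station-out cross-validation of this kind can be sketched with a deliberately simple predictor standing in for the LUR/BME models: each station's value is predicted from the remaining stations and the errors are summarized as an RMSE. The station values below are invented.

```python
import numpy as np

def loso_rmse(values):
    """Leave-one-station-out sketch: predict each station from the mean of
    the remaining stations (a stand-in for the LUR, BME-LUR, or kriging
    predictors) and report the cross-validated RMSE."""
    values = np.asarray(values, dtype=float)
    errors = np.empty(values.size)
    for i in range(values.size):
        others = np.delete(values, i)          # withhold station i
        errors[i] = values[i] - others.mean()  # predict it from the rest
    return np.sqrt(np.mean(errors ** 2))

ozone_ppb = np.array([28.0, 31.0, 30.0, 33.0, 29.0, 35.0])  # invented stations
print(round(loso_rmse(ozone_ppb), 3))
```

In the study, the withheld station would instead be predicted by refitting the full spatiotemporal model, but the error-accounting structure is the same.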
Ultra-fast consensus of discrete-time multi-agent systems with multi-step predictive output feedback
NASA Astrophysics Data System (ADS)
Zhang, Wenle; Liu, Jianchang
2016-04-01
This article addresses the ultra-fast consensus problem of high-order discrete-time multi-agent systems based on a unified consensus framework. A novel multi-step predictive output mechanism is proposed under a directed communication topology containing a spanning tree. By predicting the outputs of a network several steps ahead and adding this information into the consensus protocol, it is shown that the asymptotic convergence factor is improved by a power of q + 1 compared to the routine consensus. The difficult problem of selecting the optimal control gain is addressed by introducing a variable called the convergence step. In addition, ultra-fast formation achievement is studied on the basis of this new consensus protocol. Finally, ultra-fast consensus with respect to a reference model and robust consensus are discussed. Simulations are performed to illustrate the effectiveness of the theoretical results.
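The convergence-factor claim can be illustrated on a toy network: if one routine protocol step contracts disagreement by a factor r, driving the protocol with a (q+1)-step output prediction behaves like r^(q+1) per step. The weight matrix below is an assumed example, not taken from the paper.

```python
import numpy as np

# Row-stochastic weights for a 3-agent directed cycle (contains a spanning tree).
W = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])

def disagreement(x):
    """Spread between the most- and least-advanced agents."""
    return np.max(x) - np.min(x)

x0 = np.array([1.0, 5.0, 9.0])   # invented initial agent states
q = 2                            # predict q+1 = 3 steps ahead

x_routine = np.linalg.matrix_power(W, 10) @ x0             # 10 routine steps
x_predictive = np.linalg.matrix_power(W, 10 * (q + 1)) @ x0  # effect of q+1 power

print(disagreement(x_predictive) < disagreement(x_routine))  # faster consensus
```

Raising W to the (q+1)-th power per step raises its second-largest eigenvalue modulus to that power, which is the mechanism behind the improved convergence factor.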
Handling Input and Output for COAMPS
NASA Technical Reports Server (NTRS)
Fitzpatrick, Patrick; Tran, Nam; Li, Yongzuo; Anantharaj, Valentine
2007-01-01
Two suites of software have been developed to handle the input and output of the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS), a regional atmospheric model developed by the Navy for simulating and predicting weather. Typically, the initial and boundary conditions for COAMPS are provided by a flat-file representation of the Navy's global model, and additional algorithms are needed to run COAMPS with other global models. One of the present suites satisfies this need for running COAMPS using the Global Forecast System (GFS) model of the National Oceanic and Atmospheric Administration. The first step in running COAMPS, downloading GFS data from an Internet file-transfer-protocol (FTP) server computer of the National Centers for Environmental Prediction (NCEP), is performed by one of the programs (SSC-00273) in this suite. The GFS data, which are in gridded binary (GRIB) format, are then converted to a COAMPS-compatible format by another program in the suite (SSC-00278). Once a forecast is complete, still another program in the suite (SSC-00274) sends the output data to a different server computer. The second suite of software (SSC-00275) addresses the need to ingest up-to-date land-use-and-land-cover (LULC) data into COAMPS for use in specifying typical climatological values of such surface parameters as albedo, aerodynamic roughness, and ground wetness. This suite includes (1) a program to process LULC data derived from observations by the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments aboard NASA's Terra and Aqua satellites; (2) programs to derive new climatological parameters for the 17-land-use-category MODIS data; and (3) a modified version of a FORTRAN subroutine to be used by COAMPS. The MODIS data files are processed to reformat them into a compressed American Standard Code for Information Interchange (ASCII) format used by COAMPS for efficient processing.
A Flexible Spatio-Temporal Model for Air Pollution with Spatial and Spatio-Temporal Covariates.
Lindström, Johan; Szpiro, Adam A; Sampson, Paul D; Oron, Assaf P; Richards, Mark; Larson, Tim V; Sheppard, Lianne
2014-09-01
The development of models that provide accurate spatio-temporal predictions of ambient air pollution at small spatial scales is of great importance for the assessment of potential health effects of air pollution. Here we present a spatio-temporal framework that predicts ambient air pollution by combining data from several different monitoring networks and deterministic air pollution model(s) with geographic information system (GIS) covariates. The model presented in this paper has been implemented in an R package, SpatioTemporal, available on CRAN. The model is used by the EPA funded Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air) to produce estimates of ambient air pollution; MESA Air uses the estimates to investigate the relationship between chronic exposure to air pollution and cardiovascular disease. In this paper we use the model to predict long-term average concentrations of NOx in the Los Angeles area during a ten-year period. Predictions are based on measurements from the EPA Air Quality System, MESA Air specific monitoring, and output from a source dispersion model for traffic related air pollution (Caline3QHCR). Accuracy in predicting long-term average concentrations is evaluated using an elaborate cross-validation setup that accounts for a sparse spatio-temporal sampling pattern in the data, and adjusts for temporal effects. The predictive ability of the model is good, with cross-validated R2 of approximately 0.7 at subject sites. Replacing four geographic covariate indicators of traffic density with the Caline3QHCR dispersion model output resulted in very similar prediction accuracy from a more parsimonious and more interpretable model. Adding traffic-related geographic covariates to the model that included Caline3QHCR did not further improve the prediction accuracy.
Real-time implementation of biofidelic SA1 model for tactile feedback.
Russell, A F; Armiger, R S; Vogelstein, R J; Bensmaia, S J; Etienne-Cummings, R
2009-01-01
In order for the functionality of an upper-limb prosthesis to approach that of a real limb, it must be able to convey sensory feedback to the user accurately and intuitively. This paper presents results of the real-time implementation of a 'biofidelic' model that describes mechanotransduction in Slowly Adapting Type 1 (SA1) afferent fibers. The model accurately predicts the timing of action potentials for arbitrary force or displacement stimuli, and its output can be used as stimulation times for peripheral nerve stimulation by a neuroprosthetic device. The model's performance was verified by comparing the predicted action potential (or spike) outputs against measured spike outputs for different vibratory stimuli. Furthermore, experiments were conducted to show that, like real SA1 fibers, the model's spike rate varies with input pressure and that a periodic 'tapping' stimulus evokes periodic spike outputs.
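The published SA1 model is far more detailed, but its role can be mimicked by a toy integrate-and-fire stand-in whose spike times could serve as peripheral stimulation times; all parameters here are invented for illustration, not drawn from the paper.

```python
import numpy as np

def sa1_spike_times(stimulus, dt=0.001, gain=50.0, threshold=1.0, tau=0.05):
    """Toy stand-in for the biofidelic SA1 model: a leaky integrate-and-fire
    unit driven by a force stimulus; emitted spike times could be used as
    nerve stimulation times. gain/threshold/tau are illustrative only."""
    v = 0.0
    spikes = []
    for i, s in enumerate(stimulus):
        v += dt * (gain * s - v / tau)   # leaky integration of the stimulus
        if v >= threshold:
            spikes.append(i * dt)        # record an action potential
            v = 0.0                      # reset after firing
    return spikes

t = np.arange(0.0, 1.0, 0.001)
weak = sa1_spike_times(np.full(t.size, 0.5))    # light sustained pressure
strong = sa1_spike_times(np.full(t.size, 0.8))  # firmer sustained pressure
print(len(strong) > len(weak))  # firing rate grows with pressure, as in SA1
```

Like the experiments described above, the toy unit fires faster for stronger sustained input, though it captures none of the adaptation dynamics of the real SA1 model.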
The MER/CIP Portal for Ground Operations
NASA Technical Reports Server (NTRS)
Chan, Louise; Desai, Sanjay; DOrtenzio, Matthew; Filman, Robtert E.; Heher, Dennis M.; Hubbard, Kim; Johan, Sandra; Keely, Leslie; Magapu, Vish; Mak, Ronald
2003-01-01
We developed the Mars Exploration Rover/Collaborative Information Portal (MER/CIP) to facilitate MER operations. MER/CIP provides a centralized, one-stop delivery platform integrating science and engineering data from several distributed heterogeneous data sources. Key issues for MER/CIP include: 1) Scheduling and schedule reminders; 2) Tracking the status of daily predicted outputs; 3) Finding and analyzing data products; 4) Collaboration; 5) Announcements; 6) Personalization.
Statistical Signal Models and Algorithms for Image Analysis
1984-10-25
In this report, two-dimensional stochastic linear models are used in developing algorithms for image analysis such as classification, segmentation, and object detection in images characterized by textured backgrounds. These models generate two-dimensional random processes as outputs to which statistical inference procedures can naturally be applied. A common thread throughout our algorithms is the interpretation of the inference procedures in terms of linear prediction.
Donor age is a predictor of early low output after heart transplantation.
Fujino, Takeo; Kinugawa, Koichiro; Nitta, Daisuke; Imamura, Teruhiko; Maki, Hisataka; Amiya, Eisuke; Hatano, Masaru; Kimura, Mitsutoshi; Kinoshita, Osamu; Nawata, Kan; Komuro, Issei; Ono, Minoru
2016-05-01
Using hearts from marginal donors could be related to an increased risk of primary graft dysfunction and poor long-term survival. However, factors associated with delayed myocardial recovery after heart transplantation (HTx) remain unknown. We sought to clarify risk factors that predict early low output after HTx, and investigated whether early low output affects mid-term graft function. We retrospectively analyzed patients who had undergone HTx at The University of Tokyo Hospital. We defined early low output patients as those whose cardiac index (CI) was <2.2 L/min/m(2) despite the use of intravenous inotropes at 1 week after HTx. We included 45 consecutive HTx recipients, and classified 11 patients into the early low output group and the others into the early preserved output group. Univariable logistic analysis found that donor age was the only significant factor predicting early low output (odds ratio 1.107, 95% confidence interval 1.034-1.210, p=0.002). The CI of early low output patients gradually increased and caught up with that of early preserved output patients at 2 weeks after HTx (2.4±0.6 L/min/m(2) in the early low output group vs 2.5±0.5 L/min/m(2) in the early preserved output group, p=0.684). Plasma B-type natriuretic peptide concentration of early low output patients was higher (1118.5±1250.2 pg/ml vs 526.4±399.5 pg/ml, p=0.033 at 1 week; 703.6±518.4 pg/ml vs 464.6±509.0 pg/ml, p=0.033 at 2 weeks; and 387.7±231.9 pg/ml vs 249.4±209.5 pg/ml, p=0.010 at 4 weeks after HTx), and it came down to the level of early preserved output patients at 12 weeks after HTx. Donor age was a predictor of early low output after HTx, so caution is warranted when accepting hearts from older donors. However, hemodynamic parameters of early low output patients gradually caught up with those of early preserved output patients. Copyright © 2015 Japanese College of Cardiology. Published by Elsevier Ltd. All rights reserved.
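Since the reported odds ratio of 1.107 is per year of donor age, it compounds multiplicatively over an age difference; a one-line sketch of that arithmetic:

```python
def odds_ratio_over(years, per_year_or=1.107):
    """A per-year odds ratio compounds multiplicatively over an age gap;
    1.107 is the study's reported univariable OR for donor age."""
    return per_year_or ** years

# A donor 10 years older carries ~2.8-fold higher odds of early low output.
print(round(odds_ratio_over(10), 2))
```

This is the standard reading of a continuous-covariate odds ratio from a logistic model; it assumes the log-odds are linear in donor age across the range considered.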
Computer programs to predict induced effects of jets exhausting into a crossflow
NASA Technical Reports Server (NTRS)
Perkins, S. C., Jr.; Mendenhall, M. R.
1984-01-01
This report is a user's manual for two computer programs developed to predict the induced effects of jets exhausting into a crossflow. Program JETPLT predicts pressures induced on an infinite flat plate by a jet exhausting at angles to the plate, and Program JETBOD, in conjunction with a panel code, predicts pressures induced on a body of revolution by a jet exhausting normal to the surface. Both codes use a potential model of the jet and adjacent surface with empirical corrections for viscous or nonpotential effects. The manual contains a description of the use of both programs, instructions for preparation of input, descriptions of the output, limitations of the codes, and sample cases. In addition, procedures to extend both codes to include additional empirical correlations are described.
Prakash, J; Srinivasan, K
2009-07-01
In this paper, the authors represent the nonlinear system as a family of local linear state-space models, design local PID controllers on the basis of the linear models, and use the weighted sum of the outputs from the local PID controllers (a nonlinear PID controller) to control the nonlinear process. Further, a Nonlinear Model Predictive Controller using the family of local linear state-space models (F-NMPC) has been developed. The effectiveness of the proposed control schemes is demonstrated on a CSTR process, which exhibits dynamic nonlinearity.
A methodology for long range prediction of air transportation
NASA Technical Reports Server (NTRS)
Ayati, M. B.; English, J. M.
1980-01-01
The paper describes a methodology for long-range projection of aircraft fuel requirements. A new treatment of the social and economic factors shaping the future aviation industry is presented, providing an estimate of predicted fuel usage; it includes air traffic forecasts and lead times for producing new engines and aircraft types. An air transportation model is then developed in terms of an abstracted set of variables which represent the entire aircraft industry on a macroscale. This model was evaluated by testing the required output variables against historical data from the past decades.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bessac, Julie; Constantinescu, Emil; Anitescu, Mihai
2018-03-01
We propose a statistical space-time model for predicting atmospheric wind speed based on deterministic numerical weather predictions and historical measurements. We consider a Gaussian multivariate space-time framework that combines multiple sources of past physical model outputs and measurements in order to produce a probabilistic wind speed forecast within the prediction window. We illustrate this strategy on wind speed forecasts during several months in 2012 for a region near the Great Lakes in the United States. The results show that the prediction is improved in the mean-squared sense relative to the numerical forecasts as well as in probabilistic scores. Moreover, the samples are shown to produce realistic wind scenarios based on sample spectra and space-time correlation structure.
Robot trajectory tracking with self-tuning predicted control
NASA Technical Reports Server (NTRS)
Cui, Xianzhong; Shin, Kang G.
1988-01-01
A controller that combines self-tuning prediction and control is proposed for robot trajectory tracking. The controller has two feedback loops: one is used to minimize the prediction error, and the other is designed to make the system output track the set point input. Because the velocity and position along the desired trajectory are given and the future output of the system is predictable, a feedforward loop can be designed for robot trajectory tracking with self-tuning predicted control (STPC). Parameters are estimated online to account for the model uncertainty and the time-varying property of the system. The authors describe the principle of STPC, analyze the system performance, and discuss the simplification of the robot dynamic equations. To demonstrate its utility and power, the controller is simulated for a Stanford arm.
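The online parameter estimation that makes such a predictor self-tuning can be illustrated with a recursive least-squares (RLS) update on a toy first-order plant; the plant, forgetting factor, and variable names are assumptions for illustration, not the robot dynamics or the STPC control law of the paper.

```python
import numpy as np

# Minimal recursive least-squares estimator for a one-step predictor
# y[k] ~ a*y[k-1] + b*u[k-1]; a toy stand-in for the self-tuning
# prediction loop, not the authors' robot controller.
theta = np.zeros(2)            # parameter estimates [a, b]
P = np.eye(2) * 100.0          # estimate covariance

def rls_update(theta, P, phi, y, lam=0.99):
    """Standard RLS update with forgetting factor lam."""
    K = P @ phi / (lam + phi @ P @ phi)        # gain vector
    theta = theta + K * (y - phi @ theta)      # correct by prediction error
    P = (P - np.outer(K, phi @ P)) / lam       # covariance update
    return theta, P

# Simulate a first-order plant y[k] = 0.9*y[k-1] + 0.5*u[k-1].
rng = np.random.default_rng(1)
y, u = 0.0, rng.normal(size=300)
for k in range(1, 300):
    phi = np.array([y, u[k - 1]])
    y_new = 0.9 * y + 0.5 * u[k - 1]
    theta, P = rls_update(theta, P, phi, y_new)
    y = y_new
```

With a persistently exciting input, the estimates converge to the true plant parameters, which is what lets the prediction loop track a time-varying system online.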
A Real-Time Offshore Weather Risk Advisory System
NASA Astrophysics Data System (ADS)
Jolivet, Samuel; Zemskyy, Pavlo; Mynampati, Kalyan; Babovic, Vladan
2015-04-01
Offshore oil and gas operations in South East Asia periodically face extended downtime due to unpredictable weather conditions, including squalls that are accompanied by strong winds, thunder, and heavy rains. This downtime results in financial losses. Hence, a real-time weather risk advisory system is developed to provide the offshore Oil and Gas (O&G) industry with specific weather warnings in support of safety and environmental security. This system provides safe operating windows based on the sensitivity of offshore operations to sea state. Information products for safety and security include the area of squall occurrence for the next 24 hours, the time before a squall strikes, and heavy sea state warnings for the next 3, 6, 12, and 24 hours. These are predicted using radar nowcasts, high-resolution Numerical Weather Prediction (NWP), and Data Assimilation (DA). Radar-based nowcasting leverages the radar data to produce short-term (up to 3 hours) predictions of severe weather events, including squalls and thunderstorms. A sea state approximation is provided by developing a translational model based on these predictions to risk-rank the sensitivity of operations. A high-resolution Weather Research and Forecasting (WRF, an open-source NWP) model is developed for offshore Brunei, Malaysia, and the Philippines. This high-resolution model is optimized and validated for the adaptation of temperate met-ocean parameterizations to tropical conditions. These locally specific parameters are calibrated against federated data to achieve a 24-hour forecast of high-resolution Convective Available Potential Energy (CAPE). CAPE is used as a proxy for the risk of squall occurrence. Spectral decomposition is used to blend the outputs of the nowcast and the forecast in order to assimilate near-real-time weather observations as an implementation of the integration of data sources. This system uses the nowcast for the first 3 hours and then the forecast for prediction horizons of 3, 6, 12, and 24 hours.
The output is a 24-hour window of high-resolution, high-accuracy forecasts leveraging the available data-model integration and CAPE prediction. The system includes dissemination of WRF outputs over the World Wide Web. Components of the system (including the WRF computational engine and results dissemination modules) are deployed in a computational cloud. This approach tends to increase system robustness and sustainability. The creation of such a system to share information between the public and private sectors and across territorial boundaries is an important step towards the next generation of governance for climate risk and extreme weather offshore. The system benefits offshore operators by reducing downtime related to accidents and incidents, eliminating unnecessary hiring costs related to waiting on weather, and improving the efficiency and planning of transport and logistics by providing a rolling weather risk advisory.
Economic Models for Projecting Industrial Capacity for Defense Production: A Review
1983-02-01
macroeconomic forecast to establish the level of civilian final demand; all use the DoD Bridge Table to allocate budget category outlays to industries. Civilian...output table.' 3. Macroeconomic Assumptions and the Prediction of Final Demand All input-output models require as a starting point a prediction of final... macroeconomic forecast of GNP and its components and (2) a methodology to transform these forecast values of consumption, investment, exports, etc. into
NASA Astrophysics Data System (ADS)
Darmon, David
2018-03-01
In the absence of mechanistic or phenomenological models of real-world systems, data-driven models become necessary. The discovery of various embedding theorems in the 1980s and 1990s motivated a powerful set of tools for analyzing deterministic dynamical systems via delay-coordinate embeddings of observations of their component states. However, in many branches of science, the condition of operational determinism is not satisfied, and stochastic models must be brought to bear. For such stochastic models, the tool set developed for delay-coordinate embedding is no longer appropriate, and a new toolkit must be developed. We present an information-theoretic criterion, the negative log-predictive likelihood, for selecting the embedding dimension for a predictively optimal data-driven model of a stochastic dynamical system. We develop a nonparametric estimator for the negative log-predictive likelihood and compare its performance to a recently proposed criterion based on active information storage. Finally, we show how the output of the model selection procedure can be used to compare candidate predictors for a stochastic system to an information-theoretic lower bound.
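A minimal version of such a selection criterion can be sketched with a k-nearest-neighbour predictor and a Gaussian predictive density on a synthetic AR(2) series; the kNN estimator, the Gaussian assumption, and all parameters here are simplifying assumptions for illustration, not the paper's exact nonparametric estimator.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stochastic system: an AR(2) process, so two past values suffice.
n = 2000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 1.2 * x[t - 1] - 0.5 * x[t - 2] + rng.normal(0.0, 0.3)

def neg_log_pred_likelihood(x, dim, k=20):
    """Gaussian negative log-predictive likelihood of a k-nearest-neighbour
    one-step predictor built on delay vectors of length `dim`."""
    # Build delay vectors X[j] = (x[j], ..., x[j+dim-1]) and targets y[j] = x[j+dim].
    X = np.column_stack([x[i:len(x) - dim + i] for i in range(dim)])
    y = x[dim:]
    X, Xte = X[:-500], X[-500:]
    y, yte = y[:-500], y[-500:]
    preds = np.empty(len(yte))
    for j, q in enumerate(Xte):
        idx = np.argsort(np.sum((X - q) ** 2, axis=1))[:k]
        preds[j] = y[idx].mean()
    sigma2 = np.mean((yte - preds) ** 2)
    # NLPL per point for a Gaussian predictive density with variance sigma2.
    return 0.5 * np.log(2 * np.pi * sigma2) + 0.5

scores = {d: neg_log_pred_likelihood(x, d) for d in (1, 2, 3)}
best_dim = min(scores, key=scores.get)
```

On this series, an embedding of one lag cannot capture the AR(2) structure and scores worse, so the criterion selects a dimension of at least two, mirroring the model-selection role described in the abstract.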
Neural networks for satellite remote sensing and robotic sensor interpretation
NASA Astrophysics Data System (ADS)
Martens, Siegfried
Remote sensing of forests and robotic sensor fusion can be viewed, in part, as supervised learning problems, mapping from sensory input to perceptual output. This dissertation develops ARTMAP neural networks for real-time category learning, pattern recognition, and prediction tailored to remote sensing and robotics applications. Three studies are presented. The first two use ARTMAP to create maps from remotely sensed data, while the third uses an ARTMAP system for sensor fusion on a mobile robot. The first study uses ARTMAP to predict vegetation mixtures in the Plumas National Forest based on spectral data from the Landsat Thematic Mapper satellite. While most previous ARTMAP systems have predicted discrete output classes, this project develops new capabilities for multi-valued prediction. On the mixture prediction task, the new network is shown to perform better than maximum likelihood and linear mixture models. The second remote sensing study uses an ARTMAP classification system to evaluate the relative importance of spectral and terrain data for map-making. This project has produced a large-scale map of remotely sensed vegetation in the Sierra National Forest. Network predictions are validated with ground truth data, and maps produced using the ARTMAP system are compared to a map produced by human experts. The ARTMAP Sierra map was generated in an afternoon, while the labor intensive expert method required nearly a year to perform the same task. The robotics research uses an ARTMAP system to integrate visual information and ultrasonic sensory information on a B14 mobile robot. The goal is to produce a more accurate measure of distance than is provided by the raw sensors. ARTMAP effectively combines sensory sources both within and between modalities. The improved distance percept is used to produce occupancy grid visualizations of the robot's environment. 
The maps produced point to specific problems of raw sensory information processing and demonstrate the benefits of using a neural network system for sensor fusion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Behrang, M.A.; Assareh, E.; Ghanbarzadeh, A.
2010-08-15
The main objective of the present study is to predict daily global solar radiation (GSR) on a horizontal surface, based on meteorological variables, using different artificial neural network (ANN) techniques. Daily mean air temperature, relative humidity, sunshine hours, evaporation, and wind speed values between 2002 and 2006 for Dezful city in Iran (32°16'N, 48°25'E) are used in this study. In order to consider the effect of each meteorological variable on daily GSR prediction, the following six combinations of input variables are considered: (I) day of the year, daily mean air temperature, and relative humidity as inputs and daily GSR as output; (II) day of the year, daily mean air temperature, and sunshine hours as inputs and daily GSR as output; (III) day of the year, daily mean air temperature, relative humidity, and sunshine hours as inputs and daily GSR as output; (IV) day of the year, daily mean air temperature, relative humidity, sunshine hours, and evaporation as inputs and daily GSR as output; (V) day of the year, daily mean air temperature, relative humidity, sunshine hours, and wind speed as inputs and daily GSR as output; (VI) day of the year, daily mean air temperature, relative humidity, sunshine hours, evaporation, and wind speed as inputs and daily GSR as output. Multi-layer perceptron (MLP) and radial basis function (RBF) neural networks are applied for daily GSR modeling based on the six proposed combinations. The measured data between 2002 and 2005 are used to train the neural networks, while the data for 214 days from 2006 are used as testing data. The comparison of the results obtained from the ANNs and different conventional GSR prediction (CGSRP) models shows very good improvement (i.e., the predicted values of the best ANN model (MLP-V) have a mean absolute percentage error (MAPE) of about 5.21%, versus 10.02% for the best CGSRP model (CGSRP 5)).
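The MAPE score used to rank the models above is straightforward to compute; the numbers below are invented for illustration and are not the study's data.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Hypothetical measured daily GSR values and two models' predictions.
measured = [18.0, 21.5, 19.2, 25.0]
model_a  = [17.1, 22.0, 18.5, 26.0]   # the better model in this toy example
model_b  = [15.0, 25.0, 16.0, 29.0]   # the worse model

mape_a = mape(measured, model_a)
mape_b = mape(measured, model_b)
```

A lower MAPE indicates a better model, which is how the abstract's 5.21% versus 10.02% comparison should be read.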
Output-Adaptive Tetrahedral Cut-Cell Validation for Sonic Boom Prediction
NASA Technical Reports Server (NTRS)
Park, Michael A.; Darmofal, David L.
2008-01-01
A cut-cell approach to Computational Fluid Dynamics (CFD) that utilizes the median dual of a tetrahedral background grid is described. The discrete adjoint is also calculated, which permits adaptation based on improving the calculation of a specified output (off-body pressure signature) in supersonic inviscid flow. These predicted signatures are compared to wind tunnel measurements on and off the configuration centerline 10 body lengths below the model to validate the method for sonic boom prediction. Accurate mid-field sonic boom pressure signatures are calculated with the Euler equations without the use of hybrid grid or signature propagation methods. Highly-refined, shock-aligned anisotropic grids were produced by this method from coarse isotropic grids created without prior knowledge of shock locations. A heuristic reconstruction limiter provided stable flow and adjoint solution schemes while producing similar signatures to Barth-Jespersen and Venkatakrishnan limiters. The use of cut-cells with an output-based adaptive scheme completely automated this accurate prediction capability after a triangular mesh is generated for the cut surface. This automation drastically reduces the manual intervention required by existing methods.
Economic Dispatch for Microgrid Containing Electric Vehicles via Probabilistic Modeling: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Yin; Gao, Wenzhong; Momoh, James
In this paper, an economic dispatch model with probabilistic modeling is developed for a microgrid. The electric power supply in a microgrid consists of conventional power plants and renewable energy power plants, such as wind and solar power plants. Because of the fluctuation in the output of solar and wind power plants, an empirical probabilistic model is developed to predict their hourly output. According to the different characteristics of wind and solar power plants, the parameters of the probabilistic distributions are further adjusted individually for both. On the other hand, with the growing trend in plug-in hybrid electric vehicles (PHEVs), an integrated microgrid system must also consider the impact of PHEVs. The charging loads from PHEVs as well as the discharging output via the vehicle-to-grid (V2G) method can greatly affect the economic dispatch for all of the micro energy sources in a microgrid. This paper presents an optimization method for economic dispatch in a microgrid considering conventional power plants, renewable power plants, and PHEVs. The simulation results reveal that PHEVs with V2G capability can be an indispensable supplement in a modern microgrid.
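The interplay between a probabilistic renewable model and dispatch cost can be sketched with a Monte Carlo merit-order dispatch; the generator fleet, the Beta-distributed solar availability, and all numbers below are invented for illustration and are not the paper's fitted model or optimization method.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical generators: (capacity in kW, marginal cost in $/kWh),
# dispatched cheapest-first against the load left after renewables.
generators = [(300.0, 0.04), (200.0, 0.07), (150.0, 0.12)]

def dispatch_cost(load_kw, renewable_kw):
    """Merit-order dispatch cost of conventional units for one hour."""
    residual = max(load_kw - renewable_kw, 0.0)
    cost = 0.0
    for cap, price in sorted(generators, key=lambda g: g[1]):
        take = min(cap, residual)
        cost += take * price
        residual -= take
    return cost

# Probabilistic renewable model: Beta-distributed solar availability
# (an illustrative choice, not the paper's empirical distribution).
samples = 100.0 * rng.beta(2.0, 2.0, size=5000)   # kW from a 100 kW array
expected_cost = np.mean([dispatch_cost(450.0, s) for s in samples])
```

Averaging the dispatch cost over samples of renewable output is the basic mechanism by which a probabilistic renewable model feeds into an expected-cost dispatch decision.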
Rebaudo, François; Faye, Emile; Dangles, Olivier
2016-01-01
A large body of literature has recently recognized the role of microclimates in controlling the physiology and ecology of species, yet the relevance of fine-scale climatic data for modeling species performance and distribution remains a matter of debate. Using a 6-year monitoring of three potato moth species, major crop pests in the tropical Andes, we asked whether the spatiotemporal resolution of temperature data affects the predictions of models of moth performance and distribution. For this, we used three different climatic datasets: (i) the WorldClim dataset (global dataset), (ii) air temperature recorded using data loggers (weather station dataset), and (iii) air crop canopy temperature (microclimate dataset). We developed a statistical procedure to calibrate all datasets to monthly and yearly variation in temperatures, while keeping both spatial and temporal variances (air monthly temperature at 1 km² for the WorldClim dataset, air hourly temperature for the weather station dataset, and air minute temperature over 250 m radius disks for the microclimate dataset). Then, we computed pest performances based on these three datasets. Results for temperatures ranging from 9 to 11°C revealed discrepancies in the simulation outputs in both survival and development rates depending on the spatiotemporal resolution of the temperature dataset. Temperature and simulated pest performances were then combined into multiple linear regression models to compare predicted vs. field data. We used an additional set of study sites to test whether the results of our model could be extrapolated over larger scales. Results showed that the model implemented with microclimatic data best predicted observed pest abundances for our study sites, but was less accurate than the global dataset model when performed at larger scales.
Our simulations therefore stress the importance of considering different temperature datasets, depending on the issue to be solved, in order to accurately predict species abundances. In conclusion, keeping in mind that the mismatch between the size of organisms and the scale at which climate data are collected and modeled remains a key issue, temperature dataset selection should be balanced against the desired output spatiotemporal scale to better predict pest dynamics and develop efficient pest management strategies. PMID:27148077
Cox-nnet: An artificial neural network method for prognosis prediction of high-throughput omics data
Ching, Travers; Zhu, Xun
2018-01-01
Artificial neural networks (ANN) are computing architectures with many interconnections of simple neural-inspired computing elements, and have been applied to biomedical fields such as imaging analysis and diagnosis. We have developed a new ANN framework called Cox-nnet to predict patient prognosis from high-throughput transcriptomics data. In 10 TCGA RNA-Seq data sets, Cox-nnet achieves the same or better predictive accuracy compared to other methods, including Cox proportional hazards regression (with LASSO, ridge, and minimax concave penalties), Random Survival Forests, and CoxBoost. Cox-nnet also reveals richer biological information, at both the pathway and gene levels. The outputs from the hidden layer nodes provide an alternative approach for survival-sensitive dimension reduction. In summary, we have developed a new method for accurate and efficient prognosis prediction on high-throughput data, with functional biological insights. The source code is freely available at https://github.com/lanagarmire/cox-nnet. PMID:29634719
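The survival objective behind this kind of network is the Cox negative log partial likelihood evaluated on the model's output scores; the function below is a generic sketch of that loss (with no handling of tied event times), not Cox-nnet's exact implementation.

```python
import numpy as np

def cox_neg_log_partial_likelihood(scores, times, events):
    """Cox negative log partial likelihood.

    scores: model risk scores, one per subject.
    times:  observed times (event or censoring).
    events: 1 if the event occurred, 0 if censored.
    """
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(-np.asarray(times, dtype=float))   # descending time
    s = scores[order]
    e = np.asarray(events)[order].astype(bool)
    # Risk set of subject i = all subjects with time >= t_i, i.e. a prefix
    # in descending-time order; accumulate log-sum-exp over that prefix.
    log_risk = np.logaddexp.accumulate(s)
    return float(-np.sum(s[e] - log_risk[e]))
```

Minimizing this quantity over the network's output scores is what ties the hidden-layer representation to survival outcomes.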
Modeling a multivariable reactor and on-line model predictive control.
Yu, D W; Yu, D L
2005-10-01
A nonlinear first-principles model is developed for a laboratory-scale multivariable chemical reactor rig in this paper, and on-line model predictive control (MPC) is implemented on the rig. The reactor has three output variables (temperature, pH, and dissolved oxygen) with nonlinear dynamics, and is therefore used as a pilot system for the biochemical industry. A nonlinear discrete-time model is derived for each of the three output variables, and their model parameters are estimated from the real data using an adaptive optimization method. The developed model is used in a nonlinear MPC scheme. An accurate multistep-ahead prediction is obtained for MPC, where the extended Kalman filter is used to estimate unknown system states. The on-line control is implemented and a satisfactory tracking performance is achieved. The MPC is compared with three decentralized PID controllers and the advantage of the nonlinear MPC over the PID is clearly shown.
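The receding-horizon idea (choose the current input by minimizing a multistep-ahead tracking cost under the model) can be sketched on a toy first-order model; the plant, horizon length, grid search, and cost weights are all illustrative assumptions, not the reactor model or the paper's optimizer.

```python
import numpy as np

# Toy single-output discrete model x[k+1] = 0.9*x[k] + 0.1*u[k];
# a stand-in for the reactor model, used only to show the idea of
# minimizing a multistep-ahead tracking cost.
def horizon_cost(x0, u, setpoint, N=10):
    """Quadratic tracking cost of holding input u for N steps."""
    x, cost = x0, 0.0
    for _ in range(N):
        x = 0.9 * x + 0.1 * u
        cost += (x - setpoint) ** 2 + 0.01 * u ** 2
    return cost

def mpc_step(x0, setpoint):
    """Grid-search the constant input minimizing the horizon cost."""
    candidates = np.linspace(-10.0, 10.0, 401)
    return min(candidates, key=lambda u: horizon_cost(x0, u, setpoint))

u_star = mpc_step(x0=0.0, setpoint=1.0)
```

In a real MPC scheme the optimization is over an input sequence and only the first move is applied before re-optimizing; the grid search here simply makes the cost-minimization step concrete.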
NASA Technical Reports Server (NTRS)
Bretherton, Christopher S.
2002-01-01
The goal of this project was to compare observations of marine and arctic boundary layers with: (1) parameterization systems used in climate and weather forecast models; and (2) two- and three-dimensional eddy-resolving large-eddy simulation (LES) models for turbulent fluid flow. Based on this comparison, we hoped to better understand, predict, and parameterize the boundary layer structure and cloud amount, type, and thickness as functions of the large-scale conditions that are predicted by global climate models. The principal achievements of the project were as follows: (1) development of a novel boundary layer parameterization for large-scale models that better represents the physical processes in marine boundary layer clouds; and (2) comparison of column output from the ECMWF global forecast model with observations from the SHEBA experiment. Overall, the forecast model did predict most of the major precipitation events and synoptic variability observed over the year of observation at the SHEBA ice camp.
NASA Astrophysics Data System (ADS)
Mandal, Sumantra; Sivaprasad, P. V.; Venugopal, S.; Murthy, K. P. N.
2006-09-01
An artificial neural network (ANN) model is developed to predict the constitutive flow behaviour of austenitic stainless steels during hot deformation. The input parameters are alloy composition and process variables whereas flow stress is the output. The model is based on a three-layer feed-forward ANN with a back-propagation learning algorithm. The neural network is trained with an in-house database obtained from hot compression tests on various grades of austenitic stainless steels. The performance of the model is evaluated using a wide variety of statistical indices. Good agreement between experimental and predicted data is obtained. The correlation between individual alloying elements and high temperature flow behaviour is investigated by employing the ANN model. The results are found to be consistent with the physical phenomena. The model can be used as a guideline for new alloy development.
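A three-layer feed-forward network trained with back-propagation, the architecture named above, can be written out in a few lines of NumPy; the toy two-input target standing in for composition/process variables, the network size, and the learning rate are all assumptions for illustration, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic inputs (e.g. scaled strain and temperature) and a smooth
# toy "flow stress" surface to fit; purely illustrative data.
X = rng.uniform(-1.0, 1.0, size=(256, 2))
y = np.sin(2.0 * X[:, :1]) + 0.5 * X[:, 1:]

# Three-layer net: 2 inputs -> 16 tanh hidden units -> 1 linear output.
W1 = rng.normal(0.0, 0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, size=(16, 1)); b2 = np.zeros(1)

lr = 0.1
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)          # hidden layer activations
    out = H @ W2 + b2                 # predicted flow stress
    err = out - y
    # Back-propagate the mean squared-error gradient.
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1.0 - H ** 2)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

The trained network reduces the fitting error far below the variance of the target, which is the same qualitative behaviour the abstract reports for the flow-stress model.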
SU-E-J-191: Motion Prediction Using Extreme Learning Machine in Image Guided Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jia, J; Cao, R; Pei, X
Purpose: Real-time motion tracking is a critical issue in image guided radiotherapy due to the time latency caused by image processing and system response. It is therefore necessary to predict the future position of the respiratory motion and the tumor location quickly and accurately. Methods: The prediction of respiratory position was done based on the positioning and tracking module in the ARTS-IGRT system, which was developed by the FDS Team (www.fds.org.cn). An approach involving the extreme learning machine (ELM) was adopted to predict the future respiratory position as well as the tumor's location by training on the past trajectories. For the training process, a feed-forward neural network with one single hidden layer was used for the learning. First, the number of hidden nodes was determined for the single-hidden-layer feed-forward network (SLFN). Then the input weights and hidden layer biases of the SLFN were randomly assigned to calculate the hidden neuron output matrix. Finally, the predicted movements were obtained by applying the output weights and compared with the actual movements. Breathing movement acquired from external infrared markers was used to test the prediction accuracy, and implanted marker movement for prostate cancer was used to test the implementation of the tumor motion prediction. Results: The agreement between the predicted motion and the actual motion was tested. Five volunteers with different breathing patterns were tested. The average prediction time was 0.281 s. The standard deviation of the prediction accuracy was 0.002 for the respiratory motion and 0.001 for the tumor motion. Conclusion: The extreme learning machine method can provide an accurate and fast prediction of the respiratory motion and the tumor location, and can therefore meet the requirements of real-time tumor tracking in image guided radiotherapy.
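The ELM recipe in the abstract (fix random input weights and biases, compute the hidden neuron output matrix, then solve the output weights in one shot) can be sketched on a synthetic breathing trace; the sinusoidal signal, lag structure, and network size are assumptions, not the ARTS-IGRT implementation.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic breathing trace: ~4 s cycle sampled every 0.1 s.
t = np.arange(0.0, 60.0, 0.1)
signal = np.sin(2.0 * np.pi * t / 4.0)

# Predict the position 3 samples (0.3 s) ahead from the last 5 samples.
lag, lead = 5, 3
n = len(signal) - lag - lead + 1
X = np.column_stack([signal[i:i + n] for i in range(lag)])
y = signal[lag - 1 + lead:]

# ELM: random, fixed hidden layer; output weights via pseudoinverse.
W = rng.normal(size=(lag, 40))
b = rng.normal(size=40)
H = np.tanh(X @ W + b)              # hidden neuron output matrix
beta = np.linalg.pinv(H) @ y        # one-shot least-squares solve

pred = np.tanh(X @ W + b) @ beta
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

Because training reduces to a single pseudoinverse, the output weights are obtained without iterative back-propagation, which is what makes ELM attractive when prediction latency matters.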
Enterocutaneous Fistulae: Etiology, Treatment, and Outcome – A Study from South India
Kumar, Prakash; Maroju, Nanda K.; Kate, Vikram
2011-01-01
Background/Aim: Enterocutaneous fistula (ECF) is a difficult condition managed in the surgical wards and is associated with significant morbidity and mortality. Sepsis, malnutrition, and electrolyte abnormalities constitute the classical triad of complications of ECF. Sepsis with malnutrition is the leading cause of death in cases of ECF. Although it is a common condition, no recent report on the profile of patients with ECF has been documented in the literature from the southern part of India. Materials and Methods: All consecutive patients who developed or presented with ECF during the study period were included in the study. The etiology, anatomic distribution, fistula output, clinical course, complications, predictive factors for spontaneous closure, and outcomes for patients with ECF were studied. Results: A total of 41 patients were included in this prospective observational study, of which 34 were males and 7 were females. About 95% of ECFs were postoperative. The ileum was found to be the most common site of ECF. Also, 49% of fistulas were high output and 51% were low output. Serum albumin levels correlated significantly with fistula healing and mortality. Surgical intervention was required in 41% of patients. Conclusion: Most ECFs are encountered in the postoperative period. Serum albumin levels can predict fistula healing and mortality. Conservative management should be the first line of treatment. Mortality in patients with ECF continues to be significant and is commonly related to malnutrition and sepsis. PMID:22064337
NASA Astrophysics Data System (ADS)
Frem, Dany
2017-01-01
In the present study, a relationship is proposed that is capable of predicting the output of the plate dent test. It is shown that the initial density; the condensed-phase heat of formation; the numbers of carbon (C), nitrogen (N), and oxygen (O) atoms; and the composition molecular weight (MW) are the most important parameters needed in order to accurately predict the absolute dent depth produced on 1018 cold-rolled steel by a detonating organic explosive. The estimated dent depth values can be used to predict the detonation pressure (P) of high explosives; furthermore, we show that a correlation exists between the dent depth and the Gurney velocity parameter. The new correlation is used to accurately estimate the Gurney velocity for several C-H-N-O explosive compositions.
Predicting human protein function with multi-task deep neural networks.
Fa, Rui; Cozzetto, Domenico; Wan, Cen; Jones, David T
2018-01-01
Machine learning methods for protein function prediction are urgently needed, especially now that a substantial fraction of known sequences remains unannotated despite the extensive use of functional assignments based on sequence similarity. One major bottleneck supervised learning faces in protein function prediction is the structured, multi-label nature of the problem, because biological roles are represented by lists of terms from hierarchically organised controlled vocabularies such as the Gene Ontology. In this work, we build on recent developments in the area of deep learning and investigate the usefulness of multi-task deep neural networks (MTDNN), which consist of shared upstream layers on which are stacked, in parallel, as many independent modules (additional hidden layers with their own output units) as there are output GO terms (the tasks). MTDNN learns individual tasks partially using shared representations and partially from task-specific characteristics. When no close homologues with experimentally validated functions can be identified, MTDNN gives more accurate predictions than baseline methods based on annotation frequencies in public databases or homology transfers. More importantly, the results show that MTDNN binary classification accuracy is higher than that of alternative machine-learning-based methods that do not exploit commonalities and differences among prediction tasks. Interestingly, compared with a single-task predictor, the performance improvement is not linearly correlated with the number of tasks in MTDNN; medium-sized models provide more improvement in our case. One advantage of MTDNN is that, given a set of features, it does not require the bootstrap feature selection procedure that traditional machine learning algorithms use. Overall, the results indicate that the proposed MTDNN algorithm improves the performance of protein function prediction.
On the other hand, there is still substantial room for deep learning techniques to further enhance prediction ability.
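The described architecture (shared upstream layers feeding one parallel module and output unit per GO term) can be sketched at the shape level; the weights below are random and the layer sizes invented, so this shows only the structure, not a trained predictor.

```python
import numpy as np

rng = np.random.default_rng(6)

# Invented sizes: feature vector, shared layer, per-task module, task count.
n_features, n_shared, n_task_hidden, n_tasks = 100, 32, 8, 5

# One shared layer, then one independent module per GO term (task).
W_shared = rng.normal(0.0, 0.1, size=(n_features, n_shared))
task_modules = [
    (rng.normal(0.0, 0.1, size=(n_shared, n_task_hidden)),   # task hidden layer
     rng.normal(0.0, 0.1, size=(n_task_hidden, 1)))          # task output unit
    for _ in range(n_tasks)
]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Return one probability per task (GO term) for one protein."""
    shared = np.tanh(x @ W_shared)            # representation shared by all tasks
    return np.array([float(sigmoid(np.tanh(shared @ Wh) @ Wo))
                     for Wh, Wo in task_modules])

probs = predict(rng.normal(size=n_features))
```

Every task reads the same shared representation but owns its hidden layer and output unit, which is the mechanism that lets the model exploit commonalities while keeping task-specific behaviour.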
Measured and predicted rotor performance for the SERI advanced wind turbine blades
NASA Astrophysics Data System (ADS)
Tangler, J.; Smith, B.; Kelley, N.; Jager, D.
1992-02-01
Measured and predicted rotor performance for the Solar Energy Research Institute (SERI) advanced wind turbine blades were compared to assess the accuracy of predictions and to identify the sources of error affecting both predictions and measurements. An awareness of these sources of error contributes to improved prediction and measurement methods that will ultimately benefit future rotor design efforts. Propeller/vane anemometers were found to underestimate the wind speed in turbulent environments such as the San Gorgonio Pass wind farm area. Using sonic or cup anemometers, good agreement was achieved between predicted and measured power output for wind speeds up to 8 m/sec. At higher wind speeds an optimistic predicted power output and the occurrence of peak power at wind speeds lower than measurements resulted from the omission of turbulence and yaw error. In addition, accurate two-dimensional (2-D) airfoil data prior to stall and a post stall airfoil data synthesization method that reflects three-dimensional (3-D) effects were found to be essential for accurate performance prediction.
Andreas, Martin; Kuessel, Lorenz; Kastl, Stefan P; Wirth, Stefan; Gruber, Kathrin; Rhomberg, Franziska; Gomari-Grisar, Fatemeh A; Franz, Maximilian; Zeisler, Harald; Gottsauner-Wolf, Michael
2016-06-01
Pregnancy-associated cardiovascular pathologies have a significant impact on outcome for mother and child. Bioimpedance cardiography may provide additional outcome-relevant information early in pregnancy and may also be used as a predictive instrument for pregnancy-associated diseases. We performed a prospective longitudinal cohort trial in an outpatient setting and included 242 pregnant women. Cardiac output and concomitant hemodynamic data were recorded from the 11th to 13th week of gestation, every 5th week thereafter, and on two occasions post partum, employing bioimpedance cardiography. Cardiac output increased during pregnancy and peaked early in the third trimester. A higher heart rate and a decreased systemic vascular resistance accounted for the observed changes. Women who had a pregnancy-associated disease during a previous pregnancy or developed hypertension or preeclampsia had a significantly increased cardiac output early in pregnancy. Furthermore, an effect of cardiac output on birthweight was found in healthy pregnancies and could be confirmed with multiple linear regression analysis. Cardiovascular adaptation during pregnancy is characterized by the distinct patterns described herein. These may be altered in women at risk for preeclampsia or reduced birthweight. The assessment of cardiac parameters by bioimpedance cardiography could be performed at low cost without additional risks.
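The hemodynamic quantities reported (cardiac output, heart rate, systemic vascular resistance) are linked by standard textbook formulas, sketched below; this is not the bioimpedance device's internal computation, and the input values are invented examples.

```python
# Standard haemodynamic relations underlying the reported quantities.

def cardiac_output_l_min(heart_rate_bpm, stroke_volume_ml):
    """CO (L/min) = HR (beats/min) * SV (mL) / 1000."""
    return heart_rate_bpm * stroke_volume_ml / 1000.0

def systemic_vascular_resistance(map_mmhg, cvp_mmhg, co_l_min):
    """SVR (dyn.s.cm^-5) = 80 * (MAP - CVP) / CO."""
    return 80.0 * (map_mmhg - cvp_mmhg) / co_l_min

co = cardiac_output_l_min(80, 70)             # example: 80 bpm, 70 mL stroke volume
svr = systemic_vascular_resistance(90, 5, co) # example pressures in mmHg
```

These relations make the abstract's mechanism concrete: a rise in heart rate raises cardiac output directly, and for a given pressure gradient a higher cardiac output implies a lower systemic vascular resistance.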
NASA Astrophysics Data System (ADS)
Ou, Shiqi; Zhao, Yi; Aaron, Douglas S.; Regan, John M.; Mench, Matthew M.
2016-10-01
This work describes experiments and computational simulations to analyze single-chamber, air-cathode microbial fuel cell (MFC) performance and cathodic limitations in terms of current generation, power output, mass transport, biomass competition, and biofilm growth. Steady-state and transient cathode models were developed and experimentally validated. Two cathode gas mixtures were used to explore oxygen transport in the cathode: the MFCs exposed to a helium-oxygen mixture (heliox) produced higher current and power output than the group of MFCs exposed to air or a nitrogen-oxygen mixture (nitrox), indicating a dependence on gas-phase transport in the cathode. Multi-substance transport, biological reactions, and electrochemical reactions in a multi-layer and multi-biomass cathode biofilm were also simulated in a transient model. The transient model described biofilm growth over 15 days while providing insight into mass transport and cathodic dissolved species concentration profiles during biofilm growth. Simulation results predict that the dissolved oxygen content and diffusion in the cathode are key parameters affecting the power output of the air-cathode MFC system, with greater oxygen content in the cathode resulting in increased power output and fully-matured biomass.
Cutting the Wires: Modularization of Cellular Networks for Experimental Design
Lang, Moritz; Summers, Sean; Stelling, Jörg
2014-01-01
Understanding naturally evolved cellular networks requires the consecutive identification and revision of the interactions between relevant molecular species. In this process, initially simplified and incomplete networks are often extended by integrating new reactions or whole subnetworks to increase consistency between model predictions and new measurement data. However, increased consistency with experimental data alone is not sufficient to show the existence of biomolecular interactions, because the interplay of different potential extensions might lead to overall similar dynamics. Here, we present a graph-based modularization approach to facilitate the design of experiments targeted at independently validating the existence of several potential network extensions. Our method is based on selecting the outputs to measure during an experiment, such that each potential network extension becomes virtually insulated from all others during data analysis. Each output defines a module that only depends on one hypothetical network extension, and all other outputs act as virtual inputs to achieve insulation. Given appropriate experimental time-series measurements of the outputs, our modules can be analyzed, simulated, and compared to the experimental data separately. Our approach exemplifies the close relationship between structural systems identification and modularization, an interplay that promises development of related approaches in the future. PMID:24411264
Modeling of a resonant heat engine
NASA Astrophysics Data System (ADS)
Preetham, B. S.; Anderson, M.; Richards, C.
2012-12-01
A resonant heat engine in which the piston assembly is replaced by a sealed elastic cavity is modeled and analyzed. A nondimensional lumped-parameter model is derived and used to investigate the factors that control the performance of the engine. The thermal efficiency predicted by the model agrees with that predicted from the relation for the Otto cycle based on compression ratio. The predictions show that for a fixed mechanical load, increasing the heat input results in increased efficiency. The output power and power density are shown to depend on the loading for a given heat input. The loading condition for maximum output power is different from that required for maximum power density.
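The Otto-cycle relation the model's efficiency is compared against depends only on the compression ratio; a minimal sketch of the standard air-standard form, assuming a ratio of specific heats gamma (the engine's actual gas properties are not given in the abstract):

```python
def otto_efficiency(compression_ratio: float, gamma: float = 1.4) -> float:
    """Ideal air-standard Otto-cycle thermal efficiency: 1 - r**(1 - gamma)."""
    return 1.0 - compression_ratio ** (1.0 - gamma)

# Efficiency rises monotonically with compression ratio; e.g. for r = 8, air:
eta = otto_efficiency(8.0)
```

The abstract's claim that model-predicted efficiency agrees with this relation is a check that the lumped-parameter dynamics reproduce the ideal-cycle limit.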
Styron, J D; Cooper, G W; Ruiz, C L; Hahn, K D; Chandler, G A; Nelson, A J; Torres, J A; McWatters, B R; Carpenter, Ken; Bonura, M A
2014-11-01
A methodology for obtaining empirical curves relating absolute measured scintillation light output to beta energy deposited is presented. Output signals were measured from thin plastic scintillator using NIST-traceable beta and gamma sources, and MCNP5 was used to model the energy deposition from each source. Combining the experimental and calculated results gives the desired empirical relationships. To validate the approach, the sensitivity of a beryllium/scintillator-layer neutron activation detector was predicted, and the detector was then exposed to a known neutron fluence from a deuterium-deuterium (DD) fusion plasma. The predicted and measured sensitivities were in statistical agreement.
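The calibration step pairs each source's measured signal with its MCNP-modeled energy deposition and fits an empirical curve through the pairs. A minimal sketch, assuming a linear light-output response; all numbers below are illustrative, not the paper's data:

```python
import numpy as np

# Hypothetical calibration points: MCNP-modeled mean energy deposited in the
# thin scintillator (MeV) vs. the absolute measured light output (arb. units).
energy_dep = np.array([0.2, 0.5, 1.0, 1.5])    # modeled (assumed values)
light_out  = np.array([0.41, 1.02, 2.05, 3.1])  # measured (assumed values)

# Empirical light-output-vs-energy relation: least-squares line through the data.
slope, intercept = np.polyfit(energy_dep, light_out, 1)

def predicted_output(e_dep_mev: float) -> float:
    """Predict light output for a given modeled energy deposition."""
    return slope * e_dep_mev + intercept
```

With such a curve in hand, the detector sensitivity prediction reduces to folding a modeled energy-deposition spectrum through `predicted_output`.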
TFaNS-Tone Fan Noise Design/Prediction System: Users' Manual TFaNS Version 1.5
NASA Technical Reports Server (NTRS)
Topol, David A.; Huff, Dennis L. (Technical Monitor)
2003-01-01
TFaNS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Glenn. The purpose of this system is to predict tone noise emanating from a fan stage, including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. The first version of this design system was developed under a previous NASA contract. Several improvements have been made to TFaNS, and this users' manual shows how to run the new system. TFaNS consists of: the codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files; CUP3D, the Fan Noise Coupling Code, which reads these files, solves the coupling problem, and outputs the desired noise predictions; and AWAKEN, the CFD/Measured Wake Postprocessor, which reformats CFD wake predictions and/or measured wake data so they can be used by the system. This report provides information on code input and file structure essential for potential users of TFaNS.
Bonet, Isis; Franco-Montero, Pedro; Rivero, Virginia; Teijeira, Marta; Borges, Fernanda; Uriarte, Eugenio; Morales Helguera, Aliuska
2013-12-23
A2B adenosine receptor antagonists may be beneficial in treating diseases like asthma, diabetes, diabetic retinopathy, and certain cancers. This has stimulated research for the development of potent ligands for this subtype, based on quantitative structure-affinity relationships. In this work, a new ensemble machine learning algorithm is proposed for classification and prediction of the ligand-binding affinity of A2B adenosine receptor antagonists. This algorithm is based on the training of different classifier models with multiple training sets (composed of the same compounds but represented by diverse features). The k-nearest neighbor, decision trees, neural networks, and support vector machines were used as single classifiers. To select the base classifiers for combining into the ensemble, several diversity measures were employed. The final multiclassifier prediction was computed by combining the outputs of the selected base classifiers with different mathematical functions: majority vote, maximum probability, and average probability. In this work, 10-fold cross-validation and external validation were used. The strategy led to the following results: i) the single classifiers, together with prior feature selection, resulted in good overall accuracy; ii) a comparison between the single classifiers and their combination in the multiclassifier model showed that the ensemble gave better performance than any single classifier; and iii) the multiclassifier model performed better than the most widely used multiclassifier models in the literature. The results and statistical analysis demonstrate the advantage of this multiclassifier approach for predicting the affinity of A2B adenosine receptor antagonists, and it can be used to develop other QSAR models.
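The three combination rules named above can be sketched as follows, assuming each base classifier outputs a class-probability vector; this is an illustrative sketch, not the authors' implementation:

```python
import numpy as np

def combine(base_probs: np.ndarray, rule: str = "majority") -> int:
    """Combine base-classifier outputs for one compound.

    base_probs: shape (n_classifiers, n_classes); each row is one base
    classifier's predicted class-probability vector.
    """
    if rule == "majority":
        votes = np.argmax(base_probs, axis=1)         # each classifier's class
        return int(np.bincount(votes).argmax())       # most common vote
    if rule == "average":
        return int(base_probs.mean(axis=0).argmax())  # highest mean probability
    if rule == "maximum":
        return int(base_probs.max(axis=0).argmax())   # highest peak probability
    raise ValueError(f"unknown rule: {rule}")
```

Note the rules can disagree: a class winning most votes by narrow margins can lose on average probability to a class backed by one very confident classifier.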
Hilal-Alnaqbi, Ali; Mourad, Abdel-Hamid I; Yousef, Basem F
2014-01-01
A mathematical model is developed to predict oxygen transfer in the fiber-in-fiber (FIF) bioartificial liver device. The model parameters are taken from the constructed and tested FIF modules. We extended the Krogh cylinder model by including one more zone for oxygen transfer. Cellular oxygen uptake was based on Michaelis-Menten kinetics. The effect of varying a number of important model parameters is investigated, including (1) the oxygen partial pressure at the inlet, (2) the hydraulic permeability of compartment B (cell region), (3) the hydraulic permeability of the inner membrane, and (4) the oxygen diffusivity of the outer membrane. The mathematical model is validated by comparing its output against the experimentally acquired values of the oxygen transfer rate and the hydrostatic pressure drop. Three governing simultaneous linear differential equations are derived to predict and validate the experimental measurements, e.g., the flow rate and the hydrostatic pressure drop. The model output simulated the experimental measurements to a high degree of accuracy. The model predictions show that the cells in the annulus can be oxygenated well even at high cell density or at a low level of gas phase PG if the value of the oxygen diffusion coefficient Dm is 16 × 10^(-5). The mathematical model also shows that the performance of the FIF improves by increasing the permeability of the polypropylene membrane (inner fiber). Moreover, the model predicted that 60% of plasma has access to the cells in the annulus within the first 10% of the FIF bioreactor axial length for a specific polypropylene membrane permeability and can reach 95% within the first 30% of its axial length. © 2013 International Union of Biochemistry and Molecular Biology, Inc.
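The cellular uptake term follows Michaelis-Menten kinetics; a minimal sketch, with symbols assumed here (v_max for the maximal uptake rate, k_m for the half-saturation oxygen tension) since the abstract does not give the paper's notation:

```python
def mm_uptake(p_o2: float, v_max: float, k_m: float) -> float:
    """Michaelis-Menten oxygen uptake rate at local oxygen tension p_o2.

    The rate is half-maximal when p_o2 equals k_m and saturates toward
    v_max as p_o2 grows, which is why well-oxygenated cell regions consume
    oxygen at a nearly constant rate.
    """
    return v_max * p_o2 / (k_m + p_o2)
```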
Olatinwo, Rabiu O; Prabha, Thara V; Paz, Joel O; Hoogenboom, Gerrit
2012-03-01
Early leaf spot of peanut (Arachis hypogaea L.), a disease caused by Cercospora arachidicola S. Hori, is responsible for an annual crop loss of several million dollars in the southeastern United States alone. The development of early leaf spot on peanut and subsequent spread of the spores of C. arachidicola relies on favorable weather conditions. Accurate spatio-temporal weather information is crucial for monitoring the progression of favorable conditions and determining the potential threat of the disease. Therefore, the development of a prediction model for mitigating the risk of early leaf spot in peanut production is important. The specific objective of this study was to demonstrate the application of the high-resolution Weather Research and Forecasting (WRF) model for management of early leaf spot in peanut. We coupled high-resolution weather output of the WRF, i.e. relative humidity and temperature, with the Oklahoma peanut leaf spot advisory model in predicting favorable conditions for early leaf spot infection over Georgia in 2007. Results showed more favorable infection conditions along the southeastern coastline of Georgia, where the infection threshold was met sooner, compared to the southwestern and central parts of Georgia, where the disease risk was lower. A newly introduced infection threat index indicates that the leaf spot threat threshold was met sooner at Alma, GA, compared to Tifton and Cordele, GA. The short-term prediction of weather parameters and their use in the management of peanut diseases is a viable and promising technique, which could help growers make accurate management decisions and lower disease impact through optimum timing of fungicide applications.
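Advisory models of this kind typically accumulate the hours in which humidity and temperature are simultaneously favorable, and flag an infection threat once a cumulative threshold is crossed. A hedged sketch of that coupling; the threshold values below are illustrative, not the Oklahoma advisory model's:

```python
def favorable_hours(rel_humidity, temperature,
                    rh_min=95.0, t_min=16.0, t_max=30.0):
    """Count hours in which forecast conditions favor infection.

    rel_humidity and temperature are equal-length hourly series, e.g. taken
    from WRF output at a grid point. Thresholds here are illustrative.
    """
    return sum(1 for rh, t in zip(rel_humidity, temperature)
               if rh >= rh_min and t_min <= t <= t_max)

def threat_met(hours_favorable: int, threshold: int = 12) -> bool:
    """Flag an infection threat once enough favorable hours accumulate."""
    return hours_favorable >= threshold
```

A location where `threat_met` flips to True earlier in the season (as at Alma in the study) would warrant earlier fungicide application.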
Multifidelity, Multidisciplinary Design Under Uncertainty with Non-Intrusive Polynomial Chaos
NASA Technical Reports Server (NTRS)
West, Thomas K., IV; Gumbert, Clyde
2017-01-01
The primary objective of this work is to develop an approach for multifidelity uncertainty quantification and to lay the framework for future design under uncertainty efforts. In this study, multifidelity is used to describe both the fidelity of the modeling of the physical systems, as well as the difference in the uncertainty in each of the models. For computational efficiency, a multifidelity surrogate modeling approach based on non-intrusive polynomial chaos using the point-collocation technique is developed for the treatment of both multifidelity modeling and multifidelity uncertainty modeling. Two stochastic model problems are used to demonstrate the developed methodologies: a transonic airfoil model and a multidisciplinary aircraft analysis model. The results of both showed that the multifidelity modeling approach was able to reproduce the output uncertainty predicted by the high-fidelity model at a significant reduction in computational cost.
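Point-collocation non-intrusive polynomial chaos fits the expansion coefficients by least squares over sampled model evaluations. A one-variable sketch for a uniform input on [-1, 1] with a Legendre basis; the quadratic "model" below is a stand-in, not either study problem:

```python
import numpy as np

def legendre_basis(xi: np.ndarray, order: int) -> np.ndarray:
    """Evaluate Legendre polynomials P_0..P_order at the collocation points."""
    return np.stack([np.polynomial.legendre.Legendre.basis(k)(xi)
                     for k in range(order + 1)], axis=1)

rng = np.random.default_rng(0)
xi = rng.uniform(-1.0, 1.0, size=20)       # collocation points (oversampled)
y = 2.0 + 3.0 * xi + 0.5 * xi**2           # stand-in "model" evaluations

# Least-squares fit of the PCE coefficients at the collocation points.
Psi = legendre_basis(xi, order=2)
coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)
# coeffs[0] is the PCE estimate of the output mean; the higher coefficients
# determine the output variance via the basis norms.
```

In the multifidelity variant, a PCE of the low-fidelity model is corrected with a second, cheaper-to-build PCE of the high-to-low fidelity discrepancy.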
Users' manual for the Langley high speed propeller noise prediction program (DFP-ATP)
NASA Technical Reports Server (NTRS)
Dunn, M. H.; Tarkenton, G. M.
1989-01-01
The use of the Dunn-Farassat-Padula Advanced Technology Propeller (DFP-ATP) noise prediction program which computes the periodic acoustic pressure signature and spectrum generated by propellers moving with supersonic helical tip speeds is described. The program has the capacity of predicting noise produced by a single-rotation propeller (SRP) or a counter-rotation propeller (CRP) system with steady or unsteady blade loading. The computational method is based on two theoretical formulations developed by Farassat. One formulation is appropriate for subsonic sources, and the other for transonic or supersonic sources. Detailed descriptions of user input, program output, and two test cases are presented, as well as brief discussions of the theoretical formulations and computational algorithms employed.
Predicting ad libitum dry matter intake and yields of Jersey cows.
Holter, J B; West, J W; McGilliard, M L; Pell, A N
1996-05-01
Two data files were used that contained weekly mean values for ad libitum DMI of lactating Jersey cows along with appropriate cow, ration, and environmental traits for predicting DMI. One data file (n = 666) was used to develop prediction equations for DMI because that file represented a number of separate experiments and contained more diversity in potential predictors, especially those related to ration, such as forage type. The other data file (n = 1613) was used primarily to verify these equations. Milk protein yield displaced 4% FCM output as a prediction variable and improved the R2 by several units but, for the sake of simplicity, was not used in the final equations. All equations contained adjustments for the effects of heat stress, parity (1 vs. > 1), DIM > 15, BW, use of recombinant bST, and other significant independent variables. Equations were developed to predict DMI of cows fed individually or in groups and to predict daily yields of 4% FCM and milk protein; equations accounted for 0.69, 0.74, 0.81, and 0.76 of the variation in the dependent variables with standard deviations of 1.7, 1.6, 2.7, and 0.084 kg/d, respectively. These equations should be applied to the development of software for computerized dairy ration balancing.
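The prediction equations are multiple linear regressions of DMI on cow and ration traits. A sketch of fitting and scoring such an equation on synthetic data; the predictors, coefficients, and sample values below are illustrative, not the published equations:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
bw = rng.normal(420.0, 40.0, n)       # body weight, kg (illustrative)
fcm = rng.normal(20.0, 4.0, n)        # 4% FCM yield, kg/d (illustrative)
parity = rng.integers(0, 2, n)        # 0 = first parity, 1 = later
# Synthetic DMI with an assumed linear structure plus noise:
dmi = 2.0 + 0.02 * bw + 0.35 * fcm + 0.8 * parity + rng.normal(0.0, 1.0, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), bw, fcm, parity])
beta, *_ = np.linalg.lstsq(X, dmi, rcond=None)

# R^2 on the fitting data, analogous to the variance fractions reported above.
pred = X @ beta
r2 = 1.0 - np.sum((dmi - pred) ** 2) / np.sum((dmi - dmi.mean()) ** 2)
```

The paper's verification step corresponds to computing the same residual statistics on the second, held-out file rather than the fitting file.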
Erin K. Noonan-Wright; Nicole M. Vaillant; Alicia L. Reiner
2014-01-01
Fuel treatment effectiveness is often evaluated with fire behavior modeling systems that use fuel models to generate fire behavior outputs. How surface fuels are assigned, either using one of the 53 stylized fuel models or developing custom fuel models, can affect predicted fire behavior. We collected surface and canopy fuels data before and 1, 2, 5, and 8 years after...
NASA Astrophysics Data System (ADS)
Rehman, Naveed ur; Siddiqui, Mubashir Ali
2018-05-01
This work theoretically and experimentally investigated the performance of an arrayed solar flat-plate thermoelectric generator (ASFTEG). An analytical model, based on energy balances, was established for determining the load voltage, power output, and overall efficiency of ASFTEGs. An array consists of TEG devices (or modules) connected electrically in series and operating in closed-circuit mode with a load. The model accounts for the distinct temperature difference across each module, which is a major feature of this model. Parasitic losses have also been included in the model for realistic results. With the given set of simulation parameters, an ASFTEG consisting of four commercially available Bi2Te3 modules had a predicted load voltage of 200 mV and generated 3546 μW of electric power output. Predictions from the model were in good agreement with field experimental outcomes from a prototype ASFTEG, which was developed for validation purposes. The model was then used to maximize the performance of the ASFTEG by adjusting the thermal and electrical design of the system. Optimum values of the design parameters were evaluated and discussed in detail. Beyond the current limitations associated with improvements in thermoelectric materials, this study will eventually lead to the successful development of portable roof-top renewable TEGs.
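For modules electrically in series, each seeing its own temperature difference (the feature the model emphasizes), the load voltage and power follow from summing the per-module EMFs and dividing by the total loop resistance. A hedged sketch with illustrative parameter values, not the paper's:

```python
def series_teg(n_modules, seebeck, delta_ts, r_internal, r_load):
    """Load voltage and power for n_modules TEGs electrically in series.

    seebeck: effective module Seebeck coefficient, V/K (assumed equal modules)
    delta_ts: per-module temperature differences, K (may all differ)
    r_internal: internal resistance per module, ohm; r_load: load, ohm
    """
    emf = sum(seebeck * dt for dt in delta_ts)     # total open-circuit EMF
    r_total = n_modules * r_internal + r_load
    current = emf / r_total
    v_load = current * r_load
    p_load = current ** 2 * r_load
    return v_load, p_load

# Illustrative four-module array with unequal temperature differences:
v, p = series_teg(4, 0.05, [10.0, 8.0, 6.0, 4.0], 1.0, 4.0)
```

Maximizing `p_load` over `r_load` recovers the familiar matched-load condition (load equal to total internal resistance), one of the electrical design knobs the study optimizes.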
Learning spatially coherent properties of the visual world in connectionist networks
NASA Astrophysics Data System (ADS)
Becker, Suzanna; Hinton, Geoffrey E.
1991-10-01
In the unsupervised learning paradigm, a network of neuron-like units is presented with an ensemble of input patterns from a structured environment, such as the visual world, and learns to represent the regularities in that input. The major goal in developing unsupervised learning algorithms is to find objective functions that characterize the quality of the network's representation without explicitly specifying the desired outputs of any of the units. The sort of objective functions considered cause a unit to become tuned to spatially coherent features of visual images (such as texture, depth, shading, and surface orientation), by learning to predict the outputs of other units which have spatially adjacent receptive fields. Simulations show that using an information-theoretic algorithm called IMAX, a network can be trained to represent depth by observing random dot stereograms of surfaces with continuously varying disparities. Once a layer of depth-tuned units has developed, subsequent layers are trained to perform surface interpolation of curved surfaces, by learning to predict the depth of one image region based on depth measurements in surrounding regions. An extension of the basic model allows a population of competing neurons to learn a distributed code for disparity, which naturally gives rise to a representation of discontinuities.
A statistical approach to nuclear fuel design and performance
NASA Astrophysics Data System (ADS)
Cunning, Travis Andrew
As CANDU fuel failures can have significant economic and operational consequences on the Canadian nuclear power industry, it is essential that factors impacting fuel performance are adequately understood. Current industrial practice relies on deterministic safety analysis and the highly conservative "limit of operating envelope" approach, where all parameters are assumed to be at their limits simultaneously. This results in a conservative prediction of event consequences with little consideration given to the high quality and precision of current manufacturing processes. This study employs a novel approach to the prediction of CANDU fuel reliability. Probability distributions are fitted to actual fuel manufacturing datasets provided by Cameco Fuel Manufacturing, Inc. They are used to form input for two industry-standard fuel performance codes: ELESTRES for the steady-state case and ELOCA for the transient case, a hypothesized 80% reactor outlet header break loss-of-coolant accident. Using a Monte Carlo technique for input generation, 10^5 independent trials are conducted and probability distributions are fitted to key model output quantities. Comparing model output against recognized industrial acceptance criteria, no fuel failures are predicted for either case. Output distributions are well removed from failure limit values, implying that margin exists in current fuel manufacturing and design. To validate the results and attempt to reduce the simulation burden of the methodology, two dimensional reduction methods are assessed. Using just 36 trials, both methods are able to produce output distributions that agree strongly with those obtained via the brute-force Monte Carlo method, often to a relative discrepancy of less than 0.3% when predicting the first statistical moment, and a relative discrepancy of less than 5% when predicting the second statistical moment.
In terms of global sensitivity, pellet density proves to have the greatest impact on fuel performance, with an average sensitivity index of 48.93% on key output quantities. Pellet grain size and dish depth are also significant contributors, at 31.53% and 13.46%, respectively. A traditional limit of operating envelope case is also evaluated. This case produces output values that exceed the maximum values observed during the 10^5 Monte Carlo trials for all output quantities of interest. In many cases the difference between the predictions of the two methods is very prominent, and the highly conservative nature of the deterministic approach is demonstrated. A reliability analysis of CANDU fuel manufacturing parametric data, specifically pertaining to the quantification of fuel performance margins, has not been conducted previously. Key Words: CANDU, nuclear fuel, Cameco, fuel manufacturing, fuel modelling, fuel performance, fuel reliability, ELESTRES, ELOCA, dimensional reduction methods, global sensitivity analysis, deterministic safety analysis, probabilistic safety analysis.
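The Monte Carlo step — sampling the fitted input distributions, running the performance model, and examining the output distribution against an acceptance limit — can be sketched as follows. The distributions and the stand-in output model are illustrative; they are not the ELESTRES/ELOCA codes or Cameco data:

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000   # analogous to the 10^5 independent trials above

# Fitted input distributions (parameters illustrative, not manufacturing data):
pellet_density = rng.normal(10.6, 0.05, n_trials)   # g/cm^3
grain_size = rng.normal(8.0, 0.8, n_trials)         # micrometres

# Stand-in for a fuel-performance code: any deterministic model of an output
# quantity (here a notional temperature-like response, units arbitrary).
def model(density, grain):
    return 1500.0 + 40.0 * (density - 10.6) - 5.0 * (grain - 8.0)

outputs = model(pellet_density, grain_size)
mean, std = outputs.mean(), outputs.std()

# Compare the output distribution against a notional acceptance limit:
fraction_failing = np.mean(outputs > 1600.0)
```

When the whole output distribution sits far below the limit, as here and as reported in the study, the margin in manufacturing and design is quantified rather than merely asserted.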
NASA Technical Reports Server (NTRS)
Topol, David A.
1999-01-01
TFaNS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Lewis (presently NASA Glenn). The purpose of this system is to predict tone noise emanating from a fan stage including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. These effects have been added to an existing annular duct/isolated stator noise prediction capability. TFaNS consists of: the codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files; CUP3D, the Fan Noise Coupling Code, which reads these files, solves the coupling problem, and outputs the desired noise predictions; and AWAKEN, the CFD/Measured Wake Postprocessor, which reformats CFD wake predictions and/or measured wake data so they can be used by the system. This volume of the report provides technical background for TFaNS including the organization of the system and CUP3D technical documentation. This document also provides information for code developers who must write Acoustic Property Files in the CUP3D format. This report is divided into three volumes: Volume I: System Description, CUP3D Technical Documentation, and Manual for Code Developers; Volume II: User's Manual, TFaNS Vers. 1.4; Volume III: Evaluation of System Codes.
Uncertainty and sensitivity analysis for photovoltaic system modeling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Clifford W.; Pohl, Andrew Phillip; Jordan, Dirk
2013-12-01
We report an uncertainty and sensitivity analysis for modeling DC energy from photovoltaic systems. We consider two systems, each comprised of a single module using either crystalline silicon or CdTe cells, and located either at Albuquerque, NM, or Golden, CO. Output from a PV system is predicted by a sequence of models. Uncertainty in the output of each model is quantified by empirical distributions of each model's residuals. We sample these distributions to propagate uncertainty through the sequence of models to obtain an empirical distribution for each PV system's output. We considered models that: (1) translate measured global horizontal, direct, and global diffuse irradiance to plane-of-array (POA) irradiance; (2) estimate effective irradiance from plane-of-array irradiance; (3) predict cell temperature; and (4) estimate DC voltage, current, and power. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. Four alternative models were considered for the POA irradiance modeling step; we did not find the choice among these models to be of great significance. However, we observed that the POA irradiance model introduced a bias of upwards of 5% of daily energy, which translates directly to a systematic difference in predicted energy. Sensitivity analyses relate uncertainty in the PV system output to uncertainty arising from each model. We found that the residuals arising from the POA irradiance and effective irradiance models were the dominant contributors to residuals for daily energy, for either technology or location considered. This analysis indicates that efforts to reduce the uncertainty in PV system output should focus on improvements to the POA and effective irradiance models.
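The propagation method — sampling each model step's empirical residual distribution and chaining the sampled errors through the model sequence — can be sketched as follows. The residual spreads and the base daily energy below are illustrative stand-ins, not the reported values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Empirical residual samples for each model step, treated as fractional
# errors (illustrative): POA irradiance, effective irradiance, cell
# temperature, and the DC electrical model.
residuals = {
    "poa":  rng.normal(0.0, 0.020, 500),
    "eff":  rng.normal(0.0, 0.015, 500),
    "temp": rng.normal(0.0, 0.005, 500),
    "dc":   rng.normal(0.0, 0.005, 500),
}

def propagate(base_energy: float, n: int = 10_000) -> np.ndarray:
    """Resample each step's residuals and chain the errors multiplicatively."""
    out = np.full(n, base_energy)
    for r in residuals.values():
        out *= 1.0 + rng.choice(r, size=n)
    return out

dist = propagate(5.0)   # notional kWh/day for a single module
```

The spread of `dist` is the system-level output uncertainty; the two irradiance steps dominate it here because their residual spreads are largest, mirroring the study's sensitivity finding.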
Gouta, Houssemeddine; Hadj Saïd, Salim; Barhoumi, Nabil; M'Sahli, Faouzi
2017-03-01
This paper deals with the problem of observer-based control design for a coupled four-tank liquid level system. For this MIMO system's dynamics, motivated by a desire to provide precise and sensorless liquid level control, a nonlinear predictive controller based on a continuous-discrete observer is presented. First, an analytical solution from the model predictive control (MPC) technique is developed for a particular class of nonlinear MIMO systems and its corresponding exponential stability is proven. Then, a high-gain observer that runs in continuous time, with an output error correction term updated in a mixed continuous-discrete fashion, is designed to estimate the liquid levels in the two upper tanks. The effectiveness of the designed control schemes is validated by two tests. The first maintains a constant level in the first bottom tank while making the level in the second bottom tank follow a sinusoidal reference signal. The second, more demanding test uses two trapezoidal reference signals to assess the decoupling performance of the system's outputs. Simulation and experimental results validate the objective of the paper. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Modelling the distribution of chickens, ducks, and geese in China
Prosser, Diann J.; Wu, Junxi; Ellis, Erle C.; Gale, Fred; Van Boeckel, Thomas P.; Wint, William; Robinson, Tim; Xiao, Xiangming; Gilbert, Marius
2011-01-01
Global concerns over the emergence of zoonotic pandemics emphasize the need for high-resolution population distribution mapping and spatial modelling. Ongoing efforts to model disease risk in China have been hindered by a lack of available species level distribution maps for poultry. The goal of this study was to develop 1 km resolution population density models for China's chickens, ducks, and geese. We used an information theoretic approach to predict poultry densities based on statistical relationships between poultry census data and high-resolution agro-ecological predictor variables. Model predictions were validated by comparing goodness of fit measures (root mean square error and correlation coefficient) for observed and predicted values for 1/4 of the sample data which were not used for model training. Final output included mean and coefficient of variation maps for each species. We tested the quality of models produced using three predictor datasets and 4 regional stratification methods. For predictor variables, a combination of traditional predictors for livestock mapping and land use predictors produced the best goodness of fit scores. Comparison of regional stratifications indicated that for chickens and ducks, a stratification based on livestock production systems produced the best results; for geese, an agro-ecological stratification produced best results. However, for all species, each method of regional stratification produced significantly better goodness of fit scores than the global model. Here we provide descriptive methods, analytical comparisons, and model output for China's first high resolution, species level poultry distribution maps. Output will be made available to the scientific and public community for use in a wide range of applications from epidemiological studies to livestock policy and management initiatives.
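The validation step scores predictions on the held-out quarter of the census data using root mean square error and the correlation coefficient; a minimal sketch of those two goodness-of-fit measures:

```python
import numpy as np

def validate(observed: np.ndarray, predicted: np.ndarray):
    """Goodness-of-fit on held-out data: RMSE and correlation coefficient."""
    rmse = float(np.sqrt(np.mean((observed - predicted) ** 2)))
    corr = float(np.corrcoef(observed, predicted)[0, 1])
    return rmse, corr

# Illustrative held-out densities (observed) vs. model output (predicted):
rmse, corr = validate(np.array([1.0, 2.0, 3.0, 4.0]),
                      np.array([1.1, 1.9, 3.2, 3.8]))
```

RMSE penalizes the absolute size of errors while the correlation coefficient measures how well the spatial ranking of densities is reproduced, so reporting both guards against a model that gets one right at the expense of the other.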
EMG-Torque correction on Human Upper extremity using Evolutionary Computation
NASA Astrophysics Data System (ADS)
JL, Veronica; Parasuraman, S.; Khan, M. K. A. Ahamed; Jeba DSingh, Kingsly
2016-09-01
There have been many studies indicating that the control system of a rehabilitative robot plays an important role in determining the outcome of the therapy process. Existing works have predicted the feedback signal in the controller based on the kinematics parameters and EMG readings of the upper limb's skeletal system. A kinematics- and kinetics-based control signal system is developed by reading the output of sensors such as position, orientation, and F/T (force/torque) sensors, and their readings are compared with the preceding measurement to decide the amount of assistive force. Other works have incorporated the kinematics parameters to calculate the kinetics parameters via formulation and pre-defined assumptions. Nevertheless, these types of control signals analyze the movement of the upper limb only on the basis of the motion of the upper joints; they do not anticipate the possibility of muscle plasticity. The focus of this paper is to use the kinematics parameters and EMG readings of the skeletal system to predict the individual torques of the upper extremity's joints. The surface EMG signals are fed into different mathematical models, and these data are trained through a Genetic Algorithm (GA) to find the best correlation between the EMG signals and the torques acting on the upper limb's joints. The estimated torque obtained from the mathematical models is called the simulated output. The simulated output is then compared with the actual individual joint torque, which is calculated from the real-time kinematics parameters of the skeleton's upper movement when the muscle cells are activated. The findings of this contribution are extended into the development of an active-control-signal-based controller for a rehabilitation robot.
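As a rough illustration of the GA-based EMG-torque fitting described above, the sketch below evolves the coefficients of a simple hypothetical polynomial EMG-to-torque model by minimizing mean squared error against measured torque; the model form and all GA settings are illustrative assumptions, not the paper's.

```python
import numpy as np

def fitness(params, emg, torque):
    # hypothetical model: torque = a*emg + b*emg^2 + c
    a, b, c = params
    pred = a * emg + b * emg**2 + c
    return -np.mean((pred - torque) ** 2)   # GA maximizes fitness

def ga_fit(emg, torque, pop_size=60, gens=100, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.normal(size=(pop_size, 3))
    for _ in range(gens):
        scores = np.array([fitness(p, emg, torque) for p in pop])
        order = np.argsort(scores)[::-1]
        elite = pop[order[: pop_size // 2]]                # selection
        parents = elite[rng.integers(0, len(elite), size=(pop_size, 2))]
        alpha = rng.random((pop_size, 1))
        children = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]  # blend crossover
        children += rng.normal(scale=0.05, size=children.shape)        # mutation
        children[0] = elite[0]                             # elitism
        pop = children
    scores = np.array([fitness(p, emg, torque) for p in pop])
    return pop[np.argmax(scores)]
```

The same loop applies unchanged to any other candidate EMG-torque model, only `fitness` changes.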
NASA Astrophysics Data System (ADS)
Häme, Tuomas; Mutanen, Teemu; Rauste, Yrjö; Antropov, Oleg; Molinier, Matthieu; Quegan, Shaun; Kantzas, Euripides; Mäkelä, Annikki; Minunno, Francesco; Atli Benediktsson, Jon; Falco, Nicola; Arnason, Kolbeinn; Storvold, Rune; Haarpaintner, Jörg; Elsakov, Vladimir; Rasinmäki, Jussi
2015-04-01
The objective of project North State, funded by Framework Programme 7 of the European Union, is to develop innovative data fusion methods that exploit the new generation of multi-source data from the Sentinels and other satellites in an intelligent, self-learning framework. The remote sensing outputs are interfaced with state-of-the-art carbon and water flux models for monitoring the fluxes over boreal Europe to reduce current large uncertainties. This will provide a paradigm for the development of products for future Copernicus services. The models to be interfaced are a dynamic vegetation model and a light use efficiency model. We have identified four groups of variables that will be estimated from remotely sensed data: land cover variables, forest characteristics, vegetation activity, and hydrological variables. The estimates will be used as model inputs and to validate the model outputs. The earth observation variables are computed as automatically as possible, with the goal of completely automatic estimation. North State has two sites for intensive studies in southern and northern Finland, respectively, one in Iceland, and one in the Komi Republic of Russia. Additionally, the model input variables will be estimated, and the models applied, over the European boreal and sub-arctic region from the Ural Mountains to Iceland. The accuracy assessment of the earth observation variables will follow a statistical sampling design. Model output predictions are compared to the earth observation variables, and flux tower measurements are also applied in the model assessment. In the paper, results from hyperspectral, Sentinel-1, and Landsat data and their use in the models are presented, along with an example of completely automatic land cover class prediction.
NASA Astrophysics Data System (ADS)
Perugu, Harikishan; Wei, Heng; Yao, Zhuo
2017-04-01
Air quality modelers often rely on regional travel demand models to estimate the vehicle activity data for emission models; however, most current travel demand models can only output reliable person travel activity rather than goods- or service-specific travel activity. This paper presents the successful application of the data-driven Spatial Regression and output optimization Truck model (SPARE-Truck) to develop truck-related activity inputs for the mobile emission model, and eventually to produce truck-specific gridded emissions. To validate the proposed methodology, the Cincinnati metropolitan area in the United States was selected as a case study site. From the results, it is found that truck miles traveled predicted using traditional methods tend to be underestimated, overall 32% less than the proposed model. The coefficient of determination values for different truck types range between 0.82 and 0.97, except for motor homes, which showed the poorest model fit at 0.51. Consequently, the emission inventories calculated from the traditional methods were also underestimated: -37% for NOx, -35% for SO2, -43% for VOC, -43% for BC, -47% for OC and -49% for PM2.5. Further, the proposed method also predicted within approximately 7% of the national emission inventory for all pollutants. The bottom-up gridding methodology used in this paper allocates emissions to the grid cells where more truck activity is expected, and it is verified against regional land-use data. Most importantly, with the proposed method it is easy to segregate the gridded emission inventory by truck type, which is of particular interest for decision makers, since currently there is no reliable method to test different truck-category-specific travel-demand management strategies for air pollution control.
Global and regional ecosystem modeling: comparison of model outputs and field measurements
NASA Astrophysics Data System (ADS)
Olson, R. J.; Hibbard, K.
2003-04-01
The Ecosystem Model-Data Intercomparison (EMDI) Workshops provide a venue for global ecosystem modeling groups to compare model outputs against measurements of net primary productivity (NPP). The objective of EMDI Workshops is to evaluate model performance relative to observations in order to improve confidence in global model projections of terrestrial carbon cycling. The questions addressed by EMDI include: How does the simulated NPP compare with the field data across biome and environmental gradients? How sensitive are models to site-specific climate? Does additional mechanistic detail in models result in a better match with field measurements? How useful are the measures of NPP for evaluating model predictions? How well do models represent regional patterns of NPP? Initial EMDI results showed general agreement between model predictions and field measurements but with obvious differences that indicated areas for potential data and model improvement. The effort was built on the development and compilation of complete and consistent databases for model initialization and comparison. Database development improves the data as well as the models; however, there is a need to incorporate additional observations and model outputs (LAI, hydrology, etc.) for comprehensive analyses of biogeochemical processes and their relationships to ecosystem structure and function. EMDI initialization and NPP data sets are available from the Oak Ridge National Laboratory Distributed Active Archive Center http://www.daac.ornl.gov/. Acknowledgements: This work was partially supported by the International Geosphere-Biosphere Programme - Data and Information System (IGBP-DIS); the IGBP-Global Analysis, Interpretation and Modelling Task Force (GAIM); the National Center for Ecological Analysis and Synthesis (NCEAS); and the National Aeronautics and Space Administration (NASA) Terrestrial Ecosystem Program. Oak Ridge National Laboratory is managed by UT-Battelle LLC for the U.S. 
Department of Energy under contract DE-AC05-00OR22725
Low Boom Configuration Analysis with FUN3D Adjoint Simulation Framework
NASA Technical Reports Server (NTRS)
Park, Michael A.
2011-01-01
Off-body pressure, forces, and moments for the Gulfstream Low Boom Model are computed with a Reynolds Averaged Navier Stokes solver coupled with the Spalart-Allmaras (SA) turbulence model. This is the first application of viscous output-based adaptation to reduce estimated discretization errors in off-body pressure for a wing body configuration. The output adaptation approach is compared to an a priori grid adaptation technique designed to resolve the signature on the centerline by stretching and aligning the grid to the freestream Mach angle. The output-based approach produced good predictions of centerline and off-centerline measurements. Eddy viscosity predicted by the SA turbulence model increased significantly with grid adaptation. Computed lift as a function of drag compares well with wind tunnel measurements for positive lift, but predicted lift, drag, and pitching moment as a function of angle of attack has significant differences from the measured data. The sensitivity of longitudinal forces and moment to grid refinement is much smaller than the differences between the computed and measured data.
Development of a Low Inductance Linear Alternator for Stirling Power Convertors
NASA Technical Reports Server (NTRS)
Geng, Steven M.; Schifer, Nicholas A.
2017-01-01
The free-piston Stirling power convertor is a promising technology for high-efficiency heat-to-electricity power conversion in space. Stirling power convertors typically utilize linear alternators for converting mechanical motion into electricity. The linear alternator is one of the heaviest components of modern Stirling power convertors. In addition, state-of-the-art Stirling linear alternators usually require the use of tuning capacitors or active power factor correction controllers to maximize convertor output power. The linear alternator to be discussed in this paper eliminates the need for tuning capacitors and delivers electrical power output in which current is inherently in phase with voltage. No power factor correction is needed. In addition, the linear alternator concept requires very little iron, so core loss has been virtually eliminated. This concept is a unique moving coil design where the magnetic flux path is defined by the magnets themselves. This paper presents computational predictions for two different low-inductance alternator configurations, and compares the predictions with experimental data for one of the configurations that has been built and is currently being tested.
Development of a Low-Inductance Linear Alternator for Stirling Power Convertors
NASA Technical Reports Server (NTRS)
Geng, Steven M.; Schifer, Nicholas A.
2017-01-01
The free-piston Stirling power convertor is a promising technology for high-efficiency heat-to-electricity power conversion in space. Stirling power convertors typically utilize linear alternators for converting mechanical motion into electricity. The linear alternator is one of the heaviest components of modern Stirling power convertors. In addition, state-of-the-art Stirling linear alternators usually require the use of tuning capacitors or active power factor correction controllers to maximize convertor output power. The linear alternator to be discussed in this paper eliminates the need for tuning capacitors and delivers electrical power output in which current is inherently in phase with voltage. No power factor correction is needed. In addition, the linear alternator concept requires very little iron, so core loss has been virtually eliminated. This concept is a unique moving coil design where the magnetic flux path is defined by the magnets themselves. This paper presents computational predictions for two different low inductance alternator configurations. Additionally, one of the configurations was built and tested at GRC, and the experimental data is compared with the predictions.
NASA Technical Reports Server (NTRS)
Gyekenyesi, J. P.
1985-01-01
A computer program was developed for calculating the statistical fast fracture reliability and failure probability of ceramic components. The program includes the two-parameter Weibull material fracture strength distribution model, using the principle of independent action for polyaxial stress states and Batdorf's shear-sensitive as well as shear-insensitive crack theories, all for volume distributed flaws in macroscopically isotropic solids. Both penny-shaped cracks and Griffith cracks are included in the Batdorf shear-sensitive crack response calculations, using Griffith's maximum tensile stress or critical coplanar strain energy release rate criteria to predict mixed mode fracture. Weibull material parameters can also be calculated from modulus of rupture bar tests, using the least squares method with known specimen geometry and fracture data. The reliability prediction analysis uses MSC/NASTRAN stress, temperature and volume output, obtained from the use of three-dimensional, quadratic, isoparametric, or axisymmetric finite elements. The statistical fast fracture theories employed, along with selected input and output formats and options, are summarized. An example problem to demonstrate various features of the program is included.
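A minimal sketch of the two-parameter Weibull machinery the program is built on: the fast-fracture failure probability of a uniformly stressed volume, and a least-squares estimate of the Weibull parameters from modulus-of-rupture strengths. This is a textbook simplification (uniaxial, uniform stress, unit reference volume), not the program's polyaxial principle-of-independent-action or Batdorf calculations.

```python
import math

def weibull_failure_probability(sigma, volume, m, sigma0):
    """Two-parameter Weibull failure probability for a uniformly
    stressed volume: Pf = 1 - exp(-V * (sigma/sigma0)^m)."""
    if sigma <= 0:
        return 0.0
    return 1.0 - math.exp(-volume * (sigma / sigma0) ** m)

def weibull_params_from_mor(strengths):
    """Least-squares estimate of Weibull modulus m and scale sigma0
    from modulus-of-rupture strengths, using median-rank probability
    estimates: ln(-ln(1 - Pf)) = m*ln(sigma) - m*ln(sigma0)."""
    s = sorted(strengths)
    n = len(s)
    x, y = [], []
    for i, si in enumerate(s, start=1):
        pf = (i - 0.3) / (n + 0.4)          # median-rank estimator
        x.append(math.log(si))
        y.append(math.log(-math.log(1.0 - pf)))
    xbar = sum(x) / n
    ybar = sum(y) / n
    m = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
    sigma0 = math.exp(xbar - ybar / m)      # from intercept b = -m*ln(sigma0)
    return m, sigma0
```

For example, at sigma = sigma0 with unit volume the failure probability is 1 - exp(-1), about 63.2%.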
Li, Yanpeng; Li, Xiang; Wang, Hongqiang; Chen, Yiping; Zhuang, Zhaowen; Cheng, Yongqiang; Deng, Bin; Wang, Liandong; Zeng, Yonghu; Gao, Lei
2014-01-01
This paper offers a compact framework for carrying out performance evaluation of an automatic target recognition (ATR) system: (a) a standard description of the ATR system's output is suggested, a quantity indicating the operating condition is presented based on the principle of feature extraction in pattern recognition, and a series of indexes to assess the output in different aspects are developed with the application of statistics; (b) the performance of the ATR system is interpreted by a quality factor based on knowledge of engineering mathematics; (c) through a novel utility called “context-probability” estimation, proposed on the basis of probability theory, performance prediction for an ATR system is realized. The simulation results show that the performance of an ATR system can be evaluated and forecast by the above-mentioned measures. Compared to existing technologies, the novel method can offer more objective performance conclusions for an ATR system. These conclusions may be helpful in understanding the practical capability of the tested ATR system. At the same time, the generalization performance of the proposed method is good. PMID:24967605
Rules and mechanisms for efficient two-stage learning in neural circuits
Teşileanu, Tiberiu; Ölveczky, Bence; Balasubramanian, Vijay
2017-01-01
Trial-and-error learning requires evaluating variable actions and reinforcing successful variants. In songbirds, vocal exploration is induced by LMAN, the output of a basal ganglia-related circuit that also contributes a corrective bias to the vocal output. This bias is gradually consolidated in RA, a motor cortex analogue downstream of LMAN. We develop a new model of such two-stage learning. Using stochastic gradient descent, we derive how the activity in ‘tutor’ circuits (e.g., LMAN) should match plasticity mechanisms in ‘student’ circuits (e.g., RA) to achieve efficient learning. We further describe a reinforcement learning framework through which the tutor can build its teaching signal. We show that mismatches between the tutor signal and the plasticity mechanism can impair learning. Applied to birdsong, our results predict the temporal structure of the corrective bias from LMAN given a plasticity rule in RA. Our framework can be applied predictively to other paired brain areas showing two-stage learning. DOI: http://dx.doi.org/10.7554/eLife.20944.001 PMID:28374674
Kinoshita, Kengo; Murakami, Yoichi; Nakamura, Haruki
2007-07-01
We have developed a method to predict ligand-binding sites in a new protein structure by searching for similar binding sites in the Protein Data Bank (PDB). The similarities are measured according to the shapes of the molecular surfaces and their electrostatic potentials. A new web server, eF-seek, provides an interface to our search method. It simply requires a coordinate file in the PDB format, and generates a prediction result as a virtual complex structure, with the putative ligands in a PDB format file as the output. In addition, the predicted interacting interface is displayed to facilitate examination of the virtual complex structure in our own applet viewer within the web browser (URL: http://eF-site.hgc.jp/eF-seek).
NASA Astrophysics Data System (ADS)
Hayati, M.; Rashidi, A. M.; Rezaei, A.
2012-10-01
In this paper, the applicability of the adaptive neuro-fuzzy inference system (ANFIS) as an accurate model for predicting the mass gain during high-temperature oxidation, using experimental data obtained for aluminized nanostructured (NS) nickel, is presented. For developing the model, exposure time and temperature are taken as inputs and the mass gain as output. A hybrid learning algorithm consisting of back-propagation and least-squares estimation is used for training the network. We have compared the proposed ANFIS model with the experimental data. The predicted data are found to be in good agreement with the experimental data, with a mean relative error of less than 1.1%. Therefore, the ANFIS model can be used to predict the performance of thermal systems in engineering applications, such as modeling the mass gain for NS materials.
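To make the hybrid learning rule concrete, the sketch below shows its least-squares half for a first-order Sugeno fuzzy model with fixed Gaussian memberships: with the premise (membership) parameters held fixed, the consequent parameters are linear in the output and can be solved in closed form. The membership placement, rule count, and two-input layout (time, temperature) are illustrative assumptions; a full ANFIS would also tune the premise parameters by back-propagation.

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def anfis_lse(X, y, centers, sigma):
    """Least-squares half of the ANFIS hybrid rule for a first-order
    Sugeno model: solve the linear consequent parameters exactly,
    keeping the Gaussian premises fixed."""
    # firing strength of each rule = product of memberships over inputs
    w = np.ones((len(X), len(centers)))
    for j, c in enumerate(centers):
        for d in range(X.shape[1]):
            w[:, j] *= gauss(X[:, d], c[d], sigma)
    w /= w.sum(axis=1, keepdims=True)          # normalized firing strengths
    # each rule j contributes w_j * [x, 1] to the linear design matrix
    Xa = np.hstack([X, np.ones((len(X), 1))])
    Phi = np.hstack([w[:, [j]] * Xa for j in range(len(centers))])
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return theta, Phi   # prediction is Phi @ theta
```

Because the normalized firing strengths sum to one, any function linear in the inputs is representable exactly, which makes the closed-form step easy to sanity-check.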
Computational problems in autoregressive moving average (ARMA) models
NASA Technical Reports Server (NTRS)
Agarwal, G. C.; Goodarzi, S. M.; Oneill, W. D.; Gottlieb, G. L.
1981-01-01
The choice of the sampling interval and the selection of the order of the model in time series analysis are considered. Band limited (up to 15 Hz) random torque perturbations are applied to the human ankle joint. The applied torque input, the angular rotation output, and the electromyographic activity using surface electrodes from the extensor and flexor muscles of the ankle joint are recorded. Autoregressive moving average models are developed. A parameter constraining technique is applied to develop more reliable models. The asymptotic behavior of the system must be taken into account during parameter optimization to develop predictive models.
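As a simplified stand-in for the ARMA identification described above, the sketch below fits an ARX model (autoregressive terms plus exogenous input terms, without the moving-average noise model) to input-output records by least squares; the orders `na` and `nb` play the role of the model-order selection problem discussed in the abstract.

```python
import numpy as np

def fit_arx(u, y, na, nb):
    """Least-squares fit of an ARX model
    y[k] = sum_i a_i*y[k-i] + sum_j b_j*u[k-j],
    from input record u and output record y."""
    n0 = max(na, nb)
    rows, targets = [], []
    for k in range(n0, len(y)):
        row = [y[k - i] for i in range(1, na + 1)] + \
              [u[k - j] for j in range(1, nb + 1)]
        rows.append(row)
        targets.append(y[k])
    A = np.array(rows)
    b = np.array(targets)
    theta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return theta[:na], theta[na:]   # AR coefficients, input coefficients
```

With noise-free data generated by a known ARX system, the fit recovers the coefficients exactly, which makes order selection (comparing residuals across `na`, `nb`) straightforward to prototype.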
NASA Astrophysics Data System (ADS)
Kuzma, H. A.; Boyle, K.; Pullman, S.; Reagan, M. T.; Moridis, G. J.; Blasingame, T. A.; Rector, J. W.; Nikolaou, M.
2010-12-01
A Self Teaching Expert System (SeTES) is being developed for the analysis, design and prediction of gas production from shales. An Expert System is a computer program designed to answer questions or clarify uncertainties that its designers did not necessarily envision, which would otherwise have to be addressed by consultation with one or more human experts. Modern developments in computer learning, data mining, database management, web integration and cheap computing power are bringing the promise of expert systems to fruition. SeTES is a partial successor to Prospector, a system to aid in the identification and evaluation of mineral deposits developed by Stanford University and the USGS in the late 1970s, and one of the most famous early expert systems. Instead of the text dialogue used in early systems, the web user interface of SeTES helps a non-expert user to articulate, clarify and reason about a problem by navigating through a series of interactive wizards. The wizards identify potential solutions to queries by retrieving and combining relevant records from a database. Inferences, decisions and predictions are made from incomplete and noisy inputs using a series of probabilistic models (Bayesian Networks) which incorporate records from the database, physical laws and empirical knowledge in the form of prior probability distributions. The database is mainly populated with empirical measurements; however, an automatic algorithm supplements sparse data with synthetic data obtained through physical modeling. This is the mechanism by which SeTES teaches itself. SeTES’ predictive power is expected to grow as users contribute more data into the system. Samples are appropriately weighted to favor high-quality empirical data over low-quality or synthetic data. Finally, a set of data visualization tools digests the output measurements into graphical outputs.
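The weighting of high-quality empirical samples over low-quality or synthetic ones can be illustrated with a conjugate Gaussian update in which each observation's precision is scaled by a quality weight. This is a deliberately minimal stand-in for SeTES's Bayesian networks, with hypothetical names and parameters throughout.

```python
def weighted_gaussian_posterior(prior_mean, prior_var, data, weights, noise_var):
    """Conjugate Gaussian update where each observation's precision
    contribution is scaled by a quality weight (1.0 = trusted
    empirical sample, <1.0 = synthetic or noisy sample).
    Returns the posterior mean and variance."""
    precision = 1.0 / prior_var               # prior precision
    mean_times_prec = prior_mean / prior_var
    for x, w in zip(data, weights):
        p = w / noise_var                     # down-weighted precision
        precision += p
        mean_times_prec += p * x
    return mean_times_prec / precision, 1.0 / precision
```

A heavily weighted empirical sample pulls the posterior toward the data; the same sample tagged as synthetic (small weight) leaves the physics-based prior mostly in control.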
Sensitivity Analysis of the Integrated Medical Model for ISS Programs
NASA Technical Reports Server (NTRS)
Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.
2016-01-01
Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values to each generated output value. The method is termed "partial" because adjustments are made for the linear effects of all the other input values when calculating the correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate nonlinear relationships in the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. 
These efforts are an integral part of the overall verification, validation, and credibility review of IMM v4.0.
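A minimal sketch of the PRCC calculation described above: rank-transform the inputs and the output, regress the linear effect of all other inputs out of both the target input and the output, then correlate the residuals. This follows the method's standard definition, not IMM's actual implementation.

```python
import numpy as np

def rankdata(a):
    """Simple 1..n ranks (exact when there are no ties)."""
    order = np.argsort(a)
    ranks = np.empty(len(a))
    ranks[order] = np.arange(1, len(a) + 1)
    return ranks

def prcc(inputs, output):
    """Partial Rank Correlation Coefficient of each input column
    with the output."""
    Xr = np.column_stack([rankdata(c) for c in inputs.T])
    yr = rankdata(output)
    n, k = Xr.shape
    coeffs = []
    for j in range(k):
        # regress the other (ranked) inputs out of input j and the output
        others = np.column_stack([np.ones(n), np.delete(Xr, j, axis=1)])
        bx, *_ = np.linalg.lstsq(others, Xr[:, j], rcond=None)
        by, *_ = np.linalg.lstsq(others, yr, rcond=None)
        rx = Xr[:, j] - others @ bx
        ry = yr - others @ by
        coeffs.append(np.corrcoef(rx, ry)[0, 1])
    return np.array(coeffs)
```

Because only ranks are used, a monotone but nonlinear input-output relationship still scores near one, which is exactly why PRCC suits nonlinear simulators such as IMM.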
Innovative use of self-organising maps (SOMs) in model validation.
NASA Astrophysics Data System (ADS)
Jolly, Ben; McDonald, Adrian; Coggins, Jack
2016-04-01
We present an innovative combination of techniques for validation of numerical weather prediction (NWP) output against both observations and reanalyses using two classification schemes, demonstrated by a validation of the operational NWP 'AMPS' (the Antarctic Mesoscale Prediction System). Historically, model validation techniques have centred on case studies or statistics at various time scales (yearly/seasonal/monthly). Within the past decade the latter technique has been expanded by the addition of classification schemes in place of time scales, allowing more precise analysis. Classifications are typically generated for either the model or the observations, then used to create composites for both which are compared. Our method creates and trains a single self-organising map (SOM) on both the model output and observations, which is then used to classify both datasets using the same class definitions. In addition to the standard statistics on class composites, we compare the classifications themselves between the model and the observations. To add further context to the area studied, we use the same techniques to compare the SOM classifications with regimes developed for another study to great effect. The AMPS validation study compares model output against surface observations from SNOWWEB and existing University of Wisconsin-Madison Antarctic Automatic Weather Stations (AWS) during two months over the austral summer of 2014-15. Twelve SOM classes were defined in a '4 x 3' pattern, trained on both model output and observations of 2 m wind components, then used to classify both training datasets. Simple statistics (correlation, bias and normalised root-mean-square-difference) computed for SOM class composites showed that AMPS performed well during extreme weather events, but less well during lighter winds and poorly during the more changeable conditions between either extreme. 
Comparison of the classification time-series showed that, while correlations were lower during lighter wind periods, AMPS actually forecast the existence of those periods well, suggesting that the correlations may be unfairly low. Further investigation showed poor temporal alignment during more changeable conditions, highlighting problems AMPS has around the exact timing of events. There was also a tendency for AMPS to over-predict certain wind flow patterns at the expense of others. In order to gain a larger-scale perspective, we compared our mesoscale SOM classification time-series with synoptic-scale regimes developed by another study using ERA-Interim reanalysis output and k-means clustering. There was good alignment between the regimes and the observation classifications (observations/regimes), highlighting the effect of synoptic-scale forcing on the area. However, comparing the alignment between observations/regimes and AMPS/regimes showed that AMPS may have problems accurately resolving the strength and location of cyclones in the Ross Sea to the north of the target area.
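The core idea of training one SOM on both datasets, so that model output and observations share a single set of class definitions, can be sketched with a minimal rectangular SOM (here '4 x 3', as in the study); the training schedule below is an illustrative assumption.

```python
import numpy as np

def train_som(data, rows=4, cols=3, iters=2000, seed=0):
    """Minimal rectangular SOM trained on pooled samples (e.g. model
    output and observations stacked together), so both datasets are
    later classified against the same node weights."""
    rng = np.random.default_rng(seed)
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    W = data[rng.integers(0, len(data), rows * cols)].astype(float)
    for t in range(iters):
        lr = 0.5 * (1 - t / iters) + 0.01                 # decaying learning rate
        radius = max(rows, cols) / 2 * (1 - t / iters) + 0.5
        x = data[rng.integers(0, len(data))]
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))       # best matching unit
        d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2 * radius ** 2))               # neighbourhood function
        W += lr * h[:, None] * (x - W)
    return W

def classify(W, data):
    """Assign every sample to its nearest SOM node (its class)."""
    d = ((data[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)
    return np.argmin(d, axis=1)
```

Running `classify` separately on the model output and on the observations then yields two comparable class time-series, the quantity the study's alignment analysis is built on.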
Crevillén-García, D
2018-04-01
Time-consuming numerical simulators for solving groundwater flow and dissolution models of physico-chemical processes in deep aquifers normally require some of the model inputs to be defined in high-dimensional spaces in order to return realistic results. Sometimes, the outputs of interest are spatial fields leading to high-dimensional output spaces. Although Gaussian process emulation has been satisfactorily used for computing faithful and inexpensive approximations of complex simulators, these have been mostly applied to problems defined in low-dimensional input spaces. In this paper, we propose a method for simultaneously reducing the dimensionality of very high-dimensional input and output spaces in Gaussian process emulators for stochastic partial differential equation models while retaining the qualitative features of the original models. This allows us to build a surrogate model for the prediction of spatial fields in such time-consuming simulators. We apply the methodology to a model of convection and dissolution processes occurring during carbon capture and storage.
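A minimal sketch of the idea of emulating a simulator with high-dimensional spatial-field outputs: compress the output fields with PCA, fit an independent Gaussian process to each retained score, and expand predictions back to the full field. The kernel choice and hyperparameters below are illustrative assumptions, and the paper's input-space reduction is omitted.

```python
import numpy as np

def pca_reduce(Y, k):
    """Project output fields (rows of Y) onto their first k
    principal components."""
    mean = Y.mean(axis=0)
    Yc = Y - mean
    _, _, Vt = np.linalg.svd(Yc, full_matrices=False)
    return Yc @ Vt[:k].T, Vt[:k], mean

def gp_fit_predict(X, z, Xs, length=1.0, noise=1e-8):
    """Zero-mean GP regression with a squared-exponential kernel."""
    def kern(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)
    K = kern(X, X) + noise * np.eye(len(X))
    return kern(Xs, X) @ np.linalg.solve(K, z)

def emulate(X, Y, Xs, k=3):
    """Emulator: PCA-compress outputs, GP-predict each score at the
    new inputs Xs, then expand back to full fields."""
    Z, Vk, mean = pca_reduce(Y, k)
    Zs = np.column_stack(
        [gp_fit_predict(X, Z[:, j], Xs) for j in range(Z.shape[1])])
    return Zs @ Vk + mean
```

The expensive simulator is called only to build the training pairs (X, Y); afterwards the emulator returns approximate fields at new inputs at negligible cost.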
A 50-kW Module Power Station of Directly Solar-Pumped Iodine Laser
NASA Technical Reports Server (NTRS)
Choi, S. H.; Lee, J. H.; Meador, W. E.; Conway, E. J.
1997-01-01
The conceptual design of a 50 kW Directly Solar-Pumped Iodine Laser (DSPIL) module was developed for a space-based power station that transmits its coherent-beam power to users such as the moon, Martian rovers, or other satellites with large (greater than 25 kW) electric power requirements. Integration of multiple modules would provide power exceeding that of a single module by combining and directing the coherent beams to the user's receiver. The model developed for the DSPIL system conservatively predicts a laser output power (50 kW) that is much less than the laser output (93 kW) obtained from the gain-volume-ratio extrapolation of experimental data. The difference in laser outputs may be attributed to the reflector configurations adopted in the design and the experiment. Even though photon absorption by multiple reflections in the experimental cavity setup was more efficient, the maximum secondary absorption amounts to only 24.7 percent of the primary. However, the gain-volume-ratio extrapolation shows 86 percent more power output than the theoretical estimate, roughly 60 percent more than can be accounted for by the secondary absorption. Such a difference indicates that the theoretical model adopted in the study underestimates the overall performance of the DSPIL. This fact may allow a more flexible and aggressive selection of design parameters than was used in this design study. The design achieves an overall specific power of approximately 5 W/kg and a total mass of 10 metric tons.
TFaNS Tone Fan Noise Design/Prediction System. Volume 2; User's Manual; 1.4
NASA Technical Reports Server (NTRS)
Topol, David A.; Eversman, Walter
1999-01-01
TFaNS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Lewis (presently NASA Glenn). The purpose of this system is to predict tone noise emanating from a fan stage, including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. These effects have been added to an existing annular duct/isolated stator noise prediction capability. TFaNS consists of: the codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files; CUP3D, the Fan Noise Coupling Code that reads these files, solves the coupling problem, and outputs the desired noise predictions; and AWAKEN, the CFD/Measured Wake Postprocessor, which reformats CFD wake predictions and/or measured wake data so they can be used by the system. This volume of the report provides information on code input and file structure essential for potential users of TFaNS. This report is divided into three volumes: Volume 1: System Description, CUP3D Technical Documentation, and Manual for Code Developers; Volume 2: User's Manual, TFaNS Version 1.4; Volume 3: Evaluation of System Codes.
TFaNS Tone Fan Noise Design/Prediction System. Volume 3; Evaluation of System Codes
NASA Technical Reports Server (NTRS)
Topol, David A.
1999-01-01
TFaNS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Lewis (presently NASA Glenn). The purpose of this system is to predict tone noise emanating from a fan stage, including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. These effects have been added to an existing annular duct/isolated stator noise prediction capability. TFaNS consists of: the codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files; CUP3D, the Fan Noise Coupling Code that reads these files, solves the coupling problem, and outputs the desired noise predictions; and AWAKEN, the CFD/Measured Wake Postprocessor, which reformats CFD wake predictions and/or measured wake data so they can be used by the system. This volume of the report evaluates TFaNS against full-scale and ADP 22" rig data using the semi-empirical wake modelling in the system. This report is divided into three volumes: Volume 1: System Description, CUP3D Technical Documentation, and Manual for Code Developers; Volume 2: User's Manual, TFaNS Version 1.4; Volume 3: Evaluation of System Codes.
NASA Astrophysics Data System (ADS)
Deo, Ravinesh C.; Şahin, Mehmet
2015-07-01
The forecasting of drought based on the cumulative influence of rainfall, temperature and evaporation is greatly beneficial for mitigating adverse consequences on water-sensitive sectors such as agriculture, ecosystems, wildlife, tourism, recreation, crop health and hydrologic engineering. Predictive models of drought indices help in assessing water scarcity situations, drought identification and severity characterization. In this paper, we tested the feasibility of the Artificial Neural Network (ANN) as a data-driven model for predicting the monthly Standardized Precipitation and Evapotranspiration Index (SPEI) for eight candidate stations in eastern Australia, using predictive variable data from 1915 to 2005 for training and simulated data for the period 2006-2012. The predictive variables were monthly rainfall totals, mean temperature, minimum temperature, maximum temperature and evapotranspiration, supplemented by large-scale climate indices (Southern Oscillation Index, Pacific Decadal Oscillation, Southern Annular Mode and Indian Ocean Dipole) and Sea Surface Temperatures (Nino 3.0, 3.4 and 4.0). A total of 30 three-layer ANN models were developed. To determine the best combination of learning algorithm, hidden-layer transfer function and output function for the optimum model, the Levenberg-Marquardt and Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton backpropagation algorithms were utilized to train the networks, tangent and logarithmic sigmoid equations were used as the activation functions, and linear, logarithmic and tangent sigmoid equations were used as the output function. The best ANN architecture had 18 input neurons, 43 hidden neurons and 1 output neuron, trained using the Levenberg-Marquardt learning algorithm with the tangent sigmoid equation as both the activation and output function. 
An evaluation of the model performance based on statistical rules yielded time-averaged Coefficient of Determination, Root Mean Squared Error and Mean Absolute Error values ranging from 0.9945 to 0.9990, 0.0466 to 0.1117, and 0.0013 to 0.0130, respectively, for individual stations. Also, the Willmott's Index of Agreement and the Nash-Sutcliffe Coefficient of Efficiency were between 0.932-0.959 and 0.977-0.998, respectively. When checked for the severity (S), duration (D) and peak intensity (I) of drought events determined from the simulated and observed SPEI, differences in drought parameters ranged from -1.41 to 0.64%, -2.17 to 1.92% and -3.21 to 1.21%, respectively. Based on these performance evaluation measures, we aver that the Artificial Neural Network model is a useful data-driven tool for forecasting monthly SPEI and its drought-related properties in the region of study.
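As an illustration only (not the authors' code), the kind of network described above can be sketched with scikit-learn, using the quasi-Newton "lbfgs" solver as a stand-in for the BFGS/Levenberg-Marquardt algorithms named in the abstract; the synthetic predictors and target below are assumptions standing in for the station records:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for the predictors named in the abstract
# (rainfall, temperatures, evapotranspiration, climate indices).
n = 600
X = rng.normal(size=(n, 8))
# A smooth nonlinear target standing in for monthly SPEI values.
y = np.tanh(X[:, 0] - 0.5 * X[:, 1]) + 0.3 * X[:, 2] + 0.05 * rng.normal(size=n)

X_train, X_test = X[:480], X[480:]
y_train, y_test = y[:480], y[480:]

# A 3-layer network (input, one hidden layer of 43 neurons, output) with
# tanh activations, trained with the quasi-Newton 'lbfgs' solver.
model = MLPRegressor(hidden_layer_sizes=(43,), activation="tanh",
                     solver="lbfgs", max_iter=2000, random_state=0)
model.fit(X_train, y_train)
r2 = r2_score(y_test, model.predict(X_test))
print(round(r2, 3))
```

On held-out data the fit is close, mirroring the high Coefficient of Determination values reported above.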
Optical fiber sensors and signal processing for intelligent structure monitoring
NASA Technical Reports Server (NTRS)
Thomas, Daniel; Cox, Dave; Lindner, D. K.; Claus, R. O.
1989-01-01
Few-mode optical fibers have been shown to produce predictable interference patterns when placed under strain. The use of a modal domain sensor in a vibration control experiment is described. An optical fiber is bonded along the length of a flexible beam. Output from the modal domain sensor is used to suppress vibrations induced in the beam. A distributed effect model for the modal domain sensor is developed. This model is combined with the beam and actuator dynamics to produce a system suitable for control design. Computer simulations predict open and closed loop dynamic responses. An experimental apparatus is described and experimental results are presented.
Random Predictor Models for Rigorous Uncertainty Quantification: Part 2
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean, the variance, and the range of the model's parameters, and thus of the output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfy mild stochastic assumptions, and the optimization problem used to calculate the RPM is convex (or its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation will fall within the predicted ranges, is bounded rigorously.
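A minimal sketch of the "tightest prediction" idea, assuming a simple minimax (Chebyshev) polynomial fit as the mean and a symmetric band scaled so every observation lies within k standard deviations of it; this illustrates the optimization flavor only, not the paper's actual formulation, and all data here are synthetic:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 80)
y = 1.0 + 2.0 * x - x**2 + rng.uniform(-0.2, 0.2, size=x.size)

# Polynomial basis for the mean prediction m(x) = c0 + c1*x + c2*x^2.
P = np.column_stack([np.ones_like(x), x, x**2])

# Minimize t subject to |y - P c| <= t (a minimax fit).
# Decision variables: [c0, c1, c2, t].
A_ub = np.vstack([np.hstack([P, -np.ones((x.size, 1))]),
                  np.hstack([-P, -np.ones((x.size, 1))])])
b_ub = np.concatenate([y, -y])
c_obj = np.array([0.0, 0.0, 0.0, 1.0])
res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 4)
c_fit, t = res.x[:3], res.x[3]

# With k prescribed standard deviations, the tightest sigma putting every
# observation within k*sigma of the mean is t / k.
k = 2.0
sigma = t / k
resid = np.abs(y - P @ c_fit)
print(bool(np.all(resid <= k * sigma + 1e-6)))
```

By construction every observation falls inside the predicted band, which is the weakest form of the optimality property described above.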
Estimates of emergency operating capacity in U.S. manufacturing industries: 1994--2005
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belzer, D.B.
1997-02-01
To develop integrated policies for mobilization preparedness, planners require estimates and projections of available productive capacity during national emergency conditions. This report develops projections of national emergency operating capacity (EOC) for 458 US manufacturing industries at the 4-digit Standard Industrial Classification (SIC) level. These measures are intended for use in planning models that are designed to predict the demands for detailed industry sectors that would occur under conditions such as a military mobilization or a major national disaster. This report is part of an ongoing series of studies prepared by the Pacific Northwest National Laboratory to support mobilization planning studies of the Federal Emergency Management Agency/US Department of Defense (FEMA/DOD). Earlier sets of EOC estimates were developed in 1985 and 1991. This study presents estimates of EOC through 2005. As in the 1991 study, projections of capacity were based upon extrapolations of equipment capital stocks. The methodology uses time series regression models based on industry data to obtain a response function of industry capital stock to levels of industrial output. The distributed lag coefficients of these response functions are then used with projected outputs to extrapolate EOC beyond its 1994 level. Projections of industrial outputs were taken from the intermediate-term forecast of the US economy prepared by INFORUM (Interindustry Forecasting Model, University of Maryland) in the spring of 1996.
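A toy sketch of the distributed-lag step (not the report's actual model): capital stock is regressed on current and lagged industrial output, and the estimated lag coefficients are applied to projected outputs; the series, lag weights and projections below are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic industry output series, and a capital stock that responds to
# current and lagged output (a simple distributed-lag response function).
T = 200
output = 100 + np.cumsum(rng.normal(0, 1, T))
true_lags = np.array([0.5, 0.3, 0.2])                 # assumed lag weights
stock = np.convolve(output, true_lags, mode="valid")  # length T - 2
stock = stock + rng.normal(0, 0.1, stock.size)

# Regress capital stock on output_t, output_{t-1}, output_{t-2}.
X = np.column_stack([output[2:], output[1:-1], output[:-2]])
coef, *_ = np.linalg.lstsq(X, stock, rcond=None)

# The estimated lag coefficients can then be applied to projected outputs
# to extrapolate capacity, as in the report's EOC projections.
projected_output = np.array([260.0, 262.0, 264.0])    # hypothetical
projected_stock = coef @ projected_output[::-1]
print(np.round(coef, 2))
```

The recovered coefficients approximate the assumed lag weights, which is what licenses using them for extrapolation.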
NASA Technical Reports Server (NTRS)
Morris, A. Terry
1999-01-01
This paper examines various sources of error in MIT's improved top-oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, yielded the best error covariance, produced consistent parameter estimates, and provided valid and physically sensible parameters. This paper also shows that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
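A minimal sketch of output-error estimation on a hypothetical first-order thermal model (not MIT's actual model): the candidate model is simulated from the input alone and its parameters chosen to minimize the simulated-versus-measured output error, rather than regressing on noisy measured outputs as equation-error least squares does; all values are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# First-order stand-in for the top-oil model: theta[t+1] = a*theta[t] + b*u[t].
a_true, b_true = 0.9, 0.5
T = 300
u = rng.uniform(0, 1, T)                      # load input
theta = np.zeros(T)
for t in range(T - 1):
    theta[t + 1] = a_true * theta[t] + b_true * u[t]
y = theta + rng.normal(0, 0.05, T)            # noisy measured temperature

def simulate(params):
    a, b = params
    sim = np.zeros(T)
    for t in range(T - 1):
        sim[t + 1] = a * sim[t] + b * u[t]    # driven by the input only
    return sim

# Output-error criterion: compare the freely simulated output to the data.
def cost(params):
    return np.sum((y - simulate(params)) ** 2)

res = minimize(cost, x0=[0.5, 0.5], method="Nelder-Mead")
print(np.round(res.x, 2))
```

Because the model is never fed the noisy measurements, the estimates stay consistent under output noise, which is the property the paper exploits.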
Use of medium-range numerical weather prediction model output to produce forecasts of streamflow
Clark, M.P.; Hay, L.E.
2004-01-01
This paper examines an archive containing over 40 years of 8-day atmospheric forecasts over the contiguous United States from the NCEP reanalysis project to assess the possibilities for using medium-range numerical weather prediction model output for predictions of streamflow. This analysis shows the biases in the NCEP forecasts to be quite extreme. In many regions, systematic precipitation biases exceed 100% of the mean, with temperature biases exceeding 3°C. In some locations, biases are even higher. The accuracy of NCEP precipitation and 2-m maximum temperature forecasts is computed by interpolating the NCEP model output for each forecast day to the location of each station in the NWS cooperative network and computing the correlation with station observations. Results show that the accuracy of the NCEP forecasts is rather low in many areas of the country. Most apparent is the generally low skill in precipitation forecasts (particularly in July) and the low skill in temperature forecasts in the western United States, the eastern seaboard, and the southern tier of states. These results outline a clear need for additional processing of the NCEP Medium-Range Forecast Model (MRF) output before it is used for hydrologic predictions. Techniques of model output statistics (MOS) are used in this paper to downscale the NCEP forecasts to station locations. Forecasted atmospheric variables (e.g., total column precipitable water, 2-m air temperature) are used as predictors in a forward screening multiple linear regression model to improve forecasts of precipitation and temperature for stations in the National Weather Service cooperative network. This procedure effectively removes all systematic biases in the raw NCEP precipitation and temperature forecasts. MOS guidance also results in substantial improvements in the accuracy of maximum and minimum temperature forecasts throughout the country. For precipitation, forecast improvements were less impressive.
MOS guidance increases the accuracy of precipitation forecasts over the northeastern United States, but overall, the accuracy of MOS-based precipitation forecasts is slightly lower than that of the raw NCEP forecasts. Four basins in the United States were chosen as case studies to evaluate the value of MRF output for predictions of streamflow. Streamflow forecasts using MRF output were generated for one rainfall-dominated basin (Alapaha River at Statenville, Georgia) and three snowmelt-dominated basins (Animas River at Durango, Colorado; East Fork of the Carson River near Gardnerville, Nevada; and Cle Elum River near Roslyn, Washington). Hydrologic model output forced with measured-station data was used as "truth" to focus attention on the hydrologic effects of errors in the MRF forecasts. Eight-day streamflow forecasts produced using the MOS-corrected MRF output as input (MOS) were compared with those produced using the climatic Ensemble Streamflow Prediction (ESP) technique. MOS-based streamflow forecasts showed increased skill in the snowmelt-dominated river basins, where daily variations in streamflow are strongly forced by temperature. In contrast, the skill of MOS forecasts in the rainfall-dominated basin (the Alapaha River) was equivalent to the skill of the ESP forecasts. Further improvements in streamflow forecasts require more accurate local-scale forecasts of precipitation and temperature, more accurate specification of basin initial conditions, and more accurate model simulations of streamflow. © 2004 American Meteorological Society.
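The forward screening regression used for the MOS downscaling can be sketched as follows; this is a generic greedy predictor-selection loop on synthetic data, not the authors' implementation, and the predictor count, coefficients and stopping threshold are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Candidate forecast predictors (e.g., precipitable water, 2-m temperature,
# winds); only the first two actually matter in this synthetic example.
n, p = 400, 6
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(0, 0.3, n)

def fit_rss(cols):
    """Residual sum of squares of an OLS fit on the given columns."""
    A = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sum((y - A @ coef) ** 2)

# Forward screening: greedily add the predictor giving the largest RSS
# drop, stopping once the relative improvement is small.
selected, rss = [], np.sum((y - y.mean()) ** 2)
while len(selected) < p:
    best_j, best_rss = None, rss
    for j in range(p):
        if j in selected:
            continue
        r = fit_rss(selected + [j])
        if r < best_rss:
            best_j, best_rss = j, r
    if best_j is None or (rss - best_rss) / rss < 0.02:
        break
    selected.append(best_j)
    rss = best_rss
print(sorted(selected))
```

The loop keeps the two informative predictors and rejects the noise columns, which is the bias-correcting, dimension-reducing behavior the MOS step relies on.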
NASA Astrophysics Data System (ADS)
Abiri, Olufunminiyi; Twala, Bhekisipho
2017-08-01
In this paper, a multilayer feedforward neural network constitutive model with Bayesian regularization is developed for alloy 316L during high strain rate and high temperature plastic deformation. The input variables are strain rate, temperature and strain, while the output value is the flow stress of the material. The results show that the use of the Bayesian regularization technique reduces the potential for overfitting and overtraining, thereby improving the prediction quality of the model. The model predictions are in good agreement with experimental measurements. The measurement data used for the network training and model comparison were taken from the relevant literature. The developed model is robust, as it can be generalized to deformation conditions slightly below or above those of the training dataset.
Khanali, Majid; Mobli, Hossein; Hosseinzadeh-Bandbafha, Homa
2017-12-01
In this study, an artificial neural network (ANN) model was developed for predicting the yield and life cycle environmental impacts based on the energy inputs required in the processing of black tea, green tea, and oolong tea in Guilan province of Iran. A life cycle assessment (LCA) approach was used to investigate the environmental impact categories of processed tea based on the cradle-to-gate approach, i.e., from the production of input materials using raw materials to the gate of the tea processing units, i.e., packaged tea. Thus, all the tea processing operations such as withering, rolling, fermentation, drying, and packaging were considered in the analysis. The initial data were obtained from tea processing units, while the required data about the background system were extracted from the EcoInvent 2.2 database. LCA results indicated that the diesel fuel and corrugated paper box used in the drying and packaging operations, respectively, were the main hotspots. The black tea processing unit caused the highest pollution among the three processing units. Three feed-forward back-propagation ANN models based on the Levenberg-Marquardt training algorithm, with two hidden layers accompanied by sigmoid activation functions and a linear transfer function in the output layer, were applied for the three types of processed tea. The neural networks were developed based on the energy equivalents of eight different input parameters (energy equivalents of fresh tea leaves, human labor, diesel fuel, electricity, adhesive, carton, corrugated paper box, and transportation) and 11 output parameters (yield, global warming, abiotic depletion, acidification, eutrophication, ozone layer depletion, human toxicity, freshwater aquatic ecotoxicity, marine aquatic ecotoxicity, terrestrial ecotoxicity, and photochemical oxidation). The results showed that the developed ANN models, with R² values in the range of 0.878 to 0.990, had excellent performance in predicting all the output variables based on the inputs.
Energy consumption for processing of green tea, oolong tea, and black tea were calculated as 58,182, 60,947, and 66,301 MJ per ton of dry tea, respectively.
Pandey, Daya Shankar; Das, Saptarshi; Pan, Indranil; Leahy, James J; Kwapinski, Witold
2016-12-01
In this paper, multi-layer feed-forward neural networks are used to predict the lower heating value of gas (LHV), the lower heating value of gasification products including tars and entrained char (LHVp) and the syngas yield during gasification of municipal solid waste (MSW) in a fluidized bed reactor. These artificial neural networks (ANNs) with different architectures are trained using the Levenberg-Marquardt (LM) back-propagation algorithm, and cross validation is performed to ensure that the results generalise to other unseen datasets. A rigorous study is carried out on optimally choosing the number of hidden layers, the number of neurons in the hidden layer and the activation function in a network using multiple Monte Carlo runs. Nine input and three output parameters are used to train and test various neural network architectures in both multiple-output and single-output prediction paradigms using the available experimental datasets. The model selection procedure is carried out to ascertain the best network architecture in terms of predictive accuracy. The simulation results show that the ANN based methodology is a viable alternative which can be used to predict the performance of a fluidized bed gasifier. Copyright © 2016 Elsevier Ltd. All rights reserved.
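The Monte Carlo model-selection step can be sketched generically as repeated random train/test splits per candidate architecture; this is an illustration with scikit-learn on synthetic gasifier-like data, not the paper's code, and the input dimensions, candidate hidden sizes and split counts are assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(5)

# Synthetic data: 4 inputs, 1 output (standing in for, e.g., syngas LHV).
X = rng.uniform(size=(300, 4))
y = np.sin(2 * X[:, 0]) + X[:, 1] * X[:, 2] + 0.05 * rng.normal(size=300)

# Multiple Monte Carlo train/test splits per candidate architecture.
candidates = [(4,), (8,), (16,)]
scores = {}
for h in candidates:
    errs = []
    for seed in range(5):
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25,
                                              random_state=seed)
        net = MLPRegressor(hidden_layer_sizes=h, activation="tanh",
                           solver="lbfgs", max_iter=2000, random_state=seed)
        net.fit(Xtr, ytr)
        errs.append(mean_squared_error(yte, net.predict(Xte)))
    scores[h] = float(np.mean(errs))

best = min(scores, key=scores.get)
print(best, round(scores[best], 4))
```

Averaging over several random splits guards against a single lucky split favoring an over- or under-sized network.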
NASA Astrophysics Data System (ADS)
Schroeder, R.; Jacobs, J. M.; Vuyovich, C.; Cho, E.; Tuttle, S. E.
2017-12-01
Each spring the Red River of the North basin (RRB), located between the states of Minnesota and North Dakota and southern Manitoba, is vulnerable to dangerous spring snowmelt floods. Flat terrain, low-permeability soils and a lack of satisfactory ground observations of snowpack conditions make accurate predictions of the onset and magnitude of major spring flood events in the RRB very challenging. This study investigated the potential benefit of using gridded snow water equivalent (SWE) products from passive microwave satellite missions and model output simulations to improve snowmelt flood predictions in the RRB using NOAA's operational Community Hydrologic Prediction System (CHPS). Level-3 satellite SWE products from AMSR-E, AMSR2 and SSM/I, as well as SWE computed from Level-2 brightness temperature (Tb) measurements, together with model output simulations of SWE from SNODAS and GlobSnow-2, were chosen to support the snowmelt modeling exercises. SWE observations were aggregated spatially (i.e., to the NOAA North Central River Forecast Center forecast basins) and temporally (i.e., by obtaining daily screened and weekly unscreened maximum SWE composites) to assess the value of daily satellite SWE observations relative to weekly maximums. Data screening methods removed the impacts of snowmelt and cloud contamination on SWE and consisted of diurnal SWE differences and a temperature-insensitive polarization difference ratio, respectively. We examined the ability of the satellite and model output simulations to capture peak SWE and investigated the temporal accuracies of screened and unscreened satellite and model output SWE. The resulting SWE observations were employed to update the SNOW-17 snow accumulation and ablation model of CHPS to assess the benefit of using temporally and spatially consistent SWE observations for snowmelt predictions in two test basins in the RRB.
Berlinguer, Fiammetta; Madeddu, Manuela; Pasciu, Valeria; Succu, Sara; Spezzigu, Antonio; Satta, Valentina; Mereu, Paolo; Leoni, Giovanni G; Naitana, Salvatore
2009-01-01
Currently, the assessment of sperm function in a raw or processed semen sample is not able to reliably predict sperm ability to withstand freezing and thawing procedures, in vivo fertility, and/or assisted reproductive biotechnology (ART) outcomes. The aim of the present study was to investigate which parameters among a battery of analyses could predict subsequent spermatozoa in vitro fertilization ability, and hence blastocyst output, in a goat model. Ejaculates were obtained by artificial vagina from 3 adult goats (Capra hircus) aged 2 years (A, B and C). In order to assess the predictive value of viability, computer assisted sperm analyzer (CASA) motility parameters and intracellular ATP concentration before and after thawing, and of DNA integrity after thawing, on subsequent embryo output after an in vitro fertility test, a logistic regression analysis was used. Individual differences in semen parameters were evident for semen viability after thawing and DNA integrity. Results of the IVF test showed that spermatozoa collected from A and B led to higher cleavage rates (p < 0.01) and blastocyst output (p < 0.05) compared with C. The logistic regression analysis model explained a deviance of 72% (p < 0.0001), directly related with the mean percentage of rapid spermatozoa in fresh semen (p < 0.01), semen viability after thawing (p < 0.01), and with two of the three comet parameters considered, i.e., tail DNA percentage and comet length (p < 0.0001). DNA integrity alone had a high predictive value on IVF outcome with frozen/thawed semen (deviance explained: 57%). The model proposed here represents one of the many possible ways to explain differences found in embryo output following IVF with different semen donors and may represent a useful tool to select the most suitable donors for semen cryopreservation. PMID:19900288
Roy, Subrata K
2002-03-01
In developing countries like India, where the incidence of protein-calorie malnutrition is high and mechanization is at a minimum, human labor provides much of the power for physical activity. This study presents anthropometric measurements, somatotypes, food intakes, energy expenditures, and work outputs of Oraon agricultural laborers of the Jalpaiguri district, West Bengal, in an attempt to identify the factors that predict high work productivity. Specifically, this study investigates 1) the relationship between morphological variation (anthropometric measurements and somatotype) and work productivity, 2) the nature and extent of the relationship between nutritional status and work productivity, and 3) the best predictor variables of work output. Classification of groups on the basis of median values of work output show that in the aggregate, the high productive groups are significantly younger than low-productive groups in both sexes. Before age-adjustment, the high productive groups show higher mean values of a few body dimensions, though these differ by sex, and both males and females exhibit a normal range of blood pressure and pulse rate values. Mean values of grip strength and back strength are higher in high-output men and women. Mean values of both food intake and energy expenditure are also higher among men in high-output groups, with only food intake higher in high-output women. However, after eliminating the effects of age, the differences between low-productive groups and high-productive groups in most of the variables are not significant. Productivity predictors in males consist of age, food intake and chest girth (inhalation). Females, on the other hand, show age and grip strength (left) as work output predictors. Copyright 2002 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Perez, Marc J. R.
With the extraordinary recent growth of the solar photovoltaic industry, it is paramount to address the biggest barrier to its high penetration across global electrical grids: the inherent variability of the solar resource. This resource variability arises from largely unpredictable meteorological phenomena and from the predictable rotation of the earth around the sun and about its own axis. To achieve very high photovoltaic penetration, the imbalance between the variable supply of sunlight and demand must be alleviated. The research detailed herein consists of the development of a computational model which seeks to optimize the combination of three supply-side solutions to solar variability that minimizes the aggregate cost of electricity generated therefrom: storage (where excess solar generation is stored when it exceeds demand, for utilization when it does not meet demand), interconnection (where solar generation is spread across a large geographic area and electrically interconnected to smooth overall regional output) and smart curtailment (where solar capacity is oversized and excess generation is curtailed at key times to minimize the need for storage). This model leverages a database, created in the context of this doctoral work, of satellite-derived photovoltaic output spanning 10 years at a daily interval for 64,000 unique geographic points across the globe. Underpinning the model's design and results, the database was used to further the understanding of solar resource variability at timescales greater than 1 day. It is shown that, as at shorter timescales, cloud/weather-induced solar variability decreases with geographic extent, and that the geographic extent at which variability is mitigated increases with timescale and is modulated by the prevailing speed of clouds/weather systems.
Unpredictable solar variability up to the timescale of 30 days is shown to be mitigated across a geographic extent of only 1500 km if that geographic extent is oriented in a north/south bearing. Using technical and economic data reflecting today's real costs for solar generation technology, storage and electric transmission in combination with this model, we determined the minimum cost combination of these solutions to transform the variable output from solar plants into 3 distinct output profiles: a constant output equivalent to a baseload power plant, a well-defined seasonally-variable output with no weather-induced variability, and a variable output that is 100% predictable on a multi-day-ahead basis. In order to do this, over 14,000 model runs were performed by varying the desired output profile, the amount of energy curtailment, the penetration of solar energy and the geographic region across the continental United States. Despite the cost of supplementary electric transmission, geographic interconnection has the potential to reduce the levelized cost of electricity when meeting any of the studied output profiles by over 65% compared to when only storage is used. Energy curtailment, despite the cost of underutilizing solar energy capacity, has the potential to reduce the total cost of electricity when meeting any of the studied output profiles by over 75% compared to when only storage is used. The three variability mitigation strategies are thankfully not mutually exclusive. When combined at their ideal levels, each of the regions studied saw a reduction in the cost of electricity of over 80% compared to when only energy storage is used to meet a specified output profile.
When including current costs for solar generation, transmission and energy storage, an optimum configuration can conservatively provide guaranteed baseload power generation with solar across the entire continental United States (equivalent to a nuclear power plant with no down time) for less than $0.19 per kilowatt-hour. If solar is preferentially clustered in the southwest instead of evenly spread throughout the United States, and we adopt future expected costs for solar generation of $1 per watt, optimal model results show that meeting a 100% predictable output target with solar will cost no more than $0.08 per kilowatt-hour.
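The storage-versus-curtailment trade-off above can be sketched with a toy sizing calculation: for a hypothetical normalized daily solar profile, the storage needed to deliver a constant baseload shrinks when capacity is oversized and the surplus curtailed. The profile, capacities and demand level are all assumptions, not the dissertation's data:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical normalized daily solar output for one year: a seasonal
# cycle modulated by random weather.
days = 365
seasonal = 1.0 + 0.3 * np.sin(2 * np.pi * np.arange(days) / 365)
weather = rng.uniform(0.5, 1.0, days)
solar = seasonal * weather          # long-run mean near 0.75

def storage_needed(capacity, demand=0.75):
    """Storage energy needed to deliver a constant `demand` from
    `capacity` times the base profile; surplus refills the store and
    anything beyond that is curtailed (worst cumulative deficit)."""
    deficit, worst = 0.0, 0.0
    for g in capacity * solar:
        deficit = max(0.0, deficit + demand - g)
        worst = max(worst, deficit)
    return worst

s1 = storage_needed(1.0)   # no oversizing: storage alone
s2 = storage_needed(1.5)   # 50% oversizing plus curtailment
print(round(s1, 1), round(s2, 1))
```

Because generation is larger at every time step in the oversized case, the worst cumulative deficit (and hence the store) can only shrink, which is the mechanism behind the cost reductions reported above.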
Information Processing and Collective Behavior in a Model Neuronal System
2014-03-28
for an AFOSR project headed by Steve Reppert on Monarch Butterfly navigation. We visited the Reppert lab at the UMASS Medical School and have had many...developed a detailed mathematical model of the mammalian circadian clock. Our model can accurately predict diverse experimental data including the...i.e. P1 affects P2 which affects P3 …). The output of the system is calculated (measurements), and the interactions are forgotten. Based on
Modified linear predictive coding approach for moving target tracking by Doppler radar
NASA Astrophysics Data System (ADS)
Ding, Yipeng; Lin, Xiaoyi; Sun, Ke-Hui; Xu, Xue-Mei; Liu, Xi-Yao
2016-07-01
Doppler radar is a cost-effective tool for moving target tracking, which can support a large range of civilian and military applications. A modified linear predictive coding (LPC) approach is proposed to increase the target localization accuracy of the Doppler radar. Based on time-frequency analysis of the received echo, the proposed approach first estimates the noise statistical parameters in real time and constructs an adaptive filter to intelligently suppress the noise interference. Then, a linear predictive model is applied to extend the available data, which helps improve the resolution of the target localization result. Compared with the traditional LPC method, which decides the extension data length empirically, the proposed approach develops an error array to evaluate the prediction accuracy and thus adjusts the optimum extension data length intelligently. Finally, the prediction error array is superimposed on the predictor output to correct the prediction error. A series of experiments are conducted to illustrate the validity and performance of the proposed techniques.
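The data-extension step can be illustrated with plain LPC (the unmodified baseline, not the authors' adaptive scheme): fit linear predictor coefficients by least squares on a noisy echo-like signal, then free-run the predictor to extend the record. The signal, order and extension length are assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# Received echo: a noisy sinusoid standing in for the Doppler return.
n = 256
t = np.arange(n)
signal = np.sin(2 * np.pi * 0.05 * t) + 0.02 * rng.normal(size=n)

# Fit order-p LPC coefficients by least squares on past samples.
p = 8
rows = np.array([signal[i:i + p] for i in range(n - p)])
targets = signal[p:]
coef, *_ = np.linalg.lstsq(rows, targets, rcond=None)

# Extend the data: each new sample is predicted from the previous p samples.
extended = list(signal)
for _ in range(64):
    extended.append(np.dot(coef, extended[-p:]))
extended = np.array(extended)

# Compare the extension against the clean continuation of the sinusoid.
truth = np.sin(2 * np.pi * 0.05 * np.arange(n, n + 64))
err = np.max(np.abs(extended[n:] - truth))
print(round(float(err), 3))
```

The extended samples track the true continuation closely; the modified approach described above would additionally monitor a prediction-error array to pick how far such an extension can safely run.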
The Inception of OMA in the Development of Modal Testing Technology for Wind Turbines
NASA Technical Reports Server (NTRS)
James, George H., III; Carne, Thomas G.
2008-01-01
Wind turbines are immense, flexible structures with aerodynamic forces acting on the rotating blades at harmonics of the turbine rotational frequency, which are comparable to the modal frequencies of the structure. Predicting and experimentally measuring the modal frequencies of wind turbines has been important to their successful design and operation. Performing modal tests on wind turbine structures over 100 meters tall is a substantial challenge, which has inspired innovative developments in modal test technology. For wind turbines, a further complication is that the modal frequencies are dependent on the turbine rotation speed. The history and development of a new technique for acquiring the modal parameters using output-only response data, called the Natural Excitation Technique (NExT), will be reviewed, showing historical tests and techniques. The initial attempts at output-only modal testing began in the late 1980's with the development of NExT in the 1990's. NExT was a predecessor to OMA, developed to overcome these challenges of testing immense structures excited with environmental inputs. We will trace the difficulties and successes of wind turbine modal testing from 1982 to the present. Keywords: OMA, Modal Analysis, NExT, Wind Turbines, Wind Excitation
Marken, Richard S; Horth, Brittany
2011-06-01
Experimental research in psychology is based on an open-loop causal model which assumes that sensory input causes behavioral output. This model was tested in a tracking experiment where participants were asked to control a cursor, keeping it aligned with a target by moving a mouse to compensate for disturbances of differing difficulty. Since cursor movements (inputs) are the only observable cause of mouse movements (outputs), the open-loop model predicts that there will be a correlation between input and output that increases as tracking performance improves. In fact, the correlation between sensory input and motor output is very low regardless of the quality of tracking performance; causality, in terms of the effect of input on output, does not seem to imply correlation in this situation. This surprising result can be explained by a closed-loop model which assumes that input is causing output while output is causing input.
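The closed-loop result can be reproduced in a toy simulation (an illustration under assumed dynamics, not the paper's experimental task): a random-walk disturbance acts on the cursor, the simulated participant moves the mouse to cancel the cursor error, and the input-output correlation comes out low even though input causes output at every step:

```python
import numpy as np

rng = np.random.default_rng(8)

# Smooth disturbance acting on the cursor (a random walk).
T = 5000
d = np.cumsum(rng.normal(0, 0.1, T))

# Closed loop: cursor = disturbance + mouse; mouse corrects the cursor error.
mouse = np.zeros(T)
cursor = np.zeros(T)
k = 0.5                                      # assumed controller gain
for t in range(T - 1):
    cursor[t] = d[t] + mouse[t]              # sensory input
    mouse[t + 1] = mouse[t] - k * cursor[t]  # motor output
cursor[-1] = d[-1] + mouse[-1]

# Despite input causing output at every step, their correlation is low,
# while the output almost perfectly mirrors the (unseen) disturbance.
r_io = float(np.corrcoef(cursor, mouse)[0, 1])
r_dm = float(np.corrcoef(d, mouse)[0, 1])
print(round(abs(r_io), 2), round(r_dm, 2))
```

Good control keeps the cursor near zero, so the mouse trace ends up mirroring the disturbance rather than the input, which is exactly the closed-loop explanation offered above.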
NASA Astrophysics Data System (ADS)
Wang, Bowen; Li, Yuanyuan; Xie, Xinliang; Huang, Wenmei; Weng, Ling; Zhang, Changgeng
2018-05-01
Based on the Wiedemann effect and the inverse magnetostrictive effect, an output voltage model of a magnetostrictive displacement sensor has been established. The output voltage of the magnetostrictive displacement sensor is calculated for different magnetic fields, and the calculated results are found to be in agreement with the experimental ones. The theoretical and experimental results show that the output voltage of the displacement sensor is linearly related to the magnetostriction difference, (λl − λt), of the waveguide wires. The measured output voltages for the Fe-Ga and Fe-Ni wire sensors are 51.5 mV and 36.5 mV, respectively, and the output voltage of the Fe-Ga wire sensor is obviously higher than that of the Fe-Ni wire sensor under the same magnetic field. The model can be used to predict the output voltage of the sensor and to provide guidance for the optimized design of the sensor.
An Automated Solar Synoptic Analysis Software System
NASA Astrophysics Data System (ADS)
Hong, S.; Lee, S.; Oh, S.; Kim, J.; Lee, J.; Kim, Y.; Lee, J.; Moon, Y.; Lee, D.
2012-12-01
We have developed an automated software system for identifying solar active regions, filament channels, and coronal holes, the three major solar sources of space weather. Space weather forecasters at the NOAA Space Weather Prediction Center produce solar synoptic drawings on a daily basis to predict solar activities, i.e., solar flares, filament eruptions, high-speed solar wind streams, and co-rotating interaction regions, as well as their possible effects on the Earth. As an attempt to emulate this process in a fully automated and consistent way, we developed a software system named ASSA (Automated Solar Synoptic Analysis). When identifying solar active regions, ASSA uses high-resolution SDO HMI intensitygram and magnetogram data as inputs, providing the McIntosh classification and Mt. Wilson magnetic classification of each active region by applying appropriate image processing techniques such as thresholding, morphology extraction, and region growing. At the same time, it also extracts morphological and physical properties of active regions in a quantitative way for the short-term prediction of flares and CMEs. When identifying filament channels and coronal holes, images from the global H-alpha network and SDO AIA 193 are used for morphological identification, along with SDO HMI magnetograms for quantitative verification. The output results of ASSA are routinely checked and validated against NOAA's daily SRS (Solar Region Summary) and UCOHO (URSIgram code for coronal hole information). A couple of preliminary scientific results are presented using available output results. ASSA will be deployed at the Korean Space Weather Center and will serve its customers in an operational status by the end of 2012.
49 CFR Appendix D to Part 222 - Determining Risk Levels
Code of Federal Regulations, 2011 CFR
2011-10-01
... prediction formulas can be used to derive the following for each crossing: 1. the predicted collisions (PC) 2... for errors such as data entry errors. The final output is the predicted number of collisions (PC). (e... collisions (PC). (f) For the prediction and severity index formulas, please see the following DOT...
NASA Technical Reports Server (NTRS)
Hinton, David A.
2001-01-01
A ground-based system has been developed to demonstrate the feasibility of automating the process of collecting relevant weather data, predicting wake vortex behavior from a data base of aircraft, prescribing safe wake vortex spacing criteria, estimating system benefit, and comparing predicted and observed wake vortex behavior. This report describes many of the system algorithms, features, limitations, and lessons learned, as well as suggested system improvements. The system has demonstrated concept feasibility and the potential for airport benefit. Significant opportunities exist however for improved system robustness and optimization. A condensed version of the development lab book is provided along with samples of key input and output file types. This report is intended to document the technical development process and system architecture, and to augment archived internal documents that provide detailed descriptions of software and file formats.
NASA Technical Reports Server (NTRS)
Dare, P. M.; Smith, P. J.
1983-01-01
The eddy kinetic energy budget is calculated for a 48-hour forecast of an intense occluding winter cyclone associated with a strong, well-developed jet stream. The model output consists of the initialized (1200 GMT January 9, 1975) and the 12, 24, 36, and 48 hour forecast fields from the Drexel/NCAR Limited Area Mesoscale Prediction System (LAMPS) model. The LAMPS forecast compares well with observations for the first 24 hours, but then overdevelops the low-level cyclone while inadequately developing the upper-air wave and jet. Eddy kinetic energy was found to be concentrated in the upper troposphere with maxima flanking the primary trough. The increases in kinetic energy were found to be due to an excess of the primary source term, the horizontal flux of eddy kinetic energy, over the primary sinks, the generation and dissipation of eddy kinetic energy.
Numerical modeling of friction welding of bi-metal joints for electrical applications
NASA Astrophysics Data System (ADS)
Velu, P. Shenbaga; Hynes, N. Rajesh Jesudoss
2018-05-01
In the manufacturing industries, and especially in electrical engineering applications, the use of non-ferrous materials plays a vital role. Today's engineering applications rely upon significant properties such as good corrosion resistance, good mechanical properties, good heat conductivity and high electrical conductivity. The copper-aluminum bi-metal joint is one such combination that meets the demanding requirements of electrical applications. In this work, numerical simulation of an AA 6061-T6 alloy/copper joint was carried out under joining conditions. Using the developed model, the temperature distribution along the length of the dissimilar joint is predicted and the time-temperature profile has been generated. A finite element model has been developed using the numerical simulation tool ABAQUS; this FEM is helpful in predicting various output parameters during friction welding of this dissimilar joint combination.
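The temperature-distribution prediction along the joint can be illustrated with a much simpler stand-in than the paper's ABAQUS model: an explicit finite-difference solution of 1-D heat conduction along a bar with a fixed "weld interface" temperature at one end. All values (node count, diffusion number, temperatures) are illustrative assumptions.

```python
# Minimal sketch of predicting a temperature profile along a welded bar
# via the explicit FTCS (forward-time, centered-space) scheme.

def heat_profile(n_nodes, steps, r, t_interface, t_ambient):
    """Return nodal temperatures after `steps` explicit updates.

    r = alpha * dt / dx**2 is the dimensionless diffusion number.
    Node 0 is held at the weld-interface temperature, the last node
    at ambient temperature.
    """
    assert r <= 0.5, "explicit scheme stability limit"
    T = [t_ambient] * n_nodes
    T[0] = t_interface
    for _ in range(steps):
        new = T[:]
        for i in range(1, n_nodes - 1):
            new[i] = T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
        T = new
    return T
```

After enough steps the profile approaches the steady linear drop from interface to ambient, the qualitative shape a time-temperature study would track.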
The urine output definition of acute kidney injury is too liberal
2013-01-01
Introduction The urine output criterion of 0.5 ml/kg/hour for 6 hours for acute kidney injury (AKI) has not been prospectively validated. Urine output criteria for AKI (AKIUO) as predictors of in-hospital mortality or dialysis need were compared. Methods All admissions to a general ICU were prospectively screened for 12 months and hourly urine output analysed in collection intervals between 1 and 12 hours. Prediction of the composite of mortality or dialysis by urine output was analysed in increments of 0.1 ml/kg/hour from 0.1 to 1 ml/kg/hour and the optimal threshold for each collection interval determined. AKICr was defined as an increase in plasma creatinine ≥26.5 μmol/l within 48 hours or ≥50% from baseline. Results Of 725 admissions, 72% had either AKICr or AKIUO or both. AKIUO (33.7%) alone was more frequent than AKICr (11.0%) alone (P <0.0001). A 6-hour urine output collection threshold of 0.3 ml/kg/hour was associated with a stepped increase in in-hospital mortality or dialysis (from 10% above to 30% below 0.3 ml/kg/hour). Hazard ratios for in-hospital mortality and 1-year mortality were 2.25 (1.40 to 3.61) and 2.15 (1.47 to 3.15), respectively, after adjustment for age, body weight, severity of illness, fluid balance, and vasopressor use. In contrast, after adjustment AKIUO was not associated with in-hospital mortality or 1-year mortality. The optimal urine output threshold was linearly related to the duration of urine collection (r2 = 0.93). Conclusions A 6-hour urine output threshold of 0.3 ml/kg/hour was best associated with mortality and dialysis, and was independently predictive of both hospital mortality and 1-year mortality. This suggests that the current AKI urine output definition is too liberally defined. Shorter urine collection intervals may be used to define AKI using lower urine output thresholds. PMID:23787055
Competitive Learning Neural Network Ensemble Weighted by Predicted Performance
ERIC Educational Resources Information Center
Ye, Qiang
2010-01-01
Ensemble approaches have been shown to enhance classification by combining the outputs from a set of voting classifiers. Diversity in error patterns among base classifiers promotes ensemble performance. Multi-task learning is an important characteristic for Neural Network classifiers. Introducing a secondary output unit that receives different…
Optimal cycling time trial position models: aerodynamics versus power output and metabolic energy.
Fintelman, D M; Sterling, M; Hemida, H; Li, F-X
2014-06-03
The aerodynamic drag of a cyclist in time trial (TT) position is strongly influenced by the torso angle. While decreasing the torso angle reduces the drag, it limits the physiological functioning of the cyclist. Therefore the aims of this study were to predict the optimal TT cycling position as a function of the cycling speed and to determine at which speed the aerodynamic power losses start to dominate. Two models were developed to determine the optimal torso angle: a 'Metabolic Energy Model' and a 'Power Output Model'. The Metabolic Energy Model minimised the required cycling energy expenditure, while the Power Output Model maximised the cyclists' power output. The input parameters were experimentally collected from 19 TT cyclists at different torso angle positions (0-24°). The results showed that for both models, the optimal torso angle depends strongly on the cycling speed, with decreasing torso angles at increasing speeds. The aerodynamic losses outweigh the power losses at cycling speeds above 46 km/h. However, a fully horizontal torso is not optimal. For speeds below 30 km/h, it is beneficial to ride in a more upright TT position. The two model outputs were not completely similar, due to the different model approaches. The Metabolic Energy Model could be applied for endurance events, while the Power Output Model is more suitable in sprinting or in variable conditions (wind, undulating course, etc.). It is suggested that despite some limitations, the models give valuable information about improving the cycling performance by optimising the TT cycling position. Copyright © 2014 Elsevier Ltd. All rights reserved.
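The speed dependence at the heart of the study comes from aerodynamic power scaling with v³ while other resistive losses scale roughly with v, so aerodynamics dominate above some crossover speed. The sketch below illustrates that scaling only; it is not either of the paper's models, does not reproduce the 46 km/h figure, and every parameter value (density, mass, rolling resistance, the assumed drag-area growth with torso angle) is illustrative.

```python
import math

# Toy power model: aerodynamic power ~ v^3, rolling-resistance power ~ v.
RHO = 1.2    # air density, kg/m^3 (assumed)
M = 80.0     # rider + bike mass, kg (assumed)
G = 9.81     # gravity, m/s^2
CRR = 0.004  # rolling-resistance coefficient (assumed)

def cda(torso_angle_deg):
    """Assumed linear growth of drag area with torso angle (0-24 deg)."""
    return 0.22 + 0.003 * torso_angle_deg  # m^2, illustrative

def required_power(speed_ms, torso_angle_deg):
    p_aero = 0.5 * RHO * cda(torso_angle_deg) * speed_ms ** 3
    p_roll = CRR * M * G * speed_ms
    return p_aero + p_roll

def crossover_speed(torso_angle_deg):
    """Speed at which aerodynamic power equals rolling power."""
    return math.sqrt(2 * CRR * M * G / (RHO * cda(torso_angle_deg)))
```

A more upright torso (larger angle) raises the required power at any given speed, which is the trade-off the two models balance against physiological cost.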
Analysis of transverse field distributions in Porro prism resonators
NASA Astrophysics Data System (ADS)
Litvin, Igor A.; Burger, Liesl; Forbes, Andrew
2007-05-01
A model to describe the transverse field distribution of the output beam from Porro prism resonators is proposed. The model allows prediction of the output transverse field distribution by assuming that the main areas of loss are located at the apexes of the Porro prisms. Experimental work on a particular system showed some interesting correlations between the time-domain behavior of the resonator and the transverse field output. These findings are presented and discussed.
Ruiz-Felter, Roxanna; Cooperson, Solaman J; Bedore, Lisa M; Peña, Elizabeth D
2016-07-01
Although some investigations of phonological development have found that segmental accuracy is comparable in monolingual children and their bilingual peers, there is evidence that language use affects segmental accuracy in both languages. The aims were to investigate the influence of age of first exposure to English and of the amount of current input-output on phonological accuracy in English and Spanish in early bilingual Spanish-English kindergarteners, and to examine whether parent and teacher ratings of the children's intelligibility are correlated with phonological accuracy and with the amount of experience with each language. Data for 91 kindergarteners (mean age = 5;6 years) were selected from a larger dataset focusing on Spanish-English bilingual language development. All children were from Central Texas, spoke a Mexican Spanish dialect and were learning American English. Children completed a single-word phonological assessment with separate forms for English and Spanish. The assessment was analyzed for segmental accuracy: the percentage of consonants and vowels correct and the percentage of early-, middle- and late-developing (EML) sounds correct were calculated. Children were more accurate on vowel production than consonant production and showed a decrease in accuracy from early to middle to late sounds. The amount of current input-output explained more of the variance in phonological accuracy than age of first English exposure. Although greater current input-output of a language was associated with greater accuracy in that language, English-dominant children were only significantly more accurate in English than Spanish on late sounds, whereas Spanish-dominant children were only significantly more accurate in Spanish than English on early sounds. Higher parent and teacher ratings of intelligibility in Spanish were correlated with greater consonant accuracy in Spanish, but the same did not hold for English.
Higher intelligibility ratings in English were correlated with greater current English input-output, and the same held for Spanish. Current input-output appears to be a better predictor of phonological accuracy than age of first English exposure for early bilinguals, consistent with findings on the effect of language experience on performance in other language domains in bilingual children. Although greater current input-output in a language predicts higher accuracy in that language, this interacts with sound complexity. The results highlight the utility of the EML classification in assessing bilingual children's phonology. The relationships of intelligibility ratings with current input-output and sound accuracy can shed light on the process of referral of bilingual children for speech and language services. © 2016 Royal College of Speech and Language Therapists.
Application of Wavelet Filters in an Evaluation of ...
Air quality model evaluation can be enhanced with time-scale-specific comparisons of outputs and observations. For example, high-frequency (hours to one day) time-scale information in observed ozone is not well captured by deterministic models, and its incorporation into model performance metrics leads one to devote resources to stochastic variations in model outputs. In this analysis, observations are compared with model outputs at seasonal, weekly, diurnal and intra-day time scales. Filters provide frequency-specific information that can be used to compare the strength (amplitude) and timing (phase) of observations and model estimates. The National Exposure Research Laboratory's (NERL's) Atmospheric Modeling and Analysis Division (AMAD) conducts research in support of EPA's mission to protect human health and the environment. AMAD's research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting the Nation's air quality and for assessing changes in air quality and air pollutant exposures, as affected by changes in ecosystem management and regulatory decisions. AMAD is responsible for providing a sound scientific and technical basis for regulatory policies based on air quality models to improve ambient air quality. The models developed by AMAD are being used by EPA, NOAA, and the air pollution community in understanding and forecasting not only the magnitude of the air pollu
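The idea of splitting a time series into timescale bands before comparing model and observations can be sketched with the simplest possible filter pair: a centered running mean separates the slow component from the fast residual, and the two components sum back to the original series. The study uses wavelet filters; the moving-average split below is an illustrative stand-in, and the 24-hour window is an assumption.

```python
# Illustrative timescale split: slow component via centered running mean,
# fast component as the residual. Not the paper's wavelet filters.

def running_mean(x, window):
    """Centered running mean with edge truncation."""
    n = len(x)
    half = window // 2
    out = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        seg = x[lo:hi]
        out.append(sum(seg) / len(seg))
    return out

def split_timescales(series, window=24):
    """Split a series into slow (>= window) and fast (< window) parts."""
    slow = running_mean(series, window)
    fast = [s - m for s, m in zip(series, slow)]
    return slow, fast
```

Amplitude and phase comparisons between model and observation would then be carried out band by band on the separated components.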
A downscaling scheme for atmospheric variables to drive soil-vegetation-atmosphere transfer models
NASA Astrophysics Data System (ADS)
Schomburg, A.; Venema, V.; Lindau, R.; Ament, F.; Simmer, C.
2010-09-01
For driving soil-vegetation-atmosphere transfer models or hydrological models, high-resolution atmospheric forcing data are needed. For most applications the resolution of atmospheric model output is too coarse. To avoid biases due to non-linear processes, a downscaling system should predict the unresolved variability of the atmospheric forcing. For this purpose we derived a disaggregation system consisting of three steps: (1) a bi-quadratic spline interpolation of the low-resolution data, (2) a so-called 'deterministic' part, based on statistical rules between high-resolution surface variables and the desired atmospheric near-surface variables, and (3) an autoregressive noise-generation step. The disaggregation system has been developed and tested on high-resolution model output (400 m horizontal grid spacing). A novel automatic search algorithm has been developed for deriving the deterministic downscaling rules of step 2. When applied to the atmospheric variables of the lowest layer of the atmospheric COSMO model, the disaggregation is able to adequately reconstruct the reference fields. Applying downscaling steps 1 and 2 decreases root mean square errors. Step 3 finally leads to a close match of the subgrid variability and temporal autocorrelation with the reference fields. The scheme can be applied to the output of atmospheric models, both for stand-alone offline simulations and in a fully coupled model system.
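Step (3) of the scheme, the autoregressive noise generation that restores subgrid variability with a prescribed temporal autocorrelation, can be sketched as an AR(1) generator. This is a generic illustration under assumed parameter names, not the paper's implementation.

```python
import random

# AR(1) noise sketch: x[t] = a * x[t-1] + e[t], with the innovation
# variance chosen so the stationary standard deviation equals `std`.

def ar1_noise(n, autocorr, std, seed=0):
    """Generate an AR(1) series with given lag-1 autocorrelation and std."""
    rng = random.Random(seed)
    a = autocorr
    innov_std = std * (1 - a * a) ** 0.5  # keeps stationary variance std^2
    x = [0.0]
    for _ in range(n - 1):
        x.append(a * x[-1] + rng.gauss(0.0, innov_std))
    return x
```

Added to the deterministic downscaled field, such noise matches the target subgrid variance while preserving the desired temporal correlation structure.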
Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romañach, Stephanie; Watling, James I.; Mazzotti, Frank J.
2017-01-01
Climate envelope models are widely used to describe the potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods. Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method and there was low overlap in the variable sets (<40%) between the two methods. Despite these differences in variable sets (expert versus statistical), models had high performance metrics (>0.9 for area under the curve (AUC) and >0.7 for true skill statistic (TSS)). Spatial overlap, which compares the spatial configuration between maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. Difference in spatial overlap was even greater under future climate projections, indicating additional divergence of model outputs from different variable selection techniques.
Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using statistical methods of variable selection is a useful first step, especially when there is a need to model a large number of species or expert knowledge of the species is limited. Expert input can then be used to refine models that seem unrealistic or for species that experts believe are particularly sensitive to change. It also emphasizes the importance of using multiple models to reduce uncertainty and improve map outputs for conservation planning. Where outputs overlap or show the same direction of change there is greater certainty in the predictions. Areas of disagreement can be used for learning by asking why the models do not agree, and may highlight areas where additional on-the-ground data collection could improve the models.
NASA Technical Reports Server (NTRS)
Chen, H. C.; Neback, H. E.; Kao, T. J.; Yu, N. Y.; Kusunose, K.
1991-01-01
This manual explains how to use an Euler based computational method for predicting the airframe/propulsion integration effects for an aft-mounted turboprop transport. The propeller power effects are simulated by the actuator disk concept. This method consists of global flow field analysis and the embedded flow solution for predicting the detailed flow characteristics in the local vicinity of an aft-mounted propfan engine. The computational procedure includes the use of several computer programs performing four main functions: grid generation, Euler solution, grid embedding, and streamline tracing. This user's guide provides information for these programs, including input data preparations with sample input decks, output descriptions, and sample Unix scripts for program execution in the UNICOS environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1991-02-01
This appendix is a compilation of work done to predict overall cycle performance from gasifier to generator terminals. A spreadsheet has been generated for each case to show flows within a cycle. The spreadsheet shows gaseous or solid composition of flow, temperature of flow, quantity of flow, and heat content of flow. Prediction of steam and gas turbine performance was obtained with the computer program GTPro. Outputs of all runs for each combined cycle reviewed have been added to this appendix. A process schematic displaying all flows predicted through GTPro and the spreadsheet is also added to this appendix. The numbered bubbles on the schematic correspond to columns in the top headings of the spreadsheet.
Plans and Example Results for the 2nd AIAA Aeroelastic Prediction Workshop
NASA Technical Reports Server (NTRS)
Heeg, Jennifer; Chwalowski, Pawel; Schuster, David M.; Raveh, Daniella; Jirasek, Adam; Dalenbring, Mats
2015-01-01
This paper summarizes the plans for the second AIAA Aeroelastic Prediction Workshop. The workshop is designed to assess the state-of-the-art of computational methods for predicting unsteady flow fields and aeroelastic response. The goals are to provide an impartial forum to evaluate the effectiveness of existing computer codes and modeling techniques, and to identify computational and experimental areas needing additional research and development. This paper provides guidelines and instructions for participants including the computational aerodynamic model, the structural dynamic properties, the experimental comparison data and the expected output data from simulations. The Benchmark Supercritical Wing (BSCW) has been chosen as the configuration for this workshop. The analyses to be performed will include aeroelastic flutter solutions of the wing mounted on a pitch-and-plunge apparatus.
Development of a General Form CO2 and Brine Flux Input Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mansoor, K.; Sun, Y.; Carroll, S.
2014-08-01
The National Risk Assessment Partnership (NRAP) project is developing a science-based toolset for the quantitative analysis of the potential risks associated with changes in groundwater chemistry from CO2 injection. In order to address uncertainty probabilistically, NRAP is developing efficient, reduced-order models (ROMs) as part of its approach. These ROMs are built from detailed, physics-based process models to provide confidence in the predictions over a range of conditions. The ROMs are designed to reproduce accurately the predictions from the computationally intensive process models at a fraction of the computational time, thereby allowing the utilization of Monte Carlo methods to probe variability in key parameters. This report presents the procedures used to develop a generalized model for CO2 and brine leakage fluxes based on the output of a numerical wellbore simulation. The resulting generalized parameters and ranges reported here will be used for the development of third-generation groundwater ROMs.
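The reduced-order-model idea, replacing an expensive process model with a cheap surrogate fitted to a few of its outputs, can be sketched in a few lines. The quadratic-through-three-points surrogate and the stand-in "process model" below are illustrative assumptions; the actual NRAP ROMs are far more elaborate.

```python
# ROM sketch: fit a cheap quadratic surrogate (Lagrange interpolation
# through three samples) to an "expensive" process model, then use the
# surrogate for many Monte Carlo evaluations.

def expensive_process_model(x):
    """Stand-in for a physics-based simulation (assumed quadratic form)."""
    return 2.0 * x * x + 3.0 * x + 1.0

def build_rom(xs, ys):
    """Return a callable quadratic surrogate through three (x, y) samples."""
    (x0, x1, x2), (y0, y1, y2) = xs, ys
    def rom(x):
        l0 = (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
        l1 = (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
        l2 = (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1))
        return y0 * l0 + y1 * l1 + y2 * l2
    return rom
```

Because the surrogate is a closed-form polynomial, each Monte Carlo sample costs essentially nothing compared with rerunning the wellbore simulation.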
A predictive model for assistive technology adoption for people with dementia.
Zhang, Shuai; McClean, Sally I; Nugent, Chris D; Donnelly, Mark P; Galway, Leo; Scotney, Bryan W; Cleland, Ian
2014-01-01
Assistive technology has the potential to enhance the level of independence of people with dementia, thereby increasing the possibility of supporting home-based care. In general, people with dementia are reluctant to change; therefore, it is important that suitable assistive technologies are selected for them. Consequently, the development of predictive models that are able to determine a person's potential to adopt a particular technology is desirable. In this paper, a predictive adoption model for a mobile phone-based video streaming system, developed for people with dementia, is presented. Taking into consideration characteristics related to a person's ability, living arrangements, and preferences, this paper discusses the development of predictive models, which were based on a number of carefully selected data mining algorithms for classification. For each, the learning on different relevant features for technology adoption has been tested, in conjunction with handling the imbalance of available data for output classes. Given our focus on providing predictive tools that could be used and interpreted by healthcare professionals, models with ease-of-use, intuitive understanding, and clear decision making processes are preferred. Predictive models have, therefore, been evaluated on a multi-criterion basis: in terms of their prediction performance, robustness, bias with regard to two types of errors and usability. Overall, the model derived from incorporating a k-Nearest-Neighbour algorithm using seven features was found to be the optimal classifier of assistive technology adoption for people with dementia (prediction accuracy 0.84 ± 0.0242).
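The optimal classifier found by the study is a k-nearest-neighbour model, whose decision process is easy for healthcare professionals to interpret: a person is classified by majority vote of the most similar known cases. The sketch below shows the generic algorithm only; the features, labels, and k value are illustrative, not the study's seven selected features.

```python
import math
from collections import Counter

# Generic k-nearest-neighbour classification sketch.

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label); returns the majority label
    among the k training points closest (Euclidean) to `query`."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

The transparency of this vote, which neighbours were consulted and how they were labelled, is exactly the ease-of-use property the paper values for clinical interpretation.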
NASA Astrophysics Data System (ADS)
Maizir, H.; Suryanita, R.
2018-01-01
Over the past few decades, many methods have been developed to predict and evaluate the bearing capacity of driven piles. The problem of predicting and assessing pile bearing capacity is complicated and not yet settled: different soil tests and evaluation methods produce widely different solutions. The most important task is to determine methods that predict and evaluate the bearing capacity of the pile with the required degree of accuracy and consistency. Accurate prediction and evaluation of axial bearing capacity depend on several variables, such as the type of soil and the diameter and length of the pile. In this study, Artificial Neural Networks (ANNs) are utilized to obtain a more accurate and consistent axial bearing capacity of a driven pile. An ANN can be described as a mapping from input data to target output data. The ANN model was developed to predict and evaluate the axial bearing capacity of the pile based on pile driving analyzer (PDA) test data for more than 200 selected records. The predictions obtained from the ANN model and the PDA tests were then compared. This research shows that neural network models give a good prediction and evaluation of the axial bearing capacity of piles.
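The input-to-output mapping idea behind the ANN model can be illustrated with the smallest possible trainable unit: a single linear neuron fitted by gradient descent on synthetic (pile feature, capacity) pairs. The actual study trained a multilayer network on PDA records; the architecture, data, and learning rate below are illustrative assumptions.

```python
# Sketch of learning an input->output mapping with one linear neuron
# trained by stochastic gradient descent on a squared-error loss.

def train_neuron(samples, lr=0.01, epochs=2000):
    """samples: list of ((x1, x2), y). Returns learned (w1, w2, b)."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in samples:
            pred = w1 * x1 + w2 * x2 + b
            err = pred - y
            w1 -= lr * err * x1
            w2 -= lr * err * x2
            b -= lr * err
    return w1, w2, b
```

A real pile-capacity ANN would add hidden layers and nonlinear activations, but the training loop, predict, measure error, nudge weights, is the same.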
NASA Astrophysics Data System (ADS)
Gorai, A. K.; Hasni, S. A.; Iqbal, Jawed
2016-11-01
Groundwater is the most important natural source of drinking water for many people around the world, especially in rural areas where a supply of treated water is not available. Drinking water resources cannot be optimally used and sustained unless the quality of the water is properly assessed. To this end, an attempt has been made to develop a suitable methodology for the assessment of drinking water quality on the basis of 11 physico-chemical parameters. The present study applies a fuzzy aggregation approach to estimate the water quality index of a sample and check its suitability for drinking purposes. Based on experts' opinion and the authors' judgement, 11 water quality (pollutant) variables (alkalinity, dissolved solids (DS), hardness, pH, Ca, Mg, Fe, fluoride, As, sulphate, nitrates) were selected for the quality assessment. The output results of the proposed methodology are compared with the output obtained from the widely used deterministic method (weighted arithmetic mean aggregation) to assess the suitability of the developed methodology.
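The deterministic baseline against which the fuzzy approach is compared, weighted arithmetic mean aggregation, is straightforward to state in code. The sub-index values and weights below are illustrative assumptions, not the study's calibrated values.

```python
# Weighted arithmetic mean water-quality index:
# WQI = sum(w_i * q_i) / sum(w_i) over parameter sub-indices q_i.

def weighted_wqi(subindices, weights):
    """Aggregate parameter sub-indices into a single quality index."""
    assert len(subindices) == len(weights)
    total_w = sum(weights)
    return sum(q * w for q, w in zip(subindices, weights)) / total_w
```

A fuzzy alternative would replace this crisp average with membership functions and a fuzzy aggregation operator, which is exactly the comparison the study performs.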
Status Report on the CEBAF IR and UV FELs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leemann, Christoph; Bisognano, Joseph; Douglas, David
1993-07-01
The CEBAF five-pass recirculating, superconducting linac, being developed as a high-power electron source for nuclear physics, is also an ideal FEL driver. The 45 MeV front-end linac is presently operational with a CW (low peak current) nuclear physics gun and has met all CEBAF performance specifications, including low emittance and energy spread (< 1 * 10^-4). Progress in commissioning will be reported. This experience leads to predictions of excellent FEL performance. Initial designs reported last year have been advanced. Using the output of a high-charge DC photoemission gun under development, a 6 cm period wiggler produces kilowatt output powers in the 3.6 to 17 micrometer range in the fundamental. Third-harmonic operation extends IR performance down to 1.2 micrometers. Beam at energies up to 400 MeV from the first full CEBAF linac will interact in a similar but longer wiggler to yield kilowatt UV light production at wavelengths as short as 0.15 micrometers.
Modeling and predicting intertidal variations of the salinity field in the Bay/Delta
Knowles, Noah; Uncles, Reginald J.
1995-01-01
One approach to simulating daily to monthly variability in the bay is the development of an intertidal model using tidally-averaged equations and a time step on the order of a day. An intertidal numerical model of the bay's physics, capable of portraying seasonal and inter-annual variability, would have several uses. Observations are limited in time and space, so simulation could help fill the gaps. Also, the ability to simulate multi-year episodes (e.g., an extended drought) could provide insight into the response of the ecosystem to such events. Finally, such a model could be used in a forecast mode wherein predicted delta flow is used as model input and the predicted salinity distribution is output, with estimates days and months in advance. This note briefly introduces such a tidally-averaged model (Uncles and Peterson, in press) and a corresponding predictive scheme for baywide forecasting.
Prediction, dynamics, and visualization of antigenic phenotypes of seasonal influenza viruses
Neher, Richard A.; Bedford, Trevor; Daniels, Rodney S.; Shraiman, Boris I.
2016-01-01
Human seasonal influenza viruses evolve rapidly, enabling the virus population to evade immunity and reinfect previously infected individuals. Antigenic properties are largely determined by the surface glycoprotein hemagglutinin (HA), and amino acid substitutions at exposed epitope sites in HA mediate loss of recognition by antibodies. Here, we show that antigenic differences measured through serological assay data are well described by a sum of antigenic changes along the path connecting viruses in a phylogenetic tree. This mapping onto the tree allows prediction of antigenicity from HA sequence data alone. The mapping can further be used to make predictions about the makeup of the future A(H3N2) seasonal influenza virus population, and we compare predictions between models with serological and sequence data. To make timely model output readily available, we developed a web browser-based application that visualizes antigenic data on a continuously updated phylogeny. PMID:26951657
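The central model, antigenic distance between two viruses expressed as a sum of antigenic-change increments along the branches of the tree path connecting them, can be sketched on a toy phylogeny. The tree structure, node names, and per-branch increments below are illustrative assumptions.

```python
# Sketch: antigenic distance as the sum of per-branch antigenic changes
# along the path between two leaves of a phylogenetic tree.

def path_to_root(node, parent):
    """List of nodes from `node` up to the root (root has no parent)."""
    path = []
    while node is not None:
        path.append(node)
        node = parent.get(node)
    return path

def antigenic_distance(u, v, parent, branch_change):
    """Sum branch increments on the tree path between u and v.

    branch_change[n] is the antigenic change on the branch from n to
    its parent; nodes shared by both root-paths lie above the common
    ancestor and contribute nothing.
    """
    pu, pv = path_to_root(u, parent), path_to_root(v, parent)
    common = set(pu) & set(pv)
    return sum(branch_change[n] for n in pu + pv if n not in common)
```

Fitting the per-branch increments to serological assay data is what lets the model predict antigenicity for new HA sequences placed on the tree.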
DOE Office of Scientific and Technical Information (OSTI.GOV)
Etingov, Pavel; Makarov, Yuri; Subbarao, Kris
RUT software is designed for use by Balancing Authorities to predict and display additional requirements caused by the variability and uncertainty in load and generation. The prediction is made for the next operating hours as well as for the next day. The tool predicts possible deficiencies in generation capability and ramping capability. This deficiency of balancing resources can cause serious risks to power system stability and also impact real-time market energy prices. The tool dynamically and adaptively correlates changing system conditions with the additional balancing needs triggered by the interplay between forecasted and actual load and output of variable resources. The assessment is performed using a specially developed probabilistic algorithm incorporating multiple sources of uncertainty, including wind, solar and load forecast errors. The tool evaluates required generation for a worst-case scenario, with a user-specified confidence level.
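The probabilistic, confidence-level-based sizing of balancing needs can be illustrated with an empirical-percentile sketch: pool forecast-error samples (wind, solar, load) and pick the reserve level that covers a user-specified fraction of them. This is a generic illustration of the idea, not the tool's algorithm; the sample data and percentile method are assumptions.

```python
# Sketch: size a balancing requirement as an empirical percentile of
# combined forecast-error samples.

def balancing_requirement(error_samples, confidence=0.95):
    """Smallest reserve (e.g. MW) covering `confidence` of the samples."""
    ordered = sorted(error_samples)
    idx = min(len(ordered) - 1, int(confidence * len(ordered)))
    return ordered[idx]
```

Raising the confidence parameter moves the requirement toward the worst observed error, mirroring the tool's worst-case evaluation at a user-specified confidence level.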
Predictive Multiple Model Switching Control with the Self-Organizing Map
NASA Technical Reports Server (NTRS)
Motter, Mark A.
2000-01-01
A predictive, multiple model control strategy is developed by extension of self-organizing map (SOM) local dynamic modeling of nonlinear autonomous systems to a control framework. Multiple SOMs collectively model the global response of a nonautonomous system to a finite set of representative prototype controls. Each SOM provides a codebook representation of the dynamics corresponding to a prototype control. Different dynamic regimes are organized into topological neighborhoods where the adjacent entries in the codebook represent the global minimization of a similarity metric. The SOM is additionally employed to identify the local dynamical regime, and consequently implements a switching scheme that selects the best available model for the applied control. SOM based linear models are used to predict the response to a larger family of control sequences which are clustered on the representative prototypes. The control sequence which corresponds to the prediction that best satisfies the requirements on the system output is applied as the external driving signal.
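The switching scheme, select the codebook model whose prototype control best matches the applied control, then predict with that local linear model, can be sketched in miniature. The scalar prototypes, local models, and squared-distance similarity metric below are illustrative assumptions, not the paper's SOM implementation.

```python
# Sketch of multiple-model switching: pick the nearest prototype control
# and predict with its local linear model x' = a*x + b*u.

def nearest_prototype(control, prototypes):
    """Index of the prototype minimizing squared distance to `control`."""
    return min(range(len(prototypes)),
               key=lambda i: (prototypes[i] - control) ** 2)

def predict(state, control, prototypes, models):
    """models[i] = (a, b): local linear dynamics for prototype i."""
    a, b = models[nearest_prototype(control, prototypes)]
    return a * state + b * control
```

A predictive controller would evaluate such predictions for a family of candidate control sequences and apply the sequence whose predicted output best meets the requirements.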
Ashford, Paul; Moss, David S; Alex, Alexander; Yeap, Siew K; Povia, Alice; Nobeli, Irene; Williams, Mark A
2012-03-14
Protein structures provide a valuable resource for rational drug design. For a protein with no known ligand, computational tools can predict surface pockets that are of suitable size and shape to accommodate a complementary small-molecule drug. However, pocket prediction against single static structures may miss features of pockets that arise from proteins' dynamic behaviour. In particular, ligand-binding conformations can be observed as transiently populated states of the apo protein, so it is possible to gain insight into ligand-bound forms by considering conformational variation in apo proteins. This variation can be explored by considering sets of related structures: computationally generated conformers, solution NMR ensembles, multiple crystal structures, homologues or homology models. It is non-trivial to compare pockets, either from different programs or across sets of structures. For a single structure, difficulties arise in defining a particular pocket's boundaries. For a set of conformationally distinct structures the challenge is how to make reasonable comparisons between them given that a perfect structural alignment is not possible. We have developed a computational method, Provar, that provides a consistent representation of predicted binding pockets across sets of related protein structures. The outputs are probabilities that each atom or residue of the protein borders a predicted pocket. These probabilities can be readily visualised on a protein using existing molecular graphics software. We show how Provar simplifies comparison of the outputs of different pocket prediction algorithms, of pockets across multiple simulated conformations and between homologous structures.
We demonstrate the benefits of use of multiple structures for protein-ligand and protein-protein interface analysis on a set of complexes and consider three case studies in detail: i) analysis of a kinase superfamily highlights the conserved occurrence of surface pockets at the active and regulatory sites; ii) a simulated ensemble of unliganded Bcl2 structures reveals extensions of a known ligand-binding pocket not apparent in the apo crystal structure; iii) visualisations of interleukin-2 and its homologues highlight conserved pockets at the known receptor interfaces and regions whose conformation is known to change on inhibitor binding. Through post-processing of the output of a variety of pocket prediction software, Provar provides a flexible approach to the analysis and visualization of the persistence or variability of pockets in sets of related protein structures.
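The core Provar-style output described above reduces to a simple computation: given per-structure boolean predictions of whether each atom borders a pocket (from any pocket-prediction program), the per-atom probability is the fraction of structures in which that atom lines a predicted pocket. The arrays below are made-up illustrations, not Provar's actual data model.

```python
import numpy as np

# rows = conformations in the ensemble, columns = atoms of the protein;
# True means the pocket predictor marked that atom as pocket-bordering
# in that conformation.
borders_pocket = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 0, 1, 1],
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 1],
], dtype=bool)

# Probability per atom = fraction of structures in which it borders a pocket.
atom_probability = borders_pocket.mean(axis=0)
print(atom_probability)  # atom 0 borders a pocket in every structure
```

Mapping these probabilities onto a B-factor column is then enough to colour the protein in standard molecular graphics software.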
How to make predictions about future infectious disease risks
Woolhouse, Mark
2011-01-01
Formal, quantitative approaches are now widely used to make predictions about the likelihood of an infectious disease outbreak, how the disease will spread, and how to control it. Several well-established methodologies are available, including risk factor analysis, risk modelling and dynamic modelling. Even so, predictive modelling is very much the ‘art of the possible’, which tends to drive research effort towards some areas and away from others which may be at least as important. Building on the undoubted success of quantitative modelling of the epidemiology and control of human and animal diseases such as AIDS, influenza, foot-and-mouth disease and BSE, attention needs to be paid to developing a more holistic framework that captures the role of the underlying drivers of disease risks, from demography and behaviour to land use and climate change. At the same time, there is still considerable room for improvement in how quantitative analyses and their outputs are communicated to policy makers and other stakeholders. A starting point would be generally accepted guidelines for ‘good practice’ for the development and the use of predictive models. PMID:21624924
Cutting the wires: modularization of cellular networks for experimental design.
Lang, Moritz; Summers, Sean; Stelling, Jörg
2014-01-07
Understanding naturally evolved cellular networks requires the consecutive identification and revision of the interactions between relevant molecular species. In this process, initial networks, which are often simplified and incomplete, are extended by integrating new reactions or whole subnetworks to increase consistency between model predictions and new measurement data. However, increased consistency with experimental data alone is not sufficient to show the existence of biomolecular interactions, because the interplay of different potential extensions might lead to overall similar dynamics. Here, we present a graph-based modularization approach to facilitate the design of experiments targeted at independently validating the existence of several potential network extensions. Our method is based on selecting the outputs to measure during an experiment, such that each potential network extension becomes virtually insulated from all others during data analysis. Each output defines a module that only depends on one hypothetical network extension, and all other outputs act as virtual inputs to achieve insulation. Given appropriate experimental time-series measurements of the outputs, our modules can be analyzed, simulated, and compared to the experimental data separately. Our approach exemplifies the close relationship between structural systems identification and modularization, an interplay that promises development of related approaches in the future. Copyright © 2014 Biophysical Society. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Dua, Rohit; Watkins, Steve E.
2009-03-01
Strain analysis due to vibration can provide insight into structural health. An Extrinsic Fabry-Perot Interferometric (EFPI) sensor under vibrational strain generates a non-linear modulated output. Advanced signal processing techniques are required to demodulate this non-linear output and extract important information such as absolute strain. Past research has employed Artificial Neural Networks (ANN) and Fast Fourier Transforms (FFT) to demodulate the EFPI sensor for limited conditions. Those demodulation systems could only handle variations in the absolute value of strain and the frequency of actuation during a vibration event. This project uses an ANN approach to extend the demodulation system to include variation in the damping coefficient of the actuating vibration, in a near real-time vibration scenario. A computer simulation provides training and testing data for the theoretical output of the EFPI sensor to demonstrate the approaches. The FFT is performed on a window of the EFPI output data; a small observation window, which still maintains low absolute-strain prediction errors, is chosen heuristically. Results from different ANN architectures, including a multi-layered feedforward ANN trained with backpropagation (BPNN) and Generalized Regression Neural Networks (GRNN), are obtained and compared. A two-layered algorithm fusion system is developed and tested that yields better results.
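The windowed-FFT feature-extraction step can be sketched as follows: a damped sinusoidal strain frequency-modulates the interferometric fringe signal, and a short FFT window over that signal yields the normalized spectral features the neural networks train on. All numbers (sampling rate, gauge length, wavelength, strain history) are illustrative assumptions, not the paper's simulation parameters.

```python
import numpy as np

fs = 50_000.0                             # sampling rate, Hz (assumed)
t = np.arange(0, 0.02, 1 / fs)            # 20 ms observation record
# Damped vibration: 200 microstrain amplitude, 400 Hz, damping coefficient 50
strain = 200e-6 * np.exp(-50 * t) * np.sin(2 * np.pi * 400 * t)

gauge_length, wavelength = 0.01, 1.55e-6  # metres (assumed)
phase = 4 * np.pi * gauge_length * strain / wavelength
intensity = 0.5 * (1 + np.cos(phase))     # idealized two-beam EFPI output

# Small FFT window of the sensor output -> spectral features for the ANN.
window = intensity[:512] * np.hanning(512)
spectrum = np.abs(np.fft.rfft(window))
features = spectrum / spectrum.max()      # normalized feature vector
print(features.argmax())
```

In the study, feature vectors like `features` would be labelled with the known absolute strain, frequency, and damping coefficient of the simulated vibration and used to train the BPNN/GRNN demodulators.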
Prediction system of hydroponic plant growth and development using algorithm Fuzzy Mamdani method
NASA Astrophysics Data System (ADS)
Sudana, I. Made; Purnawirawan, Okta; Arief, Ulfa Mediaty
2017-03-01
Hydroponics is a method of farming without soil. One hydroponic plant is watercress (Nasturtium officinale). The development and growth of hydroponic watercress are influenced by nutrient levels, acidity and temperature. These independent variables can be used as system inputs to predict the level of plant growth and development. The prediction system uses the Mamdani fuzzy-inference method. The system was built to implement a Fuzzy Inference System (FIS) as part of the Fuzzy Logic Toolbox (FLT) in MATLAB R2007b. An FIS is a computing system that works on the principle of fuzzy reasoning, which resembles human reasoning. An FIS basically consists of four units: a fuzzification unit, a fuzzy-logic reasoning unit, a knowledge-base unit and a defuzzification unit. In addition, the effect of the independent variables on plant growth and development can be visualized with the three-dimensional FIS output-surface diagram, and the prediction system's data were statistically tested by the multiple linear regression method, which includes multiple linear regression analysis, the T test, the F test, the coefficient of determination and predictor contributions, calculated using SPSS (Statistical Product and Service Solutions) software.
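The four-unit Mamdani pipeline (fuzzification, rule-based reasoning, knowledge base, defuzzification) can be shown in a minimal pure-Python sketch. The membership functions, rule base, and variable ranges below are illustrative assumptions, not the ones used in the MATLAB FLT implementation described above.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def predict_growth(nutrient_ppm, ph):
    # 1. Fuzzification of the two inputs.
    nutrient = {"low": tri(nutrient_ppm, 0, 0, 800),
                "high": tri(nutrient_ppm, 400, 1200, 1200)}
    acidity = {"acid": tri(ph, 0, 4, 7),
               "neutral": tri(ph, 5, 7, 9)}

    # 2-3. Rule base and reasoning: min for AND, rule strength clips output set.
    rules = [  # (strength, centre of output set on a 0-10 growth scale)
        (min(nutrient["high"], acidity["neutral"]), 8.0),   # good growth
        (min(nutrient["high"], acidity["acid"]), 5.0),      # moderate growth
        (min(nutrient["low"], 1.0), 2.0),                   # poor growth
    ]

    # 4. Centroid defuzzification over the aggregated (max) output set.
    xs = [i / 10 for i in range(101)]
    agg = [max(min(w, tri(x, c - 3, c, c + 3)) for w, c in rules) for x in xs]
    total = sum(agg)
    return sum(x * m for x, m in zip(xs, agg)) / total if total else 0.0

print(round(predict_growth(900, 6.8), 2))
```

High nutrient concentration at near-neutral pH fires the "good growth" rule most strongly, so the defuzzified score lands near the upper end of the scale.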
NASA Astrophysics Data System (ADS)
Sutherland, Herbert J.
1988-08-01
Sandia National Laboratories has erected a research-oriented, 34-meter-diameter Darrieus vertical axis wind turbine near Bushland, Texas. This machine, designated the Sandia 34-m VAWT Test Bed, is equipped with a large array of strain gauges that have been placed at critical positions about the blades. This manuscript details a series of four-point bend experiments that were conducted to validate the output of the blade strain gauge circuits. The output of a particular gauge circuit is validated by comparing its output to equivalent gauge circuits (in this stress state) and to theoretical predictions. With only a few exceptions, the difference between measured and predicted strain values for a gauge circuit was found to be of the order of the estimated repeatability for the measurement system.
Adaptive model predictive process control using neural networks
Buescher, K.L.; Baum, C.C.; Jones, R.D.
1997-08-19
A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data. 46 figs.
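The dual-rate construction in this claim (fast sampling, with the network state vector built at a slower rate from averaged inputs) can be sketched as a block-averaging step. The rates, lag, and toy signals are assumptions for illustration, not values from the patent.

```python
import numpy as np

fast_dt, slow_factor = 0.01, 10          # slow rate = fast rate / 10 (assumed)
u_fast = np.sin(np.arange(1000) * fast_dt)   # fast-rate control-input history
y_fast = np.cos(np.arange(1000) * fast_dt)   # fast-rate plant-output history

# Average each block of `slow_factor` fast samples into one slow-rate value,
# i.e. the "gapped time period" averaging of the control inputs.
u_slow = u_fast.reshape(-1, slow_factor).mean(axis=1)
y_slow = y_fast.reshape(-1, slow_factor).mean(axis=1)

# Network state vector at the slow rate: recent averaged inputs and outputs
# form the regressor a neural predictive model would consume.
lag = 3
state = np.concatenate([u_slow[-lag:], y_slow[-lag:]])
print(state.shape)
```

The neural model then only has to predict at the slow rate, while the controller still issues commands at the fast rate.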
Hybrid zero-voltage switching (ZVS) control for power inverters
Amirahmadi, Ahmadreza; Hu, Haibing; Batarseh, Issa
2016-11-01
A power inverter combination includes a half-bridge power inverter including first and second semiconductor power switches receiving input power having an intermediate node therebetween providing an inductor current through an inductor. A controller includes input comparison circuitry receiving the inductor current having outputs coupled to first inputs of pulse width modulation (PWM) generation circuitry, and a predictive control block having an output coupled to second inputs of the PWM generation circuitry. The predictive control block is coupled to receive a measure of Vin and an output voltage at a grid connection point. A memory stores a current control algorithm configured for resetting a PWM period for a switching signal applied to control nodes of the first and second power switch whenever the inductor current reaches a predetermined upper limit or a predetermined lower limit.
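The band-limit behaviour in the final claim, resetting the PWM period whenever the inductor current reaches a predetermined upper or lower limit, can be illustrated with a toy half-bridge simulation. Component values, voltages, and limits below are assumptions, not figures from the patent.

```python
V_in, V_out, L, dt = 200.0, 120.0, 1e-3, 1e-6   # volts, henries, seconds
upper, lower = 5.0, 3.0                          # current band limits (amps)

i, switch_high, resets = 4.0, True, 0
for _ in range(20_000):
    # Inductor current slope depends on which half-bridge switch conducts.
    di = (V_in - V_out) / L if switch_high else -V_out / L
    i += di * dt
    # Reset the PWM period at either band edge, per the stored control law.
    if (switch_high and i >= upper) or (not switch_high and i <= lower):
        switch_high = not switch_high
        resets += 1

print(resets, round(i, 2))
```

The current therefore ripples between the two limits, and the switching frequency falls out of the band width rather than being fixed in advance, which is what the predictive control block compensates for.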
Adaptive model predictive process control using neural networks
Buescher, Kevin L.; Baum, Christopher C.; Jones, Roger D.
1997-01-01
A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data.
NASA Astrophysics Data System (ADS)
Nair, Archana; Acharya, Nachiketa; Singh, Ankita; Mohanty, U. C.; Panda, T. C.
2013-11-01
In this study the predictability of northeast monsoon (Oct-Nov-Dec) rainfall over peninsular India by eight general circulation model (GCM) outputs was analyzed. These GCM outputs (forecasts for the whole season issued in September) were compared with high-resolution observed gridded rainfall data obtained from the India Meteorological Department for the period 1982-2010. Rainfall, interannual variability (IAV), correlation coefficients, and index of agreement were examined for the outputs of eight GCMs and compared with observation. It was found that the models are able to reproduce rainfall and IAV to different extents. The predictive power of GCMs was also judged by determining the signal-to-noise ratio and the external error variance; it was noted that the predictive power of the models was usually very low. To examine dominant modes of interannual variability, empirical orthogonal function (EOF) analysis was also conducted. EOF analysis of the models revealed they were capable of representing the observed precipitation variability to some extent. The teleconnection between the sea surface temperature (SST) and northeast monsoon rainfall was also investigated and results suggest that during OND the SST over the equatorial Indian Ocean, the Bay of Bengal, the central Pacific Ocean (over Nino3 region), and the north and south Atlantic Ocean enhances northeast monsoon rainfall. This observed phenomenon is only predicted by the CCM3v6 model.
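Two of the skill measures named above can be computed directly: Willmott's index of agreement and a simple signal-to-noise ratio (variance of the ensemble mean over the mean intra-ensemble variance). The rainfall numbers below are made up for illustration; the formulas are the standard ones, not necessarily the exact variants used in the paper.

```python
import numpy as np

obs = np.array([210., 180., 250., 300., 190., 260.])    # observed OND rainfall
model = np.array([200., 200., 230., 280., 210., 240.])  # one GCM's forecasts

def index_of_agreement(o, p):
    """Willmott's d: 1 = perfect agreement, 0 = none."""
    om = o.mean()
    return 1 - ((p - o) ** 2).sum() / ((np.abs(p - om) + np.abs(o - om)) ** 2).sum()

# A toy 3-member ensemble for the same 6 years (offsets are illustrative).
members = np.stack([model, model + 15, model - 10])
signal = members.mean(axis=0).var()      # variance of the ensemble mean
noise = members.var(axis=0).mean()       # mean variance across members
print(round(index_of_agreement(obs, model), 3), round(signal / noise, 1))
```

A signal-to-noise ratio near or below 1 is the quantitative sense in which the paper finds the models' predictive power "usually very low": the spread among members swamps the predictable signal.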
Cortical activity predicts good variation in human motor output.
Babikian, Sarine; Kanso, Eva; Kutch, Jason J
2017-04-01
Human movement patterns have been shown to be particularly variable if many combinations of activity in different muscles all achieve the same task goal (i.e., are goal-equivalent). The nervous system appears to automatically vary its output among goal-equivalent combinations of muscle activity to minimize muscle fatigue or distribute tissue loading, but the neural mechanism of this "good" variation is unknown. Here we use a bimanual finger task, electroencephalography (EEG), and machine learning to determine if cortical signals can predict goal-equivalent variation in finger force output. 18 healthy participants applied left and right index finger forces to repeatedly perform a task that involved matching a total (sum of right and left) finger force. As in previous studies, we observed significantly more variability in goal-equivalent muscle activity across task repetitions compared to variability in muscle activity that would not achieve the goal: participants achieved the task in some repetitions with more right finger force and less left finger force (right > left) and in other repetitions with less right finger force and more left finger force (left > right). We found that EEG signals from the 500 milliseconds (ms) prior to each task repetition could make a significant prediction of which repetitions would have right > left and which would have left > right. We also found that cortical maps of sites contributing to the prediction contain both motor and pre-motor representation in the appropriate hemisphere. Thus, goal-equivalent variation in motor output may be implemented at a cortical level.
NASA Technical Reports Server (NTRS)
Meyer, H. D.
1993-01-01
The Acoustic Radiation Code (ARC) is a finite element program used on the IBM mainframe to predict far-field acoustic radiation from a turbofan engine inlet. In this report, requirements for developers of internal aerodynamic codes regarding use of their program output as input for the ARC are discussed. More specifically, the particular input needed from the Bolt, Beranek and Newman/Pratt and Whitney (turbofan source noise generation) Code (BBN/PWC) is described. In a separate analysis, a method of coupling the source and radiation models, that recognizes waves crossing the interface in both directions, has been derived. A preliminary version of the coupled code has been developed and used for initial evaluation of coupling issues. Results thus far have shown that reflection from the inlet is sufficient to indicate that full coupling of the source and radiation fields is needed for accurate noise predictions. Also, for this contract, the ARC has been modified for use on the Sun and Silicon Graphics Iris UNIX workstations. Changes and additions involved in this effort are described in an appendix.
NASA Technical Reports Server (NTRS)
Freeman, W.; Ilcewicz, L.; Swanson, G.; Gutowski, T.
1992-01-01
The Structures Technology Program Office (STPO) at NASA LaRC has initiated development of a conceptual and preliminary designers' cost prediction model. The model will provide a technically sound method for evaluating the relative cost of different composite structural designs, fabrication processes, and assembly methods that can be compared to equivalent metallic parts or assemblies. The feasibility of developing cost prediction software in a modular form for interfacing with state-of-the-art preliminary design tools and computer aided design programs is being evaluated. The goal of this task is to establish theoretical cost functions that relate geometric design features to summed material cost and labor content in terms of process mechanics and physics. The output of the designers' present analytical tools will be input for the designers' cost prediction model to provide the designer with a database and deterministic cost methodology that allows one to trade and synthesize designs with both cost and weight as objective functions for optimization. This paper presents the team members, approach, goals, plans, and progress to date for development of COSTADE (Cost Optimization Software for Transport Aircraft Design Evaluation).
Siaw, Fei-Lu; Chong, Kok-Keong
2013-01-01
This paper presents a new systematic approach to analyze all possible array configurations in order to determine the optimal dense-array configuration for concentrator photovoltaic (CPV) systems. The proposed method is fast, simple, reasonably accurate, and very useful as a preliminary study before constructing a dense-array CPV panel. Using measured flux distribution data, each CPV cell's voltage and current values at three critical points (short-circuit, open-circuit, and maximum power point) are determined. From there, an algorithm groups the cells into basic modules. The next step is I-V curve prediction, to find the maximum output power of each array configuration. As a case study, twenty different I-V predictions are made for a prototype of nonimaging planar concentrator, and the array configuration that yields the highest output power is determined. The result is then verified by assembling and testing an actual dense-array on the prototype. It was found that the measured I-V curve closely resembles the simulated I-V prediction, and the measured maximum output power varies by only 1.34%.
A Systematic Method of Interconnection Optimization for Dense-Array Concentrator Photovoltaic System
Siaw, Fei-Lu
2013-01-01
This paper presents a new systematic approach to analyze all possible array configurations in order to determine the optimal dense-array configuration for concentrator photovoltaic (CPV) systems. The proposed method is fast, simple, reasonably accurate, and very useful as a preliminary study before constructing a dense-array CPV panel. Using measured flux distribution data, each CPV cell's voltage and current values at three critical points (short-circuit, open-circuit, and maximum power point) are determined. From there, an algorithm groups the cells into basic modules. The next step is I-V curve prediction, to find the maximum output power of each array configuration. As a case study, twenty different I-V predictions are made for a prototype of nonimaging planar concentrator, and the array configuration that yields the highest output power is determined. The result is then verified by assembling and testing an actual dense-array on the prototype. It was found that the measured I-V curve closely resembles the simulated I-V prediction, and the measured maximum output power varies by only 1.34%. PMID:24453823
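The I-V prediction step for series-connected cells can be sketched as follows: the module current is common, so each cell's voltage is interpolated at that current and the voltages are summed. Each cell is reduced to its three critical points joined piecewise-linearly. The cell numbers are illustrative, not the paper's measured flux-distribution data.

```python
import numpy as np

cells = [  # (Voc, Vmp, Imp, Isc) per cell under non-uniform flux (assumed)
    (2.6, 2.2, 7.5, 8.0),
    (2.6, 2.1, 6.8, 7.2),
    (2.5, 2.1, 7.0, 7.6),
]

def v_of_i(cell, i):
    """Piecewise-linear V(I) through (0, Voc), (Imp, Vmp), (Isc, 0)."""
    voc, vmp, imp, isc = cell
    return np.interp(i, [0, imp, isc], [voc, vmp, 0.0])

# Series string: sweep the common current up to the weakest cell's Isc,
# sum the cell voltages, and locate the maximum power point.
currents = np.linspace(0, min(c[3] for c in cells), 500)
voltages = sum(v_of_i(c, currents) for c in cells)
power = currents * voltages
best = power.argmax()
print(round(currents[best], 2), round(power[best], 1))
```

The maximum power point lands just below the weakest cell's Imp, which is exactly why grouping mismatched cells into well-chosen modules changes the array's output power.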
Tisa, Farhana; Davoody, Meysam; Abdul Raman, Abdul Aziz; Daud, Wan Mohd Ashri Wan
2015-01-01
The efficiency of phenol degradation via the Fenton reaction using a mixture of heterogeneous goethite catalyst with homogeneous ferrous ion was analyzed as a function of three independent variables: initial concentration of phenol (60 to 100 mg/L), weight ratio of initial phenol concentration to that of H2O2 (1:6 to 1:14), and weight ratio of initial goethite catalyst concentration to that of H2O2 (1:0.3 to 1:0.7). More than 90% phenol removal and more than 40% TOC removal were achieved within 60 minutes of reaction. Two separate models were developed using artificial neural networks to predict the degradation percentage achieved by a combination of Fe3+ and Fe2+ catalysts. Five operational parameters were employed as inputs, while phenol degradation and TOC removal were the outputs of the developed models. Satisfactory agreement was observed between testing data and the predicted values (R2 = 0.9214 for phenol and R2 = 0.9082 for TOC). PMID:25849556
NASA Technical Reports Server (NTRS)
Farassat, F.; Succi, G. P.
1980-01-01
A review of propeller noise prediction technology is presented which highlights the developments in the field from the successful attempt of Gutin to the current sophisticated techniques. Two methods for the prediction of discrete frequency noise from conventional and advanced propellers in forward flight are described. These methods, developed at MIT and NASA Langley Research Center, are based on different time domain formulations. Brief descriptions of the computer algorithms based on these formulations are given. The output of these two programs, which is the acoustic pressure signature, is Fourier analyzed to get the acoustic pressure spectrum. The main difference between the programs as they are coded now is that the Langley program can handle propellers with supersonic tip speed while the MIT program is for subsonic tip speed propellers. Comparisons of the calculated and measured acoustic data for a conventional and an advanced propeller show good agreement in general.
Prefrontal mediation of the reading network predicts intervention response in dyslexia.
Aboud, Katherine S; Barquero, Laura A; Cutting, Laurie E
2018-04-01
A primary challenge facing the development of interventions for dyslexia is identifying effective predictors of intervention response. While behavioral literature has identified core cognitive characteristics of response, the distinction of reading versus executive cognitive contributions to response profiles remains unclear, due in part to the difficulty of segregating these constructs using behavioral outputs. In the current study we used functional neuroimaging to piece apart the mechanisms of how/whether executive and reading network relationships are predictive of intervention response. We found that readers who are responsive to intervention have more typical pre-intervention functional interactions between executive and reading systems compared to nonresponsive readers. These findings suggest that intervention response in dyslexia is influenced not only by domain-specific reading regions, but also by contributions from intervening domain-general networks. Our results make a significant gain in identifying predictive bio-markers of outcomes in dyslexia, and have important implications for the development of personalized clinical interventions. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Freeman, William T.; Ilcewicz, L. B.; Swanson, G. D.; Gutowski, T.
1992-01-01
A conceptual and preliminary designers' cost prediction model has been initiated. The model will provide a technically sound method for evaluating the relative cost of different composite structural designs, fabrication processes, and assembly methods that can be compared to equivalent metallic parts or assemblies. The feasibility of developing cost prediction software in a modular form for interfacing with state-of-the-art preliminary design tools and computer aided design programs is being evaluated. The goal of this task is to establish theoretical cost functions that relate geometric design features to summed material cost and labor content in terms of process mechanics and physics. The output of the designers' present analytical tools will be input for the designers' cost prediction model to provide the designer with a database and deterministic cost methodology that allows one to trade and synthesize designs with both cost and weight as objective functions for optimization. The approach, goals, plans, and progress are presented for development of COSTADE (Cost Optimization Software for Transport Aircraft Design Evaluation).
Integrated Wind Power Planning Tool
NASA Astrophysics Data System (ADS)
Rosgaard, M. H.; Giebel, G.; Nielsen, T. S.; Hahmann, A.; Sørensen, P.; Madsen, H.
2012-04-01
This poster presents the current state of the public service obligation (PSO) funded project PSO 10464, with the working title "Integrated Wind Power Planning Tool". The project commenced October 1, 2011, and the goal is to integrate a numerical weather prediction (NWP) model with purely statistical tools in order to assess wind power fluctuations, with focus on long term power system planning for future wind farms as well as short term forecasting for existing wind farms. Currently, wind power fluctuation models are either purely statistical or integrated with NWP models of limited resolution. With regard to the latter, one such simulation tool has been developed at the Wind Energy Division, Risø DTU, intended for long term power system planning. As part of the PSO project the inferior NWP model used at present will be replaced by the state-of-the-art Weather Research & Forecasting (WRF) model. Furthermore, the integrated simulation tool will be improved so it can simultaneously handle 10-50 times more turbines than the present ~300, and additional atmospheric parameters will be included in the model. The WRF data will also be input for a statistical short term prediction model to be developed in collaboration with ENFOR A/S, a Danish company that specialises in forecasting and optimisation for the energy sector. This integrated prediction model will allow for the description of the expected variability in wind power production in the coming hours to days, accounting for its spatio-temporal dependencies, and depending on the prevailing weather conditions defined by the WRF output. The output from the integrated prediction tool constitutes scenario forecasts for the coming period, which can then be fed into any type of system model or decision making problem to be solved.
The high resolution of the WRF results loaded into the integrated prediction model will ensure that a high-accuracy data basis is available for use in the decision making process of the Danish transmission system operator, and the need for high-accuracy predictions will only increase over the next decade as Denmark approaches the goal of 50% wind power based electricity in 2020, from the current 20%.
A ten-year review of enterocutaneous fistulas after laparotomy for trauma.
Fischer, Peter E; Fabian, Timothy C; Magnotti, Louis J; Schroeppel, Thomas J; Bee, Tiffany K; Maish, George O; Savage, Stephanie A; Laing, Ashley E; Barker, Andrew B; Croce, Martin A
2009-11-01
In the era of open abdomen management, the complication of enterocutaneous fistula (ECF) seems to be increasing in frequency. In nontrauma patients, reported mortality rates are 7% to 20%, and spontaneous closure rates are approximately 25%. This study, the largest reported series of ECFs caused exclusively by trauma, examines the characteristics unique to this population. Trauma patients with an ECF at a single regional trauma center over a 10-year period were reviewed. Parameters studied included fistula output, site, nutritional status, operative history, and fistula resolution (spontaneous vs. operative). In total, 2,224 patients underwent a trauma laparotomy and survived longer than 4 days. Of these, 43 patients (1.9%) had ECF. The rate of ECF was 2.22% in men and 0.74% in women. Patients with open abdomen had a higher ECF incidence (8% vs. 0.5%) and lower rate of spontaneous closure (37% vs. 45%). Spontaneous closure occurred in 31% with high-output fistulas, 13% with medium output, and 55% with low output. The mortality rate of ECF was 14% after an average stay of 59 days in the intensive care unit. With damage-control laparotomies, the traumatic ECF rate is increasing and is a different entity than nontraumatic ECF. Although the two populations have similar mortality rates, the trauma cohort demonstrates higher spontaneous closure rates and a curiously higher rate of development in men. Fistula output was not predictive of spontaneous closure.
NASA Astrophysics Data System (ADS)
Zhu, Kaiqun; Song, Yan; Zhang, Sunjie; Zhong, Zhaozhun
2017-07-01
In this paper, a non-fragile observer-based output feedback control problem for the polytopic uncertain system under a distributed model predictive control (MPC) approach is discussed. By decomposing the global system into subsystems, the computational complexity is reduced, so online design time is saved. Moreover, an observer-based output feedback control algorithm is proposed in the framework of distributed MPC to deal with the difficulties in obtaining state measurements. In this way, the presented observer-based output-feedback MPC strategy is more flexible and applicable in practice than the traditional state-feedback one. Furthermore, the non-fragility of the controller has been taken into consideration to increase the robustness of the polytopic uncertain system. After that, a sufficient stability criterion is presented by using a Lyapunov-like functional approach; meanwhile, the corresponding control law and the upper bound of the quadratic cost function are derived by solving an optimisation subject to convex constraints. Finally, simulation examples are employed to show the effectiveness of the method.
Program Predicts Performance of Optical Parametric Oscillators
NASA Technical Reports Server (NTRS)
Cross, Patricia L.; Bowers, Mark
2006-01-01
A computer program predicts the performances of solid-state lasers that operate at wavelengths from ultraviolet through mid-infrared and that comprise various combinations of stable and unstable resonators, optical parametric oscillators (OPOs), and sum-frequency generators (SFGs), including second-harmonic generators (SHGs). The input to the program describes the signal, idler, and pump beams; the SFG and OPO crystals; and the laser geometry. The program calculates the electric fields of the idler, pump, and output beams at three locations (inside the laser resonator, just outside the input mirror, and just outside the output mirror) as functions of time for the duration of the pump beam. For each beam, the electric field is used to calculate the fluence at the output mirror, plus summary parameters that include the centroid location, the radius of curvature of the wavefront leaving through the output mirror, the location and size of the beam waist, and a quantity known, variously, as a propagation constant or beam-quality factor. The program provides a typical Windows interface for entering data and selecting files. The program can include as many as six plot windows, each containing four graphs.
Mesoscale Numerical Simulations of the IAS Circulation
NASA Astrophysics Data System (ADS)
Mooers, C. N.; Ko, D.
2008-05-01
Real-time nowcasts and forecasts of the IAS circulation have been made for several years with mesoscale resolution using the Navy Coastal Ocean Model (NCOM) implemented for the IAS. It is commonly called IASNFS and is driven by the lower resolution Global NCOM on the open boundaries, synoptic atmospheric forcing obtained from the Navy Global Atmospheric Prediction System (NOGAPS), and assimilated satellite-derived sea surface height anomalies and sea surface temperature. Here, examples of the model output are demonstrated; e.g., Gulf of Mexico Loop Current eddy shedding events and the meandering Caribbean Current jet and associated eddies. Overall, IASNFS is ready for further analysis, application to a variety of studies, and downscaling to even higher resolution shelf models. Its output fields are available online through NOAA's National Coastal Data Development Center (NCDDC), located at the Stennis Space Center.
CRISPRDetect: A flexible algorithm to define CRISPR arrays.
Biswas, Ambarish; Staals, Raymond H J; Morales, Sergio E; Fineran, Peter C; Brown, Chris M
2016-05-17
CRISPR (clustered regularly interspaced short palindromic repeats) RNAs provide the specificity for noncoding RNA-guided adaptive immune defence systems in prokaryotes. CRISPR arrays consist of repeat sequences separated by specific spacer sequences. CRISPR arrays have previously been identified in a large proportion of prokaryotic genomes. However, currently available detection algorithms do not utilise recently discovered features regarding CRISPR loci. We have developed a new approach to automatically detect, predict and interactively refine CRISPR arrays. It is available as a web program and command line from bioanalysis.otago.ac.nz/CRISPRDetect. CRISPRDetect discovers putative arrays, extends the array by detecting additional variant repeats, corrects the direction of arrays, refines the repeat/spacer boundaries, and annotates different types of sequence variations (e.g. insertion/deletion) in near identical repeats. Due to these features, CRISPRDetect has significant advantages when compared to existing identification tools. As well as further support for small, medium and large repeats, CRISPRDetect identified a class of arrays with 'extra-large' repeats in bacteria (repeats 44-50 nt). The CRISPRDetect output is integrated with other analysis tools. Notably, the predicted spacers can be directly utilised by CRISPRTarget to predict targets. CRISPRDetect enables more accurate detection of arrays and spacers and its GFF output is suitable for inclusion in genome annotation pipelines and visualisation.
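A GFF output like the one mentioned above is consumed downstream with a standard GFF3 parser. The columns below follow the GFF3 specification; the sample line's feature types and attribute names are illustrative assumptions, not guaranteed to match CRISPRDetect's exact output.

```python
# Hypothetical two-feature sample: an array and one repeat within it.
sample_gff = """\
chr1\tCRISPRDetect\trepeat_region\t1000\t1500\t.\t+\t.\tID=array1
chr1\tCRISPRDetect\tdirect_repeat\t1000\t1028\t.\t+\t.\tParent=array1
"""

def parse_gff3(text):
    """Parse GFF3 lines into dicts (standard 9-column, tab-separated format)."""
    records = []
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        seqid, source, ftype, start, end, score, strand, phase, attrs = line.split("\t")
        attributes = dict(kv.split("=", 1) for kv in attrs.split(";") if kv)
        records.append({"seqid": seqid, "type": ftype,
                        "start": int(start), "end": int(end),
                        "strand": strand, "attributes": attributes})
    return records

arrays = [r for r in parse_gff3(sample_gff) if r["type"] == "repeat_region"]
print(arrays[0]["start"], arrays[0]["attributes"]["ID"])  # 1000 array1
```

Because the format is plain GFF3, such records drop straight into genome annotation pipelines and browsers without any CRISPRDetect-specific tooling.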
miRanalyzer: a microRNA detection and analysis tool for next-generation sequencing experiments.
Hackenberg, Michael; Sturm, Martin; Langenberger, David; Falcón-Pérez, Juan Manuel; Aransay, Ana M
2009-07-01
Next-generation sequencing now makes it possible to sequence small RNA molecules and estimate their expression levels. Consequently, there is a high demand for bioinformatics tools that can cope with the several gigabytes of sequence data generated in each deep-sequencing experiment. In this context, we developed miRanalyzer, a web server tool for the analysis of deep-sequencing experiments on small RNAs. The tool requires a simple input file containing a list of unique reads and their copy numbers (expression levels). Using these data, miRanalyzer (i) detects all known microRNA sequences annotated in miRBase, (ii) finds all perfect matches against other libraries of transcribed sequences and (iii) predicts new microRNAs. The prediction of new microRNAs is an especially important point, as there are many species with very few known microRNAs. We therefore implemented a highly accurate machine learning algorithm for the prediction of new microRNAs that reaches AUC values of 97.9% and recall values of up to 75% on unseen data. The web tool summarizes all the described steps in a single output page, which provides a comprehensive overview of the analysis, with links to more detailed output pages for each analysis module. miRanalyzer is available at http://web.bioinformatics.cicbiogune.es/microRNA/.
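Step (i), detecting known microRNAs from a list of unique reads with copy numbers, reduces in its simplest form to a weighted exact-match lookup against a library of mature sequences. A minimal sketch (the dictionary-based matching and the example below are illustrative; miRanalyzer's real matching pipeline is more permissive and handles mismatches and multiple libraries):

```python
def count_known_mirnas(reads, mirna_library):
    """reads: {read_sequence: copy_number}; mirna_library: {mature_sequence: name}.
    Returns expression counts per known miRNA (exact matches only)."""
    counts = {}
    for read, copies in reads.items():
        name = mirna_library.get(read)
        if name is not None:
            counts[name] = counts.get(name, 0) + copies
    return counts
```

Reads that match nothing in the library would then be passed on to steps (ii) and (iii), where the novel-miRNA classifier takes over.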
NASA Technical Reports Server (NTRS)
Morin, Bruce L.
2010-01-01
Pratt & Whitney has developed a Broadband Fan Noise Prediction System (BFaNS) for turbofan engines. This system computes the noise generated by turbulence impinging on the leading edges of the fan and fan exit guide vane, and noise generated by boundary-layer turbulence passing over the fan trailing edge. BFaNS has been validated on three fan rigs that were tested during the NASA Advanced Subsonic Technology Program (AST). The predicted noise spectra agreed well with measured data. The predicted effects of fan speed, vane count, and vane sweep also agreed well with measurements. The noise prediction system consists of two computer programs: Setup_BFaNS and BFaNS. Setup_BFaNS converts user-specified geometry and flow-field information into a BFaNS input file. From this input file, BFaNS computes the inlet and aft broadband sound power spectra generated by the fan and FEGV. The output file from BFaNS contains the inlet, aft and total sound power spectra from each noise source. This report is the second volume of a three-volume set documenting the Broadband Fan Noise Prediction System: Volume 1: Setup_BFaNS User's Manual and Developer's Guide; Volume 2: BFaNS User's Manual and Developer's Guide; and Volume 3: Validation and Test Cases. The present volume begins with an overview of the Broadband Fan Noise Prediction System, followed by step-by-step instructions for installing and running BFaNS. It concludes with technical documentation of the BFaNS computer program.
NASA Technical Reports Server (NTRS)
Morin, Bruce L.
2010-01-01
Pratt & Whitney has developed a Broadband Fan Noise Prediction System (BFaNS) for turbofan engines. This system computes the noise generated by turbulence impinging on the leading edges of the fan and fan exit guide vane, and noise generated by boundary-layer turbulence passing over the fan trailing edge. BFaNS has been validated on three fan rigs that were tested during the NASA Advanced Subsonic Technology Program (AST). The predicted noise spectra agreed well with measured data. The predicted effects of fan speed, vane count, and vane sweep also agreed well with measurements. The noise prediction system consists of two computer programs: Setup_BFaNS and BFaNS. Setup_BFaNS converts user-specified geometry and flow-field information into a BFaNS input file. From this input file, BFaNS computes the inlet and aft broadband sound power spectra generated by the fan and FEGV. The output file from BFaNS contains the inlet, aft and total sound power spectra from each noise source. This report is the first volume of a three-volume set documenting the Broadband Fan Noise Prediction System: Volume 1: Setup_BFaNS User's Manual and Developer's Guide; Volume 2: BFaNS User's Manual and Developer's Guide; and Volume 3: Validation and Test Cases. The present volume begins with an overview of the Broadband Fan Noise Prediction System, followed by step-by-step instructions for installing and running Setup_BFaNS. It concludes with technical documentation of the Setup_BFaNS computer program.
Integration of Tuyere, Raceway and Shaft Models for Predicting Blast Furnace Process
NASA Astrophysics Data System (ADS)
Fu, Dong; Tang, Guangwu; Zhao, Yongfu; D'Alessio, John; Zhou, Chenn Q.
2018-06-01
A novel modeling strategy is presented for simulating the blast furnace iron making process. Such physical and chemical phenomena are taking place across a wide range of length and time scales, and three models are developed to simulate different regions of the blast furnace, i.e., the tuyere model, the raceway model and the shaft model. This paper focuses on the integration of the three models to predict the entire blast furnace process. Mapping output and input between models and an iterative scheme are developed to establish communications between models. The effects of tuyere operation and burden distribution on blast furnace fuel efficiency are investigated numerically. The integration of different models provides a way to realistically simulate the blast furnace by improving the modeling resolution on local phenomena and minimizing the model assumptions.
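The iterative coupling of sub-models can be sketched as a fixed-point loop: each model's output is mapped to the next model's input, and the shaft solution is fed back to the tuyere side until the exchanged values stop changing. The scalar stand-ins below are only schematic placeholders for the real tuyere, raceway and shaft CFD models:

```python
def integrate_models(tuyere, raceway, shaft, x0, tol=1e-8, max_iter=100):
    """Iterate tuyere -> raceway -> shaft, feeding the shaft solution back to
    the tuyere-side input, until the exchanged value converges."""
    x = x0
    for it in range(1, max_iter + 1):
        raceway_in = tuyere(x)          # tuyere model output -> raceway input
        shaft_in = raceway(raceway_in)  # raceway output -> shaft boundary
        x_new = shaft(shaft_in)         # shaft output closes the loop
        if abs(x_new - x) < tol:
            return x_new, it
        x = x_new
    raise RuntimeError("model coupling did not converge")
```

When each sub-model is a contraction with respect to the exchanged boundary values, the loop converges to a self-consistent furnace state; the real implementation exchanges full field maps rather than scalars.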
NASA Astrophysics Data System (ADS)
Ma, Xunjun; Lu, Yang; Wang, Fengjiao
2017-09-01
This paper presents recent advances in the reduction of multifrequency noise inside a helicopter cabin using an active structural acoustic control system based on the active gearbox strut approach. To attenuate the multifrequency gearbox vibrations and the resulting noise, a new scheme of discrete model predictive sliding mode control is proposed, based on a controlled auto-regressive moving average model. Its implementation needs only input/output data, so a broader frequency range of the controlled system is modelled and the burden of state observer design is relieved. Furthermore, a new iterative form of the algorithm is designed, improving development efficiency and run speed. To verify the algorithm's effectiveness and self-adaptability, real-time active control experiments are performed on a newly developed helicopter model system. The helicopter model can generate gear meshing vibration/noise similar to a real helicopter, with a specially designed gearbox and active struts. The algorithm's control abilities are checked progressively in single-input single-output and multiple-input multiple-output experiments via different feedback strategies: (1) controlling gear meshing noise by attenuating vibrations at key points on the transmission path, and (2) directly controlling the gear meshing noise in the cabin using the actuators. Results confirm that the active control system is practical for cancelling multifrequency helicopter interior noise and also weakens the frequency modulation of the tones. In many cases, the attenuation of the measured noise exceeds 15 dB, with the maximum reduction reaching 31 dB. The control process is also demonstrated to be smoother and faster.
Principal Dynamic Mode Analysis of the Hodgkin–Huxley Equations
Eikenberry, Steffen E.; Marmarelis, Vasilis Z.
2015-01-01
We develop an autoregressive model framework based on the concept of Principal Dynamic Modes (PDMs) for the process of action potential (AP) generation in the excitable neuronal membrane described by the Hodgkin–Huxley (H–H) equations. The model's exogenous input is injected current, and whenever the membrane potential output exceeds a specified threshold, it is fed back as a second input. The PDMs are estimated from the previously developed Nonlinear Autoregressive Volterra (NARV) model, and represent an efficient functional basis for Volterra kernel expansion. The PDM-based model admits a modular representation, consisting of the forward and feedback PDM bases as linear filterbanks for the exogenous and autoregressive inputs, respectively, whose outputs are then fed to a static nonlinearity composed of polynomials operating on the PDM outputs and cross-terms of pair-products of PDM outputs. A two-step procedure for model reduction is performed: first, influential subsets of the forward and feedback PDM bases are identified and selected as the reduced PDM bases. Second, the terms of the static nonlinearity are pruned. The first step reduces model complexity from a total of 65 coefficients to 27, while the second further reduces the model coefficients to only eight. It is demonstrated that the performance cost of model reduction in terms of out-of-sample prediction accuracy is minimal. Unlike the full model, the eight coefficient pruned model can be easily visualized to reveal the essential system components, and thus the data-derived PDM model can yield insight into the underlying system structure and function. PMID:25630480
Predicting Power Output of Upper Body using the OMNI-RES Scale.
Bautista, Iker J; Chirosa, Ignacio J; Tamayo, Ignacio Martín; González, Andrés; Robinson, Joseph E; Chirosa, Luis J; Robertson, Robert J
2014-12-09
The main aim of this study was to determine the optimal training zone for maximum power output. This was to be achieved through estimating mean bar velocity of the concentric phase of a bench press using a prediction equation. The values for the prediction equation would be obtained using OMNI-RES scale values of different loads of the bench press exercise. Sixty males (age 23.61 ± 2.81 years; body height 176.29 ± 6.73 cm; body mass 73.28 ± 4.75 kg) voluntarily participated in the study and were tested using an incremental protocol on a Smith machine to determine one repetition maximum (1RM) in the bench press exercise. A linear regression analysis produced a strong correlation (r = -0.94) between rating of perceived exertion (RPE) and mean bar velocity (Velmean). The Pearson correlation analysis between real power output (PotReal) and estimated power (PotEst) showed a strong correlation coefficient of r = 0.77, significant at a level of p = 0.01. Therefore, the OMNI-RES scale can be used to predict Velmean in the bench press exercise to control the intensity of the exercise. The positive relationship between PotReal and PotEst allowed for the identification of a maximum power-training zone.
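The study's prediction chain, RPE to mean bar velocity by linear regression and then power from load and predicted velocity, can be sketched as follows. The regression here is fitted to synthetic data; none of the coefficients are the study's actual values, and the power formula is the simple load-times-velocity estimate, stated only for illustration:

```python
import numpy as np

def fit_rpe_velocity(rpe, vel_mean):
    """Least-squares line vel_mean ~ a*rpe + b (the paper reports r = -0.94
    between RPE and mean bar velocity)."""
    a, b = np.polyfit(rpe, vel_mean, 1)
    return a, b

def estimate_power(load_kg, rpe, a, b, g=9.81):
    """PotEst: estimated mean concentric power = load * g * predicted velocity."""
    vel_est = a * rpe + b
    return load_kg * g * vel_est
```

Once fitted, the line lets a coach target the RPE (and hence load) at which estimated power peaks, which is the maximum power-training zone the study identifies.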
Three predictions of the economic concept of unit price in a choice context.
Madden, G J; Bickel, W K; Jacobs, E A
2000-01-01
Economic theory makes three predictions about consumption and response output in a choice situation: (a) When plotted on logarithmic coordinates, total consumption (i.e., summed across concurrent sources of reinforcement) should be a positively decelerating function, and total response output should be a bitonic function of unit price increases; (b) total consumption and response output should be determined by the value of the unit price ratio, independent of its cost and benefit components; and (c) when a reinforcer is available at the same unit price across all sources of reinforcement, consumption should be equal between these sources. These predictions were assessed in human cigarette smokers who earned cigarette puffs in a two-choice situation at a range of unit prices. In some sessions, smokers chose between different amounts of puffs, both available at identical unit prices. Individual subjects' data supported the first two predictions but failed to support the third. Instead, at low unit prices, the relatively larger reinforcer (and larger response requirement) was preferred, whereas at high unit prices, the smaller reinforcer (and smaller response requirement) was preferred. An expansion of unit price is proposed in which handling costs and the discounted value of reinforcers available according to ratio schedules are incorporated.
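Predictions (a) and (b) follow from any suitably shaped demand curve. One common parametric choice in behavioral economics is an exponential demand equation, sketched below with arbitrary illustrative parameters (not fits to the smokers' data):

```python
import math

def consumption(unit_price, q0=100.0, k=2.0, alpha=0.01):
    """Exponential demand: log10 Q = log10 Q0 + k*(exp(-alpha*Q0*P) - 1).
    Positively decelerating in log-log coordinates, as in prediction (a)."""
    return q0 * 10 ** (k * (math.exp(-alpha * q0 * unit_price) - 1.0))

def response_output(unit_price, **kw):
    """Responses emitted = unit price * consumption; bitonic in unit price."""
    return unit_price * consumption(unit_price, **kw)
```

Because the model depends on cost and benefit only through the single ratio P, scaling both components together leaves predicted consumption unchanged, which is prediction (b); the third prediction (indifference between sources at equal unit price) is the one the subjects' data contradicted.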
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ajami, N K; Duan, Q; Gao, X
2005-04-11
This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), the Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporate bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
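The two simplest schemes can be sketched directly: SMA is an unweighted mean of the member outputs, while a WAM-style combination fits weights to observations over a training period. The least-squares fit below is a schematic stand-in, not the exact DMIP weighting or bias-correction procedure:

```python
import numpy as np

def sma(preds):
    """Simple Multi-model Average: unweighted mean over the M member models.
    preds: (T, M) array of member outputs."""
    return preds.mean(axis=1)

def wam_weights(preds, obs):
    """WAM-style combination (schematic): least-squares weights fitted on a
    training period.  preds: (T, M) member outputs; obs: (T,) observations."""
    w, *_ = np.linalg.lstsq(preds, obs, rcond=None)
    return w
```

Removing each member's mean bias before fitting the weights is the kind of bias-correction step that the study found to outperform the plain average.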
An improved predictive functional control method with application to PMSM systems
NASA Astrophysics Data System (ADS)
Li, Shihua; Liu, Huixian; Fu, Wenshu
2017-01-01
In the usual design of prediction-model-based control methods, disturbances are considered neither in the prediction model nor in the control design. For control systems subject to large-amplitude or strong disturbances, it is then difficult to predict the future outputs precisely from the conventional prediction model, and the desired optimal closed-loop performance is degraded to some extent. To this end, an improved predictive functional control (PFC) method is developed in this paper by embedding disturbance information into the system model. A composite prediction model is obtained by embedding the estimated value of the disturbances, where a disturbance observer (DOB) is employed to estimate the lumped disturbances. The influence of disturbances on the system is thus taken into account in the optimisation procedure. Finally, considering the speed control problem for permanent magnet synchronous motor (PMSM) servo systems, a control scheme based on the improved PFC method is designed to ensure optimal closed-loop performance even in the presence of disturbances. Simulation and experimental results based on a hardware platform confirm the effectiveness of the proposed algorithm.
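The core idea, embedding a disturbance estimate in the prediction model, can be shown on a first-order scalar plant: a disturbance observer updates its estimate from the gap between measured and predicted output, and the one-step control law compensates the estimate. This is a toy sketch, not the paper's PMSM controller; the plant, gains and horizon are all illustrative:

```python
def pfc_with_dob(a, b, ref, x0, d_true, steps=200, l_gain=0.5):
    """Plant: x+ = a*x + b*u + d, with d unknown to the controller.
    The composite prediction uses the estimate d_hat, and the one-step
    control u = (ref - a*x - d_hat)/b drives the predicted output to ref."""
    x, d_hat = x0, 0.0
    for _ in range(steps):
        u = (ref - a * x - d_hat) / b
        x_pred = a * x + b * u + d_hat   # composite (disturbance-aware) prediction
        x = a * x + b * u + d_true       # true plant response
        d_hat += l_gain * (x - x_pred)   # DOB update from the prediction error
    return x, d_hat
```

With the observer gain set to zero, the output settles with a steady-state offset equal to the disturbance, which is exactly the degradation the composite prediction removes.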
Assessment of Predictive Capabilities of L1 Orbiters using Realtime Solar Wind Data
NASA Astrophysics Data System (ADS)
Holmes, J.; Kasper, J. C.; Welling, D. T.
2017-12-01
Realtime measurements of solar wind conditions at L1 point allow us to predict geomagnetic activity at Earth up to an hour in advance. These predictions are quantified in the form of geomagnetic indices such as Kp and Ap, allowing for a concise, standardized prediction and measurement system. For years, the Space Weather Prediction Center used ACE realtime solar wind data to develop its one and four-hour Kp forecasts, but has in the past year switched to using DSCOVR data as its source. In this study, the performance of both orbiters in predicting Kp over the course of one month was assessed in an attempt to determine whether or not switching to DSCOVR data has resulted in improved forecasts. The period of study was chosen to encompass a time when the satellites were close to each other, and when moderate to high activity was observed. Kp predictions were made using the Geospace Model, part of the Space Weather Modeling Framework, to simulate conditions based on observed solar wind parameters. The performance of each satellite was assessed by comparing the model output to observed data.
Head-target tracking control of well drilling
NASA Astrophysics Data System (ADS)
Agzamov, Z. V.
2018-05-01
This paper considers a method of directional drilling trajectory control for oil and gas wells using predictive models. The developed method does not rely on optimization and therefore does not require high-performance computing. Nevertheless, it allows the well-plan to be followed with high precision while taking process input saturation into account. The controller output is calculated both from the present target reference point of the well-plan and from a prediction of the well trajectory using an analytical model. This method allows a well-plan to be followed not only in angular but also in Cartesian coordinates. Simulation of the control system has confirmed high precision and good performance under a wide range of random disturbances.
Developing an Online Framework for Publication of Uncertainty Information in Hydrological Modeling
NASA Astrophysics Data System (ADS)
Etienne, E.; Piasecki, M.
2012-12-01
Inaccuracies in data collection and parameter estimation, together with imperfections in model structures, make the predictions of hydrological models uncertain. Finding a way to communicate the uncertainty information in a model output is important for decision-making. This work aims to publish uncertainty information (computed by a project partner at Penn State) associated with hydrological predictions on catchments. To this end we have developed a DB schema (derived from the CUAHSI ODM design) focused on storing uncertainty information and its associated metadata. The technologies used to build the system are: OGC's Sensor Observation Service (SOS) for publication, the UncertML markup language (also developed by the OGC) to describe uncertainty information, and the Interoperability and Automated Mapping (INTAMAP) Web Processing Service (WPS), which handles part of the statistical computations. We are developing a DRUPAL-based service to give users access to the full functionality of the system. Users will be able to request and visualize uncertainty data, and also publish their own data in the system.
Thermomechanical conditions and stresses on the friction stir welding tool
NASA Astrophysics Data System (ADS)
Atthipalli, Gowtam
Friction stir welding (FSW) has been commercially used as a joining process for aluminum and other soft materials. However, the use of this process for joining hard alloys is still developing, primarily because of the lack of cost-effective, long-lasting tools. Here I have developed numerical models to understand the thermomechanical conditions experienced by the FSW tool and to improve its reusability. A heat transfer and visco-plastic flow model is used to calculate the torque and traverse force on the tool during FSW. The computed values of torque and traverse force are validated against experimental results for FSW of AA7075, AA2524, AA6061 and Ti-6Al-4V alloys. The computed torque components are used to determine the optimum tool shoulder diameter based on the maximum use of torque and maximum grip of the tool on the plasticized workpiece material. The estimated optimum tool shoulder diameter for FSW of AA6061 and AA7075 was verified with experimental results. The computed values of traverse force and torque are used to calculate the maximum shear stress on the tool pin and thereby the load-bearing ability of the pin. The load-bearing-ability calculations are used to explain the failure of an H13 steel tool during welding of AA7075 and of a commercially pure tungsten tool during welding of L80 steel. Artificial neural network (ANN) models are developed to predict the important FSW output parameters as functions of selected input parameters. These ANNs take tool shoulder radius, pin radius, pin length, welding velocity, tool rotational speed and axial pressure as input parameters. The total torque, sliding torque, sticking torque, peak temperature, traverse force, maximum shear stress and bending stress are the outputs of the ANN models. These output parameters are selected because they define the thermomechanical conditions around the tool during FSW.
The developed ANN models are used to understand the effect of various input parameters on the total torque and traverse force during FSW of AA7075 and 1018 mild steel. The ANN models are also used to determine the tool safety factor over a wide range of input parameters. A numerical model is developed to calculate the strain and strain rates along streamlines during FSW; the strain and strain-rate values are calculated for FSW of AA2524. Three simplified models are also developed for quick estimation of output parameters such as the material velocity field, torque and peak temperature. The material velocity field is computed by adapting an analytical method for the flow of an incompressible fluid between two discs, one rotating and the other stationary. The peak temperature is estimated from a non-dimensional correlation with dimensionless heat input, which is computed using known welding parameters and material properties. The torque is computed using an analytical function based on the shear strength of the workpiece material. These simplified models are shown to predict these output parameters successfully.
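The analytical torque estimate can be sketched with the classical full-sticking decomposition, integrating the shear strength times the moment arm over the shoulder annulus, pin bottom and pin side. This is a textbook-style sketch under the assumption of uniform shear strength and full sticking; the thesis's model also accounts for partial sliding and temperature dependence, and all numbers below are illustrative:

```python
import math

def sticking_torque(tau, r_shoulder, r_pin, h_pin):
    """Total torque (N*m) assuming full sticking at shear strength tau (Pa):
    M = integral of tau * r over each tool contact surface."""
    m_shoulder = (2.0 / 3.0) * math.pi * tau * (r_shoulder**3 - r_pin**3)
    m_pin_bottom = (2.0 / 3.0) * math.pi * tau * r_pin**3
    m_pin_side = 2.0 * math.pi * tau * r_pin**2 * h_pin
    return m_shoulder + m_pin_bottom + m_pin_side
```

The cubic dependence on shoulder radius is what makes an optimum shoulder diameter meaningful once the available spindle torque is fixed.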
Advanced thermionic converter developments with microwave external pumping
NASA Technical Reports Server (NTRS)
Chiu, H. S.; Shaw, D. T.; Manikopulos, C. N.; Lee, C. H.
1977-01-01
This work reports ion generation in a cesium thermionic converter as part of advanced-model thermionic converter development research. A microwave with a frequency in the range of 1-2 GHz is used to externally pump a thermionic converter as part of our effort to verify Lam's theory. It is found that the motive peak predicted by the theory disappears whenever microwave power is used to excite the cesium plasma of the converter. The electrons are effectively heated by the microwave, and the experimental data agree with theory in the low-power output region.
NASA Astrophysics Data System (ADS)
Korobko, Dmitry A.; Zolotovskii, Igor O.; Panajotov, Krassimir; Spirin, Vasily V.; Fotiadi, Andrei A.
2017-12-01
We develop a theoretical framework for modeling a semiconductor laser coupled to an external fiber-optic ring resonator. The developed approach shows good qualitative agreement between theoretical predictions and experimental results for a particular configuration of a self-injection-locked DFB laser delivering narrow-band radiation. The model is capable of describing the main features of the experimentally measured laser outputs, such as laser line narrowing, the spectral shape of the generated radiation and mode-hopping instabilities, and makes it possible to explore the key physical mechanisms responsible for the stability of laser operation.
NASA Astrophysics Data System (ADS)
Lan, Ganhui; Tu, Yuhai
2016-05-01
Living systems have to constantly sense their external environment and adjust their internal state in order to survive and reproduce. Biological systems, from as complex as the brain to a single E. coli cell, have to process these data in order to make appropriate decisions. How do biological systems sense external signals? How do they process the information? How do they respond to signals? Through years of intense study by biologists, many key molecular players and their interactions have been identified in different biological machineries that carry out these signaling functions. However, an integrated, quantitative understanding of the whole system is still lacking for most cellular signaling pathways, not to mention the more complicated neural circuits. To study signaling processes in biology, the key thing to measure is the input-output relationship. The input is the signal itself, such as chemical concentration, external temperature, light (intensity and frequency), and more complex signals such as the face of a cat. The output can be protein conformational changes and covalent modifications (phosphorylation, methylation, etc), gene expression, cell growth and motility, as well as more complex output such as neuron firing patterns and behaviors of higher animals. Due to the inherent noise in biological systems, the measured input-output dependence is often noisy. These noisy data can be analysed by using powerful tools and concepts from information theory such as mutual information, channel capacity, and the maximum entropy hypothesis. This information theory approach has been successfully used to reveal the underlying correlations between key components of biological networks, to set bounds for network performance, and to understand possible network architecture in generating observed correlations.
Although the information theory approach provides a general tool in analysing noisy biological data and may be used to suggest possible network architectures in preserving information, it does not reveal the underlying mechanism that leads to the observed input-output relationship, nor does it tell us much about which information is important for the organism and how biological systems use information to carry out specific functions. To do that, we need to develop models of the biological machineries, e.g. biochemical networks and neural networks, to understand the dynamics of biological information processes. This is a much more difficult task. It requires deep knowledge of the underlying biological network—the main players (nodes) and their interactions (links)—in sufficient detail to build a model with predictive power, as well as quantitative input-output measurements of the system under different perturbations (both genetic variations and different external conditions) to test the model predictions to guide further development of the model. Due to the recent growth of biological knowledge thanks in part to high throughput methods (sequencing, gene expression microarray, etc) and the development of quantitative in vivo techniques such as various fluorescence technologies, these requirements are starting to be realized in different biological systems. The possible close interaction between quantitative experimentation and theoretical modeling has made systems biology an attractive field for physicists interested in quantitative biology. In this review, we describe some of the recent work in developing a quantitative predictive model of bacterial chemotaxis, which can be considered as the hydrogen atom of systems biology. Using statistical physics approaches, such as the Ising model and Langevin equation, we study how bacteria, such as E.
coli, sense and amplify external signals, how they keep a working memory of the stimuli, and how they use these data to compute the chemical gradient. In particular, we will describe how E. coli cells avoid cross-talk in a heterogeneous receptor cluster to keep a ligand-specific memory. We will also study the thermodynamic costs of adaptation for cells to maintain an accurate memory. The statistical physics based approach described here should be useful in understanding design principles for cellular biochemical circuits in general.
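The statistical-physics picture sketched above can be made concrete for a single MWC-type receptor cluster: activity is a two-state Boltzmann function of methylation and ligand free energies, and slow methylation kinetics implements integral feedback that restores activity to its set point after a stimulus. All parameter values below are rough, Tar-like illustrations of the standard formulation, not fits to data:

```python
import math

def cluster_activity(c, m, n=6, k_i=18.0, k_a=3000.0, alpha=1.7, m0=1.0):
    """MWC cluster of n receptors: a = 1/(1 + exp(n*(f_m + f_l))), with
    methylation free energy f_m = alpha*(m0 - m) and ligand free energy
    f_l = ln((1 + c/K_i)/(1 + c/K_a)).  Attractant (c up) lowers activity;
    methylation (m up) raises it back."""
    f_m = alpha * (m0 - m)
    f_l = math.log((1.0 + c / k_i) / (1.0 + c / k_a))
    return 1.0 / (1.0 + math.exp(n * (f_m + f_l)))

def adapt(c, m=1.0, dt=0.1, steps=5000, k_r=0.1, k_b=0.1):
    """Integral-feedback adaptation: dm/dt = k_r*(1 - a) - k_b*a drives the
    activity back to a* = k_r/(k_r + k_b), regardless of the ambient level c."""
    for _ in range(steps):
        a = cluster_activity(c, m)
        m += dt * (k_r * (1.0 - a) - k_b * a)
    return m, cluster_activity(c, m)
```

The adapted methylation level thus stores a working memory of the ambient concentration, while the adapted activity is concentration-independent (near-perfect adaptation).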
Lan, Ganhui; Tu, Yuhai
2016-05-01
Living systems have to constantly sense their external environment and adjust their internal state in order to survive and reproduce. Biological systems, from as complex as the brain to a single E. coli cell, have to process these data in order to make appropriate decisions. How do biological systems sense external signals? How do they process the information? How do they respond to signals? Through years of intense study by biologists, many key molecular players and their interactions have been identified in different biological machineries that carry out these signaling functions. However, an integrated, quantitative understanding of the whole system is still lacking for most cellular signaling pathways, not to say the more complicated neural circuits. To study signaling processes in biology, the key thing to measure is the input-output relationship. The input is the signal itself, such as chemical concentration, external temperature, light (intensity and frequency), and more complex signals such as the face of a cat. The output can be protein conformational changes and covalent modifications (phosphorylation, methylation, etc), gene expression, cell growth and motility, as well as more complex output such as neuron firing patterns and behaviors of higher animals. Due to the inherent noise in biological systems, the measured input-output dependence is often noisy. These noisy data can be analysed by using powerful tools and concepts from information theory such as mutual information, channel capacity, and the maximum entropy hypothesis. This information theory approach has been successfully used to reveal the underlying correlations between key components of biological networks, to set bounds for network performance, and to understand possible network architecture in generating observed correlations. 
Although the information theory approach provides a general tool for analysing noisy biological data and may be used to suggest possible network architectures for preserving information, it does not reveal the underlying mechanism that leads to the observed input-output relationship, nor does it tell us much about which information is important for the organism and how biological systems use information to carry out specific functions. To do that, we need to develop models of the biological machineries, e.g. biochemical networks and neural networks, to understand the dynamics of biological information processes. This is a much more difficult task. It requires deep knowledge of the underlying biological network (the main players, or nodes, and their interactions, or links) in sufficient detail to build a model with predictive power, as well as quantitative input-output measurements of the system under different perturbations (both genetic variations and different external conditions) to test the model predictions and guide further development of the model. Due to the recent growth of biological knowledge, thanks in part to high-throughput methods (sequencing, gene expression microarrays, etc.) and the development of quantitative in vivo techniques such as various fluorescence technologies, these requirements are starting to be met in different biological systems. This close interaction between quantitative experimentation and theoretical modeling has made systems biology an attractive field for physicists interested in quantitative biology. In this review, we describe some of the recent work in developing a quantitative predictive model of bacterial chemotaxis, which can be considered the hydrogen atom of systems biology. Using statistical physics approaches, such as the Ising model and Langevin equation, we study how bacteria, such as E. coli, sense and amplify external signals, how they keep a working memory of the stimuli, and how they use these data to compute the chemical gradient. In particular, we will describe how E. coli cells avoid cross-talk in a heterogeneous receptor cluster to keep a ligand-specific memory. We will also study the thermodynamic costs of adaptation for cells to maintain an accurate memory. The statistical physics based approach described here should be useful in understanding design principles for cellular biochemical circuits in general.
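The information-theoretic analysis of noisy input-output data mentioned above can be sketched with a minimal plug-in (empirical histogram) estimator of mutual information; the binary channel and sample counts below are illustrative, not drawn from the review.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Estimate I(X;Y) in bits from a list of (x, y) samples
    via the plug-in (empirical histogram) estimator."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), c in pxy.items():
        pj = c / n
        # pj * n * n / (px * py) is p(x,y) / (p(x) p(y))
        mi += pj * math.log2(pj * n * n / (px[x] * py[y]))
    return mi

# A noiseless binary channel carries exactly 1 bit ...
ident = [(0, 0), (1, 1)] * 500
# ... while an output independent of the input carries none.
indep = [(x, y) for x in (0, 1) for y in (0, 1)] * 250
print(mutual_information(ident))  # 1.0
print(mutual_information(indep))  # 0.0
```

Real data require care with binning and finite-sample bias, which the plug-in estimator ignores.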
The spatial structure of a nonlinear receptive field.
Schwartz, Gregory W; Okawa, Haruhisa; Dunn, Felice A; Morgan, Josh L; Kerschensteiner, Daniel; Wong, Rachel O; Rieke, Fred
2012-11-01
Understanding a sensory system implies the ability to predict responses to a variety of inputs from a common model. In the retina, this includes predicting how the integration of signals across visual space shapes the outputs of retinal ganglion cells. Existing models of this process generalize poorly to predict responses to new stimuli. This failure arises in part from properties of the ganglion cell response that are not well captured by standard receptive-field mapping techniques: nonlinear spatial integration and fine-scale heterogeneities in spatial sampling. Here we characterize a ganglion cell's spatial receptive field using a mechanistic model based on measurements of the physiological properties and connectivity of only the primary excitatory circuitry of the retina. The resulting simplified circuit model successfully predicts ganglion-cell responses to a variety of spatial patterns and thus provides a direct correspondence between circuit connectivity and retinal output.
A novel method for predicting the power outputs of wave energy converters
NASA Astrophysics Data System (ADS)
Wang, Yingguang
2018-03-01
This paper focuses on realistically predicting the power outputs of wave energy converters operating in shallow water nonlinear waves. A heaving two-body point absorber is utilized as a specific calculation example, and the generated power of the point absorber has been predicted by using a novel method (a nonlinear simulation method) that incorporates a second order random wave model into a nonlinear dynamic filter. It is demonstrated that the second order random wave model can be utilized to generate irregular waves with realistic crest-trough asymmetries and, consequently, that more accurate generated power can be predicted by solving the nonlinear dynamic filter equation with the nonlinearly simulated second order waves as inputs. The research findings demonstrate that this nonlinear simulation method can serve as a robust tool for ocean engineers in the design, analysis and optimization of wave energy converters.
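The crest-trough asymmetry that a second order wave model captures can be illustrated with the classical second order Stokes correction for a single deep-water component; the amplitude and wavenumber below are arbitrary, and the paper's model is a full second order random (multi-component) simulation, not this one-component sketch.

```python
import math

def stokes_surface(a, k, theta):
    """Free-surface elevation of a deep-water wave to second order
    in steepness: eta = a*cos(theta) + 0.5*k*a**2*cos(2*theta)."""
    return a * math.cos(theta) + 0.5 * k * a**2 * math.cos(2 * theta)

a, k = 2.0, 0.05                        # amplitude [m], wavenumber [1/m], illustrative
crest = stokes_surface(a, k, 0.0)       # 2.1 m: crests sharpened ...
trough = stokes_surface(a, k, math.pi)  # -1.9 m: ... troughs flattened
print(crest, trough)
```

A linear (first order) model would give symmetric crests and troughs of 2.0 m, underestimating crest loads on the absorber.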
NASA Astrophysics Data System (ADS)
Dhingra, Sunil; Bhushan, Gian; Dubey, Kashyap Kumar
2014-03-01
The present work studies and identifies the different variables that affect the output parameters of a single-cylinder direct injection compression ignition (CI) engine fuelled with jatropha biodiesel. Response surface methodology based on a central composite design (CCD) is used to design the experiments. Mathematical models are developed for the combustion parameters (brake specific fuel consumption (BSFC) and peak cylinder pressure (Pmax)), the performance parameter brake thermal efficiency (BTE) and the emission parameters (CO, NOx, unburnt HC and smoke) using regression techniques. These regression equations are further utilized for simultaneous optimization of the combustion (BSFC, Pmax), performance (BTE) and emission (CO, NOx, HC, smoke) parameters. As the objective is to maximize BTE and minimize BSFC, Pmax, CO, NOx, HC and smoke, a multi-objective optimization problem is formulated. The non-dominated sorting genetic algorithm-II (NSGA-II) is used to predict the Pareto-optimal sets of solutions. Experiments are performed at suitable optimal solutions for predicting the combustion, performance and emission parameters to check the adequacy of the proposed model. The Pareto-optimal sets of solutions can be used as guidelines for end users to select an optimal combination of engine output and emission parameters depending upon their own requirements.
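The core operation inside NSGA-II is non-dominated sorting. A minimal sketch of extracting the first Pareto front, with made-up (BSFC, NOx) trade-off points both to be minimized, might look like:

```python
def dominates(p, q):
    """p dominates q if it is no worse in every objective
    and strictly better in at least one (minimization)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def pareto_front(points):
    """Return the non-dominated subset (the first NSGA-II front)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (BSFC, NOx) trade-off points, both to be minimized.
pts = [(240, 8.0), (255, 6.5), (250, 9.0), (270, 6.0), (260, 6.2)]
print(pareto_front(pts))
```

The full algorithm additionally ranks later fronts and uses crowding distance for diversity; this sketch shows only the dominance test that defines the Pareto-optimal set the abstract refers to.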
Martinez, Carlos A.; Barr, Kenneth; Kim, Ah-Ram; Reinitz, John
2013-01-01
Synthetic biology offers novel opportunities for elucidating transcriptional regulatory mechanisms and enhancer logic. Complex cis-regulatory sequences—like the ones driving expression of the Drosophila even-skipped gene—have proven difficult to design from existing knowledge, presumably due to the large number of protein-protein interactions needed to drive the correct expression patterns of genes in multicellular organisms. This work discusses two novel computational methods for the custom design of enhancers that employ a sophisticated, empirically validated transcriptional model, optimization algorithms, and synthetic biology. These synthetic elements have both utilitarian and academic value, including improving existing regulatory models and addressing evolutionary questions. The first method involves the use of simulated annealing to explore the sequence space for synthetic enhancers whose expression output fit a given search criterion. The second method uses a novel optimization algorithm to find functionally accessible pathways between two enhancer sequences. These paths describe a set of mutations wherein the predicted expression pattern does not significantly vary at any point along the path. Both methods rely on a predictive mathematical framework that maps the enhancer sequence space to functional output. PMID:23732772
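The first method's simulated-annealing search over sequence space can be sketched as follows; the motif-match scoring function here is a toy stand-in for the paper's empirically validated transcriptional model, and all sequences are invented.

```python
import math, random

def anneal(seq, score, steps=5000, t0=1.0, alpha=0.999):
    """Simulated-annealing search over DNA sequence space.
    `score` (higher is better) stands in for the transcriptional
    model's fit to the target expression pattern."""
    random.seed(0)                     # reproducible illustration
    best = cur = seq
    t = t0
    for _ in range(steps):
        # Propose a single-base substitution at a random position.
        i = random.randrange(len(cur))
        cand = cur[:i] + random.choice("ACGT") + cur[i + 1:]
        d = score(cand) - score(cur)
        # Accept improvements always; accept worsening moves with
        # probability exp(d/t), then cool the temperature.
        if d >= 0 or random.random() < math.exp(d / t):
            cur = cand
            if score(cur) > score(best):
                best = cur
        t *= alpha
    return best

# Toy objective: match a hypothetical target motif.
target = "TTATCCCA"
score = lambda s: sum(a == b for a, b in zip(s, target))
print(anneal("A" * 8, score))
```

In the paper the objective is a model-predicted expression pattern, not a motif match, but the accept/reject/cool loop is the same.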
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonzalez, R.R.; McLellan, T.M.; Withey, W.R.
This report represents the results of TTCP-UTP6 efforts on modeling aspects that need to be considered when chemical protective ensembles are worn in warm environments. Since 1983, a significant database has been collected from human experimental studies and a wide range of clothing systems, from which predictive modeling equations have been developed for individuals working in temperate and hot environments, but few comparisons of the results from various model outputs have ever been carried out. This initial comparison study was part of a key technical area (KTA) project for The Technical Cooperation Program (TTCP) UTP-6 working party. A modeling workshop was conducted in Toronto, Canada on 9-10 June 1994 to discuss the data reduction and results acquired in an initial clothing analysis study of TTCP using various chemical protective garments. To our knowledge, no comprehensive study to date has focused on comparing experimental results using an international standardized heat stress procedure matched to physiological outputs from various model predictions for individuals dressed in chemical protective clothing systems. This is the major focus of this TTCP key technical study. This technical report covers one aspect of the working party's results.
Precision matters for position decoding in the early fly embryo
NASA Astrophysics Data System (ADS)
Petkova, Mariela D.; Tkacik, Gasper; Wieschaus, Eric F.; Bialek, William; Gregor, Thomas
Genetic networks can determine cell fates in multicellular organisms with precision that often reaches the physical limits of the system. However, it is unclear how the organism uses this precision and whether it has biological content. Here we address this question in the developing fly embryo, in which a genetic network of patterning genes reaches 1% precision in positioning cells along the embryo axis. The network consists of three interconnected layers: an input layer of maternal gradients, a processing layer of gap genes, and an output layer of pair-rule genes with seven-striped patterns. From measurements of gap gene protein expression in hundreds of wild-type embryos we construct a "decoder", which is a look-up table that determines cellular positions from the concentration means, variances and co-variances. When we apply the decoder to measurements in mutant embryos lacking various combinations of the maternal inputs, we predict quantitative changes in the output layer such as missing, altered or displaced stripes. We confirm these predictions by measuring pair-rule expression in the mutant embryos. Our results thereby show that the precision of the patterning network is biologically meaningful and a necessary feature for decoding cell positions in the early fly embryo.
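The look-up-table decoder described above can be sketched as a maximum-likelihood read-out under Gaussian noise. The two-gene profiles and noise width below are hypothetical, and the real decoder also uses position-dependent variances and co-variances rather than a fixed, independent `sd`.

```python
import math

def decode(reading, profiles, sd=0.1):
    """Look-up-table decoder: given a gap-gene concentration reading,
    return the position whose stored mean (+/- independent Gaussian
    noise of width sd) makes the reading most likely."""
    def loglik(means):
        return -sum((r - m) ** 2 for r, m in zip(reading, means)) / (2 * sd**2)
    return max(profiles, key=lambda pos: loglik(profiles[pos]))

# Hypothetical mean expression of two gap genes at three positions
# (fractions of embryo length).
profiles = {0.25: (0.9, 0.1), 0.50: (0.5, 0.5), 0.75: (0.1, 0.9)}
print(decode((0.55, 0.45), profiles))  # 0.5
```

Applying such a decoder to mutant expression data is what lets the abstract's authors predict shifted or missing pair-rule stripes.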
Enhancement of Local Climate Analysis Tool
NASA Astrophysics Data System (ADS)
Horsfall, F. M.; Timofeyeva, M. M.; Dutton, J.
2012-12-01
The National Oceanic and Atmospheric Administration (NOAA) National Weather Service (NWS) will enhance its Local Climate Analysis Tool (LCAT) to incorporate specific capabilities to meet the needs of various users, including the energy, health, and other communities. LCAT is an online interactive tool that provides quick and easy access to climate data and allows users to conduct analyses at the local level, such as time series analysis, trend analysis, compositing, and correlation and regression techniques, with others to be incorporated as needed. LCAT uses principles of artificial intelligence to connect human and computer perceptions of how data and scientific techniques are applied, while processing multiple simultaneous users' tasks. Future development includes expanding the types of data currently imported by LCAT (historical data at stations and climate divisions) to gridded reanalysis and General Circulation Model (GCM) data, which are available on global grids and will thus allow climate studies to be conducted at international locations. We will describe ongoing activities to incorporate NOAA Climate Forecast System reanalysis (CFSR) data and NOAA model output data, including output from the National Multi-Model Ensemble Prediction System (NMME) and longer-term projection models, as well as plans to integrate LCAT into the Earth System Grid Federation (ESGF) and its protocols for accessing model output and observational data, to ensure there is no redundancy in the development of tools that facilitate scientific advancement and the use of climate model information in applications. Validation and inter-comparison of forecast models will be included as part of the enhancement to LCAT. To ensure sustained development, we will investigate options for open-sourcing LCAT development, in particular through the University Corporation for Atmospheric Research (UCAR).
A Hybrid Parachute Simulation Environment for the Orion Parachute Development Project
NASA Technical Reports Server (NTRS)
Moore, James W.
2011-01-01
A parachute simulation environment (PSE) has been developed that aims to take advantage of legacy parachute simulation codes and modern object-oriented programming techniques. This hybrid simulation environment provides the parachute analyst with a natural and intuitive way to construct simulation tasks while preserving the pedigree and authority of established parachute simulations. NASA currently employs four simulation tools for developing and analyzing air-drop tests performed by the CEV Parachute Assembly System (CPAS) Project. These tools were developed at different times, in different languages, and with different capabilities in mind. As a result, each tool has a distinct interface and set of inputs and outputs. However, regardless of the simulation code that is most appropriate for the type of test, engineers typically perform similar tasks for each drop test, such as prediction of loads, assessment of altitude, and sequencing of disreefs or cut-aways. An object-oriented approach to simulation configuration allows the analyst to choose models of real physical test articles (parachutes, vehicles, etc.) and sequence them to achieve the desired test conditions. Once configured, these objects are translated into traditional input lists and processed by the legacy simulation codes. This approach minimizes the number of simulation inputs that the engineer must track while configuring an input file. An object-oriented approach to simulation output allows a common set of post-processing functions to perform routine tasks such as plotting and timeline generation with minimal sensitivity to the simulation that generated the data. Flight test data may also be translated into the common output class to simplify test reconstruction and analysis.
Knijnenburg, Theo A; Klau, Gunnar W; Iorio, Francesco; Garnett, Mathew J; McDermott, Ultan; Shmulevich, Ilya; Wessels, Lodewyk F A
2016-11-23
Mining large datasets using machine learning approaches often leads to models that are hard to interpret and not amenable to the generation of hypotheses that can be experimentally tested. We present 'Logic Optimization for Binary Input to Continuous Output' (LOBICO), a computational approach that infers small and easily interpretable logic models of binary input features that explain a continuous output variable. Applying LOBICO to a large cancer cell line panel, we find that logic combinations of multiple mutations are more predictive of drug response than single gene predictors. Importantly, we show that the use of the continuous information leads to robust and more accurate logic models. LOBICO implements the ability to uncover logic models around predefined operating points in terms of sensitivity and specificity. As such, it represents an important step towards practical application of interpretable logic models.
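LOBICO's idea of fitting small, interpretable logic formulas of binary features to a continuous output can be illustrated by a brute-force search over two-input AND/OR models with optional negation. The mutation matrix and drug responses below are invented, and the actual method solves a formal optimization problem with predefined operating points rather than this exhaustive toy search.

```python
from itertools import product

def fit_logic(X, y):
    """Exhaustively search two-input AND/OR models (with optional
    negation) for the one minimizing squared error against the
    continuous response y.  A toy stand-in for LOBICO's
    optimization over small logic formulas."""
    n_feat = len(X[0])
    best = None
    for i, j in product(range(n_feat), repeat=2):
        for ni, nj, op in product((0, 1), (0, 1), ("and", "or")):
            def model(row):
                a, b = row[i] ^ ni, row[j] ^ nj   # ^ applies negation
                return (a and b) if op == "and" else (a or b)
            err = sum((model(r) - t) ** 2 for r, t in zip(X, y))
            if best is None or err < best[0]:
                best = (err, i, j, ni, nj, op)
    return best

# Binary mutation matrix (rows = cell lines) and continuous drug response.
X = [(1, 1, 0), (1, 0, 1), (0, 1, 0), (0, 0, 1)]
y = [0.95, 0.10, 0.05, 0.02]   # high response only when genes 0 AND 1 mutated
err, i, j, ni, nj, op = fit_logic(X, y)
print(i, j, ni, nj, op)
```

The recovered formula "gene 0 AND gene 1" is directly readable by an experimentalist, which is the point of preferring logic models to black-box regressors.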
Jewell, Shannon L; Luecken, Linda J; Gress-Smith, Jenna; Crnic, Keith A; Gonzales, Nancy A
2015-01-01
Low-income Mexican American women experience significant health disparities during the postpartum period. Contextual stressors, such as economic stress, are theorized to affect health via dysregulated cortisol output. However, cultural protective factors including strong family support may buffer the impact of stress. In a sample of 322 low-income Mexican American women (mother age 18-42; 82% Spanish-speaking; modal family income $10,000-$15,000), we examined the interactive influence of economic stress and family support at 6 weeks postpartum on maternal cortisol output (AUCg) during a mildly challenging mother-infant interaction task at 12 weeks postpartum, controlling for 6-week maternal cortisol and depressive symptoms. The interaction significantly predicted cortisol output such that higher economic stress predicted higher cortisol only among women reporting low family support. These results suggest that family support is an important protective resource for postpartum Mexican American women experiencing elevated economic stress.
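The cortisol output measure used in this study, area under the curve with respect to ground (AUCg), is conventionally computed with the trapezoid rule over repeated samples; the sampling times and levels below are hypothetical.

```python
def auc_ground(times, levels):
    """Area under the curve with respect to ground (AUCg) for
    repeated cortisol samples, by the trapezoid rule."""
    return sum((t2 - t1) * (c1 + c2) / 2
               for (t1, c1), (t2, c2) in zip(zip(times, levels),
                                             zip(times[1:], levels[1:])))

# Hypothetical samples (minutes, ug/dL) across an interaction task.
print(auc_ground([0, 20, 40], [0.30, 0.50, 0.40]))  # 17.0
```

AUCg summarizes total hormonal output over the task, as opposed to AUCi, which measures change relative to the first sample.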
NASA Technical Reports Server (NTRS)
Brentner, K. S.
1986-01-01
A computer program has been developed at the Langley Research Center to predict the discrete frequency noise of conventional and advanced helicopter rotors. The program, called WOPWOP, uses the most advanced subsonic formulation of Farassat that is less sensitive to errors and is valid for nearly all helicopter rotor geometries and flight conditions. A brief derivation of the acoustic formulation is presented along with a discussion of the numerical implementation of the formulation. The computer program uses realistic helicopter blade motion and aerodynamic loadings, input by the user, for noise calculation in the time domain. A detailed definition of all the input variables, default values, and output data is included. A comparison with experimental data shows good agreement between prediction and experiment; however, accurate aerodynamic loading is needed.
Forecast Method of Solar Irradiance with Just-In-Time Modeling
NASA Astrophysics Data System (ADS)
Suzuki, Takanobu; Goto, Yusuke; Terazono, Takahiro; Wakao, Shinji; Oozeki, Takashi
PV power output mainly depends on solar irradiance, which is affected by various meteorological factors, so predicting future solar irradiance is required for the efficient operation of PV systems. In this paper, we develop a novel approach for solar irradiance forecasting in which we combine the black-box model (JIT modeling) with the physical model (GPV data). We investigate the predictive accuracy of solar irradiance over the wide control area of each electric power company by utilizing measured data from the 44 observation points throughout Japan offered by the JMA and the 64 points around Kanto offered by NEDO. Finally, we propose a method for applying the solar irradiance forecast to points for which compiling a database is difficult, and we consider the influence of different GPV default times on solar irradiance prediction.
Emotional Reactivity and Parenting Sensitivity Interact to Predict Cortisol Output in Toddlers
ERIC Educational Resources Information Center
Blair, Clancy; Ursache, Alexandra; Mills-Koonce, Roger; Stifter, Cynthia; Voegtline, Kristin; Granger, Douglas A.
2015-01-01
Cortisol output in response to emotion induction procedures was examined at child age 24 months in a prospective longitudinal sample of 1,292 children and families in predominantly low-income and nonurban communities in two regions of high poverty in the United States. Multilevel analysis indicated that observed emotional reactivity to a mask…
Turkdogan-Aydinol, F Ilter; Yetilmezsoy, Kaan
2010-10-15
A MIMO (multiple-input, multiple-output) fuzzy-logic-based model was developed to predict biogas and methane production rates in a pilot-scale 90-L mesophilic up-flow anaerobic sludge blanket (UASB) reactor treating molasses wastewater. Five input variables, namely volumetric organic loading rate (OLR), volumetric total chemical oxygen demand (TCOD) removal rate (R(V)), influent alkalinity, influent pH and effluent pH, were fuzzified by the use of an artificial-intelligence-based approach. Trapezoidal membership functions with eight levels were constructed for the fuzzy subsets, and a Mamdani-type fuzzy inference system was used to implement a total of 134 rules in the IF-THEN format. The product (prod) and the centre of gravity (COG, centroid) methods were employed as the inference operator and defuzzification method, respectively. Fuzzy-logic predicted results were compared with the outputs of two exponential non-linear regression models derived in this study. The UASB reactor showed a remarkable performance on the treatment of molasses wastewater, with an average TCOD removal efficiency of 93 (+/-3)% and an average volumetric TCOD removal rate of 6.87 (+/-3.93) kg TCOD(removed)/m(3)-day. Findings of this study clearly indicated that, compared to the non-linear regression models, the proposed MIMO fuzzy-logic-based model produced smaller deviations and exhibited a superior predictive performance in forecasting both biogas and methane production rates, with satisfactory determination coefficients above 0.98.
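The ingredients named above, trapezoidal membership functions, Mamdani inference and centroid defuzzification, can be sketched for a one-input, one-output case. The two rules and universes below are invented for illustration and are far simpler than the study's five-input, 134-rule MIMO system.

```python
def trap(x, a, b, c, d):
    """Trapezoidal membership function with feet a, d and shoulders b, c."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def mamdani(x, rules, grid):
    """One-input Mamdani inference: clip each rule's output set at the
    input's membership degree, aggregate by max, defuzzify by centroid."""
    agg = [max(min(trap(x, *inp), trap(y, *out)) for inp, out in rules)
           for y in grid]
    s = sum(agg)
    return sum(y * m for y, m in zip(grid, agg)) / s if s else 0.0

# Hypothetical rules: low OLR -> low biogas rate, high OLR -> high rate.
rules = [((0, 1, 2, 4), (0, 1, 3, 6)),     # IF OLR low  THEN rate low
         ((2, 4, 7, 8), (3, 6, 9, 10))]    # IF OLR high THEN rate high
grid = [i * 0.1 for i in range(101)]       # discretized output universe 0..10
print(mamdani(1.0, rules, grid) < mamdani(7.0, rules, grid))  # True
```

This uses min for clipping; the study's product (prod) operator scales each output set by the firing degree instead, but the pipeline shape is the same.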
Fuzzy logic-based analogue forecasting and hybrid modelling of horizontal visibility
NASA Astrophysics Data System (ADS)
Tuba, Zoltán; Bottyán, Zsolt
2018-04-01
Forecasting visibility is one of the greatest challenges in aviation meteorology. At the same time, high-accuracy visibility forecasts can significantly reduce, or even make avoidable, weather-related risk in aviation. To improve visibility forecasting, this research links fuzzy logic-based analogue forecasting and post-processed numerical weather prediction model outputs in a hybrid forecast. The performance of the analogue forecasting model was improved by the application of the Analytic Hierarchy Process. A linear combination of the two outputs was then applied to create an ultra-short-term hybrid visibility prediction which gradually shifts the focus from statistical to numerical products, taking advantage of each during the forecast period. This gives the opportunity to bring the numerical visibility forecast closer to the observations even when it is initially wrong. Complete verification of the categorical forecasts was carried out; results are also available for persistence and terminal aerodrome forecasts (TAF) for comparison. The average Heidke Skill Score (HSS) over the examined airports is very similar for the analogue and hybrid forecasts, even at the end of the forecast period, where the weight of the analogue prediction in the final hybrid output is only 0.1-0.2. However, in the case of poor visibility (1000-2500 m), the hybrid (0.65) and analogue (0.64) forecasts have similar average HSS in the first 6 h of the forecast period and perform better than persistence (0.60) or TAF (0.56). An important achievement is that the hybrid model takes the physics and dynamics of the atmosphere into consideration through the increasing weight of the numerical weather prediction; in spite of this, its performance is similar to the most effective visibility forecasting methods and does not follow the poor verification results of purely numerical outputs.
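The Heidke Skill Score used in this verification has a standard closed form for a 2x2 contingency table; the counts below are illustrative, not taken from the study.

```python
def heidke(a, b, c, d):
    """Heidke Skill Score for a 2x2 contingency table:
    a hits, b false alarms, c misses, d correct negatives.
    HSS = 2(ad - bc) / [(a+c)(c+d) + (a+b)(b+d)]."""
    return 2 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))

print(heidke(50, 0, 0, 50))     # 1.0: perfect forecast
print(heidke(20, 20, 20, 20))   # 0.0: no skill beyond chance
print(heidke(30, 10, 10, 50))   # a skilful but imperfect forecast
```

HSS measures accuracy relative to random chance, which is why it is preferred over raw hit rate for rare events such as poor-visibility episodes.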
Sensor response rate accelerator
Vogt, Michael C.
2002-01-01
An apparatus and method for sensor signal prediction and for improving sensor signal response time are disclosed. An adaptive filter or an artificial neural network is utilized to provide predictive sensor signal output and is further used to reduce sensor response time delay.
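One way such a response-rate accelerator could work, assuming the adaptive-filter variant, is a one-step-ahead LMS predictor. The filter order, step size, and simulated first-order lagging sensor below are illustrative choices, not taken from the patent.

```python
def lms_predict(signal, order=4, mu=0.05):
    """One-step-ahead LMS adaptive filter: learns to predict the next
    sensor sample from the last `order` samples, so the prediction can
    lead the sluggish measured value."""
    w = [0.0] * order
    preds = []
    for n in range(order, len(signal)):
        x = signal[n - order:n]                       # recent history
        yhat = sum(wi * xi for wi, xi in zip(w, x))   # prediction
        e = signal[n] - yhat                          # prediction error
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]  # LMS update
        preds.append(yhat)
    return preds, w

# A slow first-order sensor responding to a step input.
meas = [1 - 0.9 ** n for n in range(200)]
preds, w = lms_predict(meas)
print(abs(preds[-1] - meas[-1]) < 0.01)
```

Once trained, extrapolating a few steps ahead with the learned weights effectively reduces the apparent sensor time constant.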
USDA-ARS?s Scientific Manuscript database
Given a set of biallelic molecular markers, such as SNPs, with genotype values encoded numerically on a collection of plant, animal or human samples, the goal of genetic trait prediction is to predict the quantitative trait values by simultaneously modeling all marker effects. Genetic trait predicti...
A summary and evaluation of semi-empirical methods for the prediction of helicopter rotor noise
NASA Technical Reports Server (NTRS)
Pegg, R. J.
1979-01-01
Existing prediction techniques are compiled and described. The descriptions include input and output parameter lists, required equations and graphs, and the range of validity for each part of the prediction procedures. Examples are provided illustrating the analysis procedure and the degree of agreement with experimental results.
NASA Astrophysics Data System (ADS)
Engel, Dave W.; Reichardt, Thomas A.; Kulp, Thomas J.; Graff, David L.; Thompson, Sandra E.
2016-05-01
Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
Economic Impacts of Wind Turbine Development in U.S. Counties
DOE Office of Scientific and Technical Information (OSTI.GOV)
J., Brown; B., Hoen; E., Lantz
2011-07-25
The objective is to address the research question using post-project construction, county-level data, and econometric evaluation methods. Wind energy is expanding rapidly in the United States: Over the last 4 years, wind power has contributed approximately 35 percent of all new electric power capacity. Wind power plants are often developed in rural areas where local economic development impacts from the installation are projected, including land lease and property tax payments and employment growth during plant construction and operation. Wind energy represented 2.3 percent of the U.S. electricity supply in 2010, but studies show that penetrations of at least 20 percent are feasible. Several studies have used input-output models to predict direct, indirect, and induced economic development impacts. These analyses have often been completed prior to project construction. Available studies have not yet investigated the economic development impacts of wind development at the county level using post-construction econometric evaluation methods. Analysis of county-level impacts is limited. However, previous county-level analyses have estimated operation-period employment at 0.2 to 0.6 jobs per megawatt (MW) of power installed and earnings at $9,000/MW to $50,000/MW. We find statistically significant evidence of positive impacts of wind development on county-level per capita income from the OLS and spatial lag models when they are applied to the full set of wind and non-wind counties. The total impact on annual per capita income of wind turbine development (measured in MW per capita) in the spatial lag model was $21,604 per MW. This estimate is within the range of values estimated in the literature using input-output models. OLS results for the wind-only counties and matched samples are similar in magnitude, but are not statistically significant at the 10-percent level.
We find a statistically significant impact of wind development on employment in the OLS analysis for wind counties only, but not in the other models. Our estimates of employment impacts are not precise enough to assess the validity of employment impacts from input-output models applied in advance of wind energy project construction. The analysis provides empirical evidence of positive income effects at the county level from cumulative wind turbine development, consistent with the range of impacts estimated using input-output models. Employment impacts are less clear.
US EPA 2012 Air Quality Fused Surface for the Conterminous U.S. Map Service
This web service contains a polygon layer that depicts fused air quality predictions for 2012 for census tracts in the conterminous United States. Fused air quality predictions (for ozone and PM2.5) are modeled using a Bayesian space-time downscaling fusion model approach described in a series of three published journal papers: 1) Berrocal, V., Gelfand, A. E., and Holland, D. M. (2012). Space-time fusion under error in computer model output: an application to modeling air quality. Biometrics 68, 837-848; 2) Berrocal, V., Gelfand, A. E., and Holland, D. M. (2010). A bivariate space-time downscaler under space and time misalignment. The Annals of Applied Statistics 4, 1942-1975; and 3) Berrocal, V., Gelfand, A. E., and Holland, D. M. (2010). A spatio-temporal downscaler for output from numerical models. J. of Agricultural, Biological, and Environmental Statistics 15, 176-197. This approach is used to provide daily predictive PM2.5 (daily average) and O3 (daily 8-hr maximum) surfaces for 2012; summer (O3) and annual (PM2.5) means are calculated and published. The downscaling fusion model uses both air quality monitoring data from the National Air Monitoring Stations/State and Local Air Monitoring Stations (NAMS/SLAMS) and numerical output from the Models-3/Community Multiscale Air Quality (CMAQ) model. Currently, predictions at the US census tract centroid locations within the 12 km CMAQ domain are archived. Predictions at the CMAQ grid cell centroids, or any desired set of locations co
Nonlinear Modeling of Causal Interrelationships in Neuronal Ensembles
Zanos, Theodoros P.; Courellis, Spiros H.; Berger, Theodore W.; Hampson, Robert E.; Deadwyler, Sam A.; Marmarelis, Vasilis Z.
2009-01-01
The increasing availability of multiunit recordings gives new urgency to the need for effective analysis of “multidimensional” time-series data that are derived from the recorded activity of neuronal ensembles in the form of multiple sequences of action potentials—treated mathematically as point-processes and computationally as spike-trains. Whether in conditions of spontaneous activity or under conditions of external stimulation, the objective is the identification and quantification of possible causal links among the neurons generating the observed binary signals. A multiple-input/multiple-output (MIMO) modeling methodology is presented that can be used to quantify the neuronal dynamics of causal interrelationships in neuronal ensembles using spike-train data recorded from individual neurons. These causal interrelationships are modeled as transformations of spike-trains recorded from a set of neurons designated as the “inputs” into spike-trains recorded from another set of neurons designated as the “outputs.” The MIMO model is composed of a set of multi-input/single-output (MISO) modules, one for each output. Each module is the cascade of a MISO Volterra model and a threshold operator generating the output spikes. The Laguerre expansion approach is used to estimate the Volterra kernels of each MISO module from the respective input–output data using the least-squares method. The predictive performance of the model is evaluated with the use of the receiver operating characteristic (ROC) curve, from which the optimum threshold is also selected. The Mann–Whitney statistic is used to select the significant inputs for each output by examining the statistical significance of improvements in the predictive accuracy of the model when the respective input is included. Illustrative examples are presented for a simulated system and for an actual application using multiunit data recordings from the hippocampus of a behaving rat. PMID:18701382
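The ROC-based threshold selection described above can be sketched by sweeping candidate thresholds over the continuous model output and maximizing Youden's J = TPR - FPR (one common optimality criterion; the paper does not state which it uses, and the scores and spike labels below are invented).

```python
def roc_optimum(scores, labels):
    """Sweep thresholds over continuous model outputs, compute the
    (TPR, FPR) ROC points, and pick the threshold maximizing
    Youden's J = TPR - FPR."""
    pos = sum(labels)
    neg = len(labels) - pos
    best = (-1.0, None)
    for thr in sorted(set(scores)):
        tp = sum(1 for s, l in zip(scores, labels) if s >= thr and l)
        fp = sum(1 for s, l in zip(scores, labels) if s >= thr and not l)
        j = tp / pos - fp / neg
        best = max(best, (j, thr))
    return best

# Hypothetical firing-probability outputs vs. observed output spikes.
scores = [0.1, 0.2, 0.4, 0.6, 0.7, 0.9]
labels = [0,   0,   1,   0,   1,   1]
print(roc_optimum(scores, labels))
```

In the MIMO model, the threshold chosen this way converts each MISO module's continuous Volterra output into predicted output spikes.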
Mapping the cortical representation of speech sounds in a syllable repetition task.
Markiewicz, Christopher J; Bohland, Jason W
2016-11-01
Speech repetition relies on a series of distributed cortical representations and functional pathways. A speaker must map auditory representations of incoming sounds onto learned speech items, maintain an accurate representation of those items in short-term memory, interface that representation with the motor output system, and fluently articulate the target sequence. A "dorsal stream" consisting of posterior temporal, inferior parietal and premotor regions is thought to mediate auditory-motor representations and transformations, but the nature and activation of these representations for different portions of speech repetition tasks remains unclear. Here we mapped the correlates of phonetic and/or phonological information related to the specific phonemes and syllables that were heard, remembered, and produced using a series of cortical searchlight multi-voxel pattern analyses trained on estimates of BOLD responses from individual trials. Based on responses linked to input events (auditory syllable presentation), predictive vowel-level information was found in the left inferior frontal sulcus, while syllable prediction revealed significant clusters in the left ventral premotor cortex and central sulcus and the left mid superior temporal sulcus. Responses linked to output events (the GO signal cueing overt production) revealed strong clusters of vowel-related information bilaterally in the mid to posterior superior temporal sulcus. For the prediction of onset and coda consonants, input-linked responses yielded distributed clusters in the superior temporal cortices, which were further informative for classifiers trained on output-linked responses. Output-linked responses in the Rolandic cortex made strong predictions for the syllables and consonants produced, but their predictive power was reduced for vowels. 
The results of this study provide a systematic survey of how cortical response patterns covary with the identity of speech sounds, which will help to constrain and guide theoretical models of speech perception, speech production, and phonological working memory.
Pulmonary Vascular Congestion: A Mechanism for Distal Lung Unit Dysfunction in Obesity.
Oppenheimer, Beno W; Berger, Kenneth I; Ali, Saleem; Segal, Leopoldo N; Donnino, Robert; Katz, Stuart; Parikh, Manish; Goldring, Roberta M
2016-01-01
Obesity is characterized by increased systemic and pulmonary blood volumes (pulmonary vascular congestion). Concomitant abnormal alveolar membrane diffusion suggests subclinical interstitial edema. In this setting, functional abnormalities should encompass the entire distal lung including the airways. We hypothesize that in obesity: 1) pulmonary vascular congestion will affect the distal lung unit with concordant alveolar membrane and distal airway abnormalities; and 2) the degree of pulmonary congestion and membrane dysfunction will relate to the cardiac response. Fifty-four non-smoking obese subjects underwent spirometry, impulse oscillometry (IOS), diffusing capacity (DLCO) with partition into membrane diffusion (DM) and capillary blood volume (VC), and cardiac MRI (n = 24). Alveolar-capillary membrane efficiency was assessed by calculation of DM/VC. Mean age was 45±12 years; mean BMI was 44.8±7 kg/m2. Vital capacity was 88±13% predicted with reduction in functional residual capacity (58±12% predicted). Despite normal DLCO (98±18% predicted), VC was elevated (135±31% predicted) while DM averaged 94±22% predicted. DM/VC varied from 0.4 to 1.4 with high values reflecting recruitment of alveolar membrane and low values indicating alveolar membrane dysfunction. The most abnormal IOS (R5 and X5) occurred in subjects with lowest DM/VC (r2 = 0.31, p<0.001; r2 = 0.34, p<0.001). Cardiac output and index (cardiac output / body surface area) were directly related to DM/VC (r2 = 0.41, p<0.001; r2 = 0.19, p = 0.03). Subjects with lower DM/VC demonstrated a cardiac output that remained in the normal range despite presence of obesity. Global dysfunction of the distal lung (alveolar membrane and distal airway) is associated with pulmonary vascular congestion and failure to achieve the high output state of obesity. 
Pulmonary vascular congestion, and consequent fluid transudation and/or alterations in the structure of the alveolar capillary membrane, may be an often unrecognized cause of airway dysfunction in obesity.
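The DM/VC partition used in the study above follows the classical Roughton-Forster relation, 1/DLCO = 1/DM + 1/(theta * VC): measuring DLCO at two alveolar oxygen tensions (which change theta) yields two linear equations in the unknowns 1/DM and 1/VC. A minimal sketch with illustrative numbers (not the study's data):

```python
import numpy as np

# Roughton-Forster partition: 1/DLCO = 1/DM + 1/(theta * VC).
# Measuring DLCO at two alveolar O2 tensions (which change theta, the CO
# uptake rate of blood) gives two linear equations in x = [1/DM, 1/VC].
dlco = np.array([25.0, 15.0])    # ml/min/mmHg at low and high alveolar O2
theta = np.array([1.0, 0.4])     # illustrative CO uptake rates per ml blood

A = np.column_stack([np.ones(2), 1.0 / theta])  # rows: [1, 1/theta]
x = np.linalg.solve(A, 1.0 / dlco)              # solve for [1/DM, 1/VC]
DM, VC = 1.0 / x[0], 1.0 / x[1]
print(f"DM = {DM:.1f} ml/min/mmHg, VC = {VC:.1f} ml, DM/VC = {DM / VC:.2f}")
```

With real data the DM/VC ratio computed this way is the index the study correlates against oscillometry and cardiac output.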
NASA Astrophysics Data System (ADS)
Harvey, Natalie J.; Huntley, Nathan; Dacre, Helen F.; Goldstein, Michael; Thomson, David; Webster, Helen
2018-01-01
Following the disruption to European airspace caused by the eruption of Eyjafjallajökull in 2010, there has been a move towards producing quantitative predictions of volcanic ash concentration using volcanic ash transport and dispersion simulators. However, there is no formal framework for determining the uncertainties of these predictions, and performing many simulations using these complex models is computationally expensive. In this paper a Bayesian linear emulation approach is applied to the Numerical Atmospheric-dispersion Modelling Environment (NAME) to better understand the influence of source and internal model parameters on the simulator output. Emulation is a statistical method for predicting the output of a computer simulator at new parameter choices without actually running the simulator. A multi-level emulation approach is applied using two configurations of NAME with different numbers of model particles. Information from many evaluations of the computationally faster configuration is combined with results from relatively few evaluations of the slower, more accurate, configuration. This approach is effective when it is not possible to run the accurate simulator many times and when there is also little prior knowledge about the influence of parameters. The approach is applied to the mean ash column loading in 75 geographical regions on 14 May 2010. Through this analysis it has been found that the parameters that contribute the most to the output uncertainty are initial plume rise height, mass eruption rate, free tropospheric turbulence levels and precipitation threshold for wet deposition. This information can be used to inform future model development and observational campaigns and routine monitoring. The analysis presented here suggests the need for further observational and theoretical research into parameterisation of atmospheric turbulence. 
Furthermore, it can also be used to inform the most important parameter perturbations for a small operational ensemble of simulations. The use of an emulator also identifies the input and internal parameters that do not contribute significantly to simulator uncertainty. Finally, the analysis highlights that the faster, less accurate, configuration of NAME can, on its own, provide useful information for the problem of predicting average column load over large areas.
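The multi-level emulation idea described above — many cheap coarse-configuration runs plus a few expensive accurate runs — can be sketched with simple least-squares emulators standing in for the paper's Bayesian linear emulators. The toy simulators below are illustrative stand-ins, not NAME:

```python
import numpy as np

rng = np.random.default_rng(1)

def coarse_sim(x):   # fast, biased configuration (toy stand-in)
    return 2.0 * x[:, 0] + 0.5 * x[:, 1]

def fine_sim(x):     # slow, accurate configuration (toy stand-in)
    return coarse_sim(x) + 0.3 * x[:, 0] * x[:, 1]

def fit_linear(X, y):
    """Least-squares linear emulator: y ~ [1, X] @ beta."""
    return np.linalg.lstsq(np.column_stack([np.ones(len(X)), X]), y, rcond=None)[0]

def predict(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

# Many cheap runs train the base emulator ...
Xc = rng.uniform(0, 1, (200, 2))
beta_c = fit_linear(Xc, coarse_sim(Xc))

# ... and a handful of expensive runs train a correction to it.
Xf = rng.uniform(0, 1, (8, 2))
beta_d = fit_linear(Xf, fine_sim(Xf) - predict(beta_c, Xf))

# Multi-level prediction at new parameter choices, no further simulator runs.
Xnew = rng.uniform(0, 1, (5, 2))
pred = predict(beta_c, Xnew) + predict(beta_d, Xnew)
err = np.abs(pred - fine_sim(Xnew)).max()
print(f"max emulation error at new points: {err:.3f}")
```

In the paper's setting, each "run" is a full NAME dispersion simulation, the emulator is Bayesian rather than plain least squares, and sensitivity of the output to each input is read off from the fitted emulator instead of from repeated simulator runs.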
Robertson, Benjamin D; Farris, Dominic J; Sawicki, Gregory S
2014-11-24
Development of robotic exoskeletons to assist/enhance human locomotor performance involves lengthy prototyping, testing, and analysis. This process is further complicated by variability in limb/body morphology and preferred gait patterns between individuals. In an attempt to expedite this process, and establish a physiological basis for actuator prescription, we developed a simple, predictive model of human neuromechanical adaptation to a passive elastic exoskeleton applied at the ankle joint during a functional task. We modeled the human triceps surae-Achilles tendon muscle tendon unit (MTU) as a single Hill-type muscle, or contractile element (CE), and series tendon, or series elastic element (SEE). This modeled system was placed under gravitational load and underwent cyclic stimulation at a regular frequency (i.e. hopping) with and without exoskeleton (Exo) assistance. We explored the effect that both Exo stiffness (kExo) and muscle activation (Astim) had on combined MTU and Exo (MTU + Exo), MTU, and CE/SEE mechanics and energetics. Model accuracy was verified via qualitative and quantitative comparisons between modeled and prior experimental outcomes. We demonstrated that reduced Astim can be traded for increased kExo to maintain consistent MTU + Exo mechanics (i.e. average positive power (P⁺mech) output) from an unassisted condition (i.e. kExo = 0 kN · m⁻¹). For these regions of parameter space, our model predicted a reduction in MTU force, SEE energy cycling, and metabolic rate (Pmet), as well as constant CE P⁺mech output compared to unassisted conditions. This agreed with previous experimental observations, demonstrating our model's predictive ability. Model predictions also provided insight into mechanisms of metabolic cost minimization, and/or enhanced mechanical performance, and we concluded that these outcomes cannot be achieved simultaneously, and that one must come at the expense of the other in a spring-assisted compliant MTU.
Method of controlling cyclic variation in engine combustion
Davis, L.I. Jr.; Daw, C.S.; Feldkamp, L.A.; Hoard, J.W.; Yuan, F.; Connolly, F.T.
1999-07-13
Cyclic variation in combustion of a lean burning engine is reduced by detecting an engine combustion event output such as torsional acceleration in a cylinder (i) at a combustion event (k), using the detected acceleration to predict a target acceleration for the cylinder at the next combustion event (k+1), modifying the target output by a correction term that is inversely proportional to the average phase of the combustion event output of cylinder (i) and calculating a control output such as fuel pulse width or spark timing necessary to achieve the target acceleration for cylinder (i) at combustion event (k+1) based on anti-correlation with the detected acceleration and spill-over effects from fueling. 27 figs.
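The cycle-to-cycle control idea in the patent abstract above can be sketched as a simple update rule. The function names, gains, and the simple anti-correlated target below are illustrative assumptions for exposition, not the patented algorithm itself:

```python
# Hedged sketch of cycle-to-cycle combustion control: predict the next
# combustion event's acceleration from the current one (exploiting the
# anti-correlation between successive lean-burn events), apply a correction
# inversely proportional to the average phase, then adjust the fuel pulse
# toward that target. All constants are illustrative.
def next_fuel_pulse(accel_k, avg_phase, base_pulse, k_corr=0.1, k_gain=0.05):
    """Fuel pulse width for combustion event k+1 of one cylinder."""
    target = -0.5 * accel_k          # anti-correlated prediction of event k+1
    target += k_corr / avg_phase     # correction term, inverse in average phase
    return base_pulse + k_gain * (target - accel_k)

pulse = next_fuel_pulse(accel_k=0.2, avg_phase=2.0, base_pulse=1.5)
print(pulse)
```

In the patent the measured quantity is torsional acceleration per cylinder, the actuated quantity is fuel pulse width or spark timing, and the gains would be calibrated against the engine's observed anti-correlation and fueling spill-over effects.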
Method of controlling cyclic variation in engine combustion
Davis, Jr., Leighton Ira; Daw, Charles Stuart; Feldkamp, Lee Albert; Hoard, John William; Yuan, Fumin; Connolly, Francis Thomas
1999-01-01
Cyclic variation in combustion of a lean burning engine is reduced by detecting an engine combustion event output such as torsional acceleration in a cylinder (i) at a combustion event (k), using the detected acceleration to predict a target acceleration for the cylinder at the next combustion event (k+1), modifying the target output by a correction term that is inversely proportional to the average phase of the combustion event output of cylinder (i) and calculating a control output such as fuel pulse width or spark timing necessary to achieve the target acceleration for cylinder (i) at combustion event (k+1) based on anti-correlation with the detected acceleration and spill-over effects from fueling.
Wu, Yu-Hsiang; Stangl, Elizabeth
2013-01-01
The acceptable noise level (ANL) test determines the maximum noise level that an individual is willing to accept while listening to speech. The first objective of the present study was to systematically investigate the effect of wide dynamic range compression processing (WDRC), and its combined effect with digital noise reduction (DNR) and directional processing (DIR), on ANL. Because ANL represents the lowest signal-to-noise ratio (SNR) that a listener is willing to accept, the second objective was to examine whether the hearing aid output SNR could predict aided ANL across different combinations of hearing aid signal-processing schemes. Twenty-five adults with sensorineural hearing loss participated in the study. ANL was measured monaurally in two unaided and seven aided conditions, in which the status of the hearing aid processing schemes (enabled or disabled) and the location of noise (front or rear) were manipulated. The hearing aid output SNR was measured for each listener in each condition using a phase-inversion technique. The aided ANL was predicted by unaided ANL and hearing aid output SNR, under the assumption that the lowest acceptable SNR at the listener's eardrum is a constant across different ANL test conditions. Study results revealed that, on average, WDRC increased (worsened) ANL by 1.5 dB, while DNR and DIR decreased (improved) ANL by 1.1 and 2.8 dB, respectively. Because the effects of WDRC and DNR on ANL were opposite in direction but similar in magnitude, the ANL of linear/DNR-off was not significantly different from that of WDRC/DNR-on. The results further indicated that the pattern of ANL change across different aided conditions was consistent with the pattern of hearing aid output SNR change created by processing schemes. Compared with linear processing, WDRC creates a noisier sound image and makes listeners less willing to accept noise. However, this negative effect on noise acceptance can be offset by DNR, regardless of microphone mode. 
The hearing aid output SNR derived using the phase-inversion technique can predict aided ANL across different combinations of signal-processing schemes. These results suggest a close relationship between aided ANL, signal-processing scheme, and hearing aid output SNR.
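The prediction logic in the study above rests on a single assumption: the lowest acceptable SNR at the listener's eardrum is constant across conditions, so any SNR improvement produced by the hearing aid lets the listener tolerate that much more input noise. A minimal arithmetic sketch (the numbers are illustrative, not the study's data):

```python
def predict_aided_anl(unaided_anl_db, output_snr_gain_db):
    """Predicted aided ANL under the constant-acceptable-SNR assumption:
    an SNR improvement of G dB at the hearing aid output lowers the
    input-referenced ANL by G dB (and an SNR degradation raises it)."""
    return unaided_anl_db - output_snr_gain_db

# Illustrative: processing that improves output SNR by a net 3.9 dB
# lowers a 10 dB unaided ANL to 6.1 dB.
aided = predict_aided_anl(10.0, 3.9)
print(aided)
```

This also captures the study's offsetting effects: a scheme that degrades output SNR (negative gain, as with WDRC) raises the predicted ANL, while DNR or directional processing lowers it.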
NASA Astrophysics Data System (ADS)
Abdul Ghani, B.
2005-09-01
"TEA CO 2 Laser Simulator" has been designed to simulate the dynamic emission processes of the TEA CO 2 laser based on the six-temperature model. The program predicts the behavior of the laser output pulse (power, energy, pulse duration, delay time, FWHM, etc.) depending on the physical and geometrical input parameters (pressure ratio of gas mixture, reflecting area of the output mirror, media length, losses, filling and decay factors, etc.). Program summaryTitle of program: TEA_CO2 Catalogue identifier: ADVW Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADVW Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer: P.IV DELL PC Setup: Atomic Energy Commission of Syria, Scientific Services Department, Mathematics and Informatics Division Operating system: MS-Windows 9x, 2000, XP Programming language: Delphi 6.0 No. of lines in distributed program, including test data, etc.: 47 315 No. of bytes in distributed program, including test data, etc.:7 681 109 Distribution format:tar.gz Classification: 15 Laser Physics Nature of the physical problem: "TEA CO 2 Laser Simulator" is a program that predicts the behavior of the laser output pulse by studying the effect of the physical and geometrical input parameters on the characteristics of the output laser pulse. The laser active medium consists of a CO 2-N 2-He gas mixture. Method of solution: Six-temperature model, for the dynamics emission of TEA CO 2 laser, has been adapted in order to predict the parameters of laser output pulses. A simulation of the laser electrical pumping was carried out using two approaches; empirical function equation (8) and differential equation (9). Typical running time: The program's running time mainly depends on both integration interval and step; for a 4 μs period of time and 0.001 μs integration step (defaults values used in the program), the running time will be about 4 seconds. 
Restrictions on the complexity: Using a very small integration step may cause the program run to stop, due to the huge number of calculation points and a small paging file size for the MS-Windows virtual memory. In such a case, it is recommended to enlarge the paging file to an appropriate size, or to use a larger integration step.
Hydrological responses to dynamically and statistically downscaled climate model output
Wilby, R.L.; Hay, L.E.; Gutowski, W.J.; Arritt, R.W.; Takle, E.S.; Pan, Z.; Leavesley, G.H.; Clark, M.P.
2000-01-01
Daily rainfall and surface temperature series were simulated for the Animas River basin, Colorado using dynamically and statistically downscaled output from the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) re-analysis. A distributed hydrological model was then applied to the downscaled data. Relative to raw NCEP output, downscaled climate variables provided more realistic simulations of basin scale hydrology. However, the results highlight the sensitivity of modeled processes to the choice of downscaling technique, and point to the need for caution when interpreting future hydrological scenarios.
xGDBvm: A Web GUI-Driven Workflow for Annotating Eukaryotic Genomes in the Cloud
Merchant, Nirav
2016-01-01
Genome-wide annotation of gene structure requires the integration of numerous computational steps. Currently, annotation is arguably best accomplished through collaboration of bioinformatics and domain experts, with broad community involvement. However, such a collaborative approach is not scalable at today’s pace of sequence generation. To address this problem, we developed the xGDBvm software, which uses an intuitive graphical user interface to access a number of common genome analysis and gene structure tools, preconfigured in a self-contained virtual machine image. Once their virtual machine instance is deployed through iPlant’s Atmosphere cloud services, users access the xGDBvm workflow via a unified Web interface to manage inputs, set program parameters, configure links to high-performance computing (HPC) resources, view and manage output, apply analysis and editing tools, or access contextual help. The xGDBvm workflow will mask the genome, compute spliced alignments from transcript and/or protein inputs (locally or on a remote HPC cluster), predict gene structures and gene structure quality, and display output in a public or private genome browser complete with accessory tools. Problematic gene predictions are flagged and can be reannotated using the integrated yrGATE annotation tool. xGDBvm can also be configured to append or replace existing data or load precomputed data. Multiple genomes can be annotated and displayed, and outputs can be archived for sharing or backup. xGDBvm can be adapted to a variety of use cases including de novo genome annotation, reannotation, comparison of different annotations, and training or teaching. PMID:27020957
xGDBvm: A Web GUI-Driven Workflow for Annotating Eukaryotic Genomes in the Cloud.
Duvick, Jon; Standage, Daniel S; Merchant, Nirav; Brendel, Volker P
2016-04-01
Genome-wide annotation of gene structure requires the integration of numerous computational steps. Currently, annotation is arguably best accomplished through collaboration of bioinformatics and domain experts, with broad community involvement. However, such a collaborative approach is not scalable at today's pace of sequence generation. To address this problem, we developed the xGDBvm software, which uses an intuitive graphical user interface to access a number of common genome analysis and gene structure tools, preconfigured in a self-contained virtual machine image. Once their virtual machine instance is deployed through iPlant's Atmosphere cloud services, users access the xGDBvm workflow via a unified Web interface to manage inputs, set program parameters, configure links to high-performance computing (HPC) resources, view and manage output, apply analysis and editing tools, or access contextual help. The xGDBvm workflow will mask the genome, compute spliced alignments from transcript and/or protein inputs (locally or on a remote HPC cluster), predict gene structures and gene structure quality, and display output in a public or private genome browser complete with accessory tools. Problematic gene predictions are flagged and can be reannotated using the integrated yrGATE annotation tool. xGDBvm can also be configured to append or replace existing data or load precomputed data. Multiple genomes can be annotated and displayed, and outputs can be archived for sharing or backup. xGDBvm can be adapted to a variety of use cases including de novo genome annotation, reannotation, comparison of different annotations, and training or teaching. © 2016 American Society of Plant Biologists. All rights reserved.
Deterministic Wave Predictions from the WaMoS II
2014-10-23
Monitoring System WaMoS II as input to a wave prediction system. The utility of wave prediction is primarily vessel motion prediction. Specific...successful prediction. The envisioned prediction system may provide graphical output in the form of a decision support system (Fig. 1). Predictions are...quality and accuracy of WaMoS as input to a deterministic wave prediction system. In the context of this paper, the...
Díaz, José; Acosta, Jesús; González, Rafael; Cota, Juan; Sifuentes, Ernesto; Nebot, Àngela
2018-02-01
The control of the central nervous system (CNS) over the cardiovascular system (CS) has been modeled using different techniques, such as fuzzy inductive reasoning, genetic fuzzy systems, neural networks, and nonlinear autoregressive techniques; the results obtained so far have been significant, but not solid enough to describe the control response of the CNS over the CS. In this research, support vector machines (SVMs) are used to predict the response of a branch of the CNS, specifically, the one that controls an important part of the cardiovascular system. To do this, five models are developed to emulate the output response of five controllers for the same input signal, the carotid sinus blood pressure (CSBP). These controllers regulate parameters such as heart rate, myocardial contractility, peripheral and coronary resistance, and venous tone. The models are trained using a known set of input-output responses for each controller; there is also a set of six input-output signals for testing each proposed model. The input signals are processed using an all-pass filter, and the accuracy performance of the control models is evaluated using the percentage value of the normalized mean square error (MSE). Experimental results reveal that SVM models achieve a better estimation of the dynamical behavior of the CNS control compared to other modeling systems. The main results obtained show that the best case is for the peripheral resistance controller, with an MSE of 1.20e-4%, while the worst case is for the heart rate controller, with an MSE of 1.80e-3%. These novel models show great reliability in fitting the output response of the CNS, which can be used as an input to hemodynamic system models in order to predict the behavior of the heart and blood vessels in response to blood pressure variations. Copyright © 2017 Elsevier Ltd. All rights reserved.
Effectiveness of a passive-active vibration isolation system with actuator constraints
NASA Astrophysics Data System (ADS)
Sun, Lingling; Sun, Wei; Song, Kongjie; Hansen, Colin H.
2014-05-01
In the prediction of active vibration isolation performance, control force requirements were ignored in previous work. This may limit the realization of theoretically predicted isolation performance if control force of large magnitude cannot be supplied by actuators. The behavior of a feed-forward active isolation system subjected to actuator output constraints is investigated. Distributed parameter models are developed to analyze the system response, and to produce a transfer matrix for the design of an integrated passive-active isolation system. Cost functions comprising a combination of the vibration transmission energy and the sum of the squared control forces are proposed. The example system considered is a rigid body connected to a simply supported plate via two passive-active isolation mounts. Vertical and transverse forces as well as a rotational moment are applied at the rigid body, and resonances excited in elastic mounts and the supporting plate are analyzed. The overall isolation performance is evaluated by numerical simulation. The simulation results are then compared with those obtained using unconstrained control strategies. In addition, the effects of waves in elastic mounts are analyzed. It is shown that the control strategies which rely on unconstrained actuator outputs may give substantial power transmission reductions over a wide frequency range, but also require large control force amplitudes to control excited vibration modes of the system. Expected power transmission reductions for modified control strategies that incorporate constrained actuator outputs are considerably less than typical reductions with unconstrained actuator outputs. In the frequency range in which rigid body modes are present, the control strategies can only achieve 5-10 dB power transmission reduction, when control forces are constrained to be the same order of magnitude as the primary vertical force. 
The resonances of the elastic mounts result in a notable increase of power transmission in the high-frequency range and cannot be attenuated by active control. The investigation provides a guideline for the design and evaluation of active vibration isolation systems.
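The cost function described above — vibration transmission energy plus a weighted sum of squared control forces — leads, for a linear system, to a regularised least-squares control law in which the force-penalty weight is what constrains actuator output. A numerical sketch under assumed names (the transfer matrix and primary disturbance below are random stand-ins, not the paper's plate model):

```python
import numpy as np

def optimal_forces(G, d, lam):
    """Minimise J = |e|^2 + lam * |f|^2 with e = d + G f.
    Larger lam penalises actuator effort, shrinking the control forces."""
    n = G.shape[1]
    return -np.linalg.solve(G.conj().T @ G + lam * np.eye(n), G.conj().T @ d)

rng = np.random.default_rng(4)
G = rng.standard_normal((6, 3))   # transfer matrix: control forces -> response
d = rng.standard_normal(6)        # primary (uncontrolled) transmission

for lam in (0.0, 1.0, 10.0):
    f = optimal_forces(G, d, lam)
    J = np.sum((d + G @ f) ** 2)
    print(f"lam={lam:4.1f}  |f|={np.linalg.norm(f):.2f}  residual={J:.3f}")
```

Sweeping the weight reproduces the paper's qualitative trade-off: unconstrained control (lam = 0) gives the largest transmission reduction but the largest force amplitudes, while constraining the forces sacrifices reduction.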
Building a Framework in Improving Drought Monitoring and Early Warning Systems in Africa
NASA Astrophysics Data System (ADS)
Tadesse, T.; Wall, N.; Haigh, T.; Shiferaw, A. S.; Beyene, S.; Demisse, G. B.; Zaitchik, B.
2015-12-01
Decision makers need a basic understanding of the prediction models and products of hydro-climatic extremes and their suitability in time and space for strategic resource and development planning to develop mitigation and adaptation strategies. Advances in our ability to assess and predict climate extremes (e.g., droughts and floods) under evolving climate change suggest opportunity to improve management of climatic/hydrologic risk in agriculture and water resources. In the NASA funded project entitled, "Seasonal Prediction of Hydro-Climatic Extremes in the Greater Horn of Africa (GHA) under Evolving Climate Conditions to Support Adaptation Strategies," we are attempting to develop a framework that uses dialogue between managers and scientists on how to enhance the use of models' outputs and prediction products in the GHA as well as improve the delivery of this information in ways that can be easily utilized by managers. This process is expected to help our multidisciplinary research team obtain feedback on the models and forecast products. In addition, engaging decision makers is essential in evaluating the use of drought and flood prediction models and products for decision-making processes in drought and flood management. Through this study, we plan to assess the information requirements for implementing a robust Early Warning System (EWS) by engaging decision makers in the process. This participatory process could also help improve existing EWSs in Africa and support the development of new local and regional EWSs. In this presentation, we report the progress made in the past two years of the NASA project.
A probabilistic method for constructing wave time-series at inshore locations using model scenarios
Long, Joseph W.; Plant, Nathaniel G.; Dalyander, P. Soupy; Thompson, David M.
2014-01-01
Continuous time-series of wave characteristics (height, period, and direction) are constructed using a base set of model scenarios and simple probabilistic methods. This approach utilizes an archive of computationally intensive, highly spatially resolved numerical wave model output to develop time-series of historical or future wave conditions without performing additional, continuous numerical simulations. The archive of model output contains wave simulations from a set of model scenarios derived from an offshore wave climatology. Time-series of wave height, period, direction, and associated uncertainties are constructed at locations included in the numerical model domain. The confidence limits are derived using statistical variability of oceanographic parameters contained in the wave model scenarios. The method was applied to a region in the northern Gulf of Mexico and assessed using wave observations at 12 m and 30 m water depths. Prediction skill for significant wave height is 0.58 and 0.67 at the 12 m and 30 m locations, respectively, with similar performance for wave period and direction. The skill of this simplified, probabilistic time-series construction method is comparable to existing large-scale, high-fidelity operational wave models but provides higher spatial resolution output at low computational expense. The constructed time-series can be developed to support a variety of applications including climate studies and other situations where a comprehensive survey of wave impacts on the coastal area is of interest.
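The core of the scenario-based construction described above is a lookup: each offshore observation is matched to a precomputed model scenario, and the archived inshore output for that scenario is emitted. The sketch below is a simplified nearest-scenario stand-in for the paper's probabilistic method; all data and the 0.7 shoaling factor are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical archive: each scenario maps an offshore (height m, period s,
# direction deg) to precomputed inshore values at one model grid point.
offshore = rng.uniform([0.5, 4, 0], [4.0, 12, 360], size=(50, 3))    # scenario inputs
inshore = offshore * [0.7, 1.0, 1.0] + rng.normal(0, 0.05, (50, 3))  # archived output

def construct_timeseries(observed_offshore, offshore, inshore):
    """Build an inshore time-series by matching each offshore observation
    to the nearest archived scenario, with no new wave-model runs."""
    scale = offshore.std(axis=0)          # normalise before distance computation
    out = []
    for obs in observed_offshore:
        dist = np.linalg.norm((offshore - obs) / scale, axis=1)
        out.append(inshore[np.argmin(dist)])
    return np.array(out)

obs = rng.uniform([0.5, 4, 0], [4.0, 12, 360], size=(10, 3))
series = construct_timeseries(obs, offshore, inshore)
print(series.shape)  # one inshore (height, period, direction) per time step
```

The paper's method additionally carries uncertainty bounds derived from the statistical variability within each scenario bin, rather than a single nearest match.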
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eck, Brendan L.; Fahmi, Rachid; Miao, Jun
2015-10-15
Purpose: Aims in this study are to (1) develop a computational model observer which reliably tracks the detectability of human observers in low dose computed tomography (CT) images reconstructed with knowledge-based iterative reconstruction (IMR™, Philips Healthcare) and filtered back projection (FBP) across a range of independent variables, (2) use the model to evaluate detectability trends across reconstructions and make predictions of human observer detectability, and (3) perform human observer studies based on model predictions to demonstrate applications of the model in CT imaging. Methods: Detectability (d′) was evaluated in phantom studies across a range of conditions. Images were generated using a numerical CT simulator. Trained observers performed 4-alternative forced choice (4-AFC) experiments across dose (1.3, 2.7, 4.0 mGy), pin size (4, 6, 8 mm), contrast (0.3%, 0.5%, 1.0%), and reconstruction (FBP, IMR), at fixed display window. A five-channel Laguerre–Gauss channelized Hotelling observer (CHO) was developed with internal noise added to the decision variable and/or to channel outputs, creating six different internal noise models. Semianalytic internal noise computation was tested against Monte Carlo and used to accelerate internal noise parameter optimization. Model parameters were estimated from all experiments at once using maximum likelihood on the probability correct, P_C. Akaike information criterion (AIC) was used to compare models of different orders. The best model was selected according to AIC and used to predict detectability in blended FBP-IMR images, analyze trends in IMR detectability improvements, and predict dose savings with IMR. Predicted dose savings were compared against 4-AFC study results using physical CT phantom images. Results: Detection in IMR was greater than FBP in all tested conditions. 
The CHO with internal noise proportional to channel output standard deviations, Model-k4, showed the best trade-off between fit and model complexity according to AIC_c. With parameters fixed, the model reasonably predicted detectability of human observers in blended FBP-IMR images. Semianalytic internal noise computation gave results equivalent to Monte Carlo, greatly speeding parameter estimation. Using Model-k4, the authors found an average detectability improvement of 2.7 ± 0.4 times that of FBP. IMR showed greater improvements in detectability with larger signals and relatively consistent improvements across signal contrast and x-ray dose. In the phantom tested, Model-k4 predicted an 82% dose reduction compared to FBP, verified with physical CT scans at 80% reduced dose. Conclusions: IMR improves detectability over FBP and may enable significant dose reductions. A channelized Hotelling observer with internal noise proportional to channel output standard deviation agreed well with human observers across a wide range of variables, even across reconstructions with drastically different image characteristics. Utility of the model observer was demonstrated by predicting the effect of image processing (blending), analyzing detectability improvements with IMR across dose, size, and contrast, and in guiding real CT scan dose reduction experiments. Such a model observer can be applied in optimizing parameters in advanced iterative reconstruction algorithms as well as guiding dose reduction protocols in physical CT experiments.
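The core of the channelized Hotelling observer used above can be sketched directly: images are projected onto a few rotationally symmetric Laguerre-Gauss channels, and detectability is the Hotelling d′ computed in the low-dimensional channel space. This is a minimal sketch without the study's internal-noise models; the test image, channel width, and sample sizes are illustrative:

```python
import numpy as np

def laguerre(n, x):
    """Laguerre polynomial L_n(x) via the standard three-term recurrence."""
    L0, L1 = np.ones_like(x), 1.0 - x
    if n == 0:
        return L0
    for k in range(1, n):
        L0, L1 = L1, ((2 * k + 1 - x) * L1 - k * L0) / (k + 1)
    return L1

def lg_channels(size=32, a=10.0, n_channels=5):
    """Five Laguerre-Gauss channels u_n(r) = exp(-pi r^2/a^2) L_n(2 pi r^2/a^2)."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    g = 2 * np.pi * (x**2 + y**2) / a**2
    U = np.stack([np.exp(-g / 2) * laguerre(n, g) for n in range(n_channels)])
    return U.reshape(n_channels, -1)

def cho_dprime(present, absent, U):
    """Channelized Hotelling detectability (no internal noise)."""
    vp = present.reshape(len(present), -1) @ U.T   # channel outputs, signal present
    va = absent.reshape(len(absent), -1) @ U.T     # channel outputs, signal absent
    S = 0.5 * (np.cov(vp.T) + np.cov(va.T))        # mean channel covariance
    dmu = vp.mean(0) - va.mean(0)
    w = np.linalg.solve(S, dmu)                    # Hotelling template
    return float(np.sqrt(dmu @ w))

# Toy task: a faint Gaussian disc in white noise (illustrative only).
rng = np.random.default_rng(3)
y, x = np.mgrid[:32, :32] - 15.5
disc = 0.4 * np.exp(-(x**2 + y**2) / 30.0)
noise = rng.standard_normal((200, 32, 32))
d = cho_dprime(noise[:100] + disc, noise[100:], U=lg_channels())
print(f"d' = {d:.2f}")
```

The study's internal-noise variants would be added at this point, e.g. by inflating the channel covariance S in proportion to the channel output standard deviations before solving for the template.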
A hybrid deep neural network and physically based distributed model for river stage prediction
NASA Astrophysics Data System (ADS)
Hitokoto, Masayuki; Sakuraba, Masaaki
2016-04-01
We developed a real-time river stage prediction model using a hybrid of a deep neural network and a physically based distributed model. As the basic model, a 4-layer feed-forward artificial neural network (ANN) was used, trained with the deep learning technique. To optimize the network weights, the stochastic gradient descent method based on back propagation was used, with a denoising autoencoder for pre-training. The inputs of the ANN model are the hourly change of water level and hourly rainfall; the output is the water level at the downstream station. In general, the desirable input of an ANN has strong correlation with the output. In conceptual hydrological models such as the tank model and the storage-function model, river discharge is governed by the catchment storage. Therefore, the change of the catchment storage, i.e. rainfall minus downstream discharge, can be a potent input candidate for the ANN model instead of rainfall. From this point of view, the hybrid deep neural network and physically based distributed model was developed. The prediction procedure of the hybrid model is as follows: first, downstream discharge is calculated by the distributed model; then the hourly change of catchment storage is estimated from rainfall and the calculated discharge as the input of the ANN model; finally, the ANN model is evaluated. In the training phase, the hourly change of catchment storage can be calculated from the observed rainfall and discharge data. The developed model was applied to a catchment of the Ooyodo River, one of the first-grade rivers in Japan. The modeled catchment is 695 square km. For the training data, 5 water-level gauging stations and 14 rain gauges in the catchment were used. The 24 largest flood events during the period 2005-2014 were selected for training. Predictions were made up to 6 hours ahead, with a separate model developed for each lead time. 
To set the proper learning parameters and network architecture of the ANN model, a sensitivity analysis was performed using a case-study approach. Predictions were evaluated on the 4 largest flood events using leave-one-out cross-validation. The prediction result of the basic 4-layer ANN was better than that of a conventional 3-layer ANN model. However, it did not reproduce the biggest flood event well, presumably because of the lack of sufficiently high water-level events in the training data. The hybrid model outperformed both the basic ANN model and the distributed model, and in particular improved on the basic ANN model for the biggest flood event.
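The hybrid model's key input — hourly catchment storage change, i.e. rainfall minus the distributed-model discharge expressed as a depth over the catchment — is a unit conversion that can be sketched directly. Only the 695 square km catchment area comes from the abstract; the rainfall and discharge values are illustrative:

```python
def storage_change_input(rainfall_mm_h, discharge_m3_s, area_km2):
    """Hourly catchment storage change (mm/h): rainfall minus the
    distributed-model discharge converted to a depth over the catchment.
    This quantity replaces raw rainfall as the hybrid model's ANN input."""
    discharge_mm_h = discharge_m3_s * 3600.0 / (area_km2 * 1e6) * 1000.0
    return rainfall_mm_h - discharge_mm_h

# Illustrative values for the 695 square km catchment.
ds = storage_change_input(rainfall_mm_h=10.0, discharge_m3_s=300.0, area_km2=695.0)
print(f"storage change: {ds:.2f} mm/h")
```

In training this series is computed from observed rainfall and discharge; in prediction the discharge term comes from the distributed model, which is what couples the physically based model to the ANN.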
Evaluation of MM5 model resolution when applied to prediction of national fire danger rating indexes
Jeanne L. Hoadley; Miriam L. Rorig; Larry Bradshaw; Sue A. Ferguson; Kenneth J. Westrick; Scott L. Goodrick; Paul Werth
2006-01-01
Weather predictions from the MM5 mesoscale model were used to compute gridded predictions of National Fire Danger Rating System (NFDRS) indexes. The model output was applied to a case study of the 2000 fire season in Northern Idaho and Western Montana to simulate an extreme event. To determine the preferred resolution for automating NFDRS predictions, model...
NASA Technical Reports Server (NTRS)
Whitney, W. J.; Behning, F. P.; Moffitt, T. P.; Hotz, G. M.
1980-01-01
The stage group performance of a 4 1/2 stage turbine with an average stage loading factor of 4.66 and high specific work output was determined in cold air at design equivalent speed. The four-stage turbine configuration produced design equivalent work output with an efficiency of 0.856, a barely discernible difference from the 0.855 obtained for the complete 4 1/2 stage turbine in a previous investigation. The turbine design procedure embodied the following features: (1) controlled vortex flow, (2) tailored radial work distribution, and (3) control of the location of the boundary-layer transition point on the airfoil suction surface. The efficiency forecast for the 4 1/2 stage turbine was 0.886, and the value predicted using a reference method was 0.862. The stage group performance results were used to determine the individual stage efficiencies for the condition at which design 4 1/2 stage work output was obtained. The efficiencies of stages one and four were about 0.020 lower than the predicted value, that of stage two was 0.014 lower, and that of stage three was about equal to the predicted value. Thus, all the stages operated reasonably close to their expected performance levels, and the overall (4 1/2 stage) performance was not degraded by any particularly inefficient component.
Sampson, Maureen L; Gounden, Verena; van Deventer, Hendrik E; Remaley, Alan T
2016-02-01
The main drawback of the periodic analysis of quality control (QC) material is that test performance is not monitored in the time periods between QC analyses, potentially leading to the reporting of faulty test results. The objective of this study was to develop a patient-based QC procedure for the more timely detection of test errors. Results from a Chem-14 panel measured on the Beckman LX20 analyzer were used to develop the model. Each test result was predicted from the other 13 members of the panel by multiple regression, which resulted in correlation coefficients between the predicted and measured results of >0.7 for 8 of the 14 tests. A logistic regression model, which utilized the measured test result, the predicted test result, the day of the week and the time of day, was then developed for predicting test errors. The output of the logistic regression was tallied by a daily CUSUM approach and used to predict test errors, with a fixed specificity of 90%. The mean average run length (ARL) before error detection by CUSUM-Logistic Regression (CSLR) was 20, with a mean sensitivity of 97%, which was considerably shorter than the mean ARL of 53 (sensitivity 87.5%) for a simple prediction model that used only the measured result for error detection. A CUSUM-Logistic Regression analysis of patient laboratory data can be an effective approach for the rapid and sensitive detection of clinical laboratory errors. Published by Elsevier Inc.
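A minimal sketch of the CUSUM step described above, assuming the logistic regression has already produced a per-result error probability; the slack and threshold constants are illustrative, not the study's tuned settings:

```python
def cusum_flags(error_probs, slack=0.5, threshold=2.0):
    """One-sided CUSUM over per-result error probabilities from a logistic model.

    `slack` and `threshold` are illustrative tuning constants, not the study's.
    """
    s, flags = 0.0, []
    for p in error_probs:
        s = max(0.0, s + (p - slack))  # accumulate only evidence above the slack
        flags.append(s > threshold)    # alarm once the cumulative sum crosses
    return flags
```

Isolated borderline results barely move the sum, while a run of high error probabilities crosses the threshold quickly, which is what shortens the average run length relative to single-result rules.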
NASA Astrophysics Data System (ADS)
Zahmatkesh, Zahra; Karamouz, Mohammad; Nazif, Sara
2015-09-01
Simulation of the rainfall-runoff process in urban areas is of great importance considering the consequences and damages of extreme runoff events and floods. The first issue in flood hazard analysis is rainfall simulation. Large-scale climate signals have proved effective in rainfall simulation and prediction. In this study, an integrated scheme is developed for rainfall-runoff modeling considering different sources of uncertainty. This scheme includes three main steps: rainfall forecasting, rainfall-runoff simulation, and future runoff prediction. In the first step, data-driven models are developed and used to forecast rainfall using large-scale climate signals as rainfall predictors. Because different sources of uncertainty strongly affect the output of hydrologic models, in the second step the uncertainty associated with input data, model parameters and model structure is incorporated in rainfall-runoff modeling and simulation. Three rainfall-runoff simulation models are developed for consideration of model conceptual (structural) uncertainty in real-time runoff forecasting. To analyze the uncertainty of the model structure, streamflows generated by the alternative rainfall-runoff models are combined through a weighting method based on K-means clustering. Model parameter and input uncertainty are investigated using an adaptive Markov Chain Monte Carlo method. Finally, the calibrated rainfall-runoff models are driven with the forecasted rainfall to predict future runoff for the watershed. The proposed scheme is employed in the case study of the Bronx River watershed, New York City. Results of the uncertainty analysis of rainfall-runoff modeling reveal that simultaneous estimation of model parameter and input uncertainty significantly changes the probability distribution of the model parameters.
It is also observed that by combining the outputs of the hydrological models using the proposed clustering scheme, the accuracy of runoff simulation in the watershed is remarkably improved up to 50% in comparison to the simulations by the individual models. Results indicate that the developed methodology not only provides reliable tools for rainfall and runoff modeling, but also adequate time for incorporating required mitigation measures in dealing with potentially extreme runoff events and flood hazard. Results of this study can be used in identification of the main factors affecting flood hazard analysis.
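The model-combination idea above can be illustrated with a simplified stand-in for the paper's K-means-based weighting scheme: here each model's simulation is weighted by its inverse mean-squared error on a calibration period (function and variable names are assumed):

```python
def combine_models(sims, observed):
    """Weight each model's simulation by inverse mean-squared error on a
    calibration period, then return the weights and the weighted average.

    A simplified stand-in for the paper's K-means clustering-based weighting.
    """
    def mse(sim):
        return sum((s - o) ** 2 for s, o in zip(sim, observed)) / len(observed)
    inv = [1.0 / mse(sim) for sim in sims]
    weights = [v / sum(inv) for v in inv]
    combined = [sum(w * sim[t] for w, sim in zip(weights, sims))
                for t in range(len(observed))]
    return weights, combined
```

The better-performing model dominates the combined series, which is the mechanism by which multi-model combination can outperform any individual member.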
McMeekin, T A
2007-09-01
Predictive microbiology is considered in the context of the conference theme "chance, innovation and challenge", together with the impact of quantitative approaches on food microbiology, generally. The contents of four prominent texts on predictive microbiology are analysed and the major contributions of two meat microbiologists, Drs. T.A. Roberts and C.O. Gill, to the early development of predictive microbiology are highlighted. These provide a segue into R&D trends in predictive microbiology, including the Refrigeration Index, an example of science-based, outcome-focussed food safety regulation. Rapid advances in technologies and systems for application of predictive models are indicated and measures to judge the impact of predictive microbiology are suggested in terms of research outputs and outcomes. The penultimate section considers the future of predictive microbiology and advances that will become possible when data on population responses are combined with data derived from physiological and molecular studies in a systems biology approach. Whilst the emphasis is on science and technology for food safety management, it is suggested that decreases in foodborne illness will also arise from minimising human error by changing the food safety culture.
NASA Astrophysics Data System (ADS)
Daneji, A.; Ali, M.; Pervaiz, S.
2018-04-01
Friction stir welding (FSW) is a solid-state welding process for joining metals, alloys, and selected composites. Over the years, FSW development has provided an improved way of producing welded joints, and the process has consequently been adopted in numerous industries such as aerospace, automotive, rail and marine. In FSW, the base metal properties control the material's plastic flow under the influence of a rotating tool, whereas the process and tool parameters play a vital role in weld quality. In the current investigation, an array of square butt joints of 6061 aluminum alloy was welded under varying FSW process and tool geometry parameters, and the resulting welds were evaluated for mechanical properties and welding defects. The study treats FSW process and tool parameters such as welding speed, pin height and pin thread pitch as input parameters, while weld-quality defects and mechanical properties are treated as output parameters. The experiments pave the way to investigate the correlation between the inputs and the outputs. This correlation was used as a tool to predict optimized FSW process and tool parameters for a desired weld output of the base metal under investigation. The study also reflects on the effect of these parameters on welding defects such as wormholes.
Liang, Albert K.; Koniczek, Martin; Antonuk, Larry E.; El-Mohri, Youcef; Zhao, Qihua; Street, Robert A.; Lu, Jeng Ping
2017-01-01
Photon counting arrays (PCAs), defined as pixelated imagers which measure the absorbed energy of x-ray photons individually and record this information digitally, are of increasing clinical interest. A number of PCA prototypes with a 1 mm pixel-to-pixel pitch have recently been fabricated with polycrystalline silicon (poly-Si) — a thin-film technology capable of creating monolithic imagers of a size commensurate with human anatomy. In this study, analog and digital simulation frameworks were developed to provide insight into the influence of individual poly-Si transistors on pixel circuit performance — information that is not readily available through empirical means. The simulation frameworks were used to characterize the circuit designs employed in the prototypes. The analog framework, which determines the noise produced by individual transistors, was used to estimate energy resolution, as well as to identify which transistors contribute the most noise. The digital framework, which analyzes how well circuits function in the presence of significant variations in transistor properties, was used to estimate how fast a circuit can produce an output (referred to as output count rate). In addition, an algorithm was developed and used to estimate the minimum pixel pitch that could be achieved for the pixel circuits of the current prototypes. The simulation frameworks predict that the analog component of the PCA prototypes could have energy resolution as low as 8.9% FWHM at 70 keV; and the digital components should work well even in the presence of significant TFT variations, with the fastest component having output count rates as high as 3 MHz. Finally, based on conceivable improvements in the underlying fabrication process, the algorithm predicts that the 1 mm pitch of the current PCA prototypes could be reduced significantly, potentially to between ~240 and 290 μm. PMID:26878107
Norazmi-Lokman, Nor Hakim; Purser, G. J.; Patil, Jawahar G.
2016-01-01
In most livebearing fish, the gravid spot is an excellent marker to identify brooding females; however, its use to predict the progress of embryonic development, brood size, timing of parturition and overall reproductive potential of populations remains unexplored. Therefore, to understand these relationships, this study quantified visual attributes (intensity and size) of the gravid spot in relation to key internal developments in Gambusia holbrooki. Observations show that the colour of the gravid spot arises from progressive melanisation on the surface of the ovarian sac at its hind margin, rather than melanisation of the developing embryos or the skin of the brooding mother. More importantly, gravid spot intensity and size were closely linked with both developmental stage and clutch size, suggesting their reliable use as external surrogates of key internal developments in the species. Using the predictive consistency of the gravid spot, we also determined the effect of rearing temperature (23°C and 25°C) on gestation period and parturition behaviour. The results show that the gestation period was significantly reduced at 25°C (F = 364.58; df = 1,48; P < 0.05). However, there was no significant difference in the average number of fry born between the two temperature groups (P > 0.05), reaffirming that gravid spot intensity is a reliable predictor of reproductive output. Parturition in the species occurred predominantly in the morning and, in contrast to earlier reports, the tails of the fry emerged first, with a few exceptions of head-first, twin and premature births. This study demonstrates the utility of the gravid spot for downstream reproductive investigations in a live-bearing fish, both in the field and in the laboratory.
The reproducibility of the relationships (intensity with both developmental stage and clutch size) implies that they are also relevant to wild populations that experience varying temperature climes and stressors; significant deviations from them may serve as indicators of environmental health and climate variability. PMID:26808521
Adam-Poupart, Ariane; Brand, Allan; Fournier, Michel; Jerrett, Michael
2014-01-01
Background: Ambient air ozone (O3) is a pulmonary irritant that has been associated with respiratory health effects including increased lung inflammation and permeability, airway hyperreactivity, respiratory symptoms, and decreased lung function. Estimation of O3 exposure is a complex task because the pollutant exhibits complex spatiotemporal patterns. To refine the quality of exposure estimation, various spatiotemporal methods have been developed worldwide. Objectives: We sought to compare the accuracy of three spatiotemporal models to predict summer ground-level O3 in Quebec, Canada. Methods: We developed a land-use mixed-effects regression (LUR) model based on readily available data (air quality and meteorological monitoring data, road network information, latitude), a Bayesian maximum entropy (BME) model incorporating both O3 monitoring station data and the land-use mixed model outputs (BME-LUR), and a kriging model based only on available O3 monitoring station data (BME kriging). We performed leave-one-station-out cross-validation and visually assessed the predictive capability of each model by examining the mean temporal and spatial distributions of the average estimated errors. Results: The BME-LUR was the best predictive model (R2 = 0.653) with the lowest root mean-square error (RMSE = 7.06 ppb), followed by the LUR model (R2 = 0.466, RMSE = 8.747 ppb) and the BME kriging model (R2 = 0.414, RMSE = 9.164 ppb). Conclusions: Our findings suggest that errors of estimation in the interpolation of O3 concentrations with BME can be greatly reduced by incorporating outputs from a LUR model developed with readily available data. Citation: Adam-Poupart A, Brand A, Fournier M, Jerrett M, Smargiassi A. 2014. Spatiotemporal modeling of ozone levels in Quebec (Canada): a comparison of kriging, land-use regression (LUR), and combined Bayesian maximum entropy–LUR approaches. Environ Health Perspect 122:970–976; http://dx.doi.org/10.1289/ehp.1306566 PMID:24879650
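The leave-one-station-out cross-validation used above is a generic procedure and can be sketched independently of the three estimators; `fit` and `predict_error` below are caller-supplied stand-ins for whichever model is being validated:

```python
def leave_one_station_out(stations, fit, predict_error):
    """Hold out each monitoring station in turn, train on the rest, and
    record the held-out station's prediction error (interface assumed)."""
    errors = {}
    for held_out in stations:
        train = {k: v for k, v in stations.items() if k != held_out}
        model = fit(train)                                   # train on the rest
        errors[held_out] = predict_error(model, stations[held_out])
    return errors
```

Aggregating `errors` over stations yields the RMSE figures used to rank the LUR, BME-LUR and kriging models.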
Ahadian, Samad; Kawazoe, Yoshiyuki
2009-06-04
Modeling of water flow in carbon nanotubes is still a challenge for the classic models of fluid dynamics. In this investigation, an adaptive-network-based fuzzy inference system (ANFIS) is presented to solve this problem. The proposed ANFIS approach can construct an input-output mapping based on both human knowledge in the form of fuzzy if-then rules and stipulated input-output data pairs. Good performance of the designed ANFIS ensures its capability as a promising tool for modeling and prediction of fluid flow at nanoscale where the continuum models of fluid dynamics tend to break down.
Ellis, Katherine; Godbole, Suneeta; Marshall, Simon; Lanckriet, Gert; Staudenmayer, John; Kerr, Jacqueline
2014-01-01
Active travel is an important area in physical activity research, but objective measurement of active travel is still difficult. Automated methods to measure travel behaviors will improve research in this area. In this paper, we present a supervised machine learning method for transportation mode prediction from global positioning system (GPS) and accelerometer data. We collected a dataset of about 150 h of GPS and accelerometer data from two research assistants following a protocol of prescribed trips consisting of five activities: bicycling, riding in a vehicle, walking, sitting, and standing. We extracted 49 features from 1-min windows of this data. We compared the performance of several machine learning algorithms and chose a random forest algorithm to classify the transportation mode. We used a moving average output filter to smooth the output predictions over time. The random forest algorithm achieved 89.8% cross-validated accuracy on this dataset. Adding the moving average filter to smooth output predictions increased the cross-validated accuracy to 91.9%. Machine learning methods are a viable approach for automating measurement of active travel, particularly for measuring travel activities that traditional accelerometer data processing methods misclassify, such as bicycling and vehicle travel.
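The output-smoothing step can be sketched as follows; because the predictions are discrete class labels, this version uses a sliding majority vote, a simple variant of the paper's moving-average filter (the window size is assumed):

```python
from collections import Counter

def smooth_predictions(labels, window=5):
    """Sliding majority vote over per-minute mode predictions (window assumed)."""
    half = window // 2
    smoothed = []
    for i in range(len(labels)):
        votes = labels[max(0, i - half): i + half + 1]  # clipped at the edges
        smoothed.append(Counter(votes).most_common(1)[0][0])
    return smoothed
```

Isolated misclassified minutes, such as a single "bike" inside a walking bout, are voted away by their neighbors, which is how temporal smoothing lifted accuracy from 89.8% to 91.9% in the study.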
NASA Technical Reports Server (NTRS)
Jenkins, R. M.
1983-01-01
The present effort represents an extension of previous work wherein a calculation model for performing rapid pitchline optimization of axial gas turbine geometry, including blade profiles, is developed. The model requires no specification of geometric constraints. Output includes aerodynamic performance (adiabatic efficiency), hub-tip flow-path geometry, blade chords, and estimates of blade shape. Presented herein is a verification of the aerodynamic performance portion of the model, whereby detailed turbine test-rig data, including rig geometry, is input to the model to determine whether tested performance can be predicted. An array of seven (7) NASA single-stage axial gas turbine configurations is investigated, ranging in size from 0.6 kg/s to 63.8 kg/s mass flow and in specific work output from 153 J/g to 558 J/g at design (hot) conditions; stage loading factor ranges from 1.15 to 4.66.
Experimental verification of a radiofrequency power model for Wi-Fi technology.
Fang, Minyu; Malone, David
2010-04-01
When assessing the power emitted from a Wi-Fi network, it has been observed that these networks operate at a relatively low duty cycle. In this paper, we extend a recently introduced model of emitted power in Wi-Fi networks to cover conditions where devices do not always have packets to transmit. We present experimental results to validate the original model and its extension by developing approximate, but practical, testbed measurement techniques. The accuracy of the models is confirmed, with small relative errors: less than 5-10%. Moreover, we confirm that the greatest power is emitted when the network is saturated with traffic. Using this, we give a simple technique to quickly estimate power output based on traffic levels and give examples showing how this might be used in practice to predict current or future power output from a Wi-Fi network.
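The traffic-based estimation idea reduces to scaling transmit power by the duty cycle. The sketch below is a rough illustration under assumed frame sizes, rates and per-frame overhead, not the paper's validated model:

```python
def airtime_fraction(bytes_per_s, phy_rate_bps, overhead_s_per_frame=0.0,
                     frame_bytes=1500.0):
    """Rough duty cycle from the traffic level (frame size and overhead assumed)."""
    frames_per_s = bytes_per_s / frame_bytes
    return bytes_per_s * 8.0 / phy_rate_bps + frames_per_s * overhead_s_per_frame

def mean_emitted_power_mw(tx_power_mw, duty_cycle):
    """Time-averaged emitted power: transmit power scaled by the duty cycle."""
    return tx_power_mw * duty_cycle
```

Since typical networks run far below saturation, the duty cycle, and hence the mean emitted power, is usually a small fraction of the transmit power, consistent with the paper's observation.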
Coelho, Antonio Augusto Rodrigues
2016-01-01
This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system where membership functions act as kernel functions of an interpolator. Conjunction of membership functions in a unitary hypercube space enables multivariable interpolation in N dimensions. Because the membership functions act as interpolation kernels, their choice determines the interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline or Lanczos interpolator, to name a few. The proposed interpolator is presented as a solution to the modeling problem of static nonlinearities, since it is capable of modeling both a function and its inverse. Three case studies from the literature are presented: a single-input single-output (SISO) system, a MISO system and a MIMO system. Good results are obtained on performance metrics such as set-point tracking, control variation and robustness. The results demonstrate the applicability of the proposed method in modeling Hammerstein nonlinearities and their inverse functions for implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC). PMID:27657723
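A one-dimensional sketch of the kernel-interpolation idea: with triangular membership functions on a uniform grid, the membership-weighted (Takagi-Sugeno style) average reduces to linear interpolation. The function names and the uniform-grid assumption are ours, not the paper's:

```python
def tri(x, center, width):
    # Triangular membership function acting as the interpolation kernel.
    return max(0.0, 1.0 - abs(x - center) / width)

def flhi_1d(x, centers, values):
    """Membership-weighted average on a uniform 1-D grid of rule centers."""
    width = centers[1] - centers[0]
    w = [tri(x, c, width) for c in centers]
    return sum(wi * v for wi, v in zip(w, values)) / sum(w)
```

Swapping `tri` for a cubic, spline or Lanczos kernel changes the interpolation characteristics without touching the weighted-average structure, which is the flexibility the paper emphasizes.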
Schöffel, Norman; Krempel, Meike; Bundschuh, Matthias; Bendels, Michael H; Brüggmann, Dörthe; Groneberg, David A
2016-11-01
Despite decades of effort, the 5-year overall survival rate of pancreatic cancer (PC) remains at only approximately 5%. Until now, no detailed knowledge of the worldwide research architecture of PC has been established. Hence, we conducted this scientometric analysis to quantify the global research activity in this field. The total research productivity was screened, and the research output of countries, categories, individual institutions, authors, and their collaborative networks was analyzed with the new quality and quantity indices in science platform. Results were visualized via state-of-the-art density-equalizing mapping projections. The results indicated that Japan, Germany, and the United States played a leading role in output activity and in multilateral and bilateral cooperation. Within the past decades, PC has developed into a scientific field covering many subject areas. Recently published studies predicted that scientific progress would depend mainly on international cooperation; we can now confirm that development. We conclude that the field of PC is constantly progressing, with the influence of international cooperation on scientific progress being of increasing importance. Nevertheless, research in the field of PC still needs to be strengthened to reduce morbidity and mortality in the future.
NASA Technical Reports Server (NTRS)
Darmofal, David L.
2003-01-01
The use of computational simulations in the prediction of complex aerodynamic flows is becoming increasingly prevalent in the design process within the aerospace industry. Continuing advancements in both computing technology and algorithmic development are ultimately leading to attempts at simulating ever-larger, more complex problems. However, by increasing the reliance on computational simulations in the design cycle, we must also increase the accuracy of these simulations in order to maintain or improve the reliability and safety of the resulting aircraft. At the same time, large-scale computational simulations must be made more affordable so that their potential benefits can be fully realized within the design cycle. Thus, a continuing need exists for increasing the accuracy and efficiency of computational algorithms such that computational fluid dynamics can become a viable tool in the design of more reliable, safer aircraft. The objective of this research was the development of an error estimation and grid-adaptive strategy for reducing simulation errors in integral outputs (functionals) such as lift or drag from multi-dimensional Euler and Navier-Stokes simulations. In this final report, we summarize our work during this grant.
Ramezankhani, Azra; Pournik, Omid; Shahrabi, Jamal; Khalili, Davood; Azizi, Fereidoun; Hadaegh, Farzad
2014-09-01
The aim of this study was to create a prediction model using data mining approach to identify low risk individuals for incidence of type 2 diabetes, using the Tehran Lipid and Glucose Study (TLGS) database. For a 6647 population without diabetes, aged ≥20 years, followed for 12 years, a prediction model was developed using classification by the decision tree technique. Seven hundred and twenty-nine (11%) diabetes cases occurred during the follow-up. Predictor variables were selected from demographic characteristics, smoking status, medical and drug history and laboratory measures. We developed the predictive models by decision tree using 60 input variables and one output variable. The overall classification accuracy was 90.5%, with 31.1% sensitivity, 97.9% specificity; and for the subjects without diabetes, precision and f-measure were 92% and 0.95, respectively. The identified variables included fasting plasma glucose, body mass index, triglycerides, mean arterial blood pressure, family history of diabetes, educational level and job status. In conclusion, decision tree analysis, using routine demographic, clinical, anthropometric and laboratory measurements, created a simple tool to predict individuals at low risk for type 2 diabetes. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
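The decision-tree technique can be illustrated at depth one: an exhaustive search for the single feature/threshold split that minimizes misclassification, the same splitting step a full tree applies recursively. The feature values below are illustrative, not TLGS data:

```python
def best_stump(X, y):
    """Exhaustive search for the depth-1 split minimizing training error."""
    best = None  # (error, feature index, threshold, label for the <= side)
    for j in range(len(X[0])):
        for t in sorted({row[j] for row in X}):
            for left in (0, 1):
                pred = [left if row[j] <= t else 1 - left for row in X]
                err = sum(p != yi for p, yi in zip(pred, y))
                if best is None or err < best[0]:
                    best = (err, j, t, left)
    return best[1:]
```

Repeating this search within each branch yields the interpretable threshold rules (e.g. on fasting plasma glucose or body mass index) that make decision trees attractive as simple screening tools.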
NASA Astrophysics Data System (ADS)
Rath, S.; Sengupta, P. P.; Singh, A. P.; Marik, A. K.; Talukdar, P.
2013-07-01
Accurate prediction of roll force during hot strip rolling is essential for model-based operation of hot strip mills. Traditionally, mathematical models based on the theory of plastic deformation have been used for roll force prediction. In the last decade, data-driven models such as artificial neural networks have been tried for roll force prediction. Pure mathematical models have accuracy limitations, whereas data-driven models have difficulty converging when applied to industrial conditions. Hybrid models integrating traditional mathematical formulations and data-driven methods are being developed in different parts of the world. This paper discusses the methodology of development of an innovative hybrid mathematical-artificial neural network model. In the mathematical model, the most important factor influencing accuracy is the flow stress of the steel. Coefficients of the standard flow stress equation, calculated by a parameter estimation technique, have been used in the model. The hybrid model has been trained and validated with input and output data collected from the finishing stands of the Hot Strip Mill, Bokaro Steel Plant, India. It has been found that model accuracy is improved with the hybrid model over the traditional mathematical model.
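The hybrid idea, a physical flow-stress model whose force prediction is corrected from measured data, can be sketched as below. The flow-stress coefficients and the simple running-ratio corrector are placeholders for the paper's fitted coefficients and neural network:

```python
import math

def flow_stress(strain, strain_rate, temp_k, A=2000.0, m1=-0.003, m2=0.2, m3=0.1):
    # Standard flow-stress form: sigma = A * exp(m1*T) * strain^m2 * strain_rate^m3.
    # Coefficients here are placeholders, not the mill's fitted values.
    return A * math.exp(m1 * temp_k) * strain ** m2 * strain_rate ** m3

class HybridCorrector:
    """Multiplicative correction of the physical force prediction, updated from
    measurements; a crude stand-in for the paper's neural-network component."""
    def __init__(self, gain=0.5):
        self.factor, self.gain = 1.0, gain
    def predict(self, model_force):
        return self.factor * model_force
    def update(self, model_force, measured_force):
        # Nudge the correction toward the measured/modeled force ratio.
        self.factor += self.gain * (measured_force / model_force - self.factor)
```

The physical model supplies the dominant trend, so the data-driven part only has to learn a small correction, which is why hybrid models converge more reliably than pure data-driven ones under industrial conditions.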
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lall, Pradeep; Wei, Junchao; Davis, J Lynn
2014-06-24
Abstract— Solid-state lighting (SSL) luminaires containing light emitting diodes (LEDs) have the potential of seeing excessive temperatures when being transported across country or being stored in non-climate-controlled warehouses. They are also being used in outdoor applications in desert environments that see little or no humidity but experience extremely high temperatures during the day. This makes it important to increase our understanding of what effects prolonged high-temperature exposure will have on the usability and survivability of these devices. Traditional light sources "burn out" at end-of-life. For an incandescent bulb, the lamp life is defined by B50 life. However, LEDs have no filament to "burn". LEDs continually degrade, and the light output eventually decreases below useful levels, causing failure. Presently, the TM-21 test standard is used to predict the L70 life of LEDs from LM-80 test data. Several failure mechanisms may be active in an LED at a single time, causing lumen depreciation. The underlying TM-21 model may not capture the failure physics in the presence of multiple failure mechanisms. Correlation of lumen maintenance with the underlying physics of degradation at the system level is needed. In this paper, Kalman Filter (KF) and Extended Kalman Filter (EKF) models have been used to develop a 70-percent lumen maintenance life prediction model for LEDs used in SSL luminaires. Ten-thousand-hour LM-80 test data for various LEDs have been used for model development. The system state at each future time has been computed based on the state space at the preceding time step, the system dynamics matrix, control vector, control matrix, measurement matrix, measured vector, process noise and measurement noise. The future state of lumen depreciation has been estimated based on a second-order Kalman Filter model and a Bayesian framework.
L70 life predictions for the LEDs used in SSL luminaires from the KF- and EKF-based models have been compared with TM-21 model predictions and experimental data.
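A scalar sketch of the filtering idea: assuming exponential lumen decay, a Kalman filter tracks the per-hour decay rate from noisy decrements of log-lumen output, from which an L70 estimate follows. The noise settings and the one-state simplification are ours; the paper uses a second-order model:

```python
import math

def scalar_kf(decrements, q=1e-10, r=1e-8):
    """Scalar Kalman filter tracking the per-hour decay rate of log-lumen
    output from noisy hourly decrements (noise covariances assumed)."""
    x, p = decrements[0], 1.0
    for z in decrements[1:]:
        p += q                 # predict: decay rate modeled as near-constant
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # update with the new decrement
        p *= 1.0 - k
    return x

def hours_to_l70(alpha):
    # Exponential decay L(t) = exp(-alpha * t) reaches 70% at t = ln(1/0.7)/alpha.
    return math.log(1.0 / 0.7) / alpha

alpha_hat = scalar_kf([2e-5] * 50)   # synthetic, noise-free decrements
```

Unlike a one-shot TM-21 extrapolation, the filtered estimate can be re-run as each new LM-80 measurement arrives, so the L70 projection tightens over the test campaign.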
Design Models for the Development of Helium-Carbon Sorption Cryocoolers
NASA Technical Reports Server (NTRS)
Lindensmith, C. A.; Ahart, M.; Bhandari, P.; Wade, L. A.; Paine, C. G.
2000-01-01
We have developed models for predicting the performance of helium-based Joule-Thomson continuous-flow cryocoolers using charcoal-pumped sorption compressors. The models take as inputs the number of compressors, desired heat-lift, cold tip temperature, and available precooling temperature and provide design parameters as outputs. Future laboratory development will be used to verify and improve the models. We will present a preliminary design for a two-stage vibration-free cryocooler that is being proposed as part of a mid-infrared camera on NASA's Next Generation Space Telescope. Model predictions show that a 10 mW helium-carbon cryocooler with a base temperature of 5.5 K will reject less than 650 mW at 18 K. The total input power to the helium-carbon stage is 650 mW. These models, which run in MathCad and Microsoft Excel, can be coupled to similar models for hydrogen sorption coolers to give designs for 2-stage vibration-free cryocoolers that provide cooling from approx. 50 K to 4 K.
Flexible piezoelectric energy harvesting from jaw movements
NASA Astrophysics Data System (ADS)
Delnavaz, Aidin; Voix, Jérémie
2014-10-01
Piezoelectric fiber composites (PFC) represent an interesting subset of smart materials that can function as sensors, actuators and energy converters. Despite their excellent potential for energy harvesting, very few PFC mechanisms have been developed to capture human body power and convert it into an electric current to power wearable electronic devices. This paper provides a proof of concept for a head-mounted device with a PFC chin strap capable of harvesting energy from jaw movements. An electromechanical model based on the bond graph method is developed to predict the power output of the energy harvesting system. The optimum resistance value of the load and the best stretch ratio in the strap are also determined. A prototype was developed and tested, and its performance was compared to the analytical model predictions. The proposed piezoelectric strap mechanism can be added to all types of head-mounted devices to power small-scale electronic devices such as hearing aids, electronic hearing protectors and communication earpieces.
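The optimum load resistance the authors determine can be illustrated with the textbook impedance-matching result. Modeling the harvester near resonance as a voltage source with an internal resistance is an assumption for illustration, not the paper's bond-graph model:

```python
# Load-matching sketch: for a source V with internal resistance Rs driving
# a resistive load R, the delivered power is P(R) = V^2 R / (Rs + R)^2,
# which is maximized when R equals Rs. Values below are assumptions.
def load_power(v_src, r_src, r_load):
    return v_src**2 * r_load / (r_src + r_load)**2

r_src = 10e3                                    # assumed 10 kOhm source
loads = [r_src * f for f in (0.1, 0.5, 1.0, 2.0, 10.0)]
powers = [load_power(2.0, r_src, r) for r in loads]
best = loads[powers.index(max(powers))]         # matched load: best == r_src
```

The same logic underlies sweeping the load resistance experimentally to find the peak-power operating point.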
NASA Technical Reports Server (NTRS)
Gaines, G. B.; Thomas, R. E.; Noel, G. T.; Shilliday, T. S.; Wood, V. E.; Carmichael, D. C.
1979-01-01
An accelerated life test is described which was developed to predict the life of the 25 kW photovoltaic array installed near Mead, Nebraska. A quantitative model for accelerating testing using multiple environmental stresses was used to develop the test design. The model accounts for the effects of thermal stress by a relation of the Arrhenius form. This relation was then corrected for the effects of nonthermal environmental stresses, such as relative humidity, atmospheric pollutants, and ultraviolet radiation. The correction factors for the nonthermal stresses included temperature-dependent exponents to account for the effects of interactions between thermal and nonthermal stresses on the rate of degradation of power output. The test conditions, measurements, and data analyses for the accelerated tests are presented. Constant-temperature, cyclic-temperature, and UV types of tests are specified, incorporating selected levels of relative humidity and chemical contamination and an imposed forward-bias current and static electric field.
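The thermal part of such a model is the Arrhenius relation, commonly corrected by a power law in relative humidity (Peck's form). A minimal sketch, in which the activation energy and humidity exponent are illustrative assumptions rather than the report's fitted values:

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def acceleration_factor(t_use, t_stress, rh_use, rh_stress, ea=0.7, n=2.7):
    """Arrhenius thermal acceleration corrected by a Peck-style humidity
    power law. ea (eV) and n are assumed, not fitted, values; temperatures
    are in kelvin."""
    thermal = math.exp((ea / K_B) * (1.0 / t_use - 1.0 / t_stress))
    humidity = (rh_stress / rh_use) ** n
    return thermal * humidity

# 85 C / 85% RH chamber stress versus 25 C / 50% RH field use
af = acceleration_factor(298.15, 358.15, 50.0, 85.0)
```

With these assumed parameters the combined factor is a few hundred, i.e., one chamber hour stands in for a few hundred field hours.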
Microresonators for Nonlinear Quantum Optics
NASA Astrophysics Data System (ADS)
Vernon, Zachary
In this thesis I study in detail the quantum dynamics of several nonlinear optical processes in microresonator systems. A Heisenberg-picture input-output formalism is developed from first principles that includes the effects of scattering losses and independent quality factors and coupling ratios for different resonances. The task of calculating the device output is then reduced to solving a set of driven, damped, ordinary differential equations for the resonator mode operators alone. This theoretical framework is used to study photon pair generation via spontaneous four-wave mixing in the weakly pumped regime, in which the effects of scattering losses are appraised. A more strongly driven regime is studied for continuous wave pumps, demonstrating when self- and cross-phase modulation and multi-photon pair generation become important, and their effects on the spectral and power scaling properties of the system are examined. A detuning strategy is presented that compensates for some of these effects. The results of the weak-pump regime are applied to study microresonator-based heralded single photon sources. The impact of scattering losses is studied, revealing that typical systems suffer from low heralding efficiency due to these losses. A technique to improve heralding efficiency through over-coupling the resonator-channel system is presented, and a resultant trade-off between heralding rate and heralding efficiency is uncovered. Limitations to the spectral purity of the heralded single photon output for conventional microresonator systems are also analysed, and a more sophisticated coupling scheme is presented to overcome the upper bound for spectral purity of 93% that exists in typical systems, permitting the generation of single photons with spectral purity arbitrarily close to 100% without spectral filtering or sophisticated phase-matching techniques.
The theory of quantum frequency conversion in microresonators using four-wave mixing is then developed in detail, and the spectral conversion probability and conversion efficiency studied. Efficiencies exceeding 90% using less than 100 mW of pump power are predicted to be achievable with current technology. A dressed mode picture is developed to better understand the conversion dynamics. Rabi-like spectral splitting and temporal oscillations of the intraresonator mean photon number are predicted, exhibiting a novel regime of strongly coupled photonic modes.
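The Rabi-like oscillation between the signal and converted fields can be illustrated with the simplest lossless two-mode coupled equations. This toy model is not the thesis's full input-output formalism, but it reproduces the analytic sin²(gt) conversion probability that underlies the dressed-mode picture:

```python
import math

def converted_fraction(g, t, steps=20000):
    """Two lossless coupled modes, da/dt = -1j*g*b and db/dt = -1j*g*a,
    integrated with classical RK4 from a(0)=1, b(0)=0. The analytic
    converted fraction is sin^2(g*t)."""
    a, b = 1 + 0j, 0 + 0j
    dt = t / steps
    def deriv(a, b):
        return -1j * g * b, -1j * g * a
    for _ in range(steps):
        k1a, k1b = deriv(a, b)
        k2a, k2b = deriv(a + 0.5 * dt * k1a, b + 0.5 * dt * k1b)
        k3a, k3b = deriv(a + 0.5 * dt * k2a, b + 0.5 * dt * k2b)
        k4a, k4b = deriv(a + dt * k3a, b + dt * k3b)
        a += dt / 6 * (k1a + 2 * k2a + 2 * k3a + k4a)
        b += dt / 6 * (k1b + 2 * k2b + 2 * k3b + k4b)
    return abs(b) ** 2

g = 2 * math.pi * 1e6                                # assumed coupling rate
p_half = converted_fraction(g, (math.pi / 2) / g)    # complete conversion
p_full = converted_fraction(g, math.pi / g)          # full back-conversion
```

Complete conversion at gt = π/2 followed by back-conversion at gt = π is the temporal oscillation of the intraresonator photon number described in the abstract, here in its most stripped-down form.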
Self-Tuning of Design Variables for Generalized Predictive Control
NASA Technical Reports Server (NTRS)
Lin, Chaung; Juang, Jer-Nan
2000-01-01
Three techniques are introduced to determine the order and control weighting for the design of a generalized predictive controller. These techniques are based on the application of fuzzy logic, genetic algorithms, and simulated annealing to conduct an optimal search on specific performance indexes or objective functions. Fuzzy logic is found to be feasible for real-time and on-line implementation due to its smooth and quick convergence. On the other hand, genetic algorithms and simulated annealing are applicable for initial estimation of the model order and control weighting, and for final fine-tuning within a small region of the solution space. Several numerical simulations for a multiple-input and multiple-output system are given to illustrate the techniques developed in this paper.
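Of the three search techniques, simulated annealing is the simplest to sketch. The generic loop below tunes a scalar control weighting against a stand-in cost function; the paper's actual performance indexes are not reproduced, so the cost, bounds, and cooling schedule here are assumptions:

```python
import math, random

def simulated_annealing(cost, x0, lo, hi, t0=1.0, alpha=0.95, iters=2000, seed=0):
    """Generic simulated annealing over a bounded scalar design variable,
    of the kind the paper applies to GPC order/weighting selection."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    temp = t0
    for _ in range(iters):
        # Gaussian proposal, clipped to the search interval
        cand = min(hi, max(lo, x + rng.gauss(0.0, 0.1 * (hi - lo))))
        fc = cost(cand)
        # Metropolis acceptance: always downhill, occasionally uphill
        if fc < fx or rng.random() < math.exp(-(fc - fx) / temp):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        temp *= alpha
    return best

# mock closed-loop cost with small ripples and a minimum near weighting 0.25
best_w = simulated_annealing(lambda w: (w - 0.25)**2 + 0.01 * math.sin(40*w)**2,
                             x0=1.0, lo=0.0, hi=2.0)
```

The ripple term mimics the local minima that motivate annealing over plain descent for the initial coarse search.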
NASA Technical Reports Server (NTRS)
Kalb, Michael; Robertson, Franklin; Jedlovec, Gary; Perkey, Donald
1987-01-01
Techniques by which mesoscale numerical weather prediction model output and radiative transfer codes are combined to simulate the radiance fields that a given passive temperature/moisture satellite sensor would see if viewing the evolving model atmosphere are introduced. The goals are to diagnose the dynamical atmospheric processes responsible for recurring patterns in observed satellite radiance fields, and to develop techniques to anticipate the ability of satellite sensor systems to depict atmospheric structures and provide information useful for numerical weather prediction (NWP). The concept of linking radiative transfer and dynamical NWP codes is demonstrated with time sequences of simulated radiance imagery in the 24 TIROS vertical sounder channels derived from model integrations for March 6, 1982.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riensche, Roderick M.; Paulson, Patrick R.; Danielson, Gary R.
We describe a methodology and architecture to support the development of games in a predictive analytics context. These games serve as part of an overall family of systems designed to gather input knowledge, calculate results of complex predictive technical and social models, and explore those results in an engaging fashion. The games provide an environment shaped and driven in part by the outputs of the models, allowing users to exert influence over a limited set of parameters, and displaying the results when those actions cause changes in the underlying model. We have crafted a prototype system in which we are implementing test versions of games driven by models in such a fashion, using a flexible architecture to allow for future continuation and expansion of this work.
Modeling Polyvinyl Chloride Plasma Modification by Neural Networks
NASA Astrophysics Data System (ADS)
Wang, Changquan
2018-03-01
A neural network model was constructed to analyze the connection between dielectric barrier discharge parameters and the surface properties of the material. The experimental data were generated from polyvinyl chloride plasma modification using uniform design. Discharge voltage, discharge gas gap and treatment time were used as the neural network input layer parameters; the measured values of contact angle were the output layer parameters. A nonlinear mathematical model of the surface modification of polyvinyl chloride was developed based upon the neural network. The optimum model parameters were obtained by simulation evaluation and error analysis. The results of the optimal model show that the predicted values are very close to the actual test values. The prediction model obtained here is useful for discharge plasma surface modification analysis.
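The input-output mapping described (three process parameters to one contact angle) can be sketched with a small feed-forward network trained by backpropagation. The data below are a synthetic linear stand-in for the uniform-design experiments, and the architecture and learning rate are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic stand-in data: 3 inputs (voltage, gap, time) in [0, 1],
# one output (scaled contact angle); not the experimental data set
X = rng.uniform(0, 1, size=(40, 3))
y = ((60 - 25*X[:, 0] - 10*X[:, 1] - 15*X[:, 2]) / 100.0).reshape(-1, 1)

# one hidden tanh layer, plain batch gradient descent
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.2
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)            # forward pass
    pred = h @ W2 + b2
    err = pred - y
    loss = float((err**2).mean())
    g2 = 2 * err / len(X)               # backpropagation
    gW2 = h.T @ g2; gb2 = g2.sum(0)
    g1 = (g2 @ W2.T) * (1 - h**2)
    gW1 = X.T @ g1; gb1 = g1.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

After training, the mean squared error on the stand-in data drops well below the initial level, mirroring the close predicted-versus-measured agreement reported for the contact angle model.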
Belay, T K; Dagnachew, B S; Kowalski, Z M; Ådnøy, T
2017-08-01
Fourier transform mid-infrared (FT-MIR) spectra of milk are commonly used for phenotyping traits of interest through links developed between the traits and the milk FT-MIR spectra. Predicted traits are then used in genetic analysis for ultimate phenotypic prediction using a single-trait mixed model that accounts for cows' circumstances at a given test day. Here, this approach is referred to as indirect prediction (IP). Alternatively, the FT-MIR spectral variables can be kept multivariate in the form of factor scores in REML and BLUP analyses. These BLUP predictions, including the phenotype (predicted factor scores), were converted to a single trait through calibration outputs; this method is referred to as direct prediction (DP). The main aim of this study was to verify whether mixed modeling of milk spectra in the form of factor scores (DP) gives better prediction of blood β-hydroxybutyrate (BHB) than the univariate approach (IP). Models to predict blood BHB from milk spectra were also developed. Two data sets that contained milk FT-MIR spectra and other information on Polish dairy cattle were used in this study. Data set 1 (n = 826) also contained BHB measured in blood samples, whereas data set 2 (n = 158,028) did not contain measured blood values. Part of data set 1 was used to calibrate a prediction model (n = 496) and the remaining part of data set 1 (n = 330) was used to validate the calibration models, as well as to evaluate the DP and IP approaches. Dimensions of the FT-MIR spectra in data set 2 were reduced either into 5 or 10 factor scores (DP) or into a single trait (IP) with calibration outputs. The REML estimates for these factor scores were found using WOMBAT. The BLUP values and predicted BHB for observations in the validation set were computed using the REML estimates. Blood BHB predicted from milk FT-MIR spectra by both approaches was regressed on reference blood BHB that had not been used in the model development.
Coefficients of determination in cross-validation for untransformed blood BHB were from 0.21 to 0.32, whereas those for the log-transformed BHB were from 0.31 to 0.38. The corresponding estimates in validation were from 0.29 to 0.37 and from 0.21 to 0.43, respectively, for untransformed and logarithmic BHB. Contrary to expectation, slightly better predictions of BHB were found when a univariate variance structure was used (IP) than when multivariate covariance structures were used (DP). Conclusive remarks on the importance of keeping spectral data in multivariate form for prediction of phenotypes may be found in data sets where the trait of interest has strong relationships with the spectral variables. The Authors. Published by the Federation of Animal Science Societies and Elsevier Inc. on behalf of the American Dairy Science Association®. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
Broadband Fan Noise Prediction System for Turbofan Engines. Volume 3; Validation and Test Cases
NASA Technical Reports Server (NTRS)
Morin, Bruce L.
2010-01-01
Pratt & Whitney has developed a Broadband Fan Noise Prediction System (BFaNS) for turbofan engines. This system computes the noise generated by turbulence impinging on the leading edges of the fan and fan exit guide vane, and noise generated by boundary-layer turbulence passing over the fan trailing edge. BFaNS has been validated on three fan rigs that were tested during the NASA Advanced Subsonic Technology Program (AST). The predicted noise spectra agreed well with measured data. The predicted effects of fan speed, vane count, and vane sweep also agreed well with measurements. The noise prediction system consists of two computer programs: Setup_BFaNS and BFaNS. Setup_BFaNS converts user-specified geometry and flow-field information into a BFaNS input file. From this input file, BFaNS computes the inlet and aft broadband sound power spectra generated by the fan and FEGV. The output file from BFaNS contains the inlet, aft and total sound power spectra from each noise source. This report is the third volume of a three-volume set documenting the Broadband Fan Noise Prediction System: Volume 1: Setup_BFaNS User's Manual and Developer's Guide; Volume 2: BFaNS User's Manual and Developer's Guide; and Volume 3: Validation and Test Cases. The present volume begins with an overview of the Broadband Fan Noise Prediction System, followed by validation studies that were done on three fan rigs. It concludes with recommended improvements and additional studies for BFaNS.
A Spiking Neural Network Model of the Lateral Geniculate Nucleus on the SpiNNaker Machine
Sen-Bhattacharya, Basabdatta; Serrano-Gotarredona, Teresa; Balassa, Lorinc; Bhattacharya, Akash; Stokes, Alan B.; Rowley, Andrew; Sugiarto, Indar; Furber, Steve
2017-01-01
We present a spiking neural network model of the thalamic Lateral Geniculate Nucleus (LGN) developed on SpiNNaker, which is a state-of-the-art digital neuromorphic hardware built with very-low-power ARM processors. The parallel, event-based data processing in SpiNNaker makes it viable for building massively parallel neuro-computational frameworks. The LGN model has 140 neurons representing a “basic building block” for larger modular architectures. The motivation of this work is to simulate biologically plausible LGN dynamics on SpiNNaker. Synaptic layout of the model is consistent with biology. The model response is validated with existing literature reporting entrainment in steady state visually evoked potentials (SSVEP)—brain oscillations corresponding to periodic visual stimuli recorded via electroencephalography (EEG). Periodic stimulus to the model is provided by: a synthetic spike-train with inter-spike-intervals in the range 10–50 Hz at a resolution of 1 Hz; and spike-train output from a state-of-the-art electronic retina subjected to a light emitting diode flashing at 10, 20, and 40 Hz, simulating real-world visual stimulus to the model. The resolution of simulation is 0.1 ms to ensure solution accuracy for the underlying differential equations defining Izhikevich's neuron model. Under this constraint, 1 s of model simulation time is executed in 10 s real time on SpiNNaker; this is because simulations on SpiNNaker work in real time for time-steps dt ⩾ 1 ms. The model output shows entrainment with both sets of input and contains harmonic components of the fundamental frequency. However, suppressing the feed-forward inhibition in the circuit produces subharmonics within the gamma band (>30 Hz) implying a reduced information transmission fidelity. These model predictions agree with recent lumped-parameter computational model-based predictions, using conventional computers.
Scalability of the framework is demonstrated by a multi-node architecture consisting of three “nodes,” where each node is the “basic building block” LGN model. This 420 neuron model is tested with synthetic periodic stimulus at 10 Hz to all the nodes. The model output is the average of the outputs from all nodes, and conforms to the above-mentioned predictions of each node. Power consumption for model simulation on SpiNNaker is ≪1 W. PMID:28848380
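The neuron equations underlying the model can be sketched directly. Below is a single Izhikevich neuron integrated at the 0.1 ms resolution quoted in the abstract, with the standard regular-spiking parameters; the drive amplitude is an illustrative assumption, not a stimulus from the LGN model:

```python
def izhikevich(i_inj, t_ms=1000.0, dt=0.1, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Izhikevich neuron (regular-spiking parameters) integrated with
    forward Euler at dt = 0.1 ms. Returns the number of spikes in t_ms
    milliseconds under a constant dimensionless drive current i_inj."""
    v, u, spikes = -65.0, -65.0 * b, 0
    steps = int(t_ms / dt)
    for _ in range(steps):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + i_inj)
        u += dt * a * (b * v - u)
        if v >= 30.0:               # spike: reset v, step the recovery term
            v, u, spikes = c, u + d, spikes + 1
    return spikes

n = izhikevich(10.0)   # sustained regular spiking under constant drive
```

With zero drive the neuron settles to rest and never fires, while a constant drive of 10 produces the sustained periodic spiking whose entrainment the LGN model exploits.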
NASA Astrophysics Data System (ADS)
Tomsic, Z.; Rajsl, I.; Filipovic, M.
2017-11-01
Wind power varies over time, mainly under the influence of meteorological fluctuations, and the variations occur on all time scales. Understanding these variations and their predictability is of key importance for the integration and optimal utilization of wind in the power system. Two major attributes of variable generation notably affect participation on power exchanges: variability (the output of variable generation changes, resulting in fluctuations in plant output on all time scales) and uncertainty (the magnitude and timing of variable generation output are less predictable; wind power output has low levels of predictability). Because of this variability and uncertainty, wind plants cannot participate in the electricity market, especially on power exchanges. For this purpose, the paper presents a techno-economic analysis of the operation of wind plants together with a combined cycle gas turbine (CCGT) plant as support for offering continuous power to the electricity market. A model of wind farms and a CCGT plant was developed in the program PLEXOS based on real hourly input data and all characteristics of the CCGT, with special attention to the techno-economic characteristics of the different types of starts and stops of the plant. The model analyzes the costs of different start-stop characteristics (hot, warm and cold start-ups and shutdowns) and the part-load performance of the CCGT. Besides the costs, technical restrictions were considered, such as start-up time depending on outage duration, minimum operation time, and minimum load or peaking capability.
For calculation purposes, the following parameters must be known to evaluate economically the changes in the start-up process: ramp-up and ramp-down rates, start-time reduction, fuel mass flow during start, electricity production during start, variable cost of the start-up process, cost and charges for lifetime consumption for each start and start type, remuneration during start-up time for expected or unexpected starts, the cost and revenues for balancing energy (important when participating in the electricity market), and the cost or revenues for CO2 certificates. The main motivation for this analysis is to investigate possibilities for participating on power exchanges by offering continuous guaranteed power from wind plants backed up by a CCGT power plant.
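The hot/warm/cold classification by outage duration can be sketched as a simple lookup. All thresholds and cost figures below are assumptions for illustration, not the PLEXOS model's inputs:

```python
# Illustrative start-up cost model: start type classified by outage
# duration, cost = start fuel * gas price + wear-and-tear charge.
START_TYPES = [      # (max outage hours, label, fuel GJ, wear cost EUR)
    (8,      "hot",  1500, 15000),
    (48,     "warm", 2500, 30000),
    (float("inf"), "cold", 4000, 60000),
]

def start_cost(outage_h, gas_price_eur_per_gj=8.0):
    """Return (start type, total start cost in EUR) for a given outage."""
    for max_h, label, fuel_gj, wear in START_TYPES:
        if outage_h <= max_h:
            return label, fuel_gj * gas_price_eur_per_gj + wear
    raise ValueError("unreachable: cold start catches all durations")

label, cost = start_cost(24.0)   # a warm start after a 24 h outage
```

In the real model such per-start costs are traded off against the balancing revenue from firming the wind output.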
Shi, Xiaohu; Zhang, Jingfen; He, Zhiquan; Shang, Yi; Xu, Dong
2011-09-01
One of the major challenges in protein tertiary structure prediction is structure quality assessment. In many cases, protein structure prediction tools generate good structural models but fail to select the best models from a huge number of candidates as the final output. In this study, we developed a sampling-based machine-learning method to rank protein structural models by integrating multiple scores and features. First, features such as predicted secondary structure, solvent accessibility and residue-residue contact information are integrated by two Radial Basis Function (RBF) models trained from different datasets. Then, the two RBF scores and five selected scoring functions developed by others, i.e., Opus-CA, Opus-PSP, DFIRE, RAPDF, and Cheng Score, are synthesized by a sampling method. Finally, another integrated RBF model ranks the structural models according to the features of the sampling distribution. We tested the proposed method on two different datasets, including the CASP server prediction models of all CASP8 targets and a set of models generated by our in-house software MUFOLD. The test results show that our method outperforms each individual scoring function in both best-model selection and overall correlation between the predicted ranking and the actual ranking of structural quality.
NASA Astrophysics Data System (ADS)
Guruprasad, R.; Behera, B. K.
2015-10-01
Quantitative prediction of fabric mechanical properties is an essential requirement for design engineering of textile and apparel products. In this work, the possibility of prediction of bending rigidity of cotton woven fabrics has been explored with the application of Artificial Neural Network (ANN) and two hybrid methodologies, namely Neuro-genetic modeling and Adaptive Neuro-Fuzzy Inference System (ANFIS) modeling. For this purpose, a set of cotton woven grey fabrics was desized, scoured and relaxed. The fabrics were then conditioned and tested for bending properties. With the database thus created, a neural network model was first developed using back propagation as the learning algorithm. The second model was developed by applying a hybrid learning strategy, in which genetic algorithm was first used as a learning algorithm to optimize the number of neurons and connection weights of the neural network. The Genetic algorithm optimized network structure was further allowed to learn using back propagation algorithm. In the third model, an ANFIS modeling approach was attempted to map the input-output data. The prediction performances of the models were compared and a sensitivity analysis was reported. The results show that the prediction by neuro-genetic and ANFIS models were better in comparison with that of back propagation neural network model.
A comparative study of kinetic and connectionist modeling for shelf-life prediction of Basundi mix.
Ruhil, A P; Singh, R R B; Jain, D K; Patel, A A; Patil, G R
2011-04-01
A ready-to-reconstitute formulation of Basundi, a popular Indian dairy dessert, was subjected to storage at various temperatures (10, 25 and 40 °C), and deteriorative changes in the Basundi mix were monitored using quality indices such as pH, hydroxymethylfurfural (HMF), bulk density (BD) and insolubility index (II). The multiple regression equations and the Arrhenius functions that describe the parameters' dependence on temperature for the four physico-chemical parameters were integrated to develop mathematical models for predicting the sensory quality of Basundi mix. A connectionist model using a multilayer feed-forward neural network with a back-propagation algorithm was also developed for predicting the storage life of the product, employing the artificial neural network (ANN) toolbox of MATLAB software. The quality indices served as the input parameters, whereas the output parameters were the sensorily evaluated flavour and total sensory score. A total of 140 observations were used, and prediction performance was judged on the basis of percent root mean square error. The results obtained from the two approaches were compared. Relatively lower magnitudes of percent root mean square error for both sensory parameters indicated that the connectionist models fitted better than the kinetic models for predicting storage life.
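The kinetic half of such a comparison rests on the Arrhenius temperature dependence of a quality-loss rate constant. A minimal sketch, in which the reference rate, activation energy, and acceptable-loss threshold are illustrative assumptions rather than the fitted Basundi-mix values:

```python
import math

R_GAS = 8.314  # gas constant, J/(mol K)

def rate_constant(temp_c, k_ref=0.02, t_ref_c=25.0, ea=60e3):
    """Arrhenius rate constant (per day) relative to a reference
    temperature; k_ref and ea are assumed, not fitted, values."""
    t, t_ref = temp_c + 273.15, t_ref_c + 273.15
    return k_ref * math.exp(-ea / R_GAS * (1.0 / t - 1.0 / t_ref))

def shelf_life_days(temp_c, allowed_loss=0.3):
    # zero-order kinetics assumed: quality loss accumulates as k * t
    return allowed_loss / rate_constant(temp_c)

days_10 = shelf_life_days(10.0)   # refrigerated storage
days_40 = shelf_life_days(40.0)   # abusive storage
```

With these assumed parameters the predicted shelf life shrinks roughly tenfold between 10 and 40 °C, the kind of temperature sensitivity the three-temperature storage study is designed to capture.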
Asynchronous machine rotor speed estimation using a tabulated numerical approach
NASA Astrophysics Data System (ADS)
Nguyen, Huu Phuc; De Miras, Jérôme; Charara, Ali; Eltabach, Mario; Bonnet, Stéphane
2017-12-01
This paper proposes a new method to estimate the rotor speed of the asynchronous machine by looking at the estimation problem as a nonlinear optimal control problem. The behavior of the nonlinear plant model is approximated off-line as a prediction map using a numerical one-step time discretization obtained from simulations. At each time-step, the speed of the induction machine is selected satisfying the dynamic fitting problem between the plant output and the predicted output, leading the system to adopt its dynamical behavior. Thanks to the limitation of the prediction horizon to a single time-step, the execution time of the algorithm can be completely bounded. It can thus easily be implemented and embedded into a real-time system to observe the speed of the real induction motor. Simulation results show the performance and robustness of the proposed estimator.
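The two-phase structure (offline tabulation, online one-step fitting) can be sketched with a toy static plant in place of the induction-machine model. The map below is an invented monotonic function, not the machine's dynamics:

```python
import numpy as np

# Offline step: tabulate a prediction map output = f(speed) over a grid of
# candidate speeds (the real map comes from one-step simulations of the
# nonlinear machine model; this toy map is an assumption).
speed_grid = np.linspace(0.0, 200.0, 2001)          # candidate speeds, rad/s
def plant_output(speed):
    return 0.8 * speed + 5.0 * np.sin(0.05 * speed) # toy monotonic map
table = plant_output(speed_grid)

def estimate_speed(measured_output):
    """Online step: pick the tabulated speed whose predicted output best
    fits the measurement. One table scan per sample, so the execution
    time is bounded, as in the single-step prediction horizon."""
    return speed_grid[np.argmin(np.abs(table - measured_output))]

true_speed = 120.0
est = estimate_speed(plant_output(true_speed))
```

The bounded cost of the table scan is what makes the method attractive for embedded real-time implementation.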
A Reliability Estimation in Modeling Watershed Runoff With Uncertainties
NASA Astrophysics Data System (ADS)
Melching, Charles S.; Yen, Ben Chie; Wenzel, Harry G., Jr.
1990-10-01
The reliability of simulation results produced by watershed runoff models is a function of uncertainties in nature, data, model parameters, and model structure. A framework is presented here for using a reliability analysis method (such as first-order second-moment techniques or Monte Carlo simulation) to evaluate the combined effect of the uncertainties on the reliability of output hydrographs from hydrologic models. For a given event the prediction reliability can be expressed in terms of the probability distribution of the estimated hydrologic variable. The peak discharge probability for a watershed in Illinois using the HEC-1 watershed model is given as an example. The study of the reliability of predictions from watershed models provides useful information on the stochastic nature of output from deterministic models subject to uncertainties and identifies the relative contribution of the various uncertainties to unreliability of model predictions.
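The Monte Carlo branch of such a reliability analysis can be sketched with a deliberately simple peak-flow relation standing in for HEC-1. The rational-method formula and all parameter distributions below are illustrative assumptions, not the Illinois case study's inputs:

```python
import random

def peak_discharge_exceedance(q_design, n=100_000, seed=1):
    """Monte Carlo reliability sketch: propagate parameter uncertainty
    through a rational-method peak flow Q = 0.278 * C * i * A and
    estimate P(Q > q_design)."""
    rng = random.Random(seed)
    count = 0
    for _ in range(n):
        c = rng.uniform(0.3, 0.6)            # runoff coefficient (assumed)
        i = max(rng.gauss(50.0, 10.0), 0.0)  # rainfall intensity, mm/h
        a = 2.0                              # watershed area, km^2 (fixed)
        q = 0.278 * c * i * a                # peak discharge, m^3/s
        count += q > q_design
    return count / n

p = peak_discharge_exceedance(15.0)
```

The resulting exceedance probability is exactly the kind of "probability distribution of the estimated hydrologic variable" the abstract describes, obtained here by brute-force sampling rather than first-order second-moment approximation.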
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engel, David W.; Reichardt, Thomas A.; Kulp, Thomas J.
Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
Risk assessment of fungal spoilage: A case study of Aspergillus niger on yogurt.
Gougouli, Maria; Koutsoumanis, Konstantinos P
2017-08-01
A quantitative risk assessment model of yogurt spoilage by Aspergillus niger was developed based on a stochastic modeling approach for mycelium growth, taking into account important sources of variability such as time-temperature conditions during the different stages of the chill chain and individual spore behavior. Input parameters were fitted to appropriate distributions, and the A. niger colony diameter at each stage of the chill chain was estimated using Monte Carlo simulation. By combining the output of the growth model with the fungus prevalence, which can be estimated by the industry using challenge tests, the risk of spoilage, expressed as the number of yogurt cups in which a visible mycelium of A. niger has formed at the time of consumption, was assessed. The risk assessment output showed that for a batch of 100,000 cups in which the percentage of cups contaminated with A. niger was 1%, the predicted number (median (5th, 95th percentiles)) of cups with a visible mycelium at consumption time was 8 (5, 14). For higher percentages of 3, 5 and 10%, the predicted numbers (median (5th, 95th percentiles)) of spoiled cups at consumption time were estimated to be 24 (16, 35), 39 (29, 52) and 80 (64, 94), respectively. The developed model can lead to more effective risk-based quality management of yogurt and support decision making in yogurt production. Copyright © 2017 Elsevier Ltd. All rights reserved.
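The simulation structure (sample per-cup growth variability, count cups whose colony reaches visibility by consumption) can be sketched as follows. Every distribution, the 3 mm visibility threshold, and the time windows are invented placeholders, not the paper's fitted inputs, so the counts produced here are not comparable to the paper's figures:

```python
import random

def spoiled_cups(batch=100_000, prevalence=0.01, seed=7):
    """Monte Carlo sketch of the spoilage risk output: for each
    contaminated cup, sample a germination lag and a growth rate and
    check whether the colony reaches an assumed visible diameter of
    3 mm by the (sampled) time of consumption."""
    rng = random.Random(seed)
    visible = 0
    for _ in range(int(batch * prevalence)):
        lag_h = rng.gauss(300.0, 60.0)                  # germination lag, h
        rate_mm_h = max(rng.gauss(0.012, 0.004), 0.0)   # radial growth rate
        consume_h = rng.uniform(240.0, 600.0)           # consumption time, h
        diameter = max(consume_h - lag_h, 0.0) * rate_mm_h
        visible += diameter >= 3.0
    return visible

n_visible = spoiled_cups()
```

Repeating the simulation over many batches would yield the median and percentile summaries reported in the abstract.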
Emulating RRTMG Radiation with Deep Neural Networks for the Accelerated Model for Climate and Energy
NASA Astrophysics Data System (ADS)
Pal, A.; Norman, M. R.
2017-12-01
The RRTMG radiation scheme in the Accelerated Model for Climate and Energy Multi-scale Model Framework (ACME-MMF) is a bottleneck, consuming approximately 50% of the computational time. Simulating a case with RRTMG in ACME-MMF at high throughput and high resolution therefore requires speeding up this calculation while retaining physical fidelity. In this study, RRTMG radiation is emulated with Deep Neural Networks (DNNs). The first step toward this goal is to run a case with ACME-MMF and generate input data sets for the DNNs. A principal component analysis of these input data sets is carried out. Artificial data sets are created from the previous data sets to cover a wider space and are used in a standalone RRTMG radiation scheme to generate outputs in a cost-effective manner. These input-output pairs are used to train DNNs of multiple architectures (DNN 1). Another DNN (DNN 2) is trained on the inputs to predict the error. A reverse emulation is trained to map the output back to the input. An error-controlled code is developed with the two DNNs (1 and 2) to determine when, or whether, the original parameterization needs to be used.
Numerical analysis of 2.7 μm lasing in Er3+-doped tellurite fiber lasers
Wang, Weichao; Li, Lixiu; Chen, Dongdan; Zhang, Qinyuan
2016-01-01
The laser performance of Er3+-doped tellurite fiber lasers operating at 2.7 μm on the 4I11/2 → 4I13/2 transition has been studied theoretically using rate equations and propagation equations. The effects of pumping configuration and fiber length on the output power, slope efficiency, threshold, and intracavity pump and laser power distributions have been systematically investigated to optimize the performance of the fiber lasers. At a pump power of 20 W, the maximum slope efficiency (27.62%), maximum output power (5.219 W), and minimum threshold (278.90 mW) are predicted for different fiber lengths (0.05–5 m) under three pumping configurations. It is also found that reasonable output power can be expected for fiber losses below 2 dB/m. Two- and three-dimensional laser field distributions are further modeled numerically to reveal the characteristics of this multimode step-index tellurite fiber. Preliminary simulation results show that this Er3+-doped tellurite fiber is an excellent alternative to conventional fluoride fiber for developing efficient 2.7 μm fiber lasers. PMID:27545663
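The headline quantities relate through the standard above-threshold approximation for laser output, P_out ≈ η_slope (P_pump − P_th); the paper's full rate-equation model resolves how η_slope and P_th depend on fiber length and pumping configuration. The numbers below are illustrative, not the paper's optimized results.

```python
def laser_output_power(p_pump_w, slope_eff, p_threshold_w):
    """Above-threshold linear approximation of laser output power (W)."""
    return max(0.0, slope_eff * (p_pump_w - p_threshold_w))

# e.g. a 20 W pump with an assumed 25% slope efficiency and 0.3 W threshold
p_out = laser_output_power(20.0, 0.25, 0.3)
```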
Overview of free-piston Stirling engine technology for space power application
NASA Technical Reports Server (NTRS)
Slaby, Jack G.
1987-01-01
An overview is presented of free-piston Stirling engine activities, directed toward space power applications. One of the major elements of the program is the development of advanced power conversion. Under this program the status of the 25 kWe opposed-piston Space Power Demonstrator Engine (SPDE) is presented. Initial differences between predicted and experimental power outputs and power output influenced by variations in regenerators are discussed. Technology work was conducted on heat-exchanger concepts to minimize the number of joints as well as to enhance the heat transfer in the heater. Design parameters and conceptual design features are also presented for a 25 kWe, single-cylinder free-piston Stirling space power converter. Projections are made for future space power requirements over the next few decades along with a recommendation to consider the use of dynamic power conversion systems, either solar or nuclear. A cursory comparison is presented showing the mass benefits of a Stirling system over a Brayton system for the same peak temperature and output power. A description of a study to investigate the feasibility of scaling a single-cylinder free-piston Stirling space power module to the 150 kWe power range is presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beresford, N. A.; Barnett, C. L.; Brown, J. E.
There is now general acknowledgement of the requirement to demonstrate that species other than humans are protected from anthropogenic releases of radioactivity. A number of approaches have been developed for estimating the exposure of wildlife, and some of these are being used to conduct regulatory assessments. The outputs of such approaches need to be compared against available data sets to ensure that they are robust and fit for purpose. In this paper we describe the application of seven approaches for predicting the whole-body (90Sr, 137Cs, 241Am and Pu isotope) activity concentrations and absorbed dose rates for a range of terrestrial species within the Chernobyl exclusion zone. Predictions are compared against available measurement data, including estimates of external dose rate recorded by thermoluminescent dosimeters attached to rodent species. Potential reasons for differences between the predictions of the various approaches and the available data are explored.
NASA Astrophysics Data System (ADS)
Gastón, Martín; Fernández-Peruchena, Carlos; Körnich, Heiner; Landelius, Tomas
2017-06-01
The present work describes the first version of a new procedure to forecast Direct Normal Irradiance (DNI), the #hashtdim, which combines ground measurements with Numerical Weather Predictions. The system focuses on generating predictions at very short lead times. It combines the outputs of the Numerical Weather Prediction model HARMONIE with an adaptive methodology based on machine learning. The DNI predictions are generated at 15-minute and hourly temporal resolution and are updated every 3 hours. Each update provides forecasts for the next 12 hours: the first nine hours at 15-minute temporal resolution and the last three hours at hourly temporal resolution. The system is evaluated at a site with an operational BSRN station in southern Spain (the PSA station). The #hashtdim has been implemented in the framework of the Direct Normal Irradiance Nowcasting methods for optimized operation of concentrating solar technologies (DNICast) project, under the European Union's Seventh Framework Programme for research, technological development and demonstration.
Surrogate Modeling of High-Fidelity Fracture Simulations for Real-Time Residual Strength Predictions
NASA Technical Reports Server (NTRS)
Spear, Ashley D.; Priest, Amanda R.; Veilleux, Michael G.; Ingraffea, Anthony R.; Hochhalter, Jacob D.
2011-01-01
A surrogate model methodology is described for predicting in real time the residual strength of flight structures with discrete-source damage. Starting with design of experiment, an artificial neural network is developed that takes as input discrete-source damage parameters and outputs a prediction of the structural residual strength. Target residual strength values used to train the artificial neural network are derived from 3D finite element-based fracture simulations. A residual strength test of a metallic, integrally-stiffened panel is simulated to show that crack growth and residual strength are determined more accurately in discrete-source damage cases by using an elastic-plastic fracture framework rather than a linear-elastic fracture mechanics-based method. Improving accuracy of the residual strength training data would, in turn, improve accuracy of the surrogate model. When combined, the surrogate model methodology and high-fidelity fracture simulation framework provide useful tools for adaptive flight technology.
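The surrogate idea can be sketched as follows. The paper trains an artificial neural network on 3D finite-element fracture results; this stand-in instead fits a least-squares polynomial to synthetic "truth" data, with a made-up damage parameter and strength law, to show how an expensive simulation is replaced by a cheap real-time evaluation.

```python
import numpy as np

# Synthetic training data standing in for FE-based fracture simulations:
# residual strength (MPa) falls roughly linearly with crack length (m).
rng = np.random.default_rng(1)
crack_len = rng.uniform(0.01, 0.2, 50)                       # damage parameter
strength = 400.0 - 900.0 * crack_len + rng.normal(0, 5, 50)  # noisy "truth"

# Fit a quadratic surrogate by least squares (columns: x^2, x, 1).
A = np.vander(crack_len, 3)
coef, *_ = np.linalg.lstsq(A, strength, rcond=None)

def predict_strength(a):
    """Real-time surrogate evaluation: cheap polynomial instead of a 3D FE run."""
    return np.polyval(coef, a)

pred = predict_strength(0.1)
```

The design trade is the same as in the paper: training is expensive (many high-fidelity runs), but each in-flight query is nearly free.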
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1991-02-01
This appendix is a compilation of work done to predict overall cycle performance from gasifier to generator terminals. A spreadsheet has been generated for each case to show the flows within a cycle: the gaseous or solid composition, temperature, quantity, and heat content of each flow. Steam and gas turbine performance was predicted with the computer program GTPro. Outputs of all runs for each combined cycle reviewed have been added to this appendix, along with a process schematic displaying all flows predicted by GTPro and the spreadsheet. The numbered bubbles on the schematic correspond to columns in the top headings of the spreadsheet.
Pandey, Daya Shankar; Pan, Indranil; Das, Saptarshi; Leahy, James J; Kwapinski, Witold
2015-03-01
A multi-gene genetic programming technique is proposed as a new method to predict syngas yield and lower heating value for municipal solid waste gasification in a fluidized bed gasifier. The study shows that the predicted outputs of the municipal solid waste gasification process are in good agreement with the experimental dataset and also generalise well to validation (untrained) data. Published experimental datasets are used for model training and validation purposes. The results show the effectiveness of the genetic programming technique for solving complex nonlinear regression problems. The multi-gene genetic programming model is also compared with a single-gene genetic programming model to show the relative merits and demerits of the technique. This study demonstrates that the genetic programming based data-driven modelling strategy can be a good candidate for developing models for other types of fuels as well. Copyright © 2014 Elsevier Ltd. All rights reserved.
Motamedi, Shervin; Roy, Chandrabhushan; Shamshirband, Shahaboddin; Hashim, Roslan; Petković, Dalibor; Song, Ki-Il
2015-08-01
Ultrasonic pulse velocity is affected by defects in material structure. This study applied soft computing techniques to predict the ultrasonic pulse velocity of various peat and cement content mixtures over several curing periods. First, this investigation constructed a process to simulate the ultrasonic pulse velocity with an adaptive neuro-fuzzy inference system (ANFIS). Then, an ANFIS network was developed whose input and output layers consisted of four and one neurons, respectively. The four inputs were cement, peat and sand content (%) and curing period (days). The simulation results showed efficient performance of the proposed system. The ANFIS and experimental results were compared through the coefficient of determination and root-mean-square error. In conclusion, use of the ANFIS network enhances strength prediction. The simulation results confirmed the effectiveness of the suggested strategies. Copyright © 2015 Elsevier B.V. All rights reserved.
Simulating seasonal tropical cyclone intensities at landfall along the South China coast
NASA Astrophysics Data System (ADS)
Lok, Charlie C. F.; Chan, Johnny C. L.
2018-04-01
A numerical method is developed using a regional climate model (RegCM3) and the Weather Research and Forecasting (WRF) model to predict seasonal tropical cyclone (TC) intensities at landfall for the South China region. In designing the model system, three sensitivity tests were performed to identify the optimal choices of RegCM3 model domain, WRF horizontal resolution and WRF physics packages. Driven by the National Centers for Environmental Prediction Climate Forecast System Reanalysis dataset, the model system can produce a reasonable distribution of TC intensities at landfall on a seasonal scale. Analyses of the model output suggest that the strength and extent of the subtropical ridge in the East China Sea are crucial to simulating TC landfalls in the Guangdong and Hainan provinces. This study demonstrates the potential for predicting TC intensities at landfall on a seasonal basis as well as projecting future climate changes using numerical models.
Surrogate Modeling of High-Fidelity Fracture Simulations for Real-Time Residual Strength Predictions
NASA Technical Reports Server (NTRS)
Spear, Ashley D.; Priest, Amanda R.; Veilleux, Michael G.; Ingraffea, Anthony R.; Hochhalter, Jacob D.
2011-01-01
A surrogate model methodology is described for predicting, during flight, the residual strength of aircraft structures that sustain discrete-source damage. Starting with design of experiment, an artificial neural network is developed that takes as input discrete-source damage parameters and outputs a prediction of the structural residual strength. Target residual strength values used to train the artificial neural network are derived from 3D finite element-based fracture simulations. Two ductile fracture simulations are presented to show that crack growth and residual strength are determined more accurately in discrete-source damage cases by using an elastic-plastic fracture framework rather than a linear-elastic fracture mechanics-based method. Improving accuracy of the residual strength training data does, in turn, improve accuracy of the surrogate model. When combined, the surrogate model methodology and high fidelity fracture simulation framework provide useful tools for adaptive flight technology.
Optical Limiting Using the Two-Photon Absorption Electrical Modulation Effect in HgCdTe Photodiode
Cui, Haoyang; Yang, Junjie; Zeng, Jundong; Tang, Zhong
2013-01-01
The electrical modulation properties of the output intensity under two-photon absorption (TPA) pumping were analyzed in this paper. The frequency-dispersion dependence and the electric-field dependence of TPA were calculated using the Wherrett and Garcia theory models, respectively. Both predicted a dramatic variation of the TPA coefficient, attributed to the increase of the transition rate. The output intensity of a laser pulse propagating in the p-n junction device was calculated using the transfer-function method. The results show that the output intensity increases nonlinearly with the incident light intensity and eventually saturates. The output saturation intensity depends on the electric field strength: the greater the electric field, the smaller the output intensity. Consequently, the clamped saturation intensity can be controlled by the electric field. The principal advantage of electrical modulation is that the TPA coefficient can be varied continuously over an extremely wide range, allowing the output intensity to be adjusted correspondingly. This provides a practical means of controlling the steady output intensity of TPA by adjusting the electric field. PMID:24198721
Sun, Baozhou; Lam, Dao; Yang, Deshan; Grantham, Kevin; Zhang, Tiezhi; Mutic, Sasa; Zhao, Tianyu
2018-05-01
Clinical treatment planning systems for proton therapy currently do not calculate monitor units (MUs) in passive scatter proton therapy due to the complexity of the beam delivery systems. Physical phantom measurements are commonly employed to determine the field-specific output factors (OFs) but are often subject to limited machine time, measurement uncertainties and intensive labor. In this study, a machine learning-based approach was developed to predict output (cGy/MU) and derive MUs, incorporating the dependencies on gantry angle and field size for a single-room proton therapy system. The goal of this study was to develop a secondary check tool for OF measurements and eventually eliminate patient-specific OF measurements. The OFs of 1754 fields previously measured in a water phantom with calibrated ionization chambers and electrometers for patient-specific fields with various range and modulation width combinations for 23 options were included in this study. The training data sets for machine learning models in three different methods (Random Forest, XGBoost and Cubist) included 1431 (~81%) OFs. Ten-fold cross-validation was used to prevent "overfitting" and to validate each model. The remaining 323 (~19%) OFs were used to test the trained models. The difference between the measured and predicted values from the machine learning models was analyzed. Model prediction accuracy was also compared with that of the semi-empirical model developed by Kooy (Phys. Med. Biol. 50, 2005). Additionally, gantry angle dependence of OFs was measured for three groups of options categorized by the selection of the second scatterers. Field size dependence of OFs was investigated for measurements with and without patient-specific apertures. All three machine learning methods showed higher accuracy than the semi-empirical model, which shows a considerably larger discrepancy of up to 7.7% for treatment fields with full range and full modulation width.
The Cubist-based solution outperformed all other models (P < 0.001) with the mean absolute discrepancy of 0.62% and maximum discrepancy of 3.17% between the measured and predicted OFs. The OFs showed a small dependence on gantry angle for small and deep options while they were constant for large options. The OF decreased by 3%-4% as the field radius was reduced to 2.5 cm. Machine learning methods can be used to predict OF for double-scatter proton machines with greater prediction accuracy than the most popular semi-empirical prediction model. By incorporating the gantry angle dependence and field size dependence, the machine learning-based methods can be used for a sanity check of OF measurements and bears the potential to eliminate the time-consuming patient-specific OF measurements. © 2018 American Association of Physicists in Medicine.
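A toy version of the data-driven OF prediction is sketched below. A nearest-neighbour average stands in for the paper's Random Forest/XGBoost/Cubist models, and the field parameters and OF values are invented; the point is only that the OF is predicted from field parameters instead of being measured per patient.

```python
import math

def knn_predict(train, query, k=3):
    """Nearest-neighbour sketch of data-driven output-factor prediction.
    train: list of ((range_cm, mod_cm, field_radius_cm), output_cGy_per_MU).
    Returns the mean OF of the k nearest measured fields."""
    ranked = sorted(train, key=lambda t: math.dist(t[0], query))
    return sum(of for _, of in ranked[:k]) / k

# Hypothetical measured fields: OF drifts with range and shrinks for small fields.
train = [((10.0, 5.0, 5.0), 1.02), ((12.0, 5.0, 5.0), 1.00),
         ((14.0, 6.0, 5.0), 0.98), ((10.0, 5.0, 2.5), 0.985),
         ((16.0, 8.0, 5.0), 0.95), ((12.0, 6.0, 2.5), 0.96)]
of_pred = knn_predict(train, (11.0, 5.0, 5.0))
```

A tree ensemble would additionally learn the gantry-angle and aperture dependencies the paper reports, which a plain distance metric handles poorly.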
Computational Design of Materials: Planetary Entry to Electric Aircraft and Beyond
NASA Technical Reports Server (NTRS)
Thompson, Alexander; Lawson, John W.
2014-01-01
NASA's projects and missions push the bounds of what is possible. To support the agency's work, materials development must stay on the cutting edge in order to keep pace. Today, researchers at NASA Ames Research Center perform multiscale modeling to aid the development of new materials and provide insight into existing ones. Multiscale modeling enables researchers to determine micro- and macroscale properties by connecting computational methods ranging from the atomic level (density functional theory, molecular dynamics) to the macroscale (finite element method). The output of one level is passed on as input to the next level, creating a powerful predictive model.
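The hand-off between scales can be illustrated schematically. Both "models" below are placeholders (a made-up modulus value and a linear-elastic stress law); they show only how one level's output feeds the next level's input in a multiscale chain.

```python
def atomistic_model():
    """Placeholder for a DFT/MD calculation: returns an elastic modulus (GPa)."""
    return 210.0

def continuum_model(modulus_gpa, strain=0.002):
    """Placeholder for a finite-element step: linear-elastic stress (MPa)."""
    return modulus_gpa * 1e3 * strain

# Output of the atomic level becomes input to the macroscale level.
stress_mpa = continuum_model(atomistic_model())
```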
Laser diode initiated detonators for space applications
NASA Technical Reports Server (NTRS)
Ewick, David W.; Graham, J. A.; Hawley, J. D.
1993-01-01
Ensign Bickford Aerospace Company (EBAC) has over ten years of experience in the design and development of laser ordnance systems. Recent efforts have focused on the development of laser diode ordnance systems for space applications. Because the laser initiated detonators contain only insensitive secondary explosives, a high degree of system safety is achieved. Typical performance characteristics of a laser diode initiated detonator are described in this paper, including all-fire level, function time, and output. A finite difference model used at EBAC to predict detonator performance, is described and calculated results are compared to experimental data. Finally, the use of statistically designed experiments to evaluate performance of laser initiated detonators is discussed.
Advances and Computational Tools towards Predictable Design in Biological Engineering
2014-01-01
The design process of complex systems in all fields of engineering requires a set of quantitatively characterized components and a method to predict the output of systems composed of such elements. This strategy relies on the modularity of the components used, or on the prediction of their context-dependent behaviour when part function depends on the specific context. Mathematical models usually support the whole process by guiding the selection of parts and by predicting the output of interconnected systems. Such a bottom-up design process cannot be trivially adopted for biological systems engineering, since part function is hard to predict when components are reused in different contexts. This issue and the intrinsic complexity of living systems limit the capability of synthetic biologists to predict the quantitative behaviour of biological systems. The high potential of synthetic biology strongly depends on the capability of mastering this issue. This review discusses the predictability issues of basic biological parts (promoters, ribosome binding sites, coding sequences, transcriptional terminators, and plasmids) when used to engineer simple and complex gene expression systems in Escherichia coli. A comparison between bottom-up and trial-and-error approaches is performed for all the discussed elements, and mathematical models supporting the prediction of part behaviour are illustrated. PMID:25161694
Johnson, Earl E
2017-11-01
To determine safe output sound pressure levels (SPL) for sound amplification devices that preserve hearing sensitivity after usage. A mathematical model consisting of the Modified Power Law (MPL) (Humes & Jesteadt, 1991) combined with equations for predicting temporary threshold shift (TTS) and subsequent permanent threshold shift (PTS) (Macrae, 1994b) was used to determine safe output SPL. The study involves no new human subject measurements of loudness tolerance or threshold shifts. PTS was determined by the MPL model for 234 audiograms and the output SPL recommended by four different validated prescription recommendations for hearing aids. PTS can, on rare occasion, occur as a result of SPL delivered by hearing aids at modern-day prescription recommendations. The trading relationship of safe output SPL, decibel hearing level (dB HL) threshold, and PTS was captured with algebraic expressions. Better hearing thresholds lowered the safe output SPL and higher thresholds raised the safe output SPL. Safe output SPL can thus take into account the magnitude of unaided hearing loss. For devices not set to prescriptive levels, limiting the output SPL below the safe levels identified should protect against threshold worsening as a result of long-term usage.
Chesapeake Bay Forecast System: Oxygen Prediction for the Sustainable Ecosystem Management
NASA Astrophysics Data System (ADS)
Mathukumalli, B.; Long, W.; Zhang, X.; Wood, R.; Murtugudde, R. G.
2010-12-01
The Chesapeake Bay Forecast System (CBFS) is a flexible, end-to-end expert prediction tool for decision makers that will provide customizable, user-specified predictions and projections of the region's climate, air and water quality, local chemistry, and ecosystems at time scales of days to decades. As part of CBFS, long-term water quality data were collected and assembled to develop ecological models for the sustainable management of the Chesapeake Bay. Cultural eutrophication depletes oxygen levels in this ecosystem, particularly in summer, with several negative implications for ecosystem structure and function. In order to understand the dynamics of, and predict, spatially explicit oxygen levels in the Bay, an empirical, process-based ecological model was developed using long-term control variables (water temperature, salinity, nitrogen and phosphorus). Statistical validation methods were employed to demonstrate the usability of the predictions for management purposes, and the predicted oxygen levels are quite faithful to observations. The predicted oxygen values, together with other physical outputs from downscaled regional weather and climate predictions or forecasts from hydrodynamic models, can be used to forecast various ecological components. Such forecasts would be useful for both recreational and commercial users of the bay (for example, bass fishing). Furthermore, this work can also be used to predict the extent of hypoxia/anoxia not only from anthropogenic nutrient pollution but also from global warming. Some hindcasts and forecasts are discussed, along with ongoing efforts toward a mechanistic ecosystem model to provide prognostic oxygen predictions and projections, and upper trophic-level modeling using an energetics approach.
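An empirical model of the kind described can be sketched as a multiple linear regression of dissolved oxygen on the four control variables. All data below are synthetic (the generating coefficients are invented); the real CBFS model is fit to long-term monitoring records.

```python
import numpy as np

# Synthetic "monitoring record": oxygen declines with temperature, salinity
# and nitrogen; phosphorus is included but given no true effect here.
rng = np.random.default_rng(7)
n = 300
temp = rng.uniform(5, 30, n)     # deg C
sal = rng.uniform(5, 25, n)      # psu
nit = rng.uniform(0, 2, n)       # mg/L
phos = rng.uniform(0, 0.2, n)    # mg/L
oxy = 12.0 - 0.25 * temp - 0.05 * sal - 1.5 * nit + rng.normal(0, 0.3, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), temp, sal, nit, phos])
beta, *_ = np.linalg.lstsq(X, oxy, rcond=None)

def predict_oxygen(t, s, nitrogen, phosphorus):
    """Predict dissolved oxygen (mg/L) from the fitted coefficients."""
    return beta @ np.array([1.0, t, s, nitrogen, phosphorus])

pred = predict_oxygen(25.0, 15.0, 1.0, 0.1)
```

A summer-like case (warm, salty, nutrient-rich) yields the low oxygen the abstract associates with seasonal hypoxia.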
Economic optimization of operations for hybrid energy systems under variable markets
Chen, Jen; Garcia, Humberto E.
2016-05-21
We propose a hybrid energy system (HES) as an important element for enabling increasing penetration of clean energy. This paper investigates the operations flexibility of HES and develops a methodology for operations optimization to maximize economic value based on predicted renewable generation and market information. A multi-environment computational platform for performing such operations optimization is also developed. In order to compensate for prediction error, a control strategy is designed to operate a standby energy storage element (ESE) to avoid energy imbalance within the HES. The proposed operations optimizer allows systematic control of energy conversion for maximal economic value. Simulation results for two specific HES configurations are included to illustrate the proposed methodology and computational capability. These results demonstrate the economic viability of HES under the proposed operations optimizer, suggesting the diversion of energy to an alternative energy output while participating in the ancillary service market. Economic advantages of the operations optimizer and the associated flexible operations are illustrated by comparing the economic performance of flexible operations against that of constant operations. Sensitivity analyses with respect to market variability and prediction error are also performed.
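The economics of diverting energy to an alternative output can be illustrated with a toy hour-by-hour dispatch rule. The prices, conversion efficiency, and the alternative product assumed (e.g. hydrogen) are invented, and the paper's optimizer is far more general; this sketch only shows why diversion pays when the grid price is low.

```python
def optimize_dispatch(gen_mwh, grid_price, alt_price_per_mwh, alt_eff=0.7):
    """For each hour, send predicted renewable generation to whichever use
    earns more: the grid at the spot price, or an alternative output whose
    effective value is alt_price_per_mwh * alt_eff.
    Returns ([(grid_mwh, alt_mwh), ...], total_revenue)."""
    alt_value = alt_price_per_mwh * alt_eff
    schedule, revenue = [], 0.0
    for g, p in zip(gen_mwh, grid_price):
        if p >= alt_value:                 # grid price high: sell electricity
            schedule.append((g, 0.0))
            revenue += g * p
        else:                              # grid price low: divert the energy
            schedule.append((0.0, g))
            revenue += g * alt_value
    return schedule, revenue

gen = [10.0, 12.0, 8.0]        # predicted generation (MWh per hour)
price = [30.0, 5.0, 40.0]      # grid spot prices ($/MWh)
sched, rev = optimize_dispatch(gen, price, alt_price_per_mwh=25.0)
```

Constant operations (always selling to the grid) would earn 10·30 + 12·5 + 8·40 = 680 here, versus 830 with the flexible rule, mirroring the paper's flexible-vs-constant comparison.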
NASA Technical Reports Server (NTRS)
West, Thomas K., IV; Reuter, Bryan W.; Walker, Eric L.; Kleb, Bil; Park, Michael A.
2014-01-01
The primary objective of this work was to develop and demonstrate a process for accurate and efficient uncertainty quantification and certification prediction of low-boom, supersonic, transport aircraft. High-fidelity computational fluid dynamics models of multiple low-boom configurations were investigated, including the Lockheed Martin SEEB-ALR body of revolution, the NASA 69 Delta Wing, and the Lockheed Martin 1021-01 configuration. A nonintrusive polynomial chaos surrogate modeling approach was used to reduce the computational cost of propagating mixed inherent (aleatory) and model-form (epistemic) uncertainty from both the computational fluid dynamics model and the near-field-to-ground-level propagation model. A methodology has also been introduced to quantify the plausibility of a design to pass a certification under uncertainty. Results of this study include the analysis of each of the three configurations of interest under inviscid and fully turbulent flow assumptions. A comparison of the uncertainty outputs and sensitivity analyses between the configurations is also given. The results of this study illustrate the flexibility and robustness of the developed framework as a tool for uncertainty quantification and certification prediction of low-boom, supersonic aircraft.
A multidimensional model of the effect of gravity on the spatial orientation of the monkey
NASA Technical Reports Server (NTRS)
Merfeld, D. M.; Young, L. R.; Oman, C. M.; Shelhamer, M. J.
1993-01-01
A "sensory conflict" model of spatial orientation was developed. This mathematical model was based on concepts derived from observer theory, optimal observer theory, and the mathematical properties of coordinate rotations. The primary hypothesis is that the central nervous system of the squirrel monkey incorporates information about body dynamics and sensory dynamics to develop an internal model. The output of this central model (expected sensory afference) is compared to the actual sensory afference, with the difference defined as "sensory conflict." The sensory conflict information is, in turn, used to drive central estimates of angular velocity ("velocity storage"), gravity ("gravity storage"), and linear acceleration ("acceleration storage") toward more accurate values. The model successfully predicts "velocity storage" during rotation about an earth-vertical axis. The model also successfully predicts that the time constant of the horizontal vestibulo-ocular reflex is reduced and that the axis of eye rotation shifts toward alignment with gravity following postrotatory tilt. Finally, the model predicts the bias, modulation, and decay components that have been observed during off-vertical axis rotations (OVAR).
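The conflict-driven update at the heart of the model can be sketched as a first-order observer: the internal model's expected afference is compared with the measured afference, and the difference (the "sensory conflict") drives the central estimate toward the sensed value. The gain and dynamics below are illustrative only, not the paper's fitted parameters.

```python
def observer_estimate(measured, gain=0.3):
    """Minimal observer loop: at each step the internal model predicts the
    afference, the conflict (measured - expected) corrects the estimate."""
    estimate = 0.0
    history = []
    for y in measured:
        expected = estimate           # expected sensory afference
        conflict = y - expected       # sensory conflict
        estimate += gain * conflict   # drive central estimate toward sensed value
        history.append(estimate)
    return history

# Step input: a sustained 1 rad/s angular-velocity signal from the canals.
hist = observer_estimate([1.0] * 50)
```

With a constant input, the estimate converges geometrically; with the input removed, the same structure produces the slow decay that "velocity storage" describes.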
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Kula, K.R.
1994-03-01
The Nuclear Installations Inspectorate (NII) of the United Kingdom (UK) suggested the use of an advanced accident progression logic model method, developed by Westinghouse Savannah River Company (WSRC) and Science Applications International Corporation (SAIC) for K Reactor, to predict the magnitude and timing of radioactivity releases (the source term). Predicted releases are output from the personal computer-based model in a level-of-confidence format. Additional technical discussions eventually led to a request from the NII to develop a proposal for assembling a similar technology to predict source terms for the UK's advanced gas-cooled reactor (AGR) type. To respond to this request, WSRC is submitting a proposal to provide contractual assistance as specified in the Scope of Work. The work will produce, document, and transfer technology associated with a Decision-Oriented Source Term Estimator for Emergency Preparedness (DOSE-EP) for the NII to apply to AGRs in the United Kingdom. This document, Appendix A, is a part of this proposal.
Smith, Kathryn E; Thatje, Sven
2013-10-01
Developmental resource partitioning and the consequent offspring size variations are of fundamental importance for marine invertebrates, in both an ecological and an evolutionary context. Typically, differences are attributed to maternal investment and the environmental factors determining it; additional variables, such as environmental factors affecting development, are rarely discussed. During intracapsular development, for example, sibling conflict has the potential to affect resource partitioning. Here, we investigate encapsulated development in the marine gastropod Buccinum undatum, examining the effects of maternal investment and temperature on intracapsular resource partitioning in this species. Reproductive output was positively influenced by maternal investment, but in addition, temperature and sibling conflict significantly affected offspring size, number, and quality during development. Increased temperature led to reduced offspring number, and a combination of high sibling competition and asynchronous early development resulted in a common occurrence of "empty" embryos, which received no nutrition at all. The proportion of empty embryos increased with both temperature and capsule size. Additionally, a novel example of a risk in sibling conflict was observed: embryos cannibalized by others during early development ingested nurse eggs from inside the consumer, killing it in a "Trojan horse" scenario. Our results highlight the complexity surrounding offspring fitness. Encapsulation should be considered significant in determining maternal output. Considering predicted increases in ocean temperatures, this may impact offspring quality and consequently species distribution and abundance.
Integrated Wind Power Planning Tool
NASA Astrophysics Data System (ADS)
Rosgaard, Martin; Giebel, Gregor; Skov Nielsen, Torben; Hahmann, Andrea; Sørensen, Poul; Madsen, Henrik
2013-04-01
This poster presents the current state of the public service obligation (PSO) funded project PSO 10464, titled "Integrated Wind Power Planning Tool". The goal is to integrate a mesoscale numerical weather prediction (NWP) model with purely statistical tools in order to assess wind power fluctuations, with focus on long-term power system planning for future wind farms as well as short-term forecasting for existing wind farms. Currently, wind power fluctuation models are either purely statistical or integrated with NWP models of limited resolution. Using the state-of-the-art mesoscale NWP Weather Research & Forecasting model (WRF), the forecast error is to be quantified as a function of the time scale involved. This task constitutes a preparative study for later implementation of features accounting for NWP forecast errors in the DTU Wind Energy maintained Corwind code, a long-term wind power planning tool. Within the framework of PSO 10464, research related to operational short-term wind power prediction will be carried out, including a comparison of forecast quality at different mesoscale NWP model resolutions and development of a statistical wind power prediction tool taking input from WRF. The short-term prediction part of the project is carried out in collaboration with ENFOR A/S, a Danish company that specialises in forecasting and optimisation for the energy sector. The integrated prediction model will allow for the description of the expected variability in wind power production in the coming hours to days, accounting for its spatio-temporal dependencies and depending on the prevailing weather conditions defined by the WRF output. The output from the integrated short-term prediction tool constitutes scenario forecasts for the coming period, which can then be fed into any type of system model or decision-making problem to be solved.
The high resolution of the WRF results loaded into the integrated prediction model will ensure a high accuracy data basis is available for use in the decision making process of the Danish transmission system operator. The need for high accuracy predictions will only increase over the next decade as Denmark approaches the goal of 50% wind power based electricity in 2025 from the current 20%.
An effective drift correction for dynamical downscaling of decadal global climate predictions
NASA Astrophysics Data System (ADS)
Paeth, Heiko; Li, Jingmin; Pollinger, Felix; Müller, Wolfgang A.; Pohlmann, Holger; Feldmann, Hendrik; Panitz, Hans-Jürgen
2018-04-01
Initialized decadal climate predictions with coupled climate models are often marked by substantial climate drifts that emanate from a mismatch between the climatology of the coupled model system and the data set used for initialization. While such drifts may be easily removed from the prediction system when analyzing individual variables, a major problem prevails for multivariate issues and, especially, when the output of the global prediction system shall be used for dynamical downscaling. In this study, we present a statistical approach to remove climate drifts in a multivariate context and demonstrate the effect of this drift correction on regional climate model simulations over the Euro-Atlantic sector. The statistical approach is based on an empirical orthogonal function (EOF) analysis adapted to a very large data matrix. The climate drift emerges as a dramatic cooling trend in North Atlantic sea surface temperatures (SSTs) and is captured by the leading EOF of the multivariate output from the global prediction system, accounting for 7.7% of total variability. The SST cooling pattern also imposes drifts in various atmospheric variables and levels. The removal of the first EOF effectuates the drift correction while retaining other components of intra-annual, inter-annual and decadal variability. In the regional climate model, the multivariate drift correction of the input data removes the cooling trends in most western European land regions and systematically reduces the discrepancy between the output of the regional climate model and observational data. In contrast, removing the drift only in the SST field from the global model has hardly any positive effect on the regional climate model.
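The core of the drift-correction procedure described above — capture the drift in the leading EOF of the multivariate anomaly matrix and subtract that mode while retaining higher-order variability — can be sketched in a few lines of numpy. This is an illustrative sketch on synthetic data under our own assumptions, not the authors' code; the drift pattern and amplitudes here are invented for demonstration.

```python
import numpy as np

# Illustrative sketch (not the authors' code): remove the leading EOF mode
# from a multivariate climate data matrix to correct a common drift signal.
# Rows = time steps, columns = stacked grid points / variables.
rng = np.random.default_rng(0)
n_time, n_space = 120, 500

drift_pattern = rng.normal(size=n_space)        # hypothetical spatial pattern
drift_series = np.linspace(0.0, -2.0, n_time)   # cooling-trend amplitude
noise = rng.normal(scale=0.5, size=(n_time, n_space))
data = np.outer(drift_series, drift_pattern) + noise

anom = data - data.mean(axis=0)                 # anomalies about the time mean
# SVD of the anomaly matrix: right singular vectors are the EOFs, and the
# left singular vectors scaled by the singular values are the PC time series.
u, s, vt = np.linalg.svd(anom, full_matrices=False)
explained = s**2 / np.sum(s**2)

# Drift correction: subtract the projection onto the leading EOF only,
# retaining all higher-order modes of variability.
corrected = anom - np.outer(u[:, 0] * s[0], vt[0])

print(f"leading EOF explains {explained[0]:.1%} of variance")
```

After the subtraction, the corrected field is exactly orthogonal to the removed EOF, while every other mode is untouched — which is why individual-variable statistics other than the drift are preserved.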
Chipps, S.R.; Einfalt, L.M.; Wahl, David H.
2000-01-01
We measured growth of age-0 tiger muskellunge as a function of ration size (25, 50, 75, and 100% C(max)) and water temperature (7.5-25°C) and compared experimental results with those predicted from a bioenergetic model. Discrepancies between actual and predicted values varied appreciably with water temperature and growth rate. On average, model output overestimated winter consumption rates at 10 and 7.5°C by 113 to 328%, respectively, whereas model predictions in summer and autumn (20-25°C) were in better agreement with actual values (4 to 58%). We postulate that variation in model performance was related to seasonal changes in esocid metabolic rate, which were not accounted for in the bioenergetic model. Moreover, accuracy of model output varied with feeding and growth rate of tiger muskellunge. The model performed poorly for fish fed low rations compared with fish fed ad libitum rations, which we attribute, in part, to the influence of growth rate on the accuracy of bioenergetic predictions. Based on modeling simulations, we found that errors associated with bioenergetic parameters had more influence on model output when growth rate was low, which is consistent with our observations. In addition, reduced conversion efficiency at high ration levels may contribute to variable model performance, implying that waste losses should be modeled as a function of ration size for esocids. Our findings support earlier field tests of the esocid bioenergetic model and indicate that food consumption is generally overestimated by the model, particularly in winter months and for fish exhibiting low feeding and growth rates.
THRESHOLD LOGIC IN ARTIFICIAL INTELLIGENCE
COMPUTER LOGIC, ARTIFICIAL INTELLIGENCE, BIONICS, GEOMETRY, INPUT OUTPUT DEVICES, LINEAR PROGRAMMING, MATHEMATICAL LOGIC, MATHEMATICAL PREDICTION, NETWORKS, PATTERN RECOGNITION, PROBABILITY, SWITCHING CIRCUITS, SYNTHESIS
Update to the NASA Lewis Ice Accretion Code LEWICE
NASA Technical Reports Server (NTRS)
Wright, William B.
1994-01-01
This report is intended as an update to NASA CR-185129, 'User's Manual for the NASA Lewis Ice Accretion Prediction Code (LEWICE).' It describes modifications and improvements made to the code, as well as changes to the input and output files, interactive input, and graphics output. Agreement between the code and experimental data is shown to have improved as a result of these modifications.
Statistical Surrogate Models for Estimating Probability of High-Consequence Climate Change
NASA Astrophysics Data System (ADS)
Field, R.; Constantine, P.; Boslough, M.
2011-12-01
We have posed the climate change problem in a framework similar to that used in safety engineering, by acknowledging that probabilistic risk assessments focused on low-probability, high-consequence climate events are perhaps more appropriate than studies focused simply on best estimates. Properly exploring the tails of the distribution requires extensive sampling, which is not possible with existing coupled atmospheric models due to the high computational cost of each simulation. We have developed specialized statistical surrogate models (SSMs) that can be used to make predictions about the tails of the associated probability distributions. An SSM differs from a deterministic surrogate model in that it represents each climate variable of interest as a space/time random field, that is, a random variable at every fixed location in the atmosphere at all times. The SSM can be calibrated to available spatial and temporal data from existing climate databases, or to a collection of outputs from general circulation models. Because of its reduced size and complexity, realizing a large number of independent model outputs from an SSM is computationally straightforward, so that quantifying the risk associated with low-probability, high-consequence climate events becomes feasible. A Bayesian framework was also developed to provide quantitative measures of confidence, via Bayesian credible intervals, for assessing these risks. To illustrate the use of the SSM, we considered two collections of NCAR CCSM 3.0 output data. The first corresponds to average December surface temperature for the years 1990-1999, based on a collection of 8 different model runs obtained from the Program for Climate Model Diagnosis and Intercomparison (PCMDI). We calibrated the surrogate model to the available model data and made various point predictions. We also analyzed average precipitation rate in June, July, and August over a 54-year period assuming a cyclic Y2K ocean model.
We applied the calibrated surrogate model to study the probability that the precipitation rate falls below certain thresholds and utilized the Bayesian approach to quantify our confidence in these predictions. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
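The last step above — estimating the probability that a climate variable falls below a threshold, with a Bayesian credible interval on that probability — can be illustrated in miniature. This is a hedged sketch of the general idea, not Sandia's SSM: the synthetic "realizations" stand in for surrogate-model draws, and the Jeffreys Beta posterior on the exceedance count is our own choice of Bayesian machinery for the illustration.

```python
import numpy as np

# Hedged illustration (not the SSM itself): given an ensemble of surrogate
# realizations of a climate variable, estimate the probability that it falls
# below a threshold, with a Bayesian credible interval on that probability.
rng = np.random.default_rng(1)

# Stand-in for surrogate-model realizations of a precipitation rate.
realizations = rng.gamma(shape=4.0, scale=0.5, size=2000)
threshold = 1.0

k = int(np.sum(realizations < threshold))   # realizations below threshold
n = realizations.size
p_hat = k / n

# Jeffreys posterior on the exceedance probability: Beta(k + 1/2, n - k + 1/2).
# A 95% credible interval via Monte Carlo draws from the posterior.
posterior = rng.beta(k + 0.5, n - k + 0.5, size=100_000)
lo, hi = np.quantile(posterior, [0.025, 0.975])

print(f"P(rate < {threshold}) ~ {p_hat:.3f}  (95% CI [{lo:.3f}, {hi:.3f}])")
```

The width of the credible interval shrinks with the number of surrogate realizations — which is exactly why a cheap SSM makes tail-risk quantification feasible where direct coupled-model sampling does not.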
Re-thinking the role of motor cortex: Context-sensitive motor outputs?
Gandolla, Marta; Ferrante, Simona; Molteni, Franco; Guanziroli, Eleonora; Frattini, Tiziano; Martegani, Alberto; Ferrigno, Giancarlo; Friston, Karl; Pedrocchi, Alessandra; Ward, Nick S.
2014-01-01
The standard account of motor control considers descending outputs from primary motor cortex (M1) as motor commands and efference copy. This account has recently been challenged by an alternative formulation in terms of active inference: M1 is considered part of a sensorimotor hierarchy providing top-down proprioceptive predictions. The key difference between these accounts is that predictions are sensitive to the current proprioceptive context, whereas efference copy is not. Using functional electrical stimulation (FES) to experimentally manipulate proprioception during voluntary movement in healthy human subjects, we assessed the evidence for context-sensitive output from M1. Dynamic causal modeling of functional magnetic resonance imaging responses showed that FES-altered proprioception increased the influence of M1 on primary somatosensory cortex (S1). These results disambiguate competing accounts of motor control, provide some insight into the synaptic mechanisms of sensory attenuation, and may speak to potential mechanisms of action of FES in promoting motor learning in neurorehabilitation. PMID:24440530
Applications of information theory, genetic algorithms, and neural models to predict oil flow
NASA Astrophysics Data System (ADS)
Ludwig, Oswaldo; Nunes, Urbano; Araújo, Rui; Schnitman, Leizer; Lepikson, Herman Augusto
2009-07-01
This work introduces a new information-theoretic methodology for choosing variables and their time lags in a prediction setting, particularly when neural networks are used in non-linear modeling. The first contribution of this work is the Cross Entropy Function (XEF), proposed to select input variables and their lags in order to compose the input vector of black-box prediction models. The proposed XEF method is more appropriate than the commonly applied Cross Correlation Function (XCF) when the relationship between the input and output signals arises from a non-linear dynamic system. The second contribution is a method that minimizes the Joint Conditional Entropy (JCE) between the input and output variables by means of a Genetic Algorithm (GA). The aim is to take into account the dependence among the input variables when selecting the most appropriate set of inputs for a prediction problem. In short, these methods can be used to assist in the selection of input training data that carry the information necessary to predict the target data. The proposed methods are applied to a petroleum engineering problem: predicting oil production. Experimental results obtained with a real-world dataset demonstrate the feasibility and effectiveness of the methods.
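The idea of scoring candidate input lags by an information-theoretic criterion rather than by linear cross-correlation can be sketched as follows. This is an illustrative example in the spirit of the approach described above, not the authors' XEF: the estimator (a plug-in mutual information estimate from a 2-D histogram) and the synthetic non-linear system are our own assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(2)

def mutual_information(x, y, bins=16):
    """Plug-in mutual information estimate (in nats) from a 2-D histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))

# Synthetic non-linear dynamic system: the output depends on the input
# through a tanh non-linearity at a single hidden lag.
n, true_lag = 5000, 3
x = rng.normal(size=n)
y = np.empty(n)
y[true_lag:] = np.tanh(x[:-true_lag]) + 0.1 * rng.normal(size=n - true_lag)
y[:true_lag] = rng.normal(size=true_lag)   # pad the start with noise

# Score each candidate lag by the mutual information between the lagged
# input and the output; a linear cross-correlation would work here too,
# but MI also detects dependencies that correlation misses.
scores = {lag: mutual_information(x[:-lag], y[lag:]) for lag in range(1, 7)}
best = max(scores, key=scores.get)
print(f"selected lag: {best}")
```

The selected lags (and variables) would then compose the input vector of the black-box prediction model, as described in the abstract.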