Reflection on a linear regression trip production modelling method for ensuring good model quality
NASA Astrophysics Data System (ADS)
Suprayitno, Hitapriya; Ratnasari, Vita
2017-11-01
Transport modelling is important. For certain cases the conventional model must still be used, in which a good trip production model is essential. A good model can only be obtained from a good sample. Two basic principles of good sampling are that the sample must be able to represent the population characteristics and must produce an acceptable error at a given confidence level. These principles do not yet seem to be well understood or applied in trip production modelling. It is therefore necessary to investigate trip production modelling practice in Indonesia and to formulate a better modelling method for ensuring model quality. The research results are as follows. Statistics provides a method for calculating the span of a predicted value at a given confidence level for linear regression, called the confidence interval of the predicted value. Common modelling practice uses R2 as the principal quality measure, while sampling practice varies and does not always conform to sampling principles. An experiment indicates that a small sample can already give an excellent R2 value and that sample composition can significantly change the model. Hence a good R2 value does not always mean good model quality. These findings lead to three basic ideas for ensuring good model quality: reformulating the quality measure, the calculation procedure, and the sampling method. The quality measure is defined as having both a good R2 value and a good confidence interval of the predicted value. The calculation procedure must incorporate the statistical calculation method and the appropriate statistical tests. A good sampling method must use random, well-distributed stratified sampling with a certain minimum sample size. These three ideas need further development and testing.
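The confidence interval of a predicted value discussed above follows directly from the standard regression formulas. A minimal sketch in plain Python (the function name is illustrative, and a large-sample normal quantile is used in place of the Student t quantile):

```python
import math
from statistics import NormalDist

def mean_response_ci(x, y, x0, conf=0.95):
    # Confidence interval for the mean response of simple linear
    # regression at a new point x0.
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    b0 = ybar - b1 * xbar
    resid = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
    s = math.sqrt(sum(r * r for r in resid) / (n - 2))  # residual std error
    z = NormalDist().inv_cdf(0.5 + conf / 2)  # normal approx to t quantile
    half = z * s * math.sqrt(1 / n + (x0 - xbar) ** 2 / sxx)
    y0 = b0 + b1 * x0
    return y0 - half, y0 + half
```

The interval widens as x0 moves away from the sample mean, which is why a model with a good R2 can still have a poor confidence interval away from the bulk of the sample.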
Chen, Shangying; Zhang, Peng; Liu, Xin; Qin, Chu; Tao, Lin; Zhang, Cheng; Yang, Sheng Yong; Chen, Yu Zong; Chui, Wai Keung
2016-06-01
The overall efficacy and safety profile of a new drug is partially evaluated by the therapeutic index in clinical studies and by the protective index (PI) in preclinical studies. In-silico predictive methods may facilitate the assessment of these indicators. Although QSAR and QSTR models can be used for predicting PI, their predictive capability has not been evaluated. To test this capability, we developed QSAR and QSTR models for predicting the activity and toxicity of anticonvulsants at accuracy levels above the literature-reported threshold (LT) of good QSAR models, as tested by both internal 5-fold cross validation and an external validation method. These models showed significantly compromised PI predictive capability due to the cumulative errors of the QSAR and QSTR models. Therefore, in this investigation a new quantitative structure-index relationship (QSIR) model was devised, and it showed improved PI predictive capability that exceeded the LT of good QSAR models. The QSAR, QSTR and QSIR models were developed using the support vector regression (SVR) method, with the parameters optimized by the greedy search method. The molecular descriptors relevant to the prediction of anticonvulsant activities, toxicities and PIs were analyzed by a recursive feature elimination method. The selected molecular descriptors are primarily associated with drug-like, pharmacological and toxicological features and with those used in the published anticonvulsant QSAR and QSTR models. This study suggests that QSIR is useful for estimating the therapeutic index of drug candidates.
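The recursive feature elimination step described above can be sketched in a few lines. The sketch below uses ordinary least squares as a stand-in for the paper's SVR estimator; the function name and the rank-by-coefficient rule are illustrative assumptions:

```python
import numpy as np

def rfe_select(X, y, n_keep):
    # Recursive feature elimination: repeatedly refit the model and
    # drop the feature with the smallest absolute coefficient until
    # only n_keep features remain.
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        coef, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
        keep.pop(int(np.argmin(np.abs(coef))))
    return keep
```

Any estimator that exposes per-feature weights (such as a linear-kernel SVR) can play the role of the inner model in this loop.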
NASA Astrophysics Data System (ADS)
Shin, Yung C.; Bailey, Neil; Katinas, Christopher; Tan, Wenda
2018-05-01
This paper presents an overview of vertically integrated, comprehensive predictive modeling capabilities for directed energy deposition processes, developed at Purdue University. The overall predictive models consist of several vertically integrated modules, including a powder flow model, a molten pool model, a microstructure prediction model and a residual stress model, which can be used to predict the mechanical properties of parts additively manufactured by directed energy deposition with blown powder, as well as by other additive manufacturing processes. The critical governing equations of each model and the connections between the modules are illustrated. Illustrative results, along with corresponding experimental validation, are presented to demonstrate the capabilities and fidelity of the models. The good correlation with experimental results shows that the integrated models can be used to design metal additive manufacturing processes and to predict the resultant microstructure and mechanical properties.
Predictions of the electro-mechanical response of conductive CNT-polymer composites
NASA Astrophysics Data System (ADS)
Matos, Miguel A. S.; Tagarielli, Vito L.; Baiz-Villafranca, Pedro M.; Pinho, Silvestre T.
2018-05-01
We present finite element simulations to predict the conductivity, elastic response and strain-sensing capability of conductive composites comprising a polymeric matrix and carbon nanotubes. Realistic representative volume elements (RVE) of the microstructure are generated and both constituents are modelled as linear elastic solids, with resistivity independent of strain; the electrical contact between nanotubes is represented by a new element which accounts for quantum tunnelling effects and captures the sensitivity of conductivity to separation. Monte Carlo simulations are conducted and the sensitivity of the predictions to RVE size is explored. Predictions of modulus and conductivity are found to be in good agreement with published results. The strain-sensing capability of the material is explored for multiaxial strain states.
Developing a predictive tropospheric ozone model for Tabriz
NASA Astrophysics Data System (ADS)
Khatibi, Rahman; Naghipour, Leila; Ghorbani, Mohammad A.; Smith, Michael S.; Karimi, Vahid; Farhoudi, Reza; Delafrouz, Hadi; Arvanaghi, Hadi
2013-04-01
Predictive ozone models are becoming indispensable tools, providing a pollution-alert capability for people who are vulnerable to the risks. We have developed a tropospheric ozone prediction capability for Tabriz, Iran, using five modeling strategies: three regression-type methods, Multiple Linear Regression (MLR), Artificial Neural Networks (ANNs), and Gene Expression Programming (GEP); and two auto-regression-type models, Nonlinear Local Prediction (NLP), implementing chaos theory, and Auto-Regressive Integrated Moving Average (ARIMA). The regression-type strategies explain the data in terms of temperature, solar radiation, dew point temperature, and wind speed; the auto-regression-type models regress present ozone values on their past values. The ozone time series are available at various time intervals, including hourly intervals, from August 2010 to March 2011. The results for the MLR, ANN and GEP models are not especially good, but those produced by NLP and ARIMA are promising for establishing a forecasting capability.
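The auto-regressive idea behind the ARIMA strategy, regressing present values on past values, can be sketched with a pure autoregressive model fitted by least squares. This is a simplification of full ARIMA (no differencing or moving-average terms), and the function names are illustrative:

```python
import numpy as np

def fit_ar(x, p):
    # Least-squares fit of x_t = a1*x_{t-1} + ... + ap*x_{t-p} + c
    n = len(x)
    X = np.column_stack([x[p - 1 - k:n - 1 - k] for k in range(p)]
                        + [np.ones(n - p)])
    coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return coef  # (a1, ..., ap, c)

def forecast(x, coef, steps):
    # Roll the fitted recurrence forward to produce out-of-sample values.
    p = len(coef) - 1
    hist = list(x)
    for _ in range(steps):
        hist.append(coef[-1] + sum(coef[k] * hist[-1 - k] for k in range(p)))
    return hist[len(x):]
```

For hourly ozone, x would be the observed concentration series and p the number of past hours regressed on.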
Energy-absorption capability of composite tubes and beams. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Farley, Gary L.; Jones, Robert M.
1989-01-01
The objective of this study was to develop a method for predicting the energy-absorption capability of composite subfloor beam structures. Before such an analysis capability can be developed, an in-depth understanding of the crushing process of composite materials must be achieved. Many variables affect the crushing process of composite structures, such as the constituent materials' mechanical properties, specimen geometry, and crushing speed. A comprehensive experimental evaluation of tube specimens was conducted to develop insight into how composite structural elements crush and what the controlling mechanisms are. Four characteristic crushing modes (transverse shearing, brittle fracturing, lamina bending, and local buckling) were identified, and the mechanisms that control the crushing process were defined. An in-depth understanding was developed of how material properties affect energy-absorption capability; for example, an increase in fiber and matrix stiffness and failure strain can, depending upon the configuration of the tube, increase energy-absorption capability. An analysis to predict the energy-absorption capability of composite tube specimens was developed and verified. Good agreement between experiment and prediction was obtained.
New developments in isotropic turbulent models for FENE-P fluids
NASA Astrophysics Data System (ADS)
Resende, P. R.; Cavadas, A. S.
2018-04-01
The evolution of viscoelastic turbulence models in recent years has been significant, driven by advances in direct numerical simulation (DNS), which have made it possible to capture the viscoelastic effects in detail and to develop viscoelastic closures. New viscoelastic closures are proposed for viscoelastic fluids described by the finitely extensible nonlinear elastic-Peterlin constitutive model. One closure, developed in the context of isotropic turbulence models, consists of a modification of the turbulent viscosity to include an elastic effect, capable of predicting the behaviour at different drag reductions with good accuracy. Another closure, essential for predicting drag reduction, relates the viscoelastic term involving the velocity and conformation tensor fluctuations. The DNS data show the high impact of this term on correctly predicting drag reduction, and for this reason a simpler closure capable of predicting the viscoelastic behaviour with good performance is proposed. In addition, a new relation is developed to predict drag reduction based on the trace of the conformation tensor at the wall, eliminating the need for the usual Weissenberg and Reynolds number parameters, which depend on the friction velocity. This opens the way to future developments for complex geometries.
Towards predicting the encoding capability of MR fingerprinting sequences.
Sommer, K; Amthor, T; Doneva, M; Koken, P; Meineke, J; Börnert, P
2017-09-01
Sequence optimization and appropriate sequence selection are still an unmet need in magnetic resonance fingerprinting (MRF). The main challenge in MRF sequence design is the lack of an appropriate measure of a sequence's encoding capability. To find such a measure, three candidates for judging encoding capability were investigated: local and global dot-product-based measures judging dictionary entry similarity, as well as a Monte Carlo method that evaluates the noise propagation properties of an MRF sequence. The consistency of these measures for different sequence lengths, as well as their capability to predict actual sequence performance in both phantom and in vivo measurements, was analyzed. While the dot-product-based measures yielded inconsistent results for different sequence lengths, the Monte Carlo method was in good agreement with phantom experiments. In particular, the Monte Carlo method could accurately predict the performance of different flip angle patterns in actual measurements. The proposed Monte Carlo method provides an appropriate measure of MRF sequence encoding capability and may be used for sequence optimization.
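A toy version of the Monte Carlo measure illustrates the idea: perturb a known signal with noise many times, match each noisy copy against the dictionary, and take the spread of the matched parameter as the noise-propagation estimate. Everything below (the exponential decay model, parameter grid, and noise level) is an illustrative assumption, not the paper's actual sequence:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1, 21)                    # toy acquisition time points
T2_grid = np.linspace(20, 120, 101)     # dictionary parameter grid (ms)

# Dictionary of normalized decay signatures, one row per T2 value.
D = np.exp(-t[None, :] / T2_grid[:, None])
D /= np.linalg.norm(D, axis=1, keepdims=True)

true_T2 = 60.0
clean = np.exp(-t / true_T2)

est = []
for _ in range(500):                    # Monte Carlo trials
    noisy = clean + rng.normal(0.0, 0.02, t.size)
    est.append(T2_grid[np.argmax(D @ noisy)])  # dot-product matching

spread = np.std(est)  # noise propagation: spread of the matched parameter
```

A sequence whose dictionary entries are more distinct yields a smaller spread under the same noise, which is the sense in which this measure judges encoding capability.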
Understanding heat and fluid flow in linear GTA welds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zacharia, T.; David, S.A.; Vitek, J.M.
1992-01-01
A transient heat flow and fluid flow model was used to predict the development of gas tungsten arc (GTA) weld pools in 1.5 mm thick AISI 304 SS. The welding parameters were chosen so as to correspond to an earlier experimental study which produced high-resolution surface temperature maps. The motivation of the present study was to verify the predictive capability of the computational model. Comparison of the numerical predictions and experimental observations indicates good agreement.
NASA Technical Reports Server (NTRS)
Lyle, Karen H.
2008-01-01
The Space Shuttle Columbia Accident Investigation Board recommended that NASA develop, validate, and maintain a modeling tool capable of predicting the damage threshold for debris impacts on the Space Shuttle Reinforced Carbon-Carbon (RCC) wing leading edge and nosecap assembly. The results presented in this paper are one part of a multi-level approach that supported the development of the predictive tool used to recertify the shuttle for flight following the Columbia Accident. The assessment of predictive capability was largely based on test analysis comparisons for simpler component structures. This paper provides comparisons of finite element simulations with test data for external tank foam debris impacts onto 6-in. square RCC flat panels. Both quantitative displacement and qualitative damage assessment correlations are provided. The comparisons show good agreement and provided the Space Shuttle Program with confidence in the predictive tool.
NASA Astrophysics Data System (ADS)
Shao, Hai; Miao, Xujuan; Liu, Jinpeng; Wu, Meng; Zhao, Xuehua
2018-02-01
Xinjiang is an area extremely rich in wind and solar energy resources, with good prospects for resource development, and can support regional power development and supply security. This paper systematically analyzes the new energy resources and development characteristics of Xinjiang and carries out demand prediction and analysis of the load characteristics of the Xinjiang power market. Combining the new energy development plan of Xinjiang and considering the construction of transmission channels, it analyzes the absorptive capability for new energy. It provides a reference for the comprehensive planning of new energy development in Xinjiang and for the improvement of new energy absorptive capacity.
NASA Technical Reports Server (NTRS)
Dunn, Mark H.; Farassat, F.
1990-01-01
The results of NASA's Propeller Test Assessment program involving extensive flight tests of a large-scale advanced propeller are presented. This has provided the opportunity to evaluate the current capability of advanced propeller noise prediction utilizing principally the exterior acoustic measurements for the prediction of exterior noise. The principal object of this study was to evaluate the state-of-the-art of noise prediction for advanced propellers utilizing the best available codes of the disciplines involved. The effects of blade deformation on the aerodynamics and noise of advanced propellers were also studied. It is concluded that blade deformation can appreciably influence propeller noise and aerodynamics, and that, in general, centrifugal and blade forces must both be included in the calculation of blade forces. It is noted that the present capability for free-field noise prediction of the first three harmonics for advanced propellers is fairly good. Detailed data and diagrams of the test results are presented.
Transonic cascade flow prediction using the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Arnone, A.; Stecco, S. S.
1991-01-01
This paper presents results which summarize the work carried out during the last three years to improve the efficiency and accuracy of numerical predictions in turbomachinery flow calculations. A new kind of nonperiodic c-type grid is presented and a Runge-Kutta scheme with accelerating strategies is used as a flow solver. The code capability is presented by testing four different blades at different exit Mach numbers in transonic regimes. Comparison with experiments shows the very good reliability of the numerical prediction. In particular, the loss coefficient seems to be correctly predicted by using the well-known Baldwin-Lomax turbulence model.
Helicopter noise prediction - The current status and future direction
NASA Technical Reports Server (NTRS)
Brentner, Kenneth S.; Farassat, F.
1992-01-01
The paper takes stock of the progress, assesses the current prediction capabilities, and forecasts the direction of future helicopter noise prediction research. The acoustic analogy approach, specifically theories based on the Ffowcs Williams-Hawkings equation, is the most widely used for deterministic noise sources. Thickness and loading noise can be routinely predicted given good blade motion and blade loading inputs. Blade-vortex interaction noise can also be predicted well with measured input data, but prediction of airloads with the high spatial and temporal resolution required for BVI is still difficult. Current semiempirical broadband noise predictions are useful and reasonably accurate. New prediction methods based on a Kirchhoff formula and direct computation appear to be very promising, but are currently very demanding computationally.
Method to predict external store carriage characteristics at transonic speeds
NASA Technical Reports Server (NTRS)
Rosen, Bruce S.
1988-01-01
Development of a computational method for prediction of external store carriage characteristics at transonic speeds is described. The geometric flexibility required for treatment of pylon-mounted stores is achieved by computing finite difference solutions on a five-level embedded grid arrangement. A completely automated grid generation procedure facilitates applications. Store modeling capability consists of bodies of revolution with multiple fore and aft fins. A body-conforming grid improves the accuracy of the computed store body flow field. A nonlinear relaxation scheme developed specifically for modified transonic small disturbance flow equations enhances the method's numerical stability and accuracy. As a result, treatment of lower aspect ratio, more highly swept and tapered wings is possible. A limited supersonic freestream capability is also provided. Pressure, load distribution, and force/moment correlations show good agreement with experimental data for several test cases. A detailed computer program description for the Transonic Store Carriage Loads Prediction (TSCLP) Code is included.
Influence of flowfield and vehicle parameters on engineering aerothermal methods
NASA Technical Reports Server (NTRS)
Wurster, Kathryn E.; Zoby, E. Vincent; Thompson, Richard A.
1989-01-01
The reliability and flexibility of three engineering codes used in the aerospace industry (AEROHEAT, INCHES, and MINIVER) were investigated by comparing the results of these codes with Reentry F flight data and ground-test heat-transfer data for a range of cone angles, and with the predictions obtained using the detailed VSL3D code; the engineering solutions were also compared with one another. In particular, the impact of several vehicle and flow-field parameters on the heat transfer, and the capability of the engineering codes to predict these results, were determined. It was found that entropy, pressure gradient, nose bluntness, gas chemistry, and angle of attack all affect heating levels. A comparison of the results of the three engineering codes with Reentry F flight data and with the predictions of the VSL3D code showed very good agreement in the regions of applicability of the codes. It is emphasized that the parameters used in this study can significantly influence the actual heating levels and the prediction capability of a code.
Modeling and predicting community responses to events using cultural demographics
NASA Astrophysics Data System (ADS)
Jaenisch, Holger M.; Handley, James W.; Hicklen, Michael L.
2007-04-01
This paper describes a novel capability for modeling and predicting community responses to events (specifically military operations) related to demographics. Demographics in the form of words and/or numbers are used. As an example, State of Alabama annual demographic data for retail sales, auto registration, wholesale trade, shopping goods, and population were used; from which we determined a ranked estimate of the sensitivity of the demographic parameters on the cultural group response. Our algorithm and results are summarized in this paper.
Effects of Barometric Fluctuations on Well Water-Level Measurements and Aquifer Test Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spane, Frank A.
1999-12-16
This report examines the effects of barometric fluctuations on well water-level measurements and evaluates adjustment and removal methods for determining areal aquifer head conditions and for aquifer test analysis. Two examples of Hanford Site unconfined aquifer tests are examined that demonstrate barometric response analysis and illustrate the predictive/removal capabilities of various methods for well water-level and aquifer total head values. Good predictive/removal characteristics were demonstrated, with the best corrective results provided by multiple-regression deconvolution methods.
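The simplest form of such a correction is a least-squares regression of water level on barometric pressure, with the fitted pressure response subtracted out. The sketch below shows that single-regressor case; the report's multiple-regression deconvolution additionally uses time-lagged pressure terms, and the names here are illustrative:

```python
import numpy as np

def remove_barometric(wl, bp):
    # Fit water level against demeaned barometric pressure plus a
    # constant, then subtract the fitted pressure-driven component.
    dbp = bp - bp.mean()
    X = np.column_stack([dbp, np.ones(len(bp))])
    coef, *_ = np.linalg.lstsq(X, wl, rcond=None)
    return wl - coef[0] * dbp
```

The slope coef[0] is an estimate of the well's barometric response; extending X with lagged copies of dbp gives the deconvolution form.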
TEMPEST code simulations of hydrogen distribution in reactor containment structures. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trent, D.S.; Eyler, L.L.
The mass transport version of the TEMPEST computer code was used to simulate hydrogen distribution in geometric configurations relevant to reactor containment structures. Predicted results of Battelle-Frankfurt hydrogen distribution tests 1 to 6, and 12 are presented. Agreement between predictions and experimental data is good. Best agreement is obtained using the k-epsilon turbulence model in TEMPEST in flow cases where turbulent diffusion and stable stratification are dominant mechanisms affecting transport. The code's general analysis capabilities are summarized.
NASA Technical Reports Server (NTRS)
Suzen, Y. B.; Huang, P. G.; Ashpis, D. E.; Volino, R. J.; Corke, T. C.; Thomas, F. O.; Huang, J.; Lake, J. P.; King, P. I.
2007-01-01
A transport equation for the intermittency factor is employed to predict transitional flows in low-pressure turbines. The intermittent behavior of the transitional flows is taken into account and incorporated into the computations by modifying the eddy viscosity, μ_p, with the intermittency factor, γ. Turbulent quantities are predicted using Menter's two-equation turbulence model (SST). The intermittency factor is obtained from a transport equation model which can produce both the experimentally observed streamwise variation of intermittency and a realistic profile in the cross-stream direction. The model had previously been validated against low-pressure turbine experiments with success. In this paper, the model is applied to predictions of three sets of recent low-pressure turbine experiments on the Pack B blade to further validate its predictive capabilities under various flow conditions. Comparisons of computational results with experimental data are provided. Overall, good agreement between the experimental data and computational results is obtained. The new model has been shown to be capable of accurately predicting transitional flows under a wide range of low-pressure turbine conditions.
NASA Astrophysics Data System (ADS)
Norinder, Ulf
1990-12-01
An experimental design based 3-D QSAR analysis using a combination of principal component and PLS analysis is presented and applied to human corticosteroid-binding globulin complexes. The predictive capability of the created model is good. The technique can also be used as guidance when selecting new compounds to be investigated.
Quantifying confidence in density functional theory predictions of magnetic ground states
NASA Astrophysics Data System (ADS)
Houchins, Gregory; Viswanathan, Venkatasubramanian
2017-10-01
Density functional theory (DFT) simulations, at the generalized gradient approximation (GGA) level, are being routinely used for material discovery based on high-throughput descriptor-based searches. The success of descriptor-based material design relies on eliminating bad candidates and keeping good candidates for further investigation. While DFT has been widely successful at the former, good candidates are often lost due to the uncertainty associated with the DFT-predicted material properties. Uncertainty associated with DFT predictions has gained prominence and has led to the development of exchange correlation functionals that have built-in error estimation capability. In this work, we demonstrate the use of the built-in error estimation capability of the BEEF-vdW exchange correlation functional for quantifying the uncertainty associated with the magnetic ground state of solids. We demonstrate this approach by calculating the uncertainty estimate for the energy difference between the different magnetic states of solids and compare it against a range of GGA exchange correlation functionals, as is done in many first-principles calculations of materials. We show that this estimate reasonably bounds the range of values obtained with the different GGA functionals. The estimate is determined as a postprocessing step and thus provides a computationally robust and systematic approach to estimating the uncertainty associated with predictions of magnetic ground states. We define a confidence value (c-value) that incorporates all calculated magnetic states in order to quantify the concurrence of the prediction at the GGA level, and argue that predictions of magnetic ground states from GGA-level DFT are incomplete without an accompanying c-value. We demonstrate the utility of this method using a case study of Li-ion and Na-ion cathode materials, and the c-value metric correctly identifies that GGA-level DFT will have low predictability for NaFePO4F.
Further, there needs to be a systematic test of a collection of plausible magnetic states, especially in identifying antiferromagnetic (AFM) ground states. We believe that our approach of estimating uncertainty can be readily incorporated into all high-throughput computational material discovery efforts and this will lead to a dramatic increase in the likelihood of finding good candidate materials.
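The c-value idea can be illustrated with a toy ensemble: given ensemble energies for two magnetic states, the c-value is the fraction of ensemble members that agree on which state lies lower. The numbers below are synthetic stand-ins; a real BEEF-vdW calculation generates the ensemble non-self-consistently from the functional's built-in error estimation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic ensemble energies (eV) for two magnetic states of a solid.
E_fm = -12.30 + rng.normal(0.0, 0.05, 2000)
E_afm = -12.35 + rng.normal(0.0, 0.05, 2000)

dE = E_afm - E_fm                   # negative: member prefers AFM
c_value = float(np.mean(dE < 0))    # agreement with the AFM ground state
```

A c-value near 1 signals a robust prediction, while values near 0.5 would flag low-predictability cases of the kind the study identifies.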
Ball Bearing Analysis with the ORBIS Tool
NASA Technical Reports Server (NTRS)
Halpin, Jacob D.
2016-01-01
Ball bearing design is critical to the success of aerospace mechanisms. Key bearing performance parameters, such as load capability, stiffness, torque, and life, all depend on accurate determination of the internal load distribution. Hence, a good analytical bearing tool that provides both comprehensive capabilities and reliable results is a significant asset to the engineer. This paper introduces the ORBIS bearing tool. A discussion of key modeling assumptions and a technical overview are provided. Numerous validation studies and case studies using the ORBIS tool are presented. All results suggest that the ORBIS code correlates closely with predictions of bearing internal load distribution, stiffness, deflection and stresses.
Incident Energy Focused Design and Validation for the Floating Potential Probe
NASA Technical Reports Server (NTRS)
Fincannon, James
2002-01-01
Utilizing the spacecraft shadowing and incident energy analysis capabilities of the NASA Glenn Research Center Power and Propulsion Office's SPACE (System Power Analysis for Capability Evaluation) computer code, this paper documents the analyses for various International Space Station (ISS) Floating Potential Probe (FPP) preliminary design options. These options include various solar panel orientations and configurations, as well as deployment locations on the ISS. The incident energy for the final selected option is characterized. Good correlation between the predicted data and on-orbit operational telemetry is demonstrated. Minor deviations are postulated to be induced by degradation or sensor drift.
Analysis of Discrete-Source Damage Progression in a Tensile Stiffened Composite Panel
NASA Technical Reports Server (NTRS)
Wang, John T.; Lotts, Christine G.; Sleight, David W.
1999-01-01
This paper demonstrates the progressive failure analysis capability of NASA Langley's COMET-AR finite element analysis code on a large-scale built-up composite structure. A large-scale five-stringer composite panel with 7-in.-long discrete-source damage was analyzed from initial loading to final failure, including geometric and material nonlinearities. Predictions using different mesh sizes, different saw-cut modeling approaches, and different failure criteria were performed and assessed. All failure predictions correlate reasonably well with the test result.
Norinder, U; Högberg, T
1992-04-01
The advantageous approach of using an experimentally designed training set as the basis for establishing a quantitative structure-activity relationship with good predictive capability is described. The training set was selected from a fractional factorial design scheme based on a principal component description of physico-chemical parameters of aromatic substituents. The derived model successfully predicts the activities of additional substituted benzamides of 6-methoxy-N-(4-piperidyl)salicylamide type. The major influence on activity of the 3-substituent is demonstrated.
Innovation value chain capability in Malaysian-owned company: A theoretical framework
NASA Astrophysics Data System (ADS)
Abidin, Norkisme Zainal; Suradi, Nur Riza Mohd
2014-09-01
Good-quality products or services are no longer adequate to guarantee the sustainability of a company in today's competitive business environment. Prior research has developed various innovation models in the hope of better understanding the innovativeness of companies. Owing to the countless definitions, indicators, factors, parameters and approaches in the study of innovation, it is difficult to determine which will best suit the innovativeness of Malaysian-owned companies. This paper aims to provide a theoretical background to support a framework for innovation value chain capability in Malaysian-owned companies. The theoretical framework was based on literature reviews, expert interviews and a focus group study. The framework will be used to predict and assess innovation value chain capability in Malaysian-owned companies.
Design, development and test of a capillary pump loop heat pipe
NASA Technical Reports Server (NTRS)
Kroliczek, E. J.; Ku, J.; Ollendorf, S.
1984-01-01
The development of a capillary pump loop (CPL) heat pipe, including computer modeling and breadboard testing, is presented. The computer model is a SINDA-type thermal analyzer, combined with a pressure analyzer, which predicts the transients of the CPL heat pipe during operation. The breadboard is an aluminum/ammonia transport system which contains multiple parallel evaporator and condenser zones within a single loop. Test results have demonstrated the practicality and reliability of such a design, including heat load sharing among evaporators, a liquid inventory/temperature control feature, and priming under load. Transport capability for this system is 65 kW-m, with individual evaporator pumps managing up to 1.7 kW at a heat flux of 15 W/sq cm. The prediction of the computer model for heat transport capabilities is in good agreement with experimental results.
Evaluation of icing drag coefficient correlations applied to iced propeller performance prediction
NASA Technical Reports Server (NTRS)
Miller, Thomas L.; Shaw, R. J.; Korkan, K. D.
1987-01-01
Evaluation of three empirical icing drag coefficient correlations is accomplished through application to a set of propeller icing data. The various correlations represent the best means currently available for relating drag rise to various flight and atmospheric conditions for both fixed-wing and rotating airfoils, and the work presented here illustrates and evaluates one such application to the latter case. The origins of each of the correlations are discussed, and their apparent capabilities and limitations are summarized. These correlations have been made an integral part of a computer code, ICEPERF, which has been designed to calculate iced propeller performance. Comparison with experimental propeller icing data shows generally good agreement, with the quality of the predicted results seen to be directly related to the radial icing extent of each case. The code's capability to properly predict thrust coefficient, power coefficient, and propeller efficiency is shown to be strongly dependent on the choice of correlation selected, as well as upon proper specification of radial icing extent.
NASA Astrophysics Data System (ADS)
Labahn, Jeffrey William; Devaud, Cecile
2017-05-01
A Reynolds-Averaged Navier-Stokes (RANS) simulation of the semi-industrial International Flame Research Foundation (IFRF) furnace is performed using a non-adiabatic Conditional Source-term Estimation (CSE) formulation. This represents the first time that a CSE formulation, which accounts for the effect of radiation on the conditional reaction rates, has been applied to a large scale semi-industrial furnace. The objective of the current study is to assess the capabilities of CSE to accurately reproduce the velocity field, temperature, species concentration and nitrogen oxides (NOx) emission for the IFRF furnace. The flow field is solved using the standard k-ε turbulence model and detailed chemistry is included. NOx emissions are calculated using two different methods. Predicted velocity profiles are in good agreement with the experimental data. The predicted peak temperature occurs closer to the centreline, as compared to the experimental observations, suggesting that the mixing between the fuel jet and vitiated air jet may be overestimated. Good agreement between the species concentrations, including NOx, and the experimental data is observed near the burner exit. Farther downstream, the centreline oxygen concentration is found to be underpredicted. Predicted NOx concentrations are in good agreement with experimental data when calculated using the method of Peters and Weber. The current study indicates that RANS-CSE can accurately predict the main characteristics seen in a semi-industrial IFRF furnace.
Evaluation of soft rubber goods [for use as O-rings and seals on space shuttle]
NASA Technical Reports Server (NTRS)
Merz, P. L.
1974-01-01
The performance of rubber goods suitable for use as O-rings, seals, gaskets, bladders and diaphragms under conditions simulating those of the space shuttle were studied. High reliability throughout the 100 flight missions planned for the space shuttle was considered of overriding importance. Accordingly, in addition to a rank ordering of the selected candidate materials based on prolonged fluid compatibility and sealability behavior, basic rheological parameters (such as cyclic hysteresis, stress relaxation, indicated modulus, etc.) were determined to develop methods capable of predicting the cumulative effect of these multiple reuse cycles.
NASA Astrophysics Data System (ADS)
Wrożyna, Andrzej; Pernach, Monika; Kuziak, Roman; Pietrzyk, Maciej
2016-04-01
Due to their exceptional strength properties combined with good workability, Advanced High-Strength Steels (AHSS) are commonly used in the automotive industry. Manufacturing of these steels is a complex process which requires precise control of technological parameters during thermo-mechanical treatment. Design of these processes can be significantly improved by numerical models of phase transformations. The objective of the paper was to evaluate the predictive capabilities of such models with regard to the simulation of thermal cycles for AHSS. Two models were considered. The former was an upgrade of the JMAK equation while the latter was an upgrade of the Leblond model. The models can be applied to any AHSS, though the examples quoted in the paper refer to Dual Phase (DP) steel. Three series of experimental simulations were performed. The first included various thermal cycles going beyond the limitations of continuous annealing lines; the objective was to validate the models' behavior under more complex cooling conditions. The second set of tests included experimental simulations of the thermal cycle characteristic of continuous annealing lines, and the capability of the models to properly describe phase transformations in this process was evaluated. The third set included data from an industrial continuous annealing line. Validation and verification confirmed the good predictive capabilities of both models. Since it does not require application of the additivity rule, the upgraded Leblond model was selected as the better one for simulation of industrial processes in AHSS production.
Pulsed CO2 characterization for lidar use
NASA Technical Reports Server (NTRS)
Jaenisch, Holger M.
1992-01-01
An account is given of a scaled functional testbed for space-qualified coherent-detection lidar applications which employs a CO2 laser. The laser has undergone modification and characterization of its inherent performance capabilities as a coherent-detection model. While the characterization results show good overall performance in agreement with theoretical predictions, frequency-stability and pulse-length limitations severely restrict the laser's use in coherent detection.
A pan-African medium-range ensemble flood forecast system
NASA Astrophysics Data System (ADS)
Thiemig, Vera; Bisselink, Bernard; Pappenberger, Florian; Thielen, Jutta
2015-04-01
The African Flood Forecasting System (AFFS) is a probabilistic flood forecast system for medium- to large-scale African river basins, with lead times of up to 15 days. The key components are the hydrological model LISFLOOD, the African GIS database, the meteorological ensemble predictions of the ECMWF and critical hydrological thresholds. In this study the predictive capability is investigated, to estimate AFFS' potential as an operational flood forecasting system for the whole of Africa. This is done in a hindcast mode, by reproducing pan-African hydrological predictions for the whole year of 2003 where important flood events were observed. Results were analysed in two ways, each with its individual objective. The first part of the analysis is of paramount importance for the assessment of AFFS as a flood forecasting system, as it focuses on the detection and prediction of flood events. Here, results were verified with reports of various flood archives such as Dartmouth Flood Observatory, the Emergency Event Database, the NASA Earth Observatory and Reliefweb. The number of hits, false alerts and missed alerts as well as the Probability of Detection, False Alarm Rate and Critical Success Index were determined for various conditions (different regions, flood durations, average amount of annual precipitations, size of affected areas and mean annual discharge). The second part of the analysis complements the first by giving a basic insight into the prediction skill of the general streamflow. For this, hydrological predictions were compared against observations at 36 key locations across Africa and the Continuous Rank Probability Skill Score (CRPSS), the limit of predictability and reliability were calculated. Results showed that AFFS detected around 70 % of the reported flood events correctly. 
In particular, the system showed good performance in predicting riverine flood events of long duration (> 1 week) and large affected areas (> 10 000 km2) well in advance, whereas AFFS showed limitations for small-scale and short-duration flood events. The forecasts also showed good reliability on average, and the CRPSS helped identify regions to focus on for future improvements. The case study of the flood event of March 2003 in the Sabi Basin (Zimbabwe and Mozambique) illustrated the good performance of AFFS in forecasting the timing and severity of the floods, gave an example of the clear and concise output products, and showed that the system is capable of producing flood warnings even in ungauged river basins. Hence, from a technical perspective, AFFS shows good prospects as an operational system, as it has demonstrated significant potential to contribute to the reduction of flood-related losses in Africa by providing national and international aid organizations with timely medium-range flood forecast information. However, issues related to the practical implementation still need to be investigated.
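The categorical verification scores named above (Probability of Detection, False Alarm Rate and Critical Success Index) follow directly from the contingency counts of hits, false alerts and missed alerts; a minimal sketch, with counts that are illustrative rather than the study's actual tallies:

```python
def contingency_scores(hits, false_alarms, misses):
    """Categorical verification scores from a 2x2 contingency table:
    Probability of Detection, False Alarm Rate (false-alarm ratio),
    and Critical Success Index."""
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    csi = hits / (hits + false_alarms + misses)
    return pod, far, csi

# Illustrative counts only: a system detecting ~70 % of events,
# roughly as AFFS did overall.
pod, far, csi = contingency_scores(hits=70, false_alarms=20, misses=30)
print(round(pod, 2), round(far, 2), round(csi, 2))
```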
Application of Interface Technology in Progressive Failure Analysis of Composite Panels
NASA Technical Reports Server (NTRS)
Sleight, D. W.; Lotts, C. G.
2002-01-01
A progressive failure analysis capability using interface technology is presented. The capability has been implemented in the COMET-AR finite element analysis code developed at the NASA Langley Research Center and is demonstrated on composite panels. The composite panels are analyzed for damage initiation and propagation from initial loading to final failure using a progressive failure analysis capability that includes both geometric and material nonlinearities. Progressive failure analyses are performed on conventional models and interface technology models of the composite panels. Analytical results and the computational effort of the analyses are compared for the conventional models and interface technology models. The analytical results predicted with the interface technology models are in good correlation with the analytical results using the conventional models, while significantly reducing the computational effort.
Thermal Analysis of Small Re-Entry Probe
NASA Technical Reports Server (NTRS)
Agrawal, Parul; Prabhu, Dinesh K.; Chen, Y. K.
2012-01-01
The Small Probe Reentry Investigation for TPS Engineering (SPRITE) concept was developed at NASA Ames Research Center to facilitate arc-jet testing of a fully instrumented prototype probe at flight scale. Besides demonstrating the feasibility of testing a flight-scale model and the capability of an on-board data acquisition system, another objective for this project was to investigate the capability of simulation tools to predict thermal environments of the probe/test article and its interior. This paper focuses on finite-element thermal analyses of the SPRITE probe during the arc-jet tests. Several iterations were performed during the early design phase to provide critical design parameters and guidelines for testing. The thermal effects of ablation and pyrolysis were incorporated into the final higher-fidelity modeling approach by coupling the finite-element analyses with a two-dimensional thermal protection materials response code. Model predictions show good agreement with thermocouple data obtained during the arc-jet test.
Heat transfer in a real engine environment
NASA Astrophysics Data System (ADS)
Gladden, Herbert J.
1985-10-01
The hot section facility at the Lewis Research Center was used to demonstrate the capability of instruments to make required measurements of the flow field boundary conditions and heat transfer processes in the hostile environment of the turbine. The results of thermal scaling tests show that low-temperature, low-pressure rig tests give optimistic estimates of the thermal performance of a cooling design for high-pressure, high-temperature application. Heat transfer coefficients measured on turbine vane airfoils through dynamic data analysis show good agreement with measurements from steady-state heat flux gauges. In addition, the data trends are predicted by the STAN5 boundary layer code; however, the magnitude of the experimental data was not predicted by the analysis, particularly in the laminar and transitional regions near the leading edge. The infrared photography system was shown capable of resolving detailed surface thermal gradients and secondary flow features on a turbine vane and endwall.
NASA Technical Reports Server (NTRS)
Muffoletto, A. J.
1982-01-01
An aerodynamic computer code, capable of predicting unsteady C sub N and C sub m values for an airfoil undergoing dynamic stall, is used to predict the amplitudes and frequencies of a wing undergoing torsional stall flutter. The code, developed at United Technologies Research Corporation (UTRC), is an empirical prediction method designed to yield unsteady values of normal force and moment, given the airfoil's static coefficient characteristics and the unsteady aerodynamic values alpha, A and B. In this experiment, conducted in the PSU 4' x 5' subsonic wind tunnel, the wing's elastic axis, torsional spring constant and initial angle of attack are varied, and the oscillation amplitudes and frequencies of the wing, while undergoing torsional stall flutter, are recorded. These experimental values show only fair agreement with the predicted responses: predictions tend to be good at low velocities and rather poor at higher velocities.
Theory of Radar Target Discrimination
1991-02-01
which a capability for target or system identification could be put to good use: air traffic control, border patrol, security and surveillance... different targets from each other, there would be big advantages in air safety. Airport traffic controllers have made serious errors from their... in a way that we can neither predict nor control. Of course, any data function d(t) which can be recorded for computer processing will be digitized and
Validation of Heat Transfer and Film Cooling Capabilities of the 3-D RANS Code TURBO
NASA Technical Reports Server (NTRS)
Shyam, Vikram; Ameri, Ali; Chen, Jen-Ping
2010-01-01
The capabilities of the 3-D unsteady RANS code TURBO have been extended to include heat transfer and film cooling applications. The results of simulations performed with the modified code are compared to experiment and to theory, where applicable. Wilcox's k-ω turbulence model has been implemented to close the RANS equations. Two simulations are conducted: (1) flow over a flat plate and (2) flow over an adiabatic flat plate cooled by one hole inclined at 35° to the free stream. For (1), agreement with theory is found to be excellent for heat transfer, represented by the local Nusselt number, and quite good for momentum, as represented by the local skin friction coefficient. This report compares the local skin friction coefficients and Nusselt numbers on a flat plate obtained using Wilcox's k-ω model with the theory of Blasius. The study looks at laminar and turbulent flows over an adiabatic flat plate and over an isothermal flat plate for two different wall temperatures. It is shown that TURBO is able to accurately predict heat transfer on a flat plate. For (2), TURBO shows good qualitative agreement with film cooling experiments performed on a flat plate with one cooling hole. Quantitatively, film effectiveness is underpredicted downstream of the hole.
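The laminar theory baseline for case (1) is the classical flat-plate boundary-layer solution; a small sketch of the standard Blasius skin-friction and Pohlhausen Nusselt-number correlations, with illustrative flow conditions not taken from the report:

```python
import math

def blasius_cf(re_x):
    # Local skin-friction coefficient for laminar flat-plate flow
    return 0.664 / math.sqrt(re_x)

def laminar_nu(re_x, pr):
    # Local Nusselt number for the laminar flat-plate boundary layer
    return 0.332 * math.sqrt(re_x) * pr ** (1.0 / 3.0)

re_x, pr = 1.0e5, 0.71  # illustrative conditions (air)
print(f"Cf = {blasius_cf(re_x):.5f}, Nu = {laminar_nu(re_x, pr):.1f}")
```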
NASA Technical Reports Server (NTRS)
Curry, Timothy J.; Batterson, James G. (Technical Monitor)
2000-01-01
Low order equivalent system (LOES) models for the Tu-144 supersonic transport aircraft were identified from flight test data. The mathematical models were given in terms of transfer functions with a time delay by the military standard MIL-STD-1797A, "Flying Qualities of Piloted Aircraft," and the handling qualities were predicted from the estimated transfer function coefficients. The coefficients and the time delay in the transfer functions were estimated using a nonlinear equation error formulation in the frequency domain. Flight test data from pitch, roll, and yaw frequency sweeps at various flight conditions were used for parameter estimation. Flight test results are presented in terms of the estimated parameter values, their standard errors, and output fits in the time domain. Data from doublet maneuvers at the same flight conditions were used to assess the predictive capabilities of the identified models. The identified transfer function models fit the measured data well and demonstrated good prediction capabilities. The Tu-144 was predicted to be between level 2 and 3 for all longitudinal maneuvers and level 1 for all lateral maneuvers. High estimates of the equivalent time delay in the transfer function model caused the poor longitudinal ratings.
Development of a 3D numerical methodology for fast prediction of gun blast induced loading
NASA Astrophysics Data System (ADS)
Costa, E.; Lagasco, F.
2014-05-01
In this paper, the development of a methodology based on semi-empirical models from the literature to carry out 3D prediction of pressure loading on surfaces adjacent to a weapon system during firing is presented. This loading results from the impact of the blast wave generated by the projectile exiting the muzzle bore. When a pressure threshold level is exceeded, the loading can induce unwanted damage to nearby hard structures as well as to frangible panels or electronic equipment. The implemented model is able to quickly predict the distribution of the blast wave parameters over three-dimensional complex geometry surfaces when the weapon design and emplacement data as well as the propellant and projectile characteristics are available. Given these capabilities, the proposed methodology is envisaged for use in the preliminary design phase of a combat system, to predict adverse effects and to identify the most appropriate countermeasures. By providing a preliminary but sensitive estimate of the operative environmental loading, this numerical tool represents a good alternative to more powerful but time-consuming advanced computational fluid dynamics tools, whose use can thus be limited to the final phase of the design.
Prediction of R-curves from small coupon tests
NASA Technical Reports Server (NTRS)
Yeh, J. R.; Bray, G. H.; Bucci, R. J.; Macheret, Y.
1994-01-01
R-curves were predicted for Alclad 2024-T3 and C188-T3 sheet using the results of small-coupon Kahn tear tests in combination with two-dimensional elastic-plastic finite element stress analyses. The predictions were compared to experimental R-curves from 6.3, 16 and 60-inch wide M(T) specimens and good agreement was obtained. The method is an inexpensive alternative to wide panel testing for characterizing the fracture toughness of damage-tolerant sheet alloys. The usefulness of this approach was demonstrated by performing residual strength calculations for a two-bay crack in a representative fuselage structure. C188-T3 was predicted to have a 24 percent higher load carrying capability than 2024-T3 in this application as a result of its superior fracture toughness.
Modifications to the nozzle test chamber to extend nozzle static-test capability
NASA Technical Reports Server (NTRS)
Keyes, J. W.
1985-01-01
The nozzle test chamber was modified to provide a high-pressure-ratio nozzle static-test capability. Experiments were conducted to determine the range of the ratio of nozzle total pressure to chamber pressure and to make direct nozzle thrust measurements using a three-component strain-gage force balance. Pressure ratios from 3 to 285 were measured with several axisymmetric nozzles at nozzle total pressures of 15 to 190 psia. Devices for measuring system mass flow were calibrated using standard axisymmetric convergent choked nozzles. System mass-flow rates up to 10 lbm/sec were measured. The measured thrust results of these nozzles are in good agreement with one-dimensional theoretical predictions for convergent nozzles.
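For a choked convergent nozzle, the one-dimensional ideal prediction against which such thrust measurements are compared can be sketched as below; the inlet conditions and gas properties are illustrative, not the test values (and SI units are used rather than the report's psia/lbm):

```python
import math

def convergent_nozzle_thrust(p0, T0, pa, a_exit, gamma=1.4, R=287.0):
    """One-dimensional ideal thrust of a choked convergent nozzle.
    p0, T0: total pressure [Pa] and temperature [K]; pa: ambient
    pressure [Pa]; a_exit: exit (throat) area [m^2]."""
    # Choking requires the pressure ratio to exceed the critical value
    assert p0 / pa >= (0.5 * (gamma + 1)) ** (gamma / (gamma - 1)), "not choked"
    # Sonic exit conditions (M = 1)
    Te = T0 * 2.0 / (gamma + 1.0)
    pe = p0 * (2.0 / (gamma + 1.0)) ** (gamma / (gamma - 1.0))
    ve = math.sqrt(gamma * R * Te)      # exit velocity = local speed of sound
    rho_e = pe / (R * Te)
    mdot = rho_e * ve * a_exit
    return mdot * ve + (pe - pa) * a_exit  # momentum + pressure thrust

# Illustrative case: p0 = 10 bar, 300 K, sea-level ambient, 10 cm^2 exit
F = convergent_nozzle_thrust(p0=1.0e6, T0=300.0, pa=101325.0, a_exit=1e-3)
print(f"{F:.0f} N")
```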
Formulation of aerodynamic prediction techniques for hypersonic configuration design
NASA Technical Reports Server (NTRS)
1979-01-01
An investigation of approximate theoretical techniques for predicting aerodynamic characteristics and surface pressures for relatively slender vehicles at moderate hypersonic speeds was performed. Emphasis was placed on approaches that would be responsive to a preliminary configuration design level of effort. Supersonic second order potential theory was examined in detail to meet this objective. Shock layer integral techniques were considered as an alternative means of predicting gross aerodynamic characteristics. Several numerical pilot codes were developed for simple three dimensional geometries to evaluate the capability of the approximate equations of motion considered. Results from the second order computations indicated good agreement with higher order solutions and experimental results for a variety of wing-like shapes and values of the hypersonic similarity parameter Mδ approaching one.
Ahadian, Samad; Kawazoe, Yoshiyuki
2009-06-04
Modeling of water flow in carbon nanotubes is still a challenge for the classic models of fluid dynamics. In this investigation, an adaptive-network-based fuzzy inference system (ANFIS) is presented to solve this problem. The proposed ANFIS approach can construct an input-output mapping based on both human knowledge in the form of fuzzy if-then rules and stipulated input-output data pairs. Good performance of the designed ANFIS ensures its capability as a promising tool for modeling and prediction of fluid flow at nanoscale where the continuum models of fluid dynamics tend to break down.
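The input–output mapping described above combines fuzzy if-then rules with data-driven weighting; a minimal first-order Sugeno forward pass of the kind ANFIS tunes, with hand-set rule parameters for illustration only (not trained on any nanotube data):

```python
import math

def gauss(x, c, s):
    # Gaussian membership function centred at c with width s
    return math.exp(-((x - c) ** 2) / (2 * s ** 2))

def sugeno_infer(x, rules):
    """First-order Sugeno inference: each rule is (centre, sigma, (p, q)),
    meaning 'if x is A then y = p*x + q'. The output is the
    firing-strength-weighted average of the rule outputs -- the fixed
    forward pass whose parameters ANFIS learns from data."""
    w = [gauss(x, c, s) for c, s, _ in rules]
    y = [p * x + q for _, _, (p, q) in rules]
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

# Two hand-set rules (illustrative values only)
rules = [(0.0, 1.0, (1.0, 0.0)),   # near x = 0: y ~ x
         (5.0, 1.0, (0.5, 2.0))]   # near x = 5: y ~ 0.5x + 2
print(sugeno_infer(2.5, rules))  # midpoint: equal weights -> 2.875
```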
Aerodynamic prediction techniques for hypersonic configuration design
NASA Technical Reports Server (NTRS)
1981-01-01
An investigation of approximate theoretical techniques for predicting aerodynamic characteristics and surface pressures for relatively slender vehicles at moderate hypersonic speeds was performed. Emphasis was placed on approaches that would be responsive to a preliminary configuration design level of effort. Potential theory was examined in detail to meet this objective. Numerical pilot codes were developed for relatively simple three dimensional geometries to evaluate the capability of the approximate equations of motion considered. Results from the computations indicate good agreement with higher order solutions and experimental results for a variety of wing, body, and wing-body shapes for values of the hypersonic similarity parameter Mδ approaching one.
Selection of optimal sensors for predicting performance of polymer electrolyte membrane fuel cell
NASA Astrophysics Data System (ADS)
Mao, Lei; Jackson, Lisa
2016-10-01
In this paper, sensor selection algorithms based on a sensitivity analysis are investigated, and the capability of the optimal sensors to predict PEM fuel cell performance is studied using test data. The fuel cell model is developed to generate the sensitivity matrix relating sensor measurements to fuel cell health parameters. From the sensitivity matrix, two sensor selection approaches, the largest-gap method and an exhaustive brute-force search, are applied to find the optimal sensors providing reliable predictions. Based on the results, a sensor selection approach considering both sensor sensitivity and noise resistance is proposed to find the optimal sensor set of minimum size. Furthermore, the performance of the optimal sensor set in predicting fuel cell performance is studied using test data from a PEM fuel cell system. Results demonstrate that, with the optimal sensors, the performance of the PEM fuel cell can be predicted with good accuracy.
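As a rough illustration of the exhaustive (brute-force) search idea, the sketch below picks the sensor subset maximizing a simple coverage metric over a hypothetical sensitivity matrix; both the sensor names, the matrix entries and the metric are stand-ins, not the paper's actual criterion (which also weighs noise resistance):

```python
from itertools import combinations

# Hypothetical sensitivity matrix: rows = candidate sensors,
# columns = health parameters; entries are |d(measurement)/d(parameter)|.
# All names and numbers are illustrative.
S = {
    "V_cell":  (0.9, 0.1, 0.4),
    "P_drop":  (0.2, 0.8, 0.1),
    "T_stack": (0.3, 0.3, 0.7),
    "RH_out":  (0.1, 0.2, 0.2),
}

def coverage(subset):
    """Stand-in selection metric: for each health parameter, take the
    best sensitivity any chosen sensor offers, then sum over parameters."""
    n_params = len(next(iter(S.values())))
    return sum(max(S[name][j] for name in subset) for j in range(n_params))

def best_subset(k):
    # Exhaustive search over all size-k sensor sets
    return max(combinations(S, k), key=coverage)

print(best_subset(2))  # -> ('V_cell', 'P_drop')
```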
Martínez, Francisco J; Márquez, Andrés; Gallego, Sergi; Ortuño, Manuel; Francés, Jorge; Pascual, Inmaculada; Beléndez, Augusto
2015-02-20
Parallel-aligned (PA) liquid-crystal on silicon (LCoS) microdisplays are especially appealing in a wide range of spatial light modulation applications since they enable phase-only operation. Recently we proposed a novel polarimetric method, based on Stokes polarimetry, enabling the characterization of their linear retardance and the magnitude of their associated phase fluctuations or flicker, exhibited by many LCoS devices. In this work we apply the calibrated values obtained with this technique to show their capability to predict the performance of spatially varying phase multilevel elements displayed on the PA-LCoS device. Specifically we address a series of multilevel phase blazed gratings. We analyze both their average diffraction efficiency ("static" analysis) and its associated time fluctuation ("dynamic" analysis). Two different electrical configuration files with different degrees of flicker are applied in order to evaluate the actual influence of flicker on the expected performance of the diffractive optical elements addressed. We obtain good agreement between simulation and experiment, thus demonstrating the predictive capability of the calibration provided by the average Stokes polarimetric technique. Additionally, it is found that electrical configurations with flicker retardance amplitudes below 30° may not influence the performance of the blazed gratings. In general, we demonstrate that the influence of flicker greatly diminishes as the number of quantization levels in the optical element increases.
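For the ideal, flicker-free case, the first-order efficiency of a blazed grating quantized to N phase levels follows the standard sinc-squared law; a short sketch of this textbook result, shown here for context rather than taken from the paper's data:

```python
import math

def blazed_efficiency(n_levels):
    """First-order diffraction efficiency of an ideal blazed grating
    quantized to n_levels phase steps: eta = [sin(pi/N) / (pi/N)]**2."""
    x = math.pi / n_levels
    return (math.sin(x) / x) ** 2

# Efficiency rises quickly with the number of quantization levels
for n in (2, 4, 8, 16):
    print(n, round(blazed_efficiency(n), 3))
```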
HART-II Acoustic Predictions using a Coupled CFD/CSD Method
NASA Technical Reports Server (NTRS)
Boyd, D. Douglas, Jr.
2009-01-01
This paper documents results to date from the Rotorcraft Acoustic Characterization and Mitigation activity under the NASA Subsonic Rotary Wing Project. The primary goal of this activity is to develop a NASA rotorcraft impulsive noise prediction capability which uses first-principles fluid dynamics and structural dynamics. During this effort, elastic blade motion and co-processing capabilities have been included in a recent version of the computational fluid dynamics (CFD) code. The CFD code is loosely coupled to a computational structural dynamics (CSD) code using new interface codes. The CFD/CSD coupled solution is then used to compute impulsive noise on a plane under the rotor using the Ffowcs Williams-Hawkings solver. This code system is then applied to a range of cases from the Higher Harmonic Aeroacoustic Rotor Test II (HART-II) experiment. For all cases presented, the full experimental configuration (i.e., rotor and wind tunnel sting mount) is used in the coupled CFD/CSD solutions. Results show good correlation between measured and predicted loading and loading time derivative at the only measured radial station. A contributing factor for the typically seen loading mean-value offset between measured and predicted data is examined. Impulsive noise predictions on the measured microphone plane under the rotor compare favorably with measured mid-frequency noise for all cases. Flow visualization of the BL and MN cases shows that the vortex structures generated in the prediction method are consistent with measurements. Future application of the prediction method is discussed.
ERIC Educational Resources Information Center
McLean, Monica; Walker, Melanie
2012-01-01
The education of professionals oriented to poverty reduction and the public good is the focus of the article. Sen's "capability approach" is used to conceptualise university-based professional education as a process of developing public-good professional capabilities. The main output of a research project on professional education in…
A compressible Navier-Stokes solver with two-equation and Reynolds stress turbulence closure models
NASA Technical Reports Server (NTRS)
Morrison, Joseph H.
1992-01-01
This report outlines the development of a general purpose aerodynamic solver for compressible turbulent flows. Turbulence closure is achieved using either two-equation or Reynolds stress transport equations. The applicable equation set consists of Favre-averaged conservation equations for mass, momentum and total energy, and transport equations for the turbulent stresses and turbulent dissipation rate. In order to develop a scheme with good shock-capturing capability, good accuracy and general geometric capability, a multi-block cell-centered finite volume approach is used. Viscous fluxes are discretized using a finite volume representation of a central difference operator, and the source terms are treated as an integral over the control volume. The methodology is validated by testing the algorithm on both two- and three-dimensional flows. Both the two-equation and Reynolds stress models are used on a two-dimensional 10-degree compression ramp at Mach 3, and the two-equation model is used on the three-dimensional flow over a cone at angle of attack at Mach 3.5. With the development of this algorithm, it is now possible to compute complex, compressible high speed flow fields using both two-equation and Reynolds stress turbulence closure models, with the capability of eventually evaluating their predictive performance.
Ovesen, C; Christensen, A; Nielsen, J K; Christensen, H
2013-11-01
Easy-to-perform and valid assessment scales for the effect of thrombolysis are essential in hyperacute stroke settings. Because of this we performed an external validation of the DRAGON scale proposed by Strbian et al. in a Danish cohort. All patients treated with intravenous recombinant plasminogen activator between 2009 and 2011 were included. Upon admission all patients underwent physical and neurological examination using the National Institutes of Health Stroke Scale along with non-contrast CT scans and CT angiography. Patients were followed up through the Outpatient Clinic and their modified Rankin Scale (mRS) was assessed after 3 months. Three hundred and three patients were included in the analysis. The DRAGON scale proved to have a good discriminative ability for predicting highly unfavourable outcome (mRS 5-6) (area under the curve-receiver operating characteristic [AUC-ROC]: 0.89; 95% confidence interval [CI] 0.81-0.96; p<0.001) and good outcome (mRS 0-2) (AUC-ROC: 0.79; 95% CI 0.73-0.85; p<0.001). When only patients with M1 occlusions were selected the DRAGON scale provided good discriminative capability (AUC-ROC: 0.89; 95% CI 0.78-1.0; p=0.003) for highly unfavourable outcome. We confirmed the validity of the DRAGON scale in predicting outcome after thrombolysis treatment. Copyright © 2013 Elsevier Ltd. All rights reserved.
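Discrimination statistics such as the AUC-ROC values reported above can be computed directly via the Mann-Whitney formulation; a minimal sketch with made-up scores and outcome labels, not the cohort's data:

```python
def auc_roc(scores, labels):
    """AUC-ROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case scores higher than a randomly chosen
    negative one (ties count one half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative DRAGON-like integer scores with binary outcome labels
scores = [2, 3, 5, 6, 7, 8, 4, 9]
labels = [0, 0, 1, 0, 1, 1, 0, 1]
print(auc_roc(scores, labels))  # -> 0.9375
```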
Thirunathan, Praveena; Arnz, Patrik; Husny, Joeska; Gianfrancesco, Alessandro; Perdana, Jimmy
2018-03-01
Accurate description of moisture diffusivity is key to precisely understanding and predicting moisture transfer behaviour in a matrix. Unfortunately, measuring moisture diffusivity is not trivial, especially at low moisture values and/or elevated temperatures. This paper presents a novel experimental procedure to accurately measure moisture diffusivity based on a thermogravimetric approach. The procedure is capable of measuring diffusivity even at elevated temperatures (>70°C) and low moisture values (<1%). Diffusivity was extracted from experimental data based on the "regular regime approach". The approach was tailored to determine diffusivity from thin films and from poly-dispersed powdered samples. Subsequently, the measured diffusivity was validated by comparison with available literature data, showing good agreement. The ability of this approach to accurately measure diffusivity over a wider range of temperatures provides better insight into the temperature dependency of diffusivity. Thus, this approach can be crucial to ensure good accuracy of moisture transfer description/prediction, especially when elevated temperatures are involved. Copyright © 2017 Elsevier Ltd. All rights reserved.
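The regular-regime idea of extracting diffusivity from the late, linear part of a drying curve can be sketched for the simplest slab geometry. This is the textbook first-term Fickian approximation, not the paper's tailored procedure; the geometry and numbers below are illustrative:

```python
import math

def diffusivity_from_drying(times, moisture_ratio, half_thickness):
    """Slope-based estimate of effective diffusivity from the late part of
    a thin-slab drying curve, where the first term of Fick's series gives
        ln MR = ln(8/pi^2) - (pi^2 * D / (4 * L^2)) * t
    so D = -slope * 4 * L^2 / pi^2 (ordinary least-squares slope)."""
    ys = [math.log(m) for m in moisture_ratio]
    n = len(times)
    tbar, ybar = sum(times) / n, sum(ys) / n
    slope = (sum((t - tbar) * (v - ybar) for t, v in zip(times, ys))
             / sum((t - tbar) ** 2 for t in times))
    return -slope * 4.0 * half_thickness ** 2 / math.pi ** 2

# recover D from a synthetic, noise-free curve (D = 1e-10 m^2/s, L = 1 mm)
D_true, L = 1e-10, 1e-3
times = [1000.0 * i for i in range(1, 6)]
mr = [8.0 / math.pi ** 2 * math.exp(-math.pi ** 2 * D_true * t / (4.0 * L * L))
      for t in times]
D_fit = diffusivity_from_drying(times, mr, L)
```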
Sfakiotakis, Stelios; Vamvuka, Despina
2015-12-01
The pyrolysis of six waste biomass samples was studied and the fuels were kinetically evaluated. A modified independent parallel reactions scheme (IPR) and a distributed activation energy model (DAEM) were developed, and their validity was assessed and compared by checking their accuracy in fitting the experimental results, as well as their prediction capability under different experimental conditions. The pyrolysis experiments were carried out in a thermogravimetric analyzer, and a fitting procedure based on least squares minimization was performed simultaneously at different experimental conditions. A modification of the IPR model, considering dependence of the pre-exponential factor on heating rate, was proved to give better fit results for the same number of tuned kinetic parameters compared to the standard IPR model, and very good prediction results for stepwise experiments. The fit of the data calculated using the developed DAEM model to the experimental data also proved to be very good. Copyright © 2015 Elsevier Ltd. All rights reserved.
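Both kinetic schemes rest on Arrhenius rate expressions fitted by least squares. A minimal single-reaction sketch (a linearised Arrhenius fit on noise-free synthetic data; the real IPR/DAEM fits involve multiple parallel reactions and several heating rates):

```python
import math

R_GAS = 8.314  # J/(mol K)

def fit_arrhenius(temps_K, rate_constants):
    """Ordinary least-squares fit of the linearised Arrhenius law
        ln k = ln A - E/(R*T)
    returning (A, E) from the intercept and slope of ln k vs 1/T."""
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(k) for k in rate_constants]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return math.exp(intercept), -slope * R_GAS

# synthetic rate constants for A = 1e8 1/s, E = 120 kJ/mol
temps = [550.0, 600.0, 650.0, 700.0, 750.0]
ks = [1e8 * math.exp(-120e3 / (R_GAS * T)) for T in temps]
A_fit, E_fit = fit_arrhenius(temps, ks)
```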
NASA Astrophysics Data System (ADS)
Wang, Hexiang; Schuster, Eugenio; Rafiq, Tariq; Kritz, Arnold; Ding, Siye
2016-10-01
Extensive research has been conducted to find high-performance operating scenarios characterized by high fusion gain, good confinement, plasma stability and possible steady-state operation. A key plasma property that is related to both the stability and performance of these advanced plasma scenarios is the safety factor profile. A key component of the EAST research program is the exploration of non-inductively driven steady-state plasmas with the recently upgraded heating and current drive capabilities that include lower hybrid current drive and neutral beam injection. Anticipating the need for tight regulation of the safety factor profile in these plasma scenarios, a first-principles-driven (FPD) control-oriented model is proposed to describe the safety factor profile evolution in EAST in response to the different actuators. The TRANSP simulation code is employed to tailor the FPD model to the EAST tokamak geometry and to convert it into a form suitable for control design. The FPD control-oriented model's prediction capabilities are demonstrated by comparing predictions with experimental data from EAST. Supported by the US DOE under DE-SC0010537, DE-FG02-92ER54141 and DE-SC0013977.
Shen, Shuang; Sun, Xiuzhen; Yu, Shen; Liu, Yingxi; Su, Yingfeng; Zhao, Wei; Liu, Wenlong
2016-06-14
The utriculo-endolymphatic valve (UEV) has an uncertain function, but its opening and closure have been predicted to maintain a constant endolymphatic pressure within the semicircular canals (SCCs) and the utricle of the inner ear. Here, the study's aim was to examine the role of the UEV in regulating the capabilities of the 3 SCCs in sensing angular acceleration by using the finite element method. The results of the developed model showed endolymphatic flow and cupula displacement patterns in good agreement with previous experiments. Moreover, the open valve was predicted to permit endolymph exchange between the 2 parts of the membranous labyrinth during head rotation and, in comparison to the closed valve, to result in a reinforced endolymph flow in the utricle and an enhanced or weakened cupula deflection. Further, the model predicted that an increase in the size of the orifice would result in greater endolymph exchange and thereby a greater impact on cupula deflection. The model findings suggest the UEV plays a crucial role in the preservation of inner ear sensory function. Copyright © 2016 Elsevier Ltd. All rights reserved.
A Predictive Model of Daily Seismic Activity Induced by Mining, Developed with Data Mining Methods
NASA Astrophysics Data System (ADS)
Jakubowski, Jacek
2014-12-01
The article presents the development and evaluation of a predictive classification model of daily seismic energy emissions induced by longwall mining in sector XVI of the Piast coal mine in Poland. The model uses data on tremor energy, basic characteristics of the longwall face and mined output in this sector over the period from July 1987 to March 2011. The predicted binary variable is the occurrence of a daily sum of tremor seismic energies in a longwall that is greater than or equal to the threshold value of 10^5 J. Three data mining analytical methods were applied: logistic regression, neural networks, and stochastic gradient boosted trees. The boosted trees model was chosen as the best for the purposes of the prediction. The validation sample results showed its good predictive capability, taking the complex nature of the phenomenon into account. This may indicate the applied model's suitability for a sequential, short-term prediction of mining induced seismic activity.
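The boosted-tree classifier can be sketched with AdaBoost over decision stumps — a simplified stand-in for the stochastic gradient boosted trees actually used; the data below are toy values, not mining records:

```python
import math

def stump(x, feat, thresh, sign):
    # weak learner: predicts +sign above the threshold, -sign below
    return sign if x[feat] > thresh else -sign

def train_adaboost(X, y, rounds=5):
    """Boosting over exhaustive decision stumps: each round picks the
    stump with the lowest weighted error, then up-weights the samples
    it misclassified."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        best = None  # (weighted error, feat, thresh, sign)
        for feat in range(len(X[0])):
            for thresh in sorted({x[feat] for x in X}):
                for sign in (1, -1):
                    err = sum(wi for xi, yi, wi in zip(X, y, w)
                              if stump(xi, feat, thresh, sign) != yi)
                    if best is None or err < best[0]:
                        best = (err, feat, thresh, sign)
        err, feat, thresh, sign = best
        alpha = 0.5 * math.log((1 - err) / max(err, 1e-10))
        ensemble.append((alpha, feat, thresh, sign))
        w = [wi * math.exp(-alpha * y[i] * stump(X[i], feat, thresh, sign))
             for i, wi in enumerate(w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    vote = sum(a * stump(x, f, t, s) for a, f, t, s in ensemble)
    return 1 if vote > 0 else -1

# toy 1-D data, labels in {-1, +1}
X = [[1.0], [2.0], [6.0], [7.0]]
y = [-1, -1, 1, 1]
ens = train_adaboost(X, y, rounds=3)
preds = [predict(ens, xi) for xi in X]
```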
TRAC-PD2 posttest analysis of the CCTF Evaluation-Model Test C1-19 (Run 38). [PWR]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Motley, F.
The results of a Transient Reactor Analysis Code posttest analysis of the Cylindrical Core Test Facility Evaluation-Model Test agree very well with the results of the experiment. The good agreement obtained verifies the multidimensional analysis capability of the TRAC code. Because of the steep radial power profile, the importance of using fine noding in the core region was demonstrated (as compared with poorer results obtained from an earlier pretest prediction that used a coarsely noded model).
Estimation of brain network ictogenicity predicts outcome from epilepsy surgery
NASA Astrophysics Data System (ADS)
Goodfellow, M.; Rummel, C.; Abela, E.; Richardson, M. P.; Schindler, K.; Terry, J. R.
2016-07-01
Surgery is a valuable option for pharmacologically intractable epilepsy. However, significant post-operative improvements are not always attained. This is due in part to our incomplete understanding of the seizure generating (ictogenic) capabilities of brain networks. Here we introduce an in silico, model-based framework to study the effects of surgery within ictogenic brain networks. We find that factors conventionally determining the region of tissue to resect, such as the location of focal brain lesions or the presence of epileptiform rhythms, do not necessarily predict the best resection strategy. We validate our framework by analysing electrocorticogram (ECoG) recordings from patients who have undergone epilepsy surgery. We find that when post-operative outcome is good, model predictions for optimal strategies align better with the actual surgery undertaken than when post-operative outcome is poor. Crucially, this allows the prediction of optimal surgical strategies and the provision of quantitative prognoses for patients undergoing epilepsy surgery.
Ahammad, S Ziauddin; Gomes, James; Sreekrishnan, T R
2011-09-01
Anaerobic degradation of waste involves different classes of microorganisms, and there are different types of interactions among them for substrates, terminal electron acceptors, and so on. A mathematical model is developed based on the mass balance of different substrates, products, and microbes present in the system to study the interaction between methanogens and sulfate-reducing bacteria (SRB). The performance of the major microbial consortia present in the system, such as propionate-utilizing acetogens, butyrate-utilizing acetogens, acetoclastic methanogens, hydrogen-utilizing methanogens, and SRB, was considered and analyzed in the model. The different substrates consumed and products formed during the process were also considered in the model. Comparison of the experimental observations with the model predictions showed very good prediction capabilities of the model. The model prediction was validated statistically. It was observed that the model-predicted values matched the experimental data very closely, with an average error of 3.9%.
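The mass-balance structure of such a model can be sketched with a single Monod-limited population integrated by forward Euler. This is a toy reduction — the paper's model couples several substrates and microbial groups, and all parameter values here are made up:

```python
def simulate_monod(S0, X0, mu_max, Ks, Y, hours, dt=0.01):
    """Forward-Euler integration of a one-population Monod mass balance:
        mu = mu_max * S / (Ks + S);  dX/dt = mu*X;  dS/dt = -(mu/Y)*X
    The yield Y links substrate consumed to biomass formed."""
    S, X = S0, X0
    for _ in range(int(hours / dt)):
        mu = mu_max * S / (Ks + S)
        S, X = S - (mu / Y) * X * dt, X + mu * X * dt
    return S, X

# made-up parameters: substrate 10 g/L, inoculum 0.1 g/L, yield 0.4 g/g
S_end, X_end = simulate_monod(S0=10.0, X0=0.1, mu_max=0.3, Ks=0.5,
                              Y=0.4, hours=48.0)
```

Because each Euler step consumes substrate and produces biomass in the fixed ratio Y, the yield balance Y·(S0 − S) = X − X0 holds throughout, which is the kind of internal consistency check a multi-population mass-balance model also has to satisfy.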
One-Dimensional Modelling of Internal Ballistics
NASA Astrophysics Data System (ADS)
Monreal-González, G.; Otón-Martínez, R. A.; Velasco, F. J. S.; García-Cascáles, J. R.; Ramírez-Fernández, F. J.
2017-10-01
A one-dimensional model is introduced in this paper for problems of internal ballistics involving solid propellant combustion. First, the work presents the physical approach and the equations adopted. Closure relationships accounting for the physical phenomena taking place during combustion (interfacial friction, interfacial heat transfer, combustion) are discussed in depth. Second, the proposed numerical method is presented. Finally, numerical results provided by this code (UXGun) are compared with results of experimental tests and with the outcome from a well-known zero-dimensional code. The model provides successful results in firing tests of artillery guns, predicting the maximum chamber pressure and muzzle velocity with good accuracy, which highlights its capabilities as a prediction/design tool for internal ballistics.
Shamshirband, Shahaboddin; Banjanovic-Mehmedovic, Lejla; Bosankic, Ivan; Kasapovic, Suad; Abdul Wahab, Ainuddin Wahid Bin
2016-01-01
Intelligent Transportation Systems rely on understanding, predicting and affecting the interactions between vehicles. The goal of this paper is to choose a small subset from the larger set so that the resulting regression model is simple, yet has good predictive ability for Vehicle agent speed relative to Vehicle intruder. The method of ANFIS (adaptive neuro fuzzy inference system) was applied to the data resulting from these measurements. The ANFIS process for variable selection was implemented in order to detect the predominant variables affecting the prediction of agent speed relative to intruder. This process includes several ways to discover a subset of the total set of recorded parameters showing good predictive capability. The ANFIS network was used to perform a variable search. Then, it was used to determine how 9 parameters (Intruder Front sensors active (boolean), Intruder Rear sensors active (boolean), Agent Front sensors active (boolean), Agent Rear sensors active (boolean), RSSI signal intensity/strength (integer), Elapsed time (in seconds), Distance between Agent and Intruder (m), Angle of Agent relative to Intruder (angle between vehicles °), Altitude difference between Agent and Intruder (m)) influence prediction of agent speed relative to intruder. The results indicated that the distance between Vehicle agent and Vehicle intruder (m) and the angle of Vehicle agent relative to Vehicle Intruder (angle between vehicles °) are the most influential parameters for Vehicle agent speed relative to Vehicle intruder.
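Variable-importance screening of this kind can be sketched with a purely linear stand-in for the ANFIS search: rank each candidate input by its one-variable R² against the target. The feature names and data below are illustrative, not the study's measurements:

```python
def r_squared(xs, ys):
    # squared Pearson correlation between one input column and the target
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    syy = sum((y - ybar) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy) if sxx and syy else 0.0

def rank_variables(columns, y):
    # rank candidate inputs by one-variable R^2 against the target
    scores = {name: r_squared(col, y) for name, col in columns.items()}
    return sorted(scores, key=scores.get, reverse=True)

# toy data: the target is driven mostly by "distance", weakly by "angle"
cols = {"distance": [1.0, 2.0, 3.0, 4.0, 5.0],
        "angle":    [5.0, 1.0, 4.0, 2.0, 3.0]}
y = [2.0 * d + 0.1 * a for d, a in zip(cols["distance"], cols["angle"])]
ranking = rank_variables(cols, y)
```

Unlike this linear screen, the ANFIS search can also credit variables whose influence is nonlinear, which is one reason a fuzzy-inference network was used in the study.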
NASA Astrophysics Data System (ADS)
Chaljub, E. O.; Bard, P.; Tsuno, S.; Kristek, J.; Moczo, P.; Franek, P.; Hollender, F.; Manakou, M.; Raptakis, D.; Pitilakis, K.
2009-12-01
During the last decades, an important effort has been dedicated to develop accurate and computationally efficient numerical methods to predict earthquake ground motion in heterogeneous 3D media. The progress in methods and increasing capability of computers have made it technically feasible to calculate realistic seismograms for frequencies of interest in seismic design applications. In order to foster the use of numerical simulation in practical prediction, it is important to (1) evaluate the accuracy of current numerical methods when applied to realistic 3D applications where no reference solution exists (verification) and (2) quantify the agreement between recorded and numerically simulated earthquake ground motion (validation). Here we report the results of the Euroseistest verification and validation project - an ongoing international collaborative work organized jointly by the Aristotle University of Thessaloniki, Greece, the Cashima research project (supported by the French nuclear agency, CEA, and the Laue-Langevin institute, ILL, Grenoble), and the Joseph Fourier University, Grenoble, France. The project involves more than 10 international teams from Europe, Japan and USA. The teams employ the Finite Difference Method (FDM), the Finite Element Method (FEM), the Global Pseudospectral Method (GPSM), the Spectral Element Method (SEM) and the Discrete Element Method (DEM). The project makes use of a new detailed 3D model of the Mygdonian basin (about 5 km wide, 15 km long, sediments reach about 400 m depth, surface S-wave velocity is 200 m/s). The prime target is to simulate 8 local earthquakes with magnitude from 3 to 5. In the verification, numerical predictions for frequencies up to 4 Hz for a series of models with increasing structural and rheological complexity are analyzed and compared using quantitative time-frequency goodness-of-fit criteria. 
Predictions obtained by one FDM team and the SEM team are close and different from other predictions (consistent with the ESG2006 exercise which targeted the Grenoble Valley). Diffractions off the basin edges and induced surface-wave propagation mainly contribute to differences between predictions. The differences are particularly large in the elastic models but remain important also in models with attenuation. In the validation, predictions are compared with the recordings by a local array of 19 surface and borehole accelerometers. The level of agreement is found event-dependent. For the largest-magnitude event the agreement is surprisingly good even at high frequencies.
Prediction of Protein Structure by Template-Based Modeling Combined with the UNRES Force Field.
Krupa, Paweł; Mozolewska, Magdalena A; Joo, Keehyoung; Lee, Jooyoung; Czaplewski, Cezary; Liwo, Adam
2015-06-22
A new approach to the prediction of protein structures that uses distance and backbone virtual-bond dihedral angle restraints derived from template-based models and simulations with the united residue (UNRES) force field is proposed. The approach combines the accuracy and reliability of template-based methods, for the segments of the target sequence with high similarity to those having known structures, with the ability of UNRES to pack the domains correctly. Multiplexed replica-exchange molecular dynamics with restraints derived from template-based models of a given target, in which each restraint is weighted according to the accuracy of the prediction of the corresponding section of the molecule, is used to search the conformational space, and the weighted histogram analysis method and cluster analysis are applied to determine the families of the most probable conformations, from which candidate predictions are selected. To test the capability of the method to recover template-based models from restraints, five single-domain proteins with structures that have been well-predicted by template-based methods were used; it was found that the resulting structures were of the same quality as the best of the original models. To assess whether the new approach can improve template-based predictions with incorrectly predicted domain packing, four such targets were selected from the CASP10 targets; for three of them the new approach resulted in significantly better predictions compared with the original template-based models. The new approach can be used to predict the structures of proteins for which good templates can be found for sections of the sequence, or for which an overall good template can be found for the entire sequence but the prediction quality is markedly weaker in putative domain-linker regions.
NASA/FAA general aviation crash dynamics program
NASA Technical Reports Server (NTRS)
Thomson, R. G.; Hayduk, R. J.; Carden, H. D.
1981-01-01
The program involves controlled full scale crash testing, nonlinear structural analyses to predict large deflection elastoplastic response, and load attenuating concepts for use in improved seat and subfloor structure. Both analytical and experimental methods are used to develop expertise in these areas. Analyses include simplified procedures for estimating energy dissipating capabilities and comprehensive computerized procedures for predicting airframe response. These analyses are developed to provide designers with methods for predicting accelerations, loads, and displacements on collapsing structure. Tests on typical full scale aircraft and on full and subscale structural components are performed to verify the analyses and to demonstrate load attenuating concepts. A special apparatus was built to test emergency locator transmitters when attached to representative aircraft structure. The apparatus is shown to provide a good simulation of the longitudinal crash pulse observed in full scale aircraft crash tests.
Combined rule extraction and feature elimination in supervised classification.
Liu, Sheng; Patel, Ronak Y; Daga, Pankaj R; Liu, Haining; Fu, Gang; Doerksen, Robert J; Chen, Yixin; Wilkins, Dawn E
2012-09-01
There are a vast number of biology-related research problems involving a combination of multiple sources of data to achieve a better understanding of the underlying problems. It is important to select and interpret the most important information from these sources. Thus it will be beneficial to have a good algorithm to simultaneously extract rules and select features for better interpretation of the predictive model. We propose an efficient algorithm, Combined Rule Extraction and Feature Elimination (CRF), based on 1-norm regularized random forests. CRF simultaneously extracts a small number of rules generated by random forests and selects important features. We applied CRF to several drug activity prediction and microarray data sets. CRF is capable of producing performance comparable with state-of-the-art prediction algorithms using a small number of decision rules. Some of the decision rules are biologically significant.
La Delfa, Nicholas J; Potvin, Jim R
2017-03-01
This paper describes the development of a novel method (termed the 'Arm Force Field' or 'AFF') to predict manual arm strength (MAS) for a wide range of body orientations, hand locations and any force direction. This method used an artificial neural network (ANN) to predict the effects of hand location and force direction on MAS, and included a method to estimate the contribution of the arm's weight to the predicted strength. The AFF method predicted the MAS values very well (r² = 0.97, RMSD = 5.2 N, n = 456) and maintained good generalizability with external test data (r² = 0.842, RMSD = 13.1 N, n = 80). The AFF can be readily integrated within any DHM ergonomics software, and appears to be a more robust, reliable and valid method of estimating the strength capabilities of the arm, when compared to current approaches. Copyright © 2016 Elsevier Ltd. All rights reserved.
Wang, Dan; Wang, Qingtang; Zhang, Zhuomin; Chen, Guonan
2012-01-21
ZnO nanorod array coating is a novel kind of solid-phase microextraction (SPME) fiber coating which shows good extraction capability due to its nanostructure. Preparing a composite coating is a good way to improve the extraction capability. In this paper, a ZnO nanorod array-polydimethylsiloxane (PDMS) composite SPME fiber coating has been prepared and its extraction capability for volatile organic compounds (VOCs) has been studied by headspace sampling of a typical volatile mixed standard solution of benzene, toluene, ethylbenzene and xylene (BTEX). An improved detection limit and good linear ranges have been achieved for this composite SPME fiber coating. It is also found that the composite SPME fiber coating shows good extraction selectivity towards VOCs with alkane radicals.
The Integrated Design Plan for the Coastal Module of GOOS
NASA Astrophysics Data System (ADS)
Malone, T.; Knap, T.
2003-04-01
Changes in the ocean-climate system and land-use in coastal drainage basins are major drivers of change in coastal waters of the U.S. EEZ. Consequent changes in physical, biological, chemical and geological processes affect public health and well-being, the health of coastal marine ecosystems, and the sustainability of living marine resources. Such changes are related through a hierarchy of interactions that can be represented by robust models of ecosystem dynamics. These observations make a compelling case for an ecosystem-based approach to the management of living marine resources, environmental protection, coastal zone management, and coastal engineering, especially in coastal systems where erosion, flooding, habitat alterations, over-fishing, water pollution, fish kills, invasions of non-indigenous species, and harmful algal blooms are collectively most severe. Implementing an ecosystem-based strategy requires the capability to engage in adaptive management, a decision-making activity that depends on the ability to (1) routinely and rapidly detect changes in the environment and living resources and to (2) provide timely predictions of changes in or the occurrence of the phenomena that affect the capacity of ecosystems to provide goods and services (from surface currents and coastal flooding to habitat modification and the loss of biodiversity). We do not have this capability today. Effective management and sustainable use also depend on efficient and timely coupling of the processes by which new scientific knowledge is gained and the fruits of this knowledge are used for the public good. Today, there is an unacceptable disconnect between these processes. A new approach to detecting and predicting environmental changes is needed that enables adaptive management through routine, continuous and rapid provision of data and information.
The Coastal Module of the Global Ocean Observing System is being developed as part of the Integrated Global Observing Strategy to address this challenge. The observing system is not only needed to provide data and information on the time scales that environmental decisions should be made, it is needed to facilitate environmental research and to protect the integrity of the scientific method upon which the development of predictive capabilities depend.
Generalized Predictive and Neural Generalized Predictive Control of Aerospace Systems
NASA Technical Reports Server (NTRS)
Kelkar, Atul G.
2000-01-01
The research work presented in this thesis addresses the problem of robust control of uncertain linear and nonlinear systems using Neural network-based Generalized Predictive Control (NGPC) methodology. A brief overview of predictive control and its comparison with Linear Quadratic (LQ) control is given to emphasize advantages and drawbacks of predictive control methods. It is shown that the Generalized Predictive Control (GPC) methodology overcomes the drawbacks associated with traditional LQ control as well as conventional predictive control methods. It is shown that in spite of the model-based nature of GPC it has good robustness properties, being a special case of receding horizon control. The conditions for choosing tuning parameters for GPC to ensure closed-loop stability are derived. A neural network-based GPC architecture is proposed for the control of linear and nonlinear uncertain systems. A methodology to account for parametric uncertainty in the system is proposed using the on-line training capability of a multi-layer neural network. Several simulation examples and results from real-time experiments are given to demonstrate the effectiveness of the proposed methodology.
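The receding-horizon idea behind GPC can be shown in the simplest scalar case with a one-step prediction horizon, where the optimal control follows in closed form. A toy sketch, not the thesis' NGPC formulation; the model and weights are made up:

```python
def gpc_one_step(a, b, lam, y, r):
    """One-step GPC law for the scalar model y[k+1] = a*y[k] + b*u[k],
    minimising J = (y[k+1] - r)^2 + lam*u^2. Setting dJ/du = 0:
        2*b*(a*y + b*u - r) + 2*lam*u = 0  =>  u = b*(r - a*y)/(b*b + lam)"""
    return b * (r - a * y) / (b * b + lam)

def simulate(a, b, lam, y0, r, steps):
    # receding horizon: re-solve the one-step problem at every sample
    y = y0
    for _ in range(steps):
        y = a * y + b * gpc_one_step(a, b, lam, y, r)
    return y

# the control penalty lam leaves a small steady-state offset below r
y_final = simulate(a=0.9, b=1.0, lam=0.1, y0=0.0, r=1.0, steps=50)
```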
Sun, Lili; Zhou, Liping; Yu, Yu; Lan, Yukun; Li, Zhiliang
2007-01-01
Polychlorinated diphenyl ethers (PCDEs) have received increasing concern as a group of ubiquitous potential persistent organic pollutants (POPs). By using the molecular electronegativity distance vector (MEDV-4), multiple linear regression (MLR) models are developed for sub-cooled liquid vapor pressures (P(L)), n-octanol/water partition coefficients (K(OW)) and sub-cooled liquid water solubilities (S(W,L)) of 209 PCDEs and diphenyl ether. The correlation coefficients (R) and the leave-one-out cross-validation (LOO) correlation coefficients (R(CV)) of all the 6-descriptor models for logP(L), logK(OW) and logS(W,L) are more than 0.98. By using stepwise multiple regression (SMR), the descriptors are selected and the resulting models are a 5-descriptor model for logP(L), a 4-descriptor model for logK(OW), and a 6-descriptor model for logS(W,L), respectively. All these models exhibit excellent estimation capabilities for the internal sample set and good predictive capabilities for the external sample set. The consistency between observed and estimated/predicted values for logP(L) is the best (R=0.996, R(CV)=0.996), followed by logK(OW) (R=0.992, R(CV)=0.992) and logS(W,L) (R=0.983, R(CV)=0.980). By using MEDV-4 descriptors, the QSPR models can be used for prediction, and the model predictions can hence extend the current database of experimental values.
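The leave-one-out statistic behind R(CV) can be illustrated for a one-descriptor linear model (the actual MEDV-4 models use up to six descriptors; the data below are synthetic):

```python
def loo_q2(xs, ys):
    """Leave-one-out cross-validated R^2 (often written Q^2) for a
    one-descriptor linear model: refit on n-1 points, predict the
    held-out point, then Q^2 = 1 - PRESS / SS_total."""
    n = len(xs)
    preds = []
    for i in range(n):
        xt = [x for j, x in enumerate(xs) if j != i]
        yt = [v for j, v in enumerate(ys) if j != i]
        m = len(xt)
        xbar, ybar = sum(xt) / m, sum(yt) / m
        slope = (sum((x - xbar) * (v - ybar) for x, v in zip(xt, yt))
                 / sum((x - xbar) ** 2 for x in xt))
        preds.append(ybar + slope * (xs[i] - xbar))
    ybar_all = sum(ys) / n
    press = sum((p - v) ** 2 for p, v in zip(preds, ys))
    ss = sum((v - ybar_all) ** 2 for v in ys)
    return 1.0 - press / ss

# noise-free linear data: every held-out point is predicted exactly
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
q2 = loo_q2(xs, [2.0 * x + 1.0 for x in xs])
```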
Universities, Professional Capabilities and Contributions to the Public Good in South Africa
ERIC Educational Resources Information Center
Walker, Melanie
2012-01-01
The generation of a public-good, capabilities-based approach to professional education in South African universities is outlined and proposed as a contribution to wider social transformation. The relevance and importance of understanding what Amartya Sen describes as "capability failure" in the lives of people living in poverty is…
NASA Astrophysics Data System (ADS)
Schmit, C. J.; Pritchard, J. R.
2018-03-01
Next generation radio experiments such as LOFAR, HERA, and SKA are expected to probe the Epoch of Reionization (EoR) and claim a first direct detection of the cosmic 21cm signal within the next decade. Data volumes will be enormous and can thus potentially revolutionize our understanding of the early Universe and galaxy formation. However, numerical modelling of the EoR can be prohibitively expensive for Bayesian parameter inference, and how to optimally extract information from incoming data is currently unclear. Emulation techniques for fast model evaluations have recently been proposed as a way to bypass costly simulations. We consider the use of artificial neural networks as a blind emulation technique. We study the impact of training duration and training set size on the quality of the network prediction and the resulting best-fitting values of a parameter search. A direct comparison is drawn between our emulation technique and an equivalent analysis using 21CMMC. We find good predictive capabilities of our network using training sets of as few as 100 model evaluations, which is within the capabilities of fully numerical radiative transfer codes.
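The emulation idea — train a cheap surrogate on a modest number of expensive model evaluations, then query the surrogate during inference — can be sketched with a simple interpolator standing in for the neural network. The "expensive model" here is an arbitrary analytic placeholder, not a 21cm code:

```python
import bisect
import math

def expensive_model(theta):
    # stand-in for a costly simulation (an arbitrary analytic placeholder)
    return math.sin(3.0 * theta) * math.exp(-theta)

def train_emulator(n=100, lo=0.0, hi=2.0):
    """Build a cheap surrogate from n "simulations": here a piecewise-linear
    interpolator rather than an ANN, to keep the sketch dependency-free."""
    xs = [lo + i * (hi - lo) / (n - 1) for i in range(n)]
    ys = [expensive_model(x) for x in xs]
    def emulate(theta):
        j = min(max(bisect.bisect_left(xs, theta), 1), n - 1)
        t = (theta - xs[j - 1]) / (xs[j] - xs[j - 1])
        return ys[j - 1] + t * (ys[j] - ys[j - 1])
    return emulate

# 100 training evaluations give a surrogate accurate across the prior range
emulate = train_emulator()
worst = max(abs(emulate(p) - expensive_model(p))
            for p in (0.013, 0.5, 1.234, 1.99))
```

An actual EoR emulator replaces the interpolator with a trained network over a multi-dimensional parameter space, but the workflow — sample, simulate, fit, then query cheaply inside the sampler — is the same.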
Use of Air Quality Observations by the National Air Quality Forecast Capability
NASA Astrophysics Data System (ADS)
Stajner, I.; McQueen, J.; Lee, P.; Stein, A. F.; Kondragunta, S.; Ruminski, M.; Tong, D.; Pan, L.; Huang, J. P.; Shafran, P.; Huang, H. C.; Dickerson, P.; Upadhayay, S.
2015-12-01
The National Air Quality Forecast Capability (NAQFC) operational predictions of ozone and wildfire smoke for the United States (U.S.) and predictions of airborne dust for continental U.S. are available at http://airquality.weather.gov/. NOAA National Centers for Environmental Prediction (NCEP) operational North American Mesoscale (NAM) weather predictions are combined with the Community Multiscale Air Quality (CMAQ) model to produce the ozone predictions and test fine particulate matter (PM2.5) predictions. The Hybrid Single Particle Lagrangian Integrated Trajectory (HYSPLIT) model provides smoke and dust predictions. Air quality observations constrain emissions used by NAQFC predictions. NAQFC NOx emissions from mobile sources were updated using National Emissions Inventory (NEI) projections for year 2012. These updates were evaluated over large U.S. cities by comparing observed changes in OMI NO2 observations and NOx measured by surface monitors. The rate of decrease in NOx emission projections from year 2005 to year 2012 is in good agreement with the observed changes over the same period. Smoke emissions rely on the fire locations detected from satellite observations obtained from NESDIS Hazard Mapping System (HMS). Dust emissions rely on a climatology of areas with a potential for dust emissions based on MODIS Deep Blue aerosol retrievals. Verification of NAQFC predictions uses AIRNow compilation of surface measurements for ozone and PM2.5. Retrievals of smoke from GOES satellites are used for verification of smoke predictions. Retrievals of dust from MODIS are used for verification of dust predictions. In summary, observations are the basis for the emissions inputs for NAQFC, they are critical for evaluation of performance of NAQFC predictions, and furthermore they are used in real-time testing of bias correction of PM2.5 predictions, as we continue to work on improving modeling and emissions important for representation of PM2.5.
Temperature coefficients and radiation induced DLTS spectra of MOCVD grown n(+)p InP solar cells
NASA Technical Reports Server (NTRS)
Walters, Robert J.; Statler, Richard L.; Summers, Geoffrey P.
1991-01-01
The effects of temperature and radiation on n(+)p InP solar cells and mesa diodes grown by metallorganic chemical vapor deposition (MOCVD) were studied. It was shown that MOCVD is capable of consistently producing good quality InP solar cells with Eff greater than 19 percent which display excellent radiation resistance due to minority carrier injection and thermal annealing. It was also shown that universal predictions of InP device performance based on measurements of a small group of test samples can be expected to be quite accurate, and that the degradation of an InP device due to any incident particle spectrum should be predictable from a measurement following a single low energy proton irradiation.
A model of the human observer and decision maker
NASA Technical Reports Server (NTRS)
Wewerinke, P. H.
1981-01-01
The decision process is described in terms of classical sequential decision theory by considering the hypothesis that an abnormal condition has occurred by means of a generalized likelihood ratio test. For this, a sufficient statistic is provided by the innovation sequence, which is the result of the perception and information processing submodel of the human observer. On the basis of only two model parameters, the model predicts the decision speed/accuracy trade-off and various attentional characteristics. A preliminary test of the model for single variable failure detection tasks resulted in a very good fit of the experimental data. In a formal validation program, a variety of multivariable failure detection tasks was investigated and the predictive capability of the model was demonstrated.
Huang, Hsin-Chung; Yang, Hwai-I; Chang, Yu-Hsun; Chang, Rui-Jane; Chen, Mei-Huei; Chen, Chien-Yi; Chou, Hung-Chieh; Hsieh, Wu-Shiun; Tsao, Po-Nien
2012-12-01
The aim of this study was to identify high-risk newborns who will subsequently develop significant hyperbilirubinemia during Days 4 to 10 of life by using the clinical data from the first three days of life. We retrospectively collected data on exclusively breastfed healthy term and near-term newborns born in our nursery between May 1, 2002 and June 30, 2005. Clinical data, including serum bilirubin, were collected and the significant predictors were identified. A bilirubin level ≥15 mg/dL during Days 4 to 10 of life was defined as significant hyperbilirubinemia. A prediction model to predict subsequent hyperbilirubinemia was established. This model was externally validated in another group of newborns, enrolled by the same criteria, to test its discrimination capability. In total, 1979 neonates were collected and 1208 cases were excluded by our exclusion criteria. Finally, 771 newborns were enrolled and 182 (23.6%) cases developed significant hyperbilirubinemia during Days 4 to 10 of life. In the logistic regression analysis, gestational age, maximal body weight loss percentage, and peak bilirubin level during the first 72 hours of life were significantly associated with subsequent hyperbilirubinemia. A prediction model was derived with an area under the receiver operating characteristic (AUROC) curve of 0.788. Model validation in a separate cohort (N = 209) showed similar discrimination capability (AUROC = 0.8340). Gestational age, maximal body weight loss percentage, and peak serum bilirubin level during the first 3 days of life have the highest predictive value for subsequent significant hyperbilirubinemia. We provide a good model to predict the risk of subsequent significant hyperbilirubinemia. Copyright © 2012. Published by Elsevier B.V.
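A risk model of this kind can be sketched as a plain logistic regression fitted by gradient descent; the single feature, the data and the decision threshold below are made up, not the study's cohort:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    # plain stochastic-gradient logistic regression (bias + one weight per feature)
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = p - yi
            w[0] -= lr * err
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * err * xj
    return w

def predict_risk(w, x):
    # predicted probability of the adverse outcome for one case
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], x)))

# toy cohort: one made-up feature (e.g. a rescaled peak bilirubin value)
X = [[0.0], [1.0], [2.0], [3.0]]
y = [0, 0, 1, 1]
w = fit_logistic(X, y)
low_risk, high_risk = predict_risk(w, [0.0]), predict_risk(w, [3.0])
```

In practice one would fit all three predictors (gestational age, weight loss percentage, peak bilirubin) jointly, as the study did, and evaluate discrimination with an AUROC on a held-out validation cohort.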
Searching for Dark Matter Signatures in the GLAST LAT Electron Flux
NASA Technical Reports Server (NTRS)
Moiseev, Alexander; Profumo, Stefano
2008-01-01
We explored several viable scenarios in which the LAT might observe dark matter (DM), where a spectral feature is predicted in the high-energy (HE) electron flux. It has been demonstrated elsewhere that the LAT will be capable of detecting the HE electron flux in the energy range from 20 GeV to ~1 TeV with 5-20% energy resolution and good statistics. If there is a DM-induced feature in the HE electron flux in this range, the LAT will be the best current instrument to observe it.
Prediction of high temperature metal matrix composite ply properties
NASA Technical Reports Server (NTRS)
Caruso, J. J.; Chamis, C. C.
1988-01-01
The application of the finite element method (superelement technique) in conjunction with basic concepts from mechanics of materials theory is demonstrated to predict the thermomechanical behavior of high temperature metal matrix composites (HTMMC). The simulated behavior is used as a basis to establish characteristic properties of a unidirectional composite idealized as an equivalent homogeneous material. The ply properties predicted include thermal properties (thermal conductivities and thermal expansion coefficients) and mechanical properties (moduli and Poisson's ratios). These properties are compared with those predicted by a simplified, analytical composite micromechanics model. The predictive capabilities of the finite element method and the simplified model are illustrated through the simulation of the thermomechanical behavior of a P100-graphite/copper unidirectional composite at room temperature and near the matrix melting temperature. The advantage of the finite element approach is its ability to represent the composite local geometry more precisely and hence capture the subtle effects that depend on it. The closed-form micromechanics model does a good job of representing the average behavior of the constituents to predict composite behavior.
Low Cost Gas Turbine Off-Design Prediction Technique
NASA Astrophysics Data System (ADS)
Martinjako, Jeremy
This thesis seeks to further explore off-design point operation of gas turbines and to examine the capabilities of GasTurb 12 as a tool for off-design analysis. It is a continuation of previous thesis work that initially explored the capabilities of GasTurb 12. The research is conducted in order to: 1) validate GasTurb 12 and 2) predict off-design performance of the Garrett GTCP85-98D located at the Arizona State University Tempe campus. GasTurb 12 is validated as an off-design point tool by using the program to predict the performance of an LM2500+ marine gas turbine. Haglind and Elmegaard (2009) published a paper detailing a second off-design point method, which includes the manufacturer's off-design point data for the LM2500+. GasTurb 12 is used to predict off-design point performance of the LM2500+, and the predictions show good correlation with the manufacturer's data. Garrett has published specification data for the GTCP85-98D. This specification data is analyzed to determine the design point and to comment on off-design trends. Arizona State University GTCP85-98D off-design experimental data is evaluated; trends presented in the data are commented on and explained, and they match the expected behavior demonstrated in the specification data for the same gas turbine system. It was originally intended that a model of the GTCP85-98D be constructed in GasTurb 12 and used to predict off-design performance, for comparison with the collected experimental data. This is not possible because the free version of GasTurb 12 used in this research does not have a module to model a single-spool turboshaft; this module must be purchased for such an analysis.
The System of Inventory Forecasting in PT. XYZ by using the Method of Holt Winter Multiplicative
NASA Astrophysics Data System (ADS)
Shaleh, W.; Rasim; Wahyudin
2018-01-01
PT. XYZ currently relies only on manual bookkeeping to predict sales and inventory of goods. If the inventory prediction is too large, production costs swell and invested capital is used less efficiently. Vice versa, if the inventory prediction is too small, it impacts consumers, who are forced to wait for the desired product. In this era of globalization, computer technology has become a very important part of every business plan: almost all companies, both large and small, use it, and by doing so can save time in solving complex business problems. For companies, computer technology has become indispensable for enhancing the business services they manage, but systems and technologies are not limited to the distribution model and data processing; the existing system must also be able to analyze the possibilities of future company capabilities. Therefore, the company must be able to forecast conditions and circumstances, whether inventory of goods, workforce, or profits to be obtained. To this end, total sales data from December 2014 to December 2016 were processed using the Holt-Winters multiplicative method, a time series prediction method (multiplicative seasonal method) for seasonal data exhibiting both increases and decreases, which consists of four equations: level (single) smoothing, trend smoothing, seasonal smoothing, and forecasting. From the results of the research conducted, the error value in the form of MAPE is below 1%, so it can be concluded that forecasting with the Holt-Winters multiplicative method is highly accurate.
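The four Holt-Winters multiplicative equations mentioned above (level, trend, seasonal smoothing and forecasting) can be sketched as follows; the smoothing constants and the initialisation scheme are illustrative choices, not those of the paper.

```python
import numpy as np

def holt_winters_multiplicative(y, m, alpha=0.3, beta=0.1, gamma=0.2, horizon=12):
    """Holt-Winters multiplicative smoothing: the level, trend and
    seasonal smoothing equations plus the forecasting equation.
    m is the season length (e.g. 12 for monthly sales data)."""
    y = np.asarray(y, dtype=float)
    # crude initialisation from the first two seasons
    level = y[:m].mean()
    trend = (y[m:2 * m].mean() - y[:m].mean()) / m
    season = list(y[:m] / level)
    for t in range(m, len(y)):
        s = season[t - m]                                         # factor from one season ago
        last_level = level
        level = alpha * y[t] / s + (1 - alpha) * (level + trend)  # level (single) smoothing
        trend = beta * (level - last_level) + (1 - beta) * trend  # trend smoothing
        season.append(gamma * y[t] / level + (1 - gamma) * s)     # seasonal smoothing
    # forecasting equation: extrapolated trend times the latest seasonal factor
    return np.array([(level + h * trend) * season[len(y) - m + (h - 1) % m]
                     for h in range(1, horizon + 1)])
```

On a clean trending series with a 12-period multiplicative seasonal pattern, the 12-step-ahead forecasts track the true continuation to within a few percent, which is consistent with the low MAPE the abstract reports.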
Survival Regression Modeling Strategies in CVD Prediction.
Barkhordari, Mahnaz; Padyab, Mojgan; Sardarinia, Mahsa; Hadaegh, Farzad; Azizi, Fereidoun; Bozorgmanesh, Mohammadreza
2016-04-01
A fundamental part of prevention is prediction. Potential predictors are the sine qua non of prediction models. However, whether incorporating novel predictors into prediction models can be directly translated into added predictive value remains an area of dispute. The difference between the predictive power of a predictive model with (enhanced model) and without (baseline model) a certain predictor is generally regarded as an indicator of the predictive value added by that predictor. Indices such as discrimination and calibration have long been used in this regard. Recently, the use of added predictive value has been suggested when comparing the predictive performances of predictive models with and without novel biomarkers. User-friendly statistical software capable of implementing novel statistical procedures is conspicuously lacking, and this shortcoming has restricted implementation of such novel model assessment methods. We aimed to construct Stata commands to help researchers obtain the following statistical indices: (1) the Nam-D'Agostino χ2 goodness-of-fit test; (2) cut-point-free and cut-point-based net reclassification improvement index (NRI), relative and absolute integrated discrimination improvement index (IDI), and survival-based regression analyses. We applied the commands to real data on women participating in the Tehran lipid and glucose study (TLGS) to examine whether information relating to a family history of premature cardiovascular disease (CVD), waist circumference, and fasting plasma glucose can improve the predictive performance of Framingham's general CVD risk algorithm. The command is adpredsurv for survival models. Herein we have described the Stata package "adpredsurv" for calculation of the Nam-D'Agostino χ2 goodness-of-fit test as well as cut-point-free and cut-point-based NRI, relative and absolute IDI, and survival-based regression analyses.
We hope this work encourages the use of novel methods in examining predictive capacity of the emerging plethora of novel biomarkers.
Benchmarking hydrological model predictive capability for UK River flows and flood peaks.
NASA Astrophysics Data System (ADS)
Lane, Rosanna; Coxon, Gemma; Freer, Jim; Wagener, Thorsten
2017-04-01
Data and hydrological models are now available for national hydrological analyses. However, hydrological model performance varies between catchments, and lumped, conceptual models are not able to produce adequate simulations everywhere. This study aims to benchmark hydrological model performance for catchments across the United Kingdom within an uncertainty analysis framework. We have applied four hydrological models from the FUSE framework to 1128 catchments across the UK. These models are all lumped models run at a daily timestep, but they differ in structural architecture and process parameterisations, therefore producing different but equally plausible simulations. We apply FUSE over a 20-year period (1988-2008) within a GLUE Monte Carlo uncertainty analysis framework. Model performance was evaluated for each catchment, model structure and parameter set using standard performance metrics, calculated both for the whole time series and seasonally to assess seasonal differences in model performance. The GLUE uncertainty analysis framework was then applied to produce simulated 5th and 95th percentile uncertainty bounds for the daily flow time series and, additionally, annual maximum prediction bounds for each catchment. The results show that model performance varies significantly in space and time depending on catchment characteristics including climate, geology and human impact. We identify regions where models are systematically failing to produce good results, and present reasons why this could be the case. We also identify regions or catchment characteristics where one model performs better than others, and have explored which structural component or parameterisation enables certain models to produce better simulations in these catchments. Model predictive capability was assessed for each catchment by examining the ability of the models to produce discharge prediction bounds that successfully bound the observed discharge. 
These results improve our understanding of the predictive capability of simple conceptual hydrological models across the UK and help us to identify where further effort is needed to develop modelling approaches to better represent different catchment and climate typologies.
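A minimal sketch of the GLUE step described above — 5th/95th percentile bounds from a likelihood-weighted behavioural ensemble — assuming a simple likelihood threshold for behavioural runs and synthetic simulations:

```python
import numpy as np

def glue_bounds(simulations, likelihoods, lower=5.0, upper=95.0, threshold=0.0):
    """GLUE-style prediction bounds: discard non-behavioural runs
    (likelihood <= threshold), then take likelihood-weighted quantiles
    of the simulated flows at each timestep."""
    sims = np.asarray(simulations, float)   # shape (n_runs, n_timesteps)
    like = np.asarray(likelihoods, float)
    keep = like > threshold
    sims, like = sims[keep], like[keep]
    w = like / like.sum()                   # normalised likelihood weights
    lo = np.empty(sims.shape[1])
    hi = np.empty(sims.shape[1])
    for t in range(sims.shape[1]):
        order = np.argsort(sims[:, t])
        cdf = np.cumsum(w[order])           # weighted empirical CDF of the ensemble
        for q, out in ((lower, lo), (upper, hi)):
            idx = min(np.searchsorted(cdf, q / 100.0), len(cdf) - 1)
            out[t] = sims[order, t][idx]
    return lo, hi
```

In the study's workflow, the predictive capability of each model would then be judged by the fraction of observed discharge values falling inside these bounds.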
Computational modeling of in vitro biological responses on polymethacrylate surfaces
Ghosh, Jayeeta; Lewitus, Dan Y; Chandra, Prafulla; Joy, Abraham; Bushman, Jared; Knight, Doyle; Kohn, Joachim
2011-01-01
The objective of this research was to examine the capabilities of QSPR (Quantitative Structure Property Relationship) modeling to predict specific biological responses (fibrinogen adsorption, cell attachment and cell proliferation index) on thin films of different polymethacrylates. Using 33 commercially available monomers it is theoretically possible to construct a library of over 40,000 distinct polymer compositions. A subset of these polymers was synthesized, and solvent-cast surfaces were prepared in 96-well plates for the measurement of fibrinogen adsorption. NIH 3T3 cell attachment and proliferation index were measured on spin-coated thin films of these polymers. Based on the experimental results, separate models were built for homo-, co-, and terpolymers in the library, with good correlation between experimental and predicted values. The ability to predict biological responses by simple QSPR models for large numbers of polymers has important implications in designing biomaterials for specific biological or medical applications. PMID:21779132
Gaonkar, Narayan; Vaidya, R G
2016-05-01
A simple method to estimate the density of a biodiesel blend as a simultaneous function of temperature and volume percent of biodiesel is proposed. Employing Kay's mixing rule, we developed a model and theoretically investigated the density of different vegetable oil biodiesel blends as a simultaneous function of temperature and volume percent of biodiesel. A key advantage of the proposed model is that it requires only a single set of density values for the components of the blend at any two different temperatures. We observe that the density of the blend decreases linearly with increasing temperature and increases with increasing volume percent of biodiesel. The low values of the standard estimate of error (SEE = 0.0003-0.0022) and absolute average deviation (AAD = 0.03-0.15%) obtained using the proposed model indicate its good predictive capability. The predicted values are in good agreement with recently available experimental data.
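The proposed model — Kay's mixing rule with each component's density linear in temperature, fixed by densities at two temperatures — can be sketched as below. The reference densities used in the example are illustrative values, not the paper's data.

```python
def component_density(T, T1, rho1, T2, rho2):
    """Linear density-temperature relation fixed by the component's
    density at two reference temperatures (the single data set the
    model requires)."""
    slope = (rho2 - rho1) / (T2 - T1)
    return rho1 + slope * (T - T1)

def blend_density(T, v_bio, bio_ref, diesel_ref):
    """Kay's mixing rule: volume-fraction-weighted sum of the component
    densities. bio_ref and diesel_ref are (T1, rho1, T2, rho2) tuples."""
    rho_b = component_density(T, *bio_ref)
    rho_d = component_density(T, *diesel_ref)
    return v_bio * rho_b + (1.0 - v_bio) * rho_d
```

By construction the blend density decreases linearly with temperature and increases with the biodiesel volume fraction, matching the trends reported in the abstract.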
Development of a Benchmark Example for Delamination Fatigue Growth Prediction
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2010-01-01
The development of a benchmark example for cyclic delamination growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of a Double Cantilever Beam (DCB) specimen, which is independent of the analysis software used and allows the assessment of the delamination growth prediction capabilities in commercial finite element codes. First, the benchmark result was created for the specimen. Second, starting from an initially straight front, the delamination was allowed to grow under cyclic loading in a finite element model of a commercial code. The number of cycles to delamination onset and the number of cycles during stable delamination growth for each growth increment were obtained from the analysis. In general, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. Overall, the results are encouraging, but further assessment for mixed-mode delamination is required.
Multiscale Fatigue Life Prediction for Composite Panels
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Yarrington, Phillip W.; Arnold, Steven M.
2012-01-01
Fatigue life prediction capabilities have been incorporated into the HyperSizer Composite Analysis and Structural Sizing Software. The fatigue damage model is introduced at the fiber/matrix constituent scale through HyperSizer's coupling with NASA's MAC/GMC micromechanics software. This enables prediction of micro-scale damage progression throughout stiffened and sandwich panels as a function of cycles, leading ultimately to simulated panel failure. The fatigue model implementation uses a cycle-jumping technique: rather than applying a specified number of additional cycles, a local damage increment is specified and the number of additional cycles required to reach this increment is calculated. In this way, the effect of stress redistribution due to damage-induced stiffness change is captured, but the fatigue simulations remain computationally efficient. The model is compared to experimental fatigue life data for two composite facesheet/foam core sandwich panels, demonstrating very good agreement.
Static and fatigue testing of full-scale fuselage panels fabricated using a Therm-X(R) process
NASA Technical Reports Server (NTRS)
Dinicola, Albert J.; Kassapoglou, Christos; Chou, Jack C.
1992-01-01
Large, curved, integrally stiffened composite panels representative of an aircraft fuselage structure were fabricated using a Therm-X process, an alternative concept to conventional two-sided hard tooling and contour vacuum bagging. Panels subsequently were tested under pure shear loading in both static and fatigue regimes to assess the adequacy of the manufacturing process, the effectiveness of damage tolerant design features co-cured with the structure, and the accuracy of finite element and closed-form predictions of postbuckling capability and failure load. Test results indicated the process yielded panels of high quality and increased damage tolerance through suppression of common failure modes such as skin-stiffener separation and frame-stiffener corner failure. Finite element analyses generally produced good predictions of postbuckled shape, and a global-local modelling technique yielded failure load predictions that were within 7% of the experimental mean.
Empirical Evaluation of Hunk Metrics as Bug Predictors
NASA Astrophysics Data System (ADS)
Ferzund, Javed; Ahsan, Syed Nadeem; Wotawa, Franz
Reducing the number of bugs is a crucial issue during software development and maintenance. Software process and product metrics are good indicators of software complexity. These metrics have been used to build bug predictor models to help developers maintain the quality of software. In this paper we empirically evaluate the use of hunk metrics as predictors of bugs. We present a technique for bug prediction that works at the smallest units of code change, called hunks. We build bug prediction models using random forests, an efficient machine learning classifier. Hunk metrics are used to train the classifier and each hunk metric is evaluated for its bug prediction capabilities. Our classifier can classify individual hunks as buggy or bug-free with 86% accuracy, 83% buggy-hunk precision and 77% buggy-hunk recall. We find that history-based and change-level hunk metrics are better predictors of bugs than code-level hunk metrics.
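A sketch of the setup described above — a random forest trained on hunk metrics and scored by accuracy, precision and recall — on synthetic data. The three metrics and the labelling rule are hypothetical stand-ins for the paper's feature set, not its actual data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
# three illustrative hunk metrics:
# lines added, lines deleted, prior bug fixes touching the same file (history metric)
X = np.column_stack([rng.poisson(8, n),
                     rng.poisson(4, n),
                     rng.poisson(1, n)]).astype(float)
# synthetic ground truth: larger, history-laden hunks are more often buggy
p = 1.0 / (1.0 + np.exp(-(0.15 * X[:, 0] + 0.1 * X[:, 1] + 0.8 * X[:, 2] - 3.0)))
y = (rng.uniform(size=n) < p).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
acc = accuracy_score(y_te, pred)
prec = precision_score(y_te, pred)
rec = recall_score(y_te, pred)
```

Per-metric importance, as evaluated in the paper, could then be read off `clf.feature_importances_` or measured by retraining on single metrics.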
NASA Astrophysics Data System (ADS)
Nascimento, Luis Alberto Herrmann do
This dissertation presents the implementation and validation of the viscoelastic continuum damage (VECD) model for asphalt mixture and pavement analysis in Brazil. It proposes a simulated damage-to-fatigue cracked area transfer function for the layered viscoelastic continuum damage (LVECD) program framework and defines the model framework's fatigue cracking prediction error for asphalt pavement reliability-based design solutions in Brazil. The research is divided into three main steps: (i) implementation of the simplified viscoelastic continuum damage (S-VECD) model in Brazil (Petrobras) for asphalt mixture characterization; (ii) validation of the LVECD model approach for pavement analysis based on field performance observations, and definition of a local simulated damage-to-cracked area transfer function for the Fundao Project's pavement test sections in Rio de Janeiro, RJ; and (iii) validation of the Fundao project local transfer function for use throughout Brazil for asphalt pavement fatigue cracking predictions, based on field performance observations of the National MEPDG Project's pavement test sections, thereby validating the proposed framework's prediction capability. For the first step, the S-VECD test protocol, which uses a controlled-on-specimen strain mode of loading, was successfully implemented at Petrobras and used to characterize Brazilian asphalt mixtures that are composed of a wide range of asphalt binders. This research verified that the S-VECD model coupled with the GR failure criterion is accurate for fatigue life predictions of Brazilian asphalt mixtures, even when very different asphalt binders are used. Also, the applicability of the load amplitude sweep (LAS) test for the fatigue characterization of the asphalt binders was checked, and the effects of different asphalt binders on the fatigue damage properties of the asphalt mixtures were investigated. 
The LAS test results, modeled according to VECD theory, presented a strong correlation with the asphalt mixtures' fatigue performance. In the second step, the S-VECD test protocol was used to characterize the asphalt mixtures used in the 27 selected Fundao project test sections and subjected to real traffic loading. Thus, the asphalt mixture properties, pavement structure data, traffic loading, and climate were input into the LVECD program for pavement fatigue cracking performance simulations. The simulation results showed good agreement with the field-observed distresses. Then, a damage shift approach, based on the initial simulated damage growth rate, was introduced in order to obtain a unique relationship between the LVECD-simulated shifted damage and the pavement-observed fatigue cracked areas. This correlation was fitted to a power form function and defined as the averaged reduced damage-to-cracked area transfer function. The last step consisted of using the averaged reduced damage-to-cracked area transfer function that was developed in the Fundao project to predict pavement fatigue cracking in 17 National MEPDG project test sections. The procedures for the material characterization and pavement data gathering adopted in this step are similar to those used for the Fundao project simulations. This research verified that the transfer function defined for the Fundao project sections can be used for the fatigue performance predictions of a wide range of pavements all over Brazil, as the predicted and observed cracked areas for the National MEPDG pavements presented good agreement, following the same trends found for the Fundao project pavement sites. Based on the prediction errors determined for all 44 pavement test sections (Fundao and National MEPDG test sections), the proposed framework's prediction capability was determined so that reliability-based solutions can be applied for flexible pavement design. 
It was concluded that the proposed LVECD program framework has very good fatigue cracking prediction capability.
Test prediction for the German PKL Test K5A using RELAP4/MOD6
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Y.S.; Haigh, W.S.; Sullivan, L.H.
RELAP4/MOD6 is the most recent modification in the series of RELAP4 computer programs developed to describe the thermal-hydraulic conditions attendant to postulated transients in light water reactor systems. The major new features in RELAP4/MOD6 include best-estimate pressurized water reactor (PWR) reflood transient analytical models for core heat transfer, local entrainment, and core vapor superheat, and a new set of heat transfer correlations for PWR blowdown and reflood. These new features were used for a test prediction of the Kraftwerk Union three-loop PRIMÄRKREISLAUF (PKL) Reflood Test K5A. The results of the prediction were in good agreement with the experimental thermal and hydraulic system data. Comparisons include heater rod surface temperature, system pressure, mass flow rates, and core mixture level. It is concluded that RELAP4/MOD6 is capable of accurately predicting transient reflood phenomena in the 200% cold-leg break test configuration of the PKL reflood facility.
Computational and Experimental Study of Supersonic Nozzle Flow and Shock Interactions
NASA Technical Reports Server (NTRS)
Carter, Melissa B.; Elmiligui, Alaa A.; Nayani, Sudheer N.; Castner, Ray; Bruce, Walter E., IV; Inskeep, Jacob
2015-01-01
This study focused on the capability of the NASA Tetrahedral Unstructured Software System CFD code USM3D to predict the interaction between a shock and a supersonic plume flow. Previous studies, published in 2004, 2009 and 2013, investigated USM3D's supersonic plume flow results against historical experimental data. The current study builds on that research by applying the best practices from the earlier papers for properly capturing the plume flow and then adding a wedge acting as a shock generator. This computational study is in conjunction with experimental tests conducted at the Glenn Research Center 1'x1' Supersonic Wind Tunnel. The comparison of the computational and experimental data shows good agreement for the location and strength of the shocks, although there are vertical shifts between the data sets that may be due to the measurement technique.
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2012-01-01
The development of benchmark examples for quasi-static delamination propagation prediction is presented and demonstrated for a commercial code. The examples are based on finite element models of the Mixed-Mode Bending (MMB) specimen. The examples are independent of the analysis software used and allow the assessment of the automated delamination propagation prediction capability in commercial finite element codes based on the virtual crack closure technique (VCCT). First, quasi-static benchmark examples were created for the specimen. Second, starting from an initially straight front, the delamination was allowed to propagate under quasi-static loading. Third, the load-displacement relationship from a propagation analysis was compared with the benchmark results. Good agreement between the results obtained from the automated propagation analysis and the benchmark results could be achieved by selecting input parameters that had previously been determined during analyses of mode I Double Cantilever Beam and mode II End Notched Flexure specimens. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall the results are encouraging, but further assessment for mixed-mode delamination fatigue onset and growth is required.
Lessons Learned from the Wide Field Camera 3 Flight Correlation
NASA Technical Reports Server (NTRS)
Peabody, Hume L.; Stavely, Richard A.; Townsend, Jackie; Abel, Josh; Mandi, Joe; Bast, William
2010-01-01
The Wide Field Camera 3 (WFC3) instrument was installed into the Hubble Space Telescope (HST) as part of the activities for STS (Space Transportation System)-125 (HST Servicing Mission 4). Initial model predictions for power and radiator temperature were not in good agreement with flight data during a relatively hot, stable period, with the flight power and temperatures being significantly higher than predictions. Significant efforts were undertaken to identify the causes of the discrepancies and to resolve the flight model correlation problems, as the thermal vacuum test correlation had indicated good agreement. The WFC3 thermal design performance has proven difficult to predict accurately, since the power dissipation on the radiator typically increases as the radiator temperature increases, due to a Thermo Electric Cooler (TEC) attached to this radiator. This self-heating continues until the radiative emissive capability is met for a given temperature, and only then does the system find a quasi-steady regime. Various other factors may also contribute to the radiator temperature, such as backloading from the observatory itself and the planet, local high-absorptivity regions near fasteners/holes, and temperature-varying parasitic heat leaks from the instrument itself to the radiator. Each of these effects in turn may increase the radiator temperature, and thereby the demand on the TEC.
Maltarollo, Vinícius G; Homem-de-Mello, Paula; Honorio, Káthia M
2011-10-01
Current research on treatments for metabolic diseases involves a class of biological receptors called peroxisome proliferator-activated receptors (PPARs), which control the metabolism of carbohydrates and lipids. A subclass of these receptors, PPARδ, regulates several metabolic processes, and the substances that activate them are being studied as new drug candidates for the treatment of diabetes mellitus and metabolic syndrome. In this study, several PPARδ agonists with experimental biological activity were selected for a structural and chemical study. Electronic, stereochemical, lipophilic and topological descriptors were calculated for the selected compounds using various theoretical methods, such as density functional theory (DFT). Fisher's weight and principal components analysis (PCA) methods were employed to select the most relevant variables for this study. The partial least squares (PLS) method was used to construct the multivariate statistical model, and the best model obtained had 4 PCs, q² = 0.80 and r² = 0.90, indicating good internal consistency. The prediction residuals calculated for the compounds in the test set had low values, indicating the good predictive capability of our PLS model. The model obtained in this study is reliable and can be used to predict the biological activity of new untested compounds. Docking studies have also confirmed the importance of the molecular descriptors selected for this system.
Heterogeneity and Cooperation: The Role of Capability and Valuation on Public Goods Provision
Kolle, Felix
2018-01-01
We experimentally investigate the effects of two different sources of heterogeneity - capability and valuation - on the provision of public goods when punishment is possible or not. We find that compared to homogeneous groups, asymmetric valuations for the public good have negative effects on cooperation and its enforcement through informal sanctions. Asymmetric capabilities in providing the public good, in contrast, have a positive and stabilizing effect on voluntary contributions. The main reason for these results is the different externalities that contributions have on the other group members' payoffs, affecting individuals' willingness to cooperate. We thus provide evidence that it is not the asymmetric nature of groups per se that facilitates or impedes collective action, but rather the nature of the asymmetry that determines the degree of cooperation and the level of public good provision. PMID:29367794
CHEETAH: A fast thermochemical code for detonation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fried, L.E.
1993-11-01
For more than 20 years, TIGER has been the benchmark thermochemical code in the energetic materials community. TIGER has been widely used because it gives good detonation parameters in a very short period of time. Despite its success, TIGER is beginning to show its age. The program's chemical equilibrium solver frequently crashes, especially when dealing with many chemical species, and it often fails to find the C-J point. Finally, there are many inconveniences for the user stemming from the program's roots in pre-modern FORTRAN; these often lead to mistakes in preparing input files and thus erroneous results. We are producing a modern version of TIGER, which combines the best features of the old program with new capabilities, better computational algorithms, and improved packaging. The new code, which will evolve out of TIGER in the next few years, will be called "CHEETAH." Many of the capabilities that will be put into CHEETAH are inspired by the thermochemical code CHEQ. The new capabilities of CHEETAH are: calculation of trace levels of chemical compounds for environmental analysis; a kinetics capability, whereby CHEETAH will predict chemical compositions as a function of time given individual chemical reaction rates (initial application: carbon condensation); incorporation of partial reactions; and a basis in computer-optimized JCZ3 and BKW parameters, fit to over 20 years of data collected at LLNL. We will run CHEETAH thousands of times to determine the best possible parameter sets. CHEETAH will also fit C-J data to JWLs, and predict full-wall and half-wall cylinder velocities.
Wang, Shuangquan; Sun, Huiyong; Liu, Hui; Li, Dan; Li, Youyong; Hou, Tingjun
2016-08-01
Blockade of human ether-à-go-go related gene (hERG) channel by compounds may lead to drug-induced QT prolongation, arrhythmia, and Torsades de Pointes (TdP), and therefore reliable prediction of hERG liability in the early stages of drug design is quite important to reduce the risk of cardiotoxicity-related attritions in the later development stages. In this study, pharmacophore modeling and machine learning approaches were combined to construct classification models to distinguish hERG active from inactive compounds based on a diverse data set. First, an optimal ensemble of pharmacophore hypotheses that had good capability to differentiate hERG active from inactive compounds was identified by the recursive partitioning (RP) approach. Then, the naive Bayesian classification (NBC) and support vector machine (SVM) approaches were employed to construct classification models by integrating multiple important pharmacophore hypotheses. The integrated classification models showed improved predictive capability over any single pharmacophore hypothesis, suggesting that the broad binding polyspecificity of hERG can only be well characterized by multiple pharmacophores. The best SVM model achieved the prediction accuracies of 84.7% for the training set and 82.1% for the external test set. Notably, the accuracies for the hERG blockers and nonblockers in the test set reached 83.6% and 78.2%, respectively. Analysis of significant pharmacophores helps to understand the multimechanisms of action of hERG blockers. We believe that the combination of pharmacophore modeling and SVM is a powerful strategy to develop reliable theoretical models for the prediction of potential hERG liability.
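The final classification stage described above — an SVM over a compound's matches against an ensemble of pharmacophore hypotheses — can be sketched as below. The binary match matrix and the labelling rule are synthetic assumptions, not the paper's data set.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n_compounds, n_hypotheses = 600, 10
# binary matrix: does compound i match pharmacophore hypothesis j?
X = rng.integers(0, 2, size=(n_compounds, n_hypotheses)).astype(float)
# synthetic label: "blockers" tend to match the first few hypotheses,
# standing in for hERG activity determined experimentally
y = (X[:, :4].sum(axis=1) + rng.normal(0, 0.5, n_compounds) > 2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
model = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
acc = model.score(X_te, y_te)   # external-test accuracy, as reported in the paper
```

Integrating multiple hypotheses this way is what lets the classifier cover the broad binding polyspecificity that no single pharmacophore captures.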
High Temperature, Permanent Magnet Biased, Fault Tolerant, Homopolar Magnetic Bearing Development
NASA Technical Reports Server (NTRS)
Palazzolo, Alan; Tucker, Randall; Kenny, Andrew; Kang, Kyung-Dae; Ghandi, Varun; Liu, Jinfang; Choi, Heeju; Provenza, Andrew
2008-01-01
This paper summarizes the development of a magnetic bearing designed to operate at 1,000 F. A novel feature of this high temperature magnetic bearing is its homopolar construction, which incorporates state-of-the-art high temperature (1,000 F) permanent magnets. A second feature is its fault tolerance capability, which provides the desired control forces with over one-half of the coils failed. The construction and design methodology of the bearing is outlined and test results are shown. Force predictions from a 3D finite element, magnetic field based model are shown to be in good agreement with measurements at room and high temperature. A 5 axis test rig will be completed soon to provide a means to test the magnetic bearings at high temperature and speed.
Prediction of hot deformation behavior of high phosphorus steel using artificial neural network
NASA Astrophysics Data System (ADS)
Singh, Kanchan; Rajput, S. K.; Soota, T.; Verma, Vijay; Singh, Dharmendra
2018-03-01
To predict the hot deformation behavior of high phosphorus steel, hot compression experiments were performed with the help of a thermo-mechanical simulator (Gleeble® 3800) at temperatures ranging from 750 °C to 1050 °C and strain rates of 0.001 s-1, 0.01 s-1, 0.1 s-1, 0.5 s-1, 1.0 s-1 and 10 s-1. The experimental stress-strain data are employed to develop an artificial neural network (ANN) model and to assess its predictive capability. Using different combinations of temperature, strain and strain rate as input parameters and the experimentally obtained stress as the target, a multi-layer ANN model based on a feed-forward back-propagation algorithm is trained to predict the flow stress for a given processing condition. The relative error between predicted and experimental stress is in the range of ±3.5%, whereas the correlation coefficients (R2) of the training and testing data are 0.99986 and 0.99999, respectively. This shows that a well-trained ANN model has excellent capability to predict the hot deformation behavior of materials. A comparative study shows quite good agreement between predicted and experimental values.
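The feed-forward back-propagation scheme the abstract describes can be sketched with a one-hidden-layer network trained by stochastic gradient descent. The toy surrogate target below (normalized stress falling with temperature, rising with strain rate) and the layer sizes are illustrative assumptions, not the paper's data or architecture.

```python
import math, random

random.seed(0)

def forward(params, x):
    """One-hidden-layer tanh network: (T, strain rate, strain) -> flow stress."""
    W1, b1, W2, b2 = params
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return sum(w * hi for w, hi in zip(W2, h)) + b2, h

def train(data, n_hidden=6, lr=0.05, steps=20000):
    """Stochastic gradient descent on squared error (plain back-propagation)."""
    n_in = len(data[0][0])
    W1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
    b1 = [0.0] * n_hidden
    W2 = [random.uniform(-1, 1) for _ in range(n_hidden)]
    b2 = 0.0
    for _ in range(steps):
        x, t = random.choice(data)
        y, h = forward((W1, b1, W2, b2), x)
        err = y - t
        for i in range(n_hidden):
            dh = err * W2[i] * (1 - h[i] ** 2)  # chain rule through tanh
            for j in range(n_in):
                W1[i][j] -= lr * dh * x[j]
            b1[i] -= lr * dh
            W2[i] -= lr * err * h[i]
        b2 -= lr * err
    return W1, b1, W2, b2

# Toy surrogate: normalized (temperature, log strain rate, strain) -> stress,
# decreasing with temperature and increasing with strain rate (illustrative only)
data = [((T, r, s), 0.5 - 0.4 * T + 0.3 * r + 0.1 * s)
        for T in (0.0, 0.5, 1.0) for r in (0.0, 0.5, 1.0) for s in (0.0, 0.5, 1.0)]
params = train(data)
```

In practice the experimental stress-strain curves would be normalized into such a table before training, and held-out conditions would be used to estimate the quoted prediction error.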
Prediction of dynamical systems by symbolic regression
NASA Astrophysics Data System (ADS)
Quade, Markus; Abel, Markus; Shafi, Kamran; Niven, Robert K.; Noack, Bernd R.
2016-07-01
We study the modeling and prediction of dynamical systems based on conventional models derived from measurements. Such algorithms are highly desirable in situations where the underlying dynamics are hard to model from physical principles or simplified models need to be found. We focus on symbolic regression methods as a part of machine learning. These algorithms are capable of learning an analytically tractable model from data, a highly valuable property. Symbolic regression methods can be considered as generalized regression methods. We investigate two particular algorithms, the so-called fast function extraction which is a generalized linear regression algorithm, and genetic programming which is a very general method. Both are able to combine functions in a certain way such that a good model for the prediction of the temporal evolution of a dynamical system can be identified. We illustrate the algorithms by finding a prediction for the evolution of a harmonic oscillator based on measurements, by detecting an arriving front in an excitable system, and as a real-world application, the prediction of solar power production based on energy production observations at a given site together with the weather forecast.
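Fast function extraction is, at heart, a generalized linear regression over a library of candidate basis functions. The sketch below assumes a small hand-picked library and exact least squares via the normal equations; the real FFX algorithm adds regularized path-following and pruning, which are omitted here.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for the normal equations."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Library of candidate basis functions (FFX-style generalized linear regression)
basis = [("1", lambda x: 1.0), ("x", lambda x: x),
         ("x^2", lambda x: x * x), ("sin(x)", math.sin), ("cos(x)", math.cos)]

def fit(xs, ys):
    """Least-squares coefficients of ys against the basis library."""
    Phi = [[f(x) for _, f in basis] for x in xs]
    n = len(basis)
    A = [[sum(Phi[k][i] * Phi[k][j] for k in range(len(xs))) for j in range(n)]
         for i in range(n)]
    b = [sum(Phi[k][i] * ys[k] for k in range(len(xs))) for i in range(n)]
    return solve(A, b)

xs = [i * 0.1 for i in range(60)]
ys = [0.5 * x + math.cos(x) for x in xs]   # "measurements" of hidden dynamics
coeffs = fit(xs, ys)
```

With noise-free data the fit recovers the generating expression 0.5·x + cos(x), i.e. an analytically tractable model, which is the property the abstract highlights.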
Jin, Xiaochen; Fu, Zhiqiang; Li, Xuehua; Chen, Jingwen
2017-03-22
The octanol-air partition coefficient (KOA) is a key parameter describing the partition behavior of organic chemicals between air and environmental organic phases. As the experimental determination of KOA is costly, time-consuming and sometimes limited by the availability of authentic chemical standards for the compounds to be determined, it becomes necessary to develop credible predictive models for KOA. In this study, a polyparameter linear free energy relationship (pp-LFER) model for predicting KOA at 298.15 K and a novel model incorporating pp-LFERs with temperature (pp-LFER-T model) were developed from 795 log KOA values for 367 chemicals at different temperatures (263.15-323.15 K), and were evaluated with the OECD guidelines on QSAR model validation and applicability domain description. Statistical results show that both models are well-fitted, robust and have good predictive capabilities. Particularly, the pp-LFER model shows a strong predictive ability for polyfluoroalkyl substances and organosilicon compounds, and the pp-LFER-T model maintains a high predictive accuracy within a wide temperature range (263.15-323.15 K).
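A pp-LFER of the kind described is the standard Abraham-type equation, log KOA = c + eE + sS + aA + bB + lL, where E, S, A, B, L are solute descriptors. The coefficient values and descriptor inputs below are purely illustrative placeholders, not the parameters fitted in this study.

```python
def log_koa_pplfer(E, S, A, B, L, coeff):
    """Abraham-type pp-LFER: log K_OA = c + eE + sS + aA + bB + lL.
    `coeff` holds (c, e, s, a, b, l); values are hypothetical, not the
    study's fitted parameters."""
    c, e, s, a, b, l = coeff
    return c + e * E + s * S + a * A + b * B + l * L

demo_coeff = (-0.2, 0.5, 1.6, 3.5, 0.9, 0.87)   # hypothetical coefficients
demo = log_koa_pplfer(0.82, 1.01, 0.0, 0.48, 5.22, demo_coeff)
```

A pp-LFER-T variant would make each coefficient a function of temperature (e.g. linear in 1/T) so a single model covers the 263.15-323.15 K range.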
Machine Learning Meta-analysis of Large Metagenomic Datasets: Tools and Biological Insights.
Pasolli, Edoardo; Truong, Duy Tin; Malik, Faizan; Waldron, Levi; Segata, Nicola
2016-07-01
Shotgun metagenomic analysis of the human-associated microbiome provides a rich set of microbial features for prediction and biomarker discovery in the context of human diseases and health conditions. However, the use of such high-resolution microbial features presents new challenges, and validated computational tools for learning tasks are lacking. Moreover, classification rules have scarcely been validated in independent studies, posing questions about the generalization of disease-predictive models across cohorts. In this paper, we comprehensively assess approaches to metagenomics-based prediction tasks and for quantitative assessment of the strength of potential microbiome-phenotype associations. We develop a computational framework for prediction tasks using quantitative microbiome profiles, including species-level relative abundances and the presence of strain-specific markers. A comprehensive meta-analysis, with particular emphasis on generalization across cohorts, was performed on a collection of 2424 publicly available metagenomic samples from eight large-scale studies. Cross-validation revealed good disease-prediction capabilities, which were in general improved by feature selection and by the use of strain-specific markers instead of species-level taxonomic abundances. In cross-study analysis, models transferred between studies were in some cases less accurate than models tested by within-study cross-validation. Interestingly, the addition of healthy (control) samples from other studies to training sets improved disease prediction capabilities. Some microbial species (most notably Streptococcus anginosus) seem to characterize general dysbiotic states of the microbiome rather than connections with a specific disease. Our results in modelling features of the "healthy" microbiome can be considered a first step toward defining general microbial dysbiosis.
The software framework, microbiome profiles, and metadata for thousands of samples are publicly available at http://segatalab.cibio.unitn.it/tools/metaml.
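The cross-study analysis described above hinges on leave-one-dataset-out splitting: train on all cohorts except one, test on the held-out cohort. The sketch below uses a deliberately trivial threshold classifier on a single hypothetical marker abundance; the real framework uses full microbiome profiles and proper learners.

```python
def leave_one_study_out(samples):
    """Yield (held_out_study, train, test) splits from (study, features, label) records."""
    for held_out in sorted({s for s, _, _ in samples}):
        train = [r for r in samples if r[0] != held_out]
        test = [r for r in samples if r[0] == held_out]
        yield held_out, train, test

def lodo_accuracies(samples):
    """Cross-study accuracy of a trivial classifier: threshold the first
    feature at the midpoint of the training-set class means."""
    accs = {}
    for study, train, test in leave_one_study_out(samples):
        m0 = [x[0] for _, x, y in train if y == 0]
        m1 = [x[0] for _, x, y in train if y == 1]
        thr = (sum(m0) / len(m0) + sum(m1) / len(m1)) / 2
        accs[study] = sum((x[0] > thr) == bool(y) for _, x, y in test) / len(test)
    return accs

# Toy records from three hypothetical cohorts (feature = a marker abundance)
data = [("A", [0.10], 0), ("A", [0.90], 1), ("B", [0.20], 0),
        ("B", [0.80], 1), ("C", [0.15], 0), ("C", [0.85], 1)]
accs = lodo_accuracies(data)
```

Comparing these held-out accuracies against within-study cross-validation is what reveals the transfer gap the abstract reports.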
NASA Technical Reports Server (NTRS)
Costakis, W. G.; Wenzel, L. M.
1975-01-01
The relation of the steady-state and dynamic distortions and the stall margin of a J85-13 turbojet engine was investigated. A distortion indicator capable of computing two distortion indices was used. A special purpose signal conditioner was also used as an interface between transducer signals and distortion indicator. A good correlation of steady-state distortion and stall margin was established. The prediction of stall by using the indices as instantaneous distortion indicators was not successful. A sensitivity factor that related the loss of stall margin to the turbulence level was found.
NASA Astrophysics Data System (ADS)
Pietropaolo, A.; Senesi, R.
2008-01-01
A prototype array of resonance detectors for deep inelastic neutron scattering experiments has been installed on the VESUVIO spectrometer, at the ISIS spallation neutron source. Deep inelastic neutron scattering measurements on a reference lead sample and on the NaHF2 molecular system are presented. Although at an explorative level, the results obtained for the values of mean kinetic energy
Experimental and theoretical sound transmission. [reduction of interior noise in aircraft
NASA Technical Reports Server (NTRS)
Roskam, J.; Muirhead, V. U.; Smith, H. W.; Durenberger, D. W.
1978-01-01
The capabilities of the Kansas University-Flight Research Center for investigating panel sound transmission as a step toward the reduction of interior noise in general aviation aircraft were discussed. Data obtained on panels with holes, on honeycomb panels, and on various panel treatments at normal incidence were documented. The design of equipment for panel transmission loss tests at nonnormal (slanted) sound incidence was described. A comprehensive theory-based prediction method was developed and shows good agreement with experimental observations in the stiffness-controlled region, the resonance-controlled region, and the mass-law region of panel vibration.
Continuous wave operation of quantum cascade lasers with frequency-shifted feedback
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lyakh, A., E-mail: arkadiy.lyakh@ucf.edu; NanoScience Technology Center, University of Central Florida, 12424 Research Pkwy, Orlando, FL 32826; College of Optics and Photonics, University of Central Florida, 304 Scorpius St, Orlando, FL 32826
2016-01-15
Operation of continuous wave quantum cascade lasers with a frequency-shifted feedback provided by an acousto-optic modulator is reported. The measured linewidth of 1.7 cm-1 for these devices, under CW operating conditions, was in good agreement with predictions of a model based on frequency-shifted feedback seeded by spontaneous emission. Linewidth broadening was observed for short sweep times, consistent with sound wave grating period variation across the illuminated area on the acousto-optic modulator. Standoff detection capability of the AOM-based QCL setup was demonstrated for several solid materials.
NASA Astrophysics Data System (ADS)
Korobko, Dmitry A.; Zolotovskii, Igor O.; Panajotov, Krassimir; Spirin, Vasily V.; Fotiadi, Andrei A.
2017-12-01
We develop a theoretical framework for modeling of a semiconductor laser coupled to an external fiber-optic ring resonator. The developed approach has shown good qualitative agreement between theoretical predictions and experimental results for a particular configuration of a self-injection locked DFB laser delivering narrow-band radiation. The model is capable of describing the main features of the experimentally measured laser outputs, such as laser line narrowing, the spectral shape of the generated radiation, and mode-hopping instabilities, and makes it possible to explore the key physical mechanisms responsible for the stability of laser operation.
A Three-Dimensional Linearized Unsteady Euler Analysis for Turbomachinery Blade Rows
NASA Technical Reports Server (NTRS)
Montgomery, Matthew D.; Verdon, Joseph M.
1996-01-01
A three-dimensional, linearized, Euler analysis is being developed to provide an efficient unsteady aerodynamic analysis that can be used to predict the aeroelastic and aeroacoustic response characteristics of axial-flow turbomachinery blading. The field equations and boundary conditions needed to describe nonlinear and linearized inviscid unsteady flows through a blade row operating within a cylindrical annular duct are presented. In addition, a numerical model for linearized inviscid unsteady flow, which is based upon an existing nonlinear, implicit, wave-split, finite volume analysis, is described. These aerodynamic and numerical models have been implemented into an unsteady flow code, called LINFLUX. A preliminary version of the LINFLUX code is applied herein to selected benchmark three-dimensional, subsonic, unsteady flows, to illustrate its current capabilities and to uncover existing problems and deficiencies. The numerical results indicate that good progress has been made toward developing a reliable and useful three-dimensional prediction capability. However, some problems, associated with the implementation of an unsteady displacement field and numerical errors near solid boundaries, still exist. Also, accurate far-field conditions must be incorporated into the LINFLUX analysis, so that it can be applied to unsteady flows driven by external aerodynamic excitations.
Failure Analysis of Discrete Damaged Tailored Extension-Shear-Coupled Stiffened Composite Panels
NASA Technical Reports Server (NTRS)
Baker, Donald J.
2005-01-01
The results of an analytical and experimental investigation of the failure of composite stiffener panels with extension-shear coupling are presented. This tailored concept, when used in the cover skins of a tiltrotor aircraft wing, has the potential for increasing the aeroelastic stability margins and improving the aircraft productivity. The extension-shear coupling is achieved by using unbalanced 45-degree plies in the skin. The failure analysis of two tailored panel configurations that have the center stringer and adjacent skin severed is presented. Finite element analysis of the damaged panels was conducted using STAGS (STructural Analysis of General Shells), a general purpose finite element program that includes a progressive failure capability for laminated composite structures based on point-stress analysis, traditional failure criteria, and ply discounting for material degradation. The progressive failure analysis predicted the path of the failure and the maximum load capability. There is less than a 12 percent difference between the predicted failure load and the experimental failure load, and a good match of the panel stiffness and strength between the progressive failure analysis and the experimental results. The results indicate that the tailored concept would be feasible to use in the wing skin of a tiltrotor aircraft.
A discriminatory function for prediction of protein-DNA interactions based on alpha shape modeling.
Zhou, Weiqiang; Yan, Hong
2010-10-15
Protein-DNA interaction has significant importance in many biological processes. However, the underlying principle of the molecular recognition process is still largely unknown. As more high-resolution 3D structures of protein-DNA complexes become available, the surface characteristics of the complex have become an important research topic. In our work, we apply an alpha shape model to represent the surface structure of the protein-DNA complex and develop an interface-atom curvature-dependent conditional probability discriminatory function for the prediction of protein-DNA interaction. The interface-atom curvature-dependent formalism captures atomic interaction details better than the atomic distance-based method. The proposed method provides good performance in discriminating the native structures from docking decoy sets, and outperforms the distance-dependent formalism in terms of the z-score. Computer experiment results show that the curvature-dependent formalism with the optimal parameters can achieve a native z-score of -8.17 in discriminating the native structure from the highest surface-complementarity scored decoy set and a native z-score of -7.38 in discriminating the native structure from the lowest RMSD decoy set. The interface-atom curvature-dependent formalism can also be used to predict the apo version of DNA-binding proteins. These results suggest that the interface-atom curvature-dependent formalism has a good prediction capability for protein-DNA interactions. The code and data sets are available for download at http://www.hy8.com/bioinformatics.htm. Contact: kenandzhou@hotmail.com.
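The native z-score quoted above measures how far the native structure's score lies from the decoy-score distribution, in units of its standard deviation; more negative means better separation. A minimal sketch (the decoy scores below are made-up numbers, not the paper's data):

```python
import math

def native_z_score(native_score, decoy_scores):
    """z = (score_native - mean(decoys)) / std(decoys); more negative means
    the native structure is better separated from the decoy ensemble."""
    n = len(decoy_scores)
    mean = sum(decoy_scores) / n
    var = sum((e - mean) ** 2 for e in decoy_scores) / n
    return (native_score - mean) / math.sqrt(var)

decoys = [-10.0, -12.0, -9.0, -11.0, -8.0]   # hypothetical decoy scores
z = native_z_score(-30.0, decoys)
```

Comparing such z-scores between the curvature-dependent and distance-dependent formalisms is how the paper quantifies the improvement.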
Large-scale Parallel Unstructured Mesh Computations for 3D High-lift Analysis
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.; Pirzadeh, S.
1999-01-01
A complete "geometry to drag-polar" analysis capability for three-dimensional high-lift configurations is described. The approach is based on the use of unstructured meshes in order to enable rapid turnaround for the complicated geometries that arise in high-lift configurations. Special attention is devoted to creating a capability for enabling analyses on highly resolved grids. Unstructured meshes of several million vertices are initially generated on a workstation, and subsequently refined on a supercomputer. The flow is solved on these refined meshes on large parallel computers using an unstructured agglomeration multigrid algorithm. Good prediction of lift and drag throughout the range of incidences is demonstrated on a transport take-off configuration using up to 24.7 million grid points. The feasibility of using this approach in a production environment on existing parallel machines is demonstrated, as well as the scalability of the solver on machines using up to 1450 processors.
Integrated fusion simulation with self-consistent core-pedestal coupling
Meneghini, O.; Snyder, P. B.; Smith, S. P.; ...
2016-04-20
In this study, accurate prediction of fusion performance in present and future tokamaks requires taking into account the strong interplay between core transport, pedestal structure, current profile and plasma equilibrium. An integrated modeling workflow capable of calculating the steady-state self-consistent solution to this strongly-coupled problem has been developed. The workflow leverages state-of-the-art components for collisional and turbulent core transport, equilibrium and pedestal stability. Validation against DIII-D discharges shows that the workflow is capable of robustly predicting the kinetic profiles (electron and ion temperature and electron density) from the axis to the separatrix in good agreement with the experiments. An example application is presented, showing self-consistent optimization of the fusion performance of the 15 MA D-T ITER baseline scenario as functions of the pedestal density and ion effective charge Zeff.
A Model For Rapid Estimation of Economic Loss
NASA Astrophysics Data System (ADS)
Holliday, J. R.; Rundle, J. B.
2012-12-01
One of the loftier goals in seismic hazard analysis is the creation of an end-to-end earthquake prediction system: a "rupture to rafters" work flow that takes a prediction of fault rupture, propagates it with a ground shaking model, and outputs a damage or loss profile at a given location. So far, the initial prediction of an earthquake rupture (either as a point source or a fault system) has proven to be the most difficult and least solved step in this chain. However, this may soon change. The Collaboratory for the Study of Earthquake Predictability (CSEP) has amassed a suite of earthquake source models for assorted testing regions worldwide. These models are capable of providing rate-based forecasts for earthquake (point) sources over a range of time horizons. Furthermore, these rate forecasts can be easily refined into probabilistic source forecasts. While it is still difficult to fully assess the "goodness" of each of these models, progress is being made: new evaluation procedures are being devised and earthquake statistics continue to accumulate. The scientific community appears to be heading towards a better understanding of rupture predictability. Ground shaking mechanics are better understood, and many different sophisticated models exist. While these models tend to be computationally expensive and often regionally specific, they do a good job at matching empirical data. It is perhaps time to start addressing the third step in the seismic hazard prediction system. We present a model for rapid economic loss estimation using ground motion (PGA or PGV) and socioeconomic measures as its input. We show that the model can be calibrated on a global scale and applied worldwide. We also suggest how the model can be improved and generalized to non-seismic natural disasters such as hurricanes and severe wind storms.
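A rapid loss estimate of this kind maps ground motion to a damage ratio and scales it by a socioeconomic exposure proxy. The saturating functional form and the constants below are illustrative assumptions for a sketch, not the calibrated model from the abstract.

```python
import math

def estimated_loss(pga_g, exposure_usd, k=2.0, pga_min=0.05):
    """Hypothetical rapid loss model: a damage ratio that rises with PGA and
    saturates at 1, multiplied by an exposure proxy in dollars. The shape
    parameter k and shaking threshold pga_min are made-up values."""
    if pga_g <= pga_min:
        return 0.0
    damage_ratio = 1.0 - math.exp(-k * (pga_g - pga_min))
    return damage_ratio * exposure_usd

loss = estimated_loss(0.55, 1e9)   # strong shaking over $1B of exposure
```

Calibration would then amount to fitting k, the threshold, and the exposure proxy against historical loss data region by region.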
Using Socioeconomic Data to Calibrate Loss Estimates
NASA Astrophysics Data System (ADS)
Holliday, J. R.; Rundle, J. B.
2013-12-01
One of the loftier goals in seismic hazard analysis is the creation of an end-to-end earthquake prediction system: a "rupture to rafters" work flow that takes a prediction of fault rupture, propagates it with a ground shaking model, and outputs a damage or loss profile at a given location. So far, the initial prediction of an earthquake rupture (either as a point source or a fault system) has proven to be the most difficult and least solved step in this chain. However, this may soon change. The Collaboratory for the Study of Earthquake Predictability (CSEP) has amassed a suite of earthquake source models for assorted testing regions worldwide. These models are capable of providing rate-based forecasts for earthquake (point) sources over a range of time horizons. Furthermore, these rate forecasts can be easily refined into probabilistic source forecasts. While it is still difficult to fully assess the "goodness" of each of these models, progress is being made: new evaluation procedures are being devised and earthquake statistics continue to accumulate. The scientific community appears to be heading towards a better understanding of rupture predictability. Ground shaking mechanics are better understood, and many different sophisticated models exist. While these models tend to be computationally expensive and often regionally specific, they do a good job at matching empirical data. It is perhaps time to start addressing the third step in the seismic hazard prediction system. We present a model for rapid economic loss estimation using ground motion (PGA or PGV) and socioeconomic measures as its input. We show that the model can be calibrated on a global scale and applied worldwide. We also suggest how the model can be improved and generalized to non-seismic natural disasters such as hurricanes and severe wind storms.
NASA Technical Reports Server (NTRS)
Lucas, E. J.; Fanning, A. E.; Steers, L. I.
1978-01-01
Results are reported from the initial phase of an effort to provide an adequate technical capability to accurately predict the full scale, flight vehicle, nozzle-afterbody performance of future aircraft based on partial scale, wind tunnel testing. The primary emphasis of this initial effort is to assess the current capability and identify the cause of limitations on this capability. A direct comparison of surface pressure data is made between the results from an 0.1-scale model wind tunnel investigation and a full-scale flight test program to evaluate the current subscale testing techniques. These data were acquired at Mach numbers 0.6, 0.8, 0.9, 1.2, and 1.5 on four nozzle configurations at various vehicle pitch attitudes. Support system interference increments were also documented during the wind tunnel investigation. In general, the results presented indicate a good agreement in trend and level of the surface pressures when corrective increments are applied for known effects and surface differences between the two articles under investigation.
A large-eddy simulation based power estimation capability for wind farms over complex terrain
NASA Astrophysics Data System (ADS)
Senocak, I.; Sandusky, M.; Deleon, R.
2017-12-01
There has been an increasing interest in predicting wind fields over complex terrain at the micro-scale for resource assessment, turbine siting, and power forecasting. These capabilities are made possible by advancements in computational speed from a new generation of computing hardware, numerical methods and physics modelling. The micro-scale wind prediction model presented in this work is based on the large-eddy simulation paradigm with surface-stress parameterization. The complex terrain is represented using an immersed-boundary method that takes into account the parameterization of the surface stresses. Governing equations of incompressible fluid flow are solved using a projection method with second-order accurate schemes in space and time. We use actuator disk models with rotation to simulate the influence of turbines on the wind field. Data regarding power production from individual turbines are mostly restricted because of the proprietary nature of the wind energy business, and most studies report the percentage drop of power relative to power from the first row. There have been different approaches to predicting power production: some studies simply report the available wind power upstream, some estimate power production using power curves available from turbine manufacturers, and some estimate power as torque multiplied by rotational speed. In the present work, we propose a black-box approach that considers a control volume around a turbine and estimates the power extracted from the turbine based on the conservation of energy principle. We applied our wind power prediction capability to wind farms over flat terrain, such as the wind farm in Mower County, Minnesota and the Horns Rev offshore wind farm in Denmark. The results from these simulations are in good agreement with published data. We also estimate power production from a hypothetical wind farm in a complex terrain region and identify potential zones suitable for wind power production.
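The control-volume idea can be sketched as a kinetic-energy budget: power extracted is approximately the kinetic-energy flux entering the volume minus the flux leaving it. The single-face flux formula below (0.5·ρ·A·u³ per face, turbulent transport neglected) and the velocities are simplifying assumptions, not the paper's actual budget terms.

```python
RHO = 1.225  # air density at sea level, kg/m^3

def power_from_control_volume(u_in, u_out, area):
    """Crude energy-budget estimate over a control volume around a turbine:
    kinetic-energy flux in minus flux out through faces of equal area.
    Flux through a face of area A at speed u is 0.5 * rho * A * u**3."""
    return 0.5 * RHO * area * (u_in ** 3 - u_out ** 3)

# Rotor-disk-sized control volume with hypothetical upstream and wake speeds
area = 3.14159 * 40.0 ** 2           # 80 m diameter rotor
power_w = power_from_control_volume(10.0, 7.0, area)
```

In the LES, the face velocities would come from averaging the resolved flow over the control-volume boundaries rather than from single scalar values.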
Machine learning for outcome prediction of acute ischemic stroke post intra-arterial therapy.
Asadi, Hamed; Dowling, Richard; Yan, Bernard; Mitchell, Peter
2014-01-01
Stroke is a major cause of death and disability. Accurately predicting stroke outcome from a set of predictive variables may identify high-risk patients and guide treatment approaches, leading to decreased morbidity. Logistic regression models allow for the identification and validation of predictive variables. However, advanced machine learning algorithms offer an alternative, in particular for large-scale multi-institutional data, with the advantage of easily incorporating newly available data to improve prediction performance. Our aim was to design and compare different machine learning methods capable of predicting the outcome of endovascular intervention in acute anterior circulation ischaemic stroke. We conducted a retrospective study of a prospectively collected database of acute ischaemic stroke treated by endovascular intervention. Using SPSS®, MATLAB®, and Rapidminer®, classical statistics as well as artificial neural network and support vector algorithms were applied to design a supervised machine capable of classifying these predictors into potential good and poor outcomes. These algorithms were trained, validated and tested using randomly divided data. We included 107 consecutive acute anterior circulation ischaemic stroke patients treated by endovascular technique. Sixty-six were male, and the mean age was 65.3 years. All the available demographic, procedural and clinical factors were included in the models. The final confusion matrix of the neural network demonstrated an overall congruency of ∼80% between the target and output classes, with favourable receiver operating characteristics. However, after optimisation, the support vector machine had a relatively better performance, with a root mean squared error of 2.064 (SD: ± 0.408). We showed promising accuracy of outcome prediction using supervised machine learning algorithms, with potential for incorporation of larger multicenter datasets, likely further improving prediction.
Finally, we propose that a robust machine learning system can potentially optimise the selection process for endovascular versus medical treatment in the management of acute stroke.
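The "overall congruency" quoted above is the fraction of cases where the network's output class matches the target class, i.e. the confusion-matrix diagonal over the grand total. The 2x2 counts below are hypothetical, chosen only to total 107 patients; they are not the study's actual matrix.

```python
def overall_congruency(confusion):
    """Agreement between target and output classes: sum of the diagonal
    of the confusion matrix divided by the grand total of cases."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

# Hypothetical 2x2 matrix (rows = true good/poor, cols = predicted good/poor)
cm = [[45, 10],
      [11, 41]]
congruency = overall_congruency(cm)
```

With these made-up counts the congruency lands near the ~80% figure the abstract reports, which is the quantity a reader would recompute from a published matrix.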
NASA Astrophysics Data System (ADS)
Padhee, S. K.; Nikam, B. R.; Aggarwal, S. P.; Garg, V.
2014-11-01
Drought is an extreme condition due to moisture deficiency and has adverse effects on society. Agricultural drought occurs when limited soil moisture produces serious crop stress and affects crop productivity. The soil moisture regime of rain-fed agriculture and irrigated agriculture behaves differently on both temporal and spatial scales, which means the impact of meteorologically and/or hydrologically induced agricultural drought will be different in rain-fed and irrigated areas. However, there is a lack of an agricultural drought assessment system in Indian conditions which considers the irrigated and rain-fed agricultural spheres as separate entities. On the other hand, recent advancements in the field of earth observation through satellite-based remote sensing have provided researchers continuous monitoring of soil moisture, land surface temperature and vegetation indices at the global scale, which can aid in agricultural drought assessment and monitoring. Keeping this in mind, the present study was envisaged with the objective of developing an agricultural drought assessment and prediction technique by spatially and temporally assimilating the effective drought index (EDI) with remote sensing derived parameters. The proposed technique takes into account the difference in response of rain-fed and irrigated agricultural systems towards agricultural drought in the Bundelkhand region (the study area). The key idea was to achieve this goal by utilizing integrated scenarios from meteorological observations and soil moisture distribution. EDI condition maps were prepared from daily precipitation data recorded by the Indian Meteorological Department (IMD), distributed within the study area. With the aid of frequent MODIS products, viz.
vegetation indices (VIs) and land surface temperature (LST), the coarse resolution soil moisture product from the European Space Agency (ESA) was downscaled to a finer resolution soil moisture product using a linking model based on the triangle method. EDI and the spatially downscaled soil moisture products were later used with the MODIS 16-day NDVI product as key elements to assess and predict agricultural drought in irrigated and rain-fed agricultural systems in the Bundelkhand region of India. Meteorological drought, soil moisture deficiency and NDVI degradation were evaluated for every pixel of the image in a GIS environment, for agricultural impact assessment at a 16-day temporal scale for the Rabi seasons (October-April) between the years 2000 and 2009. Based on statistical analysis, good correlations were found between EDI and the soil moisture anomaly, and between the NDVI anomaly and the soil moisture anomaly lagged by 16 days, and these results were exploited for the development of a linear prediction model. The predictive capability of the developed model was validated on the basis of the spatial distribution of predicted NDVI, which was compared with the MODIS NDVI product at the beginning of the succeeding Rabi season (Oct-Dec of 2010). The predictions of the model were based on future meteorological data (year 2010) and were found to yield good results. The developed model has good predictive capability given future meteorological data (rainfall data) availability, which enhances its utility in analyzing future agricultural conditions if meteorological data are available.
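The linear prediction step using a 16-day lag can be sketched as ordinary least squares of the NDVI anomaly on the soil-moisture anomaly from the previous composite. The toy anomaly series below are illustrative numbers, not Bundelkhand data, and the real model also uses EDI as a predictor.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Toy 16-day composites: NDVI anomaly responding to the soil-moisture
# anomaly of the previous composite (values are made up)
sm_anom = [0.10, -0.20, 0.05, -0.30, 0.25, -0.10, 0.15]
ndvi_anom = [0.00, 0.04, -0.09, 0.02, -0.13, 0.11, -0.05]

# Regress NDVI anomaly at time t on soil-moisture anomaly at time t-1
a, b = fit_line(sm_anom[:-1], ndvi_anom[1:])
pred = a * sm_anom[-1] + b   # predicted NDVI anomaly for the next composite
```

Applied per pixel in a GIS environment, the fitted slope and intercept become anomaly maps, and the prediction can be rolled forward whenever new rainfall-driven EDI and soil-moisture composites arrive.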
Gary D. Falk
1981-01-01
A systematic procedure for predicting the payload capability of running, live, and standing skylines is presented. Three hand-held calculator programs are used to predict payload capability that includes the effect of partial suspension. The programs allow for predictions for downhill yarding and for yarding away from the yarder. The equations and basic principles...
Rotor Broadband Noise Prediction with Comparison to Model Data
NASA Technical Reports Server (NTRS)
Brooks, Thomas F.; Burley, Casey L.
2001-01-01
This paper reports an analysis and prediction development of rotor broadband noise. The two primary components of this noise are Blade-Wake Interaction (BWI) noise, due to the blades' interaction with the turbulent wakes of the preceding blades, and "Self" noise, due to the development and shedding of turbulence within the blades' boundary layers. Emphasized in this report is the new code development for Self noise. The analysis and validation employ data from the HART program, a model BO-105 rotor wind tunnel test conducted in the German-Dutch Wind Tunnel (DNW). The BWI noise predictions are based on measured pressure response coherence functions using cross-spectral methods. The Self noise predictions are based on previously reported semiempirical modeling of Self noise obtained from isolated airfoil sections and the use of CAMRAD.Mod1 to define rotor performance and local blade segment flow conditions. Both BWI and Self noise from individual blade segments are Doppler shifted and summed at the observer positions. Prediction comparisons with measurements show good agreement for a range of rotor operating conditions from climb to steep descent. The broadband noise predictions, along with those of harmonic and impulsive Blade-Vortex Interaction (BVI) noise predictions, demonstrate a significant advance in predictive capability for main rotor noise.
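The Doppler-shift-and-sum step applies, per blade segment, the standard moving-source frequency factor before accumulating contributions at each observer. A minimal sketch of that factor (the source frequency and Mach number below are arbitrary example values):

```python
import math

def doppler_shifted_frequency(f_source_hz, mach, theta_rad):
    """Frequency heard by a fixed observer from a source moving at Mach `mach`,
    where `theta_rad` is the angle between the source velocity and the
    source-to-observer line (standard moving-source Doppler factor)."""
    return f_source_hz / (1.0 - mach * math.cos(theta_rad))

# A segment advancing toward the observer shifts its emission upward
f_obs = doppler_shifted_frequency(1000.0, 0.2, 0.0)
```

For a rotor, the segment Mach number and angle change around the azimuth, so each segment's spectrum is shifted differently before the observer-position summation.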
NASA Technical Reports Server (NTRS)
Jones, Kenneth M.; Biedron, Robert T.; Whitlock, Mark
1995-01-01
A computational study was performed to determine the predictive capability of a Reynolds-averaged Navier-Stokes code (CFL3D) for two-dimensional and three-dimensional multielement high-lift systems. Three configurations were analyzed: a three-element airfoil, a wing with a full-span flap, and a wing with a partial-span flap. In order to accurately model these complex geometries, two different multizonal structured grid techniques were employed. For the airfoil and full-span wing configurations, a chimera or overset grid technique was used. The results of the airfoil analysis illustrated that although the absolute values of lift were somewhat in error, the code was able to predict reasonably well the variation with Reynolds number and flap position. The full-span flap analysis demonstrated good agreement with experimental surface pressure data over the wing and flap. Multiblock patched grids were used to model the partial-span flap wing. A modification to an existing patched-grid algorithm was required to analyze the configuration as modeled. Comparisons with experimental data were very good, indicating the applicability of the patched-grid technique to analyses of these complex geometries.
NASA Technical Reports Server (NTRS)
Komerath, Narayanan M.; Schreiber, Olivier A.
1987-01-01
The wake model was implemented using a VAX 750 and a MicroVAX II workstation, with online graphics capability provided by the DISSPLA graphics package. The rotor model used by Beddoes was significantly extended to include azimuthal variations due to forward flight and a simplified scheme for locating the critical points where vortex elements are placed. A test case was obtained for validation of the predictions of induced velocity. Comparison of the results indicates that the code requires some additional features before satisfactory predictions can be made over the whole rotor disk. Specifically, shed vorticity due to the azimuthal variation of blade loading must be incorporated into the model. Interactions between vortices shed from the four blades of the model rotor must be included. The Scully code for calculating the velocity field is being modified in parallel with these efforts to enable comparison with experimental data. To date, some comparisons with flow visualization data obtained at Georgia Tech have been performed and show good agreement for the isolated rotor case. Comparison with time-resolved velocity data obtained at Georgia Tech also shows good agreement. Modifications are being implemented to enable generation of time-averaged results for comparison with NASA data.
Design features and operational characteristics of the Langley 0.3-meter transonic cryogenic tunnel
NASA Technical Reports Server (NTRS)
Kilgore, R. A.
1976-01-01
Experience with the Langley 0.3 meter transonic cryogenic tunnel, which is fan driven, indicated that such a tunnel presents no unusual design difficulties and is simple to operate. Purging, cooldown, and warmup times were acceptable and were predicted with good accuracy. Cooling with liquid nitrogen was practical over a wide range of operating conditions at power levels required for transonic testing, and good temperature distributions were obtained by using a simple liquid nitrogen injection system. To take full advantage of the unique Reynolds number capabilities of the 0.3 meter transonic tunnel, it was designed to accommodate test sections other than the original, octagonal, three dimensional test section. A 20- by 60-cm two dimensional test section was recently installed and is being calibrated. A two dimensional test section with self-streamlining walls and a test section incorporating a magnetic suspension and balance system are being considered.
Poremba, C; Hero, B; Goertz, H G; Scheel, C; Wai, D; Schaefer, K L; Christiansen, H; Berthold, F; Juergens, H; Boecker, W; Dockhorn-Dworniczak, B
2001-01-01
Neuroblastomas (NB) are a heterogeneous group of childhood tumours with a wide range of likelihood for tumour progression. As traditional parameters do not ensure completely accurate prognostic grouping, new molecular markers are needed for assessing the individual patient's prognosis more precisely. 133 NB of all stages were analysed in blind-trial fashion for telomerase activity (TA), expression of survivin, and MYCN status. These data were correlated with other traditional prognostic indicators and disease outcome. TA is a powerful independent prognostic marker for all stages and is capable of differentiating between good and poor outcome in putative "favourable" clinical or biological subgroups of NB patients. High survivin expression is associated with an adverse outcome, but is more difficult to interpret than TA because survivin expression needs to be accurately quantified to be of predictive value. We propose an extended progression model for NB including emerging prognostic markers, with emphasis on telomerase activity.
Prediction of Business Jet Airloads Using The Overflow Navier-Stokes Code
NASA Technical Reports Server (NTRS)
Bounajem, Elias; Buning, Pieter G.
2001-01-01
The objective of this work is to evaluate the application of Navier-Stokes computational fluid dynamics technology for the purpose of predicting off-design-condition airloads on a business jet configuration in the transonic regime. The NASA Navier-Stokes flow solver OVERFLOW, with its Chimera overset grid capability, choice of several numerical schemes, and convergence acceleration techniques, was selected for this work. A set of scripts compiled to reduce the time required for the grid generation process is described. Several turbulence models are evaluated in the presence of separated flow regions on the wing. Computed results are compared with available wind tunnel data for two Mach numbers and a range of angles of attack. Comparisons of wing surface pressure from numerical simulation and wind tunnel measurements show good agreement up to fairly high angles of attack.
Dichotomy between the band and hopping transport in organic crystals: insights from experiments.
Yavuz, I
2017-10-04
The molecular understanding of charge transport in organic crystals has often been entangled with identifying its true dynamical origin. While in two distinct cases complete delocalization and localization of charge carriers are associated with band-like and hopping-like transport, respectively, their possible coalescence poses some mystery. Moreover, the existing models remain controversial at ambient temperatures. Here, we review the issues in charge-transport theories of organic materials and then provide an overview of prominent transport models. We explored ∼60 organic crystals whose single-crystal hole/electron mobilities have been predicted by band-like and hopping-like transport models separately. Our comparative results show that at room temperature neither of the models is exclusively capable of accurately predicting mobilities over a very broad range. Hopping-like models predict experimental mobilities around μ ∼ 1 cm² V⁻¹ s⁻¹ well but systematically diverge at high mobilities. Similarly, band-like models are good for μ ≳ 50 cm² V⁻¹ s⁻¹ but systematically diverge at lower mobilities. These results suggest the development of a unified and robust room-temperature transport model incorporating a mixture of these two extreme cases, with each component's relative importance associated with its predominant region. We deduce that while band models are beneficial for rationally designing high-mobility organic semiconductors, hopping models are good for elucidating the charge transport of most organic semiconductors.
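The suggested mixed transport model is left open in the abstract; purely as an illustration, a crossover could blend the two predictions with a mobility-dependent weight. The crossover location `mu0` and width `s` below are arbitrary assumptions, not quantities fitted to the 60-crystal data set.

```python
import math

def mixed_mobility(mu_hop, mu_band, mu0=10.0, s=1.0):
    """Blend hopping- and band-model mobility predictions (cm^2/Vs).

    A logistic weight in log10(mobility) favours the hopping estimate
    below the crossover mu0 and the band estimate above it.
    """
    ref = math.sqrt(mu_hop * mu_band)  # geometric-mean reference mobility
    w = 1.0 / (1.0 + math.exp(-(math.log10(ref) - math.log10(mu0)) / s))
    return mu_hop ** (1.0 - w) * mu_band ** w  # geometric interpolation

low = mixed_mobility(1.0, 5.0)      # hopping-dominated regime: stays nearer the hopping value
high = mixed_mobility(30.0, 80.0)   # band-dominated regime: moves toward the band value
```

The geometric (log-domain) blend keeps the result between the two model predictions and switches smoothly between regimes, which is one simple way to encode the "predominant regions" the authors describe.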
Hybrid multiscale modeling and prediction of cancer cell behavior
Habibi, Jafar
2017-01-01
Background Understanding cancer development across several spatial-temporal scales is of great practical significance for better understanding and treating cancers. It is difficult to tackle this challenge with purely biological means. Hence, hybrid modeling techniques have been proposed that combine the advantages of continuum and discrete methods to model multiscale problems. Methods In light of these problems, we have proposed a new hybrid vascular model to facilitate the multiscale modeling and simulation of cancer development with respect to agent-based, cellular automata and machine learning methods. The purpose of this simulation is to create a dataset that can be used for prediction of cell phenotypes. By using a proposed Q-learning method based on SVR-NSGA-II, the cells have the capability to predict their phenotypes autonomously, that is, to act on their own without external direction in response to the situations they encounter. Results Computational simulations of the model were performed in order to analyze its performance. The most striking feature of our results is that each cell can select its phenotype at each time step according to its condition. We provide evidence that the prediction of cell phenotypes is reliable. Conclusion Our proposed model, which we term a hybrid multiscale model of cancer cell behavior, has the potential to combine the best features of both continuum and discrete models. The in silico results indicate that the 3D model can represent key features of cancer growth, angiogenesis, and the related micro-environment, and show that the findings are in good agreement with biological tumor behavior. To the best of our knowledge, this paper is the first hybrid vascular multiscale model of cancer cell behavior that has the capability to predict cell phenotypes individually from a self-generated dataset. PMID:28846712
Benchmark data sets for structure-based computational target prediction.
Schomburg, Karen T; Rarey, Matthias
2014-08-25
Structure-based computational target prediction methods identify potential targets for a bioactive compound. Methods based on protein-ligand docking so far face many challenges, of which the greatest is probably the ranking of true targets in a large data set of protein structures. Currently, no standard data sets for evaluation exist, rendering comparison and demonstration of method improvements cumbersome. Therefore, we propose two data sets and evaluation strategies for a meaningful evaluation of new target prediction methods, i.e., a small data set consisting of three target classes for detailed proof-of-concept and selectivity studies, and a large data set consisting of 7992 protein structures and 72 drug-like ligands allowing statistical evaluation with performance metrics on a drug-like chemical space. Both data sets are built from openly available resources, and any information needed to perform the described experiments is reported. We describe the composition of the data sets, the setup of screening experiments, and the evaluation strategy. Performance metrics capable of measuring the early recognition of enrichments, such as AUC, BEDROC, and NSLR, are proposed. We apply a sequence-based target prediction method to the large data set to analyze its content of nontrivial evaluation cases. The proposed data sets are used for method evaluation of our new inverse screening method iRAISE. The small data set reveals the method's capabilities and limitations in selectively distinguishing between rather similar protein structures. The large data set simulates real target identification scenarios. iRAISE achieves excellent or good enrichment in 55% of cases, a median AUC of 0.67, and RMSDs below 2.0 Å for 74%, and was able to predict the first true target within the top 2% of the roughly 8000-structure protein data set in 59 out of 72 cases.
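Of the metrics named above, AUC is the simplest to state concretely; a minimal sketch (not the paper's code) computes it as the Mann-Whitney probability that a true target outranks a non-target in the screening list. BEDROC and NSLR add early-recognition weighting on top of the same ranked list.

```python
def roc_auc(scores, labels):
    """AUC of a ranked screening list: the probability that a randomly chosen
    true target (label 1) scores higher than a randomly chosen decoy (label 0),
    i.e. the normalised Mann-Whitney U statistic; ties count half."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy ranking: two true targets among six protein structures.
auc = roc_auc([0.9, 0.8, 0.7, 0.4, 0.3, 0.2], [1, 0, 1, 0, 0, 0])  # -> 0.875
```

An AUC of 0.5 corresponds to random ranking and 1.0 to perfect early retrieval of all true targets.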
Hybrid multiscale modeling and prediction of cancer cell behavior.
Zangooei, Mohammad Hossein; Habibi, Jafar
2017-01-01
Understanding cancer development across several spatial-temporal scales is of great practical significance for better understanding and treating cancers. It is difficult to tackle this challenge with purely biological means. Hence, hybrid modeling techniques have been proposed that combine the advantages of continuum and discrete methods to model multiscale problems. In light of these problems, we have proposed a new hybrid vascular model to facilitate the multiscale modeling and simulation of cancer development with respect to agent-based, cellular automata and machine learning methods. The purpose of this simulation is to create a dataset that can be used for prediction of cell phenotypes. By using a proposed Q-learning method based on SVR-NSGA-II, the cells have the capability to predict their phenotypes autonomously, that is, to act on their own without external direction in response to the situations they encounter. Computational simulations of the model were performed in order to analyze its performance. The most striking feature of our results is that each cell can select its phenotype at each time step according to its condition. We provide evidence that the prediction of cell phenotypes is reliable. Our proposed model, which we term a hybrid multiscale model of cancer cell behavior, has the potential to combine the best features of both continuum and discrete models. The in silico results indicate that the 3D model can represent key features of cancer growth, angiogenesis, and the related micro-environment, and show that the findings are in good agreement with biological tumor behavior. To the best of our knowledge, this paper is the first hybrid vascular multiscale model of cancer cell behavior that has the capability to predict cell phenotypes individually from a self-generated dataset.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ezato, K.; Shehata, A.M.; Kunugi, T.
1999-08-01
In order to treat strongly heated, forced gas flows at low Reynolds numbers in vertical circular tubes, the κ-ε turbulence model of Abe, Kondoh, and Nagano (1994), developed for forced turbulent flow between parallel plates with the constant property idealization, has been successfully applied. For thermal energy transport, the turbulent Prandtl number model of Kays and Crawford (1993) was adopted. The capability to handle these flows was assessed via calculations at the conditions of experiments by Shehata (1984), ranging from essentially turbulent to laminarizing due to the heating. Predictions forecast the development of turbulent transport quantities, Reynolds stress, and turbulent heat flux, as well as turbulent viscosity and turbulent kinetic energy. Overall agreement between the calculations and the measured velocity and temperature distributions is good, establishing confidence in the values of the forecast turbulence quantities, and in the model which produced them. Most importantly, the model yields predictions which compare well with the measured wall heat transfer parameters and the pressure drop.
NASA Astrophysics Data System (ADS)
Cai, Jun; Wang, Kuaishe; Shi, Jiamin; Wang, Wen; Liu, Yingying
2018-01-01
Constitutive analysis for hot working of BFe10-1-2 alloy was carried out using experimental stress-strain data from isothermal hot compression tests over a wide temperature range of 1,023-1,273 K and a strain rate range of 0.001-10 s⁻¹. A constitutive equation based on modified double multiple nonlinear regression was proposed, considering the independent effects of strain, strain rate, and temperature as well as their interrelation. The flow stress data predicted by the developed equation were compared with the experimental data. The correlation coefficient (R), average absolute relative error (AARE) and relative errors were introduced to verify the validity of the developed constitutive equation. Subsequently, a comparative study was made of the capability of a strain-compensated Arrhenius-type constitutive model. The results showed that the developed constitutive equation based on modified double multiple nonlinear regression could predict the flow stress of BFe10-1-2 alloy with good correlation and generalization.
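The two headline verification metrics, R and AARE, can be sketched as follows; the stress values are invented placeholders, not the BFe10-1-2 measurements.

```python
import numpy as np

def r_and_aare(measured, predicted):
    """Correlation coefficient R and average absolute relative error (AARE, %)
    between measured and model-predicted flow stresses."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    r = float(np.corrcoef(measured, predicted)[0, 1])
    aare = float(np.mean(np.abs((measured - predicted) / measured))) * 100.0
    return r, aare

# Placeholder flow stresses (MPa); not the paper's data.
r, aare = r_and_aare([120.0, 95.0, 60.0, 41.0], [118.0, 97.0, 58.0, 43.0])
```

R measures how well the trend is captured, while AARE penalises point-by-point relative deviation, which is why papers of this kind commonly report both.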
Predicting Stability Constants for Uranyl Complexes Using Density Functional Theory
Vukovic, Sinisa; Hay, Benjamin P.; Bryantsev, Vyacheslav S.
2015-04-02
The ability to predict the equilibrium constants for the formation of 1:1 uranyl:ligand complexes (log K1 values) provides the essential foundation for the rational design of ligands with enhanced uranyl affinity and selectivity. We use density functional theory (B3LYP) and the IEFPCM continuum solvation model to compute aqueous stability constants for UO₂²⁺ complexes with 18 donor ligands. Theoretical calculations permit reasonably good estimates of relative binding strengths, while the absolute log K1 values are significantly overestimated. Accurate predictions of the absolute log K1 values (root mean square deviation from experiment < 1.0 for log K1 values ranging from 0 to 16.8) can be obtained by fitting the experimental data for two groups of mono- and divalent negative oxygen donor ligands. The utility of the correlations is demonstrated for amidoxime and imide dioxime ligands, providing a useful means of screening for new ligands with strong chelate capability to uranyl.
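The calibration step described above amounts to a linear fit of computed against experimental log K1 values for a training group of ligands; this sketch uses invented numbers solely to show the mechanics.

```python
import numpy as np

# Invented placeholder values: raw DFT estimates overestimate log K1 roughly linearly.
logk_dft = np.array([12.0, 18.5, 25.0, 31.0, 38.0])  # computed for training ligands
logk_exp = np.array([2.1, 5.0, 8.4, 11.2, 14.9])     # experimental values

a, b = np.polyfit(logk_dft, logk_exp, 1)   # calibration: logK_exp ~ a*logK_dft + b
logk_cal = a * logk_dft + b                # calibrated predictions
rmsd = float(np.sqrt(np.mean((logk_cal - logk_exp) ** 2)))
```

Once `a` and `b` are fixed on a training group, the same line converts raw computed values for new candidate ligands (e.g. amidoximes) into calibrated log K1 estimates.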
Novel composites for wing and fuselage applications
NASA Technical Reports Server (NTRS)
Sobel, L. H.; Buttitta, C.; Suarez, J. A.
1995-01-01
Probabilistic predictions based on the IPACS code are presented for the material and structural response of unnotched and notched, IM6/3501-6 Gr/Ep laminates. Comparisons of predicted and measured modulus and strength distributions are given for unnotched unidirectional, cross-ply and quasi-isotropic laminates. The predicted modulus distributions were found to correlate well with the test results for all three unnotched laminates. Correlations of strength distributions for the unnotched laminates are judged good for the unidirectional laminate and fair for the cross-ply laminate, whereas the strength correlation for the quasi-isotropic laminate is judged poor because IPACS did not have a progressive failure capability at the time this work was performed. The report also presents probabilistic and structural reliability analysis predictions for the strain concentration factor (SCF) for an open-hole, quasi-isotropic laminate subjected to longitudinal tension. A special procedure was developed to adapt IPACS for the structural reliability analysis. The reliability results show the importance of identifying the most significant random variables upon which the SCF depends, and of having accurate scatter values for these variables.
NASA Technical Reports Server (NTRS)
Sobel, Larry; Buttitta, Claudio; Suarez, James
1993-01-01
Probabilistic predictions based on the Integrated Probabilistic Assessment of Composite Structures (IPACS) code are presented for the material and structural response of unnotched and notched, IM6/3501-6 Gr/Ep laminates. Comparisons of predicted and measured modulus and strength distributions are given for unnotched unidirectional, cross-ply, and quasi-isotropic laminates. The predicted modulus distributions were found to correlate well with the test results for all three unnotched laminates. Correlations of strength distributions for the unnotched laminates are judged good for the unidirectional laminate and fair for the cross-ply laminate, whereas the strength correlation for the quasi-isotropic laminate is deficient because IPACS did not yet have a progressive failure capability. The paper also presents probabilistic and structural reliability analysis predictions for the strain concentration factor (SCF) for an open-hole, quasi-isotropic laminate subjected to longitudinal tension. A special procedure was developed to adapt IPACS for the structural reliability analysis. The reliability results show the importance of identifying the most significant random variables upon which the SCF depends, and of having accurate scatter values for these variables.
Analysis, Simulation and Prediction of Cosmetic Defects on Automotive External Panel
NASA Astrophysics Data System (ADS)
Le Port, A.; Thuillier, S.; Borot, C.; Charbonneaux, J.
2011-08-01
The first impression of quality for a vehicle is linked to its perfect appearance, which has a major impact on the reputation of a car manufacturer. Cosmetic defects are thus increasingly taken into account in process design. Qualifying a part as good or bad from the cosmetic point of view is mainly subjective: the part's aspect is considered acceptable if no defect is visible on the vehicle to the final customer. Cosmetic defects that appear during sheet metal forming are checked by visual inspection in light inspection rooms, by stoning, or with optical or mechanical sensors or feelers. A lack of cosmetic defect prediction before part production leads to the need for corrective actions, causes production delays and generates additional costs. This paper first explores the objective description of what cosmetic defects on a stamped part are and where they come from. It then investigates the capability of software to predict these defects, and suggests the use of a cosmetic defect analysis tool developed within PAM-STAMP 2G for their qualitative and quantitative prediction.
NASA Technical Reports Server (NTRS)
Schmidt, Rodney C.; Patankar, Suhas V.
1988-01-01
The use of low Reynolds number (LRN) forms of the k-epsilon turbulence model in predicting transitional boundary layer flow characteristic of gas turbine blades is developed. The research presented consists of: (1) an evaluation of two existing models; (2) the development of a modification to current LRN models; and (3) extensive testing of the proposed model against experimental data. The prediction characteristics and capabilities of the Jones-Launder (1972) and Lam-Bremhorst (1981) LRN k-epsilon models are evaluated with respect to the prediction of transition on flat plates. Next, the mechanism by which the models simulate transition is considered and the need for additional constraints is discussed. Finally, the transition predictions of a new model are compared with a wide range of experiments, including transitional flows with free-stream turbulence under conditions of flat plate constant velocity, flat plate constant acceleration, flat plate with strongly variable acceleration, and flow around turbine blade test cascades. In general, the calculation procedure yields good agreement with most of the experiments.
A conservative fully implicit algorithm for predicting slug flows
NASA Astrophysics Data System (ADS)
Krasnopolsky, Boris I.; Lukyanov, Alexander A.
2018-02-01
An accurate and predictive modelling of slug flows is required by many industries (e.g., oil and gas, nuclear engineering, chemical engineering) to prevent undesired events potentially leading to serious environmental accidents. For example, hydrodynamic and terrain-induced slugging leads to unwanted unsteady flow conditions. This demands the development of fast and robust numerical techniques for predicting slug flows. The study presented in this paper proposes a multi-fluid model and its implementation method accounting for phase appearance and disappearance. The numerical modelling of phase appearance and disappearance presents a complex numerical challenge for all multi-component and multi-fluid models. Numerical challenges arise from the singular systems of equations when some phases are absent and from the solution discontinuity when some phases appear or disappear. This paper provides a flexible and robust solution to these issues. The fully implicit formulation described in this work enables efficient solution of the governing fluid flow equations. The proposed numerical method provides a modelling capability for phase appearance and disappearance processes, based on a switching procedure between various sets of governing equations. These sets of equations are constructed using information about the number of phases present in the computational domain. The proposed scheme does not require an explicit truncation of solutions, leading to a scheme that is conservative for mass and linear momentum. A transient two-fluid model is used to verify and validate the proposed algorithm for conditions of hydrodynamic and terrain-induced slug flow regimes. The developed modelling capabilities allow prediction of all the major features of the experimental data, and are in good quantitative agreement with them.
NASA Astrophysics Data System (ADS)
Castiglioni, Giacomo
Flows over airfoils and blades in rotating machinery, for unmanned and micro-aerial vehicles, wind turbines, and propellers consist of a laminar boundary layer near the leading edge that is often followed by a laminar separation bubble and transition to turbulence further downstream. Typical Reynolds averaged Navier-Stokes turbulence models are inadequate for such flows. Direct numerical simulation is the most reliable, but is also the most computationally expensive alternative. This work assesses the capability of immersed boundary methods and large eddy simulations to reduce the computational requirements for such flows and still provide high quality results. Two-dimensional and three-dimensional simulations of a laminar separation bubble on a NACA-0012 airfoil at Re_c = 5×10⁴ and at 5° of incidence have been performed with an immersed boundary code and a commercial code using body fitted grids. Several sub-grid scale models have been implemented in both codes and their performance evaluated. For the two-dimensional simulations with the immersed boundary method the results show good agreement with the direct numerical simulation benchmark data for the pressure coefficient Cp and the friction coefficient Cf, but only when using dissipative numerical schemes. There is evidence that this behavior can be attributed to the ability of dissipative schemes to damp numerical noise coming from the immersed boundary. For the three-dimensional simulations the results show a good prediction of the separation point, but an inaccurate prediction of the reattachment point unless full direct numerical simulation resolution is used.
The commercial code shows good agreement with the direct numerical simulation benchmark data in both two and three-dimensional simulations, but the presence of significant, unquantified numerical dissipation prevents a conclusive assessment of the actual prediction capabilities of very coarse large eddy simulations with low order schemes in general cases. Additionally, a two-dimensional sweep of angles of attack from 0° to 5° is performed showing a qualitative prediction of the jump in lift and drag coefficients due to the appearance of the laminar separation bubble. The numerical dissipation inhibits the predictive capabilities of large eddy simulations whenever it is of the same order of magnitude or larger than the sub-grid scale dissipation. The need to estimate the numerical dissipation is most pressing for low-order methods employed by commercial computational fluid dynamics codes. Following the recent work of Schranner et al., the equations and procedure for estimating the numerical dissipation rate and the numerical viscosity in a commercial code are presented. The method allows for the computation of the numerical dissipation rate and numerical viscosity in the physical space for arbitrary sub-domains in a self-consistent way, using only information provided by the code in question. The method is first tested for a three-dimensional Taylor-Green vortex flow in a simple cubic domain and compared with benchmark results obtained using an accurate, incompressible spectral solver. Afterwards the same procedure is applied for the first time to a realistic flow configuration, specifically to the above discussed laminar separation bubble flow over a NACA 0012 airfoil. 
The method appears to be quite robust and its application reveals that for the code and the flow in question the numerical dissipation can be significantly larger than the viscous dissipation or the dissipation of the classical Smagorinsky sub-grid scale model, confirming the previously qualitative finding.
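The budget-residual procedure attributed to Schranner et al. can be reduced to its core idea: in a sub-domain with no forcing and negligible net kinetic-energy flux, the part of the kinetic-energy decay not explained by resolved viscous dissipation is assigned to numerical dissipation. The time series below are fabricated for illustration; the actual method evaluates the full discrete budget terms of the code in question.

```python
import numpy as np

# Fabricated sub-domain time series of resolved kinetic energy and dissipation.
t = np.linspace(0.0, 1.0, 101)
E = np.exp(-0.5 * t)               # resolved kinetic energy in the sub-domain
eps_visc = 0.3 * np.exp(-0.5 * t)  # viscous (plus modelled SGS) dissipation rate

# With no forcing or net flux, the unexplained part of the decay is numerical.
dEdt = np.gradient(E, t)           # total kinetic-energy decay rate
eps_num = -dEdt - eps_visc         # budget residual: numerical dissipation rate
nu_ratio = eps_num / eps_visc      # proxy for the numerical-to-viscous viscosity ratio
```

In this synthetic case the numerical dissipation is two thirds of the viscous value everywhere, echoing the thesis finding that numerical dissipation can rival or exceed the physical and SGS contributions.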
Beyond Climate and Weather Science: Expanding the Forecasting Family to Serve Societal Needs
NASA Astrophysics Data System (ADS)
Barron, E. J.
2009-05-01
The ability to "anticipate" the future is what makes information from the Earth sciences valuable to society - whether it is the prediction of severe weather or the future availability of water resources in response to climate change. An improved ability to anticipate or forecast has the potential to serve society by simultaneously improving our ability to (1) promote economic vitality, (2) enable environmental stewardship, (3) protect life and property, as well as (4) improve our fundamental knowledge of the earth system. The potential is enormous, yet many appear ready to move quickly toward specific mitigation and adaptation strategies assuming that the science is settled. Five important weaknesses must be addressed first: (1) the formation of a true "climate services" function and capability, (2) the deliberate investment in expanding the family of forecasting elements to incorporate a broader array of environmental factors and impacts, (3) the investment in the sciences that connect climate to society, (4) a deliberate focus on the problems associated with scale, in particular the difference between the scale of predictive models and the scale associated with societal decisions, and (5) the evolution from climate services and model predictions to the equivalent of "environmental intelligence centers." The objective is to bring the discipline of forecasting to a broader array of environmental challenges. Assessments of the potential impacts of global climate change on societal sectors such as water, human health, and agriculture provide good examples of this challenge. We have the potential to move from a largely reactive mode in addressing adverse health outcomes, for example, to one in which the ties between climate, land cover, infectious disease vectors, and human health are used to forecast and predict adverse human health conditions. The potential exists for a revolution in forecasting that entrains a much broader set of societal needs and solutions.
The argument is made that, for example, the current capabilities in the prediction of environmental health are similar to the capabilities (and potential) of weather forecasting in the 1960s.
NASA Astrophysics Data System (ADS)
Matamala, R.; Fan, Z.; Jastrow, J. D.; Liang, C.; Calderon, F.; Michaelson, G.; Ping, C. L.; Mishra, U.; Hofmann, S. M.
2016-12-01
The large amounts of organic matter stored in permafrost-region soils are preserved in a relatively undecomposed state by the cold and wet environmental conditions limiting decomposer activity. With pending climate changes and the potential for warming of Arctic soils, there is a need to better understand the amount and potential susceptibility to mineralization of the carbon stored in the soils of this region. Studies have suggested that soil C:N ratio or other indicators based on the molecular composition of soil organic matter could be good predictors of potential decomposability. In this study, we investigated the capability of Fourier-transform mid-infrared (MidIR) spectroscopy to predict the evolution of carbon dioxide (CO2) produced by Arctic tundra soils during a 60-day laboratory incubation. Soils collected from four tundra sites on the Coastal Plain and Arctic Foothills of the North Slope of Alaska were separated into active-layer organic, active-layer mineral, and upper permafrost and incubated at 1, 4, 8 and 16 °C. Carbon dioxide production was measured throughout the incubations. Total soil organic carbon (SOC) and total nitrogen (TN) concentrations, salt (0.5 M K2SO4) extractable organic matter (SEOM), and MidIR spectra of the soils were measured before and after incubation. Multivariate partial least squares (PLS) modeling was used to predict cumulative CO2 production, decay rates, and the other measurements. MidIR reliably estimated SOC, TN, and SEOM concentrations. The MidIR prediction models of CO2 production were very good for active-layer mineral and upper permafrost soils and good for the active-layer organic soils. SEOM was also a very good predictor of CO2 produced during the incubations. Analysis of the standardized beta coefficients from the PLS models of CO2 production for the three soil layers indicated a small number (9) of influential spectral bands.
Of these, bands associated with O-H and N-H stretch, carbonates, and ester C-O appeared to be most important for predicting CO2 production for both active-layer mineral and upper permafrost soils. Further analysis of these influential bands and their relationships to SEOM in soil will be explored. Our results show that the MidIR spectra contain valuable information that can be related to the decomposability of soils.
NASA Astrophysics Data System (ADS)
Marçais, J.; de Dreuzy, J.-R.; Ginn, T. R.; Rousseau-Gueutin, P.; Leray, S.
2015-06-01
While central in groundwater resources and contaminant fate, Transit Time Distributions (TTDs) are never directly accessible from field measurements but always deduced from a combination of tracer data and more or less involved models. We evaluate the predictive capabilities of approximate distributions (Lumped Parameter Models, abbreviated as LPMs) instead of fully developed aquifer models. We develop a generic assessment methodology based on synthetic aquifer models to establish references for observable quantities such as tracer concentrations and prediction targets such as groundwater renewal times. Candidate LPMs are calibrated on the observable tracer concentrations and used to infer renewal time predictions, which are compared with the reference ones. This methodology is applied to the produced crystalline aquifer of Plœmeur (Brittany, France), where flows leak through a micaschist aquitard to reach a sloping aquifer in which they radially converge to the producing well, yielding broad rather than multi-modal TTDs. One-, two- and three-parameter LPMs were calibrated to a corresponding number of simulated reference anthropogenic tracer concentrations (CFC-11, 85Kr and SF6). Extensive statistical analysis over the aquifer shows that a good fit of the anthropogenic tracer concentrations is neither a necessary nor a sufficient condition to reach acceptable predictive capability. Prediction accuracy is, however, strongly conditioned by the use of a priori relevant LPMs. Only adequate LPM shapes yield unbiased estimations. In the case of Plœmeur, relevant LPMs should have two parameters to capture the mean and the standard deviation of the residence times and cover the first few decades [0; 50 years]. Inverse Gaussian and shifted-exponential LPMs performed equally well for the wide variety of the reference TTDs, from strongly peaked in recharge zones where flows are diverging to broadly distributed in more converging zones.
When using two sufficiently different atmospheric tracers like CFC-11 and 85Kr, groundwater renewal time predictions are accurate to 1-5 years for estimating mean transit times of some decades (10-50 years). 1-parameter LPMs calibrated on a single atmospheric tracer lead to substantially larger errors, of the order of 10 years, while 3-parameter LPMs calibrated with a third atmospheric tracer (SF6) do not improve the prediction capabilities. Based on a specific site, this study highlights the high predictive capacity of two atmospheric tracers on the same time range with sufficiently different atmospheric concentration chronicles.
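The LPM machinery described above boils down to one operation: the tracer concentration in pumped water is the atmospheric input chronicle convolved with the assumed TTD. A minimal sketch with a shifted-exponential TTD follows; the tracer history is a constant placeholder, not real CFC-11 data, and the midpoint-rule convolution is a simplification.

```python
import math

def shifted_exponential_ttd(t, mean, shift):
    """Shifted-exponential TTD: zero before `shift`, exponential tail after,
    scaled so the overall mean transit time equals `mean`."""
    if t < shift:
        return 0.0
    tau = mean - shift
    return math.exp(-(t - shift) / tau) / tau

def lpm_concentration(input_history, mean, shift, dt=1.0):
    """Convolve an annual atmospheric input series (oldest first) with the TTD."""
    n = len(input_history)
    total = 0.0
    for age in range(n):
        # water of this age carries the atmospheric input from `age` years ago
        total += input_history[n - 1 - age] * shifted_exponential_ttd(age + 0.5, mean, shift) * dt
    return total
```

Calibration then means adjusting `mean` (and `shift`) until `lpm_concentration` matches the measured tracer concentrations — which, as the abstract stresses, can succeed even when the LPM shape is wrong for prediction.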
Granular support vector machines with association rules mining for protein homology prediction.
Tang, Yuchun; Jin, Bo; Zhang, Yan-Qing
2005-01-01
Protein homology prediction between protein sequences is one of the critical problems in computational biology. Such a complex classification problem is common in medical or biological information processing applications. How to build a model with superior generalization capability from training samples is an essential issue for mining knowledge to accurately predict/classify unseen new samples and to effectively support human experts in making correct decisions. A new learning model called granular support vector machines (GSVM) is proposed based on our previous work. GSVM systematically and formally combines the principles from statistical learning theory and granular computing theory and thus provides an interesting new mechanism to address complex classification problems. It works by building a sequence of information granules and then building support vector machines (SVM) in some of these information granules on demand. A good granulation method to find suitable granules is crucial for modeling a GSVM with good performance. In this paper, we also propose an association rules-based granulation method. Granules induced by association rules with sufficiently high confidence and significant support are left as they are because of their high "purity" and significant effect on simplifying the classification task. For every other granule, an SVM is modeled to discriminate the corresponding data. In this way, a complex classification problem is divided into multiple smaller problems so that the learning task is simplified. The proposed algorithm, here named GSVM-AR, is compared with SVM on the KDDCUP04 protein homology prediction data. The experimental results show that finding the splitting hyperplane is not a trivial task (we should be careful to select the association rules to avoid overfitting) and GSVM-AR does show significant improvement compared to building one single SVM in the whole feature space.
Another advantage of GSVM-AR is that it is easy to implement. More importantly and more interestingly, GSVM provides a new mechanism to address complex classification problems.
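The two-tier structure described above — rule-covered "pure" granules decided directly, everything else handed to a trained classifier — can be sketched schematically. A trivial midpoint-threshold split stands in for the per-granule SVM, and the rule and data below are invented for illustration, not taken from the KDDCUP04 set.

```python
def rule_covers(x):
    """Hypothetical high-confidence association rule: feature 0 > 0.9 implies class 1."""
    return x[0] > 0.9

def train_threshold_classifier(samples):
    """Stand-in for the per-granule SVM: split feature 1 at the midpoint
    between the classes."""
    pos = [x[1] for x, y in samples if y == 1]
    neg = [x[1] for x, y in samples if y == 0]
    return (min(pos) + max(neg)) / 2.0

def gsvm_ar_predict(x, threshold):
    if rule_covers(x):          # pure granule: the rule decides alone
        return 1
    return 1 if x[1] > threshold else 0

# mixed-granule training data: ((feature0, feature1), label) -- invented
mixed = [((0.2, 0.1), 0), ((0.3, 0.2), 0), ((0.4, 0.8), 1), ((0.5, 0.9), 1)]
threshold = train_threshold_classifier(mixed)
```

The payoff mirrors the abstract: the classifier only has to be trained on the smaller, harder granule, which both simplifies learning and speeds it up.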
A method of predicting the energy-absorption capability of composite subfloor beams
NASA Technical Reports Server (NTRS)
Farley, Gary L.
1987-01-01
A simple method of predicting the energy-absorption capability of composite subfloor beam structures was developed. The method is based upon the weighted sum of the energy-absorption capabilities of the constituent elements of a subfloor beam. An empirical database of energy-absorption results from circular and square cross-section tube specimens was used in the prediction. The procedure is applicable to a wide range of subfloor beam structures. The procedure was demonstrated on three subfloor beam concepts. Agreement between test and prediction was within seven percent for all three cases.
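The weighted-sum idea above is simple enough to state as arithmetic: estimate the beam's specific energy absorption as the area-weighted average of its constituent elements' values drawn from the tube database. The element breakdown and numbers below are invented for illustration.

```python
def beam_energy_absorption(elements):
    """Area-weighted specific energy absorption.
    elements: list of (cross_sectional_area, specific_energy_absorption)."""
    total_area = sum(a for a, _ in elements)
    return sum(a * ea for a, ea in elements) / total_area

# e.g. a beam idealized as two tube-like corner elements and one flat web
# (hypothetical areas in mm^2 and specific energy absorptions in J/g)
elements = [(120.0, 55.0), (120.0, 55.0), (260.0, 18.0)]
print(round(beam_energy_absorption(elements), 2))  # → 35.76
```

The within-seven-percent agreement reported above suggests this element-level bookkeeping captures most of the crush behavior for these beam concepts.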
NASA Astrophysics Data System (ADS)
Wei, Yaochi; Kim, Seokpum; Horie, Yasuyuki; Zhou, Min
2017-06-01
A computational approach is developed to predict the probabilistic ignition thresholds of polymer-bonded explosives (PBXs). The simulations explicitly account for microstructure, constituent properties, and interfacial responses and capture the processes responsible for the development of hotspots and damage. The specific damage mechanisms considered include viscoelasticity, viscoplasticity, fracture, post-fracture contact, frictional heating, and heat conduction. The probabilistic analysis uses sets of statistically similar microstructure samples to mimic relevant experiments for statistical variations of material behavior due to inherent material heterogeneities. The ignition thresholds and corresponding ignition probability maps are predicted for PBX 9404 and PBX 9501 for the impact loading regime of Up = 200-1200 m/s. James and Walker-Wasley relations are utilized to establish explicit analytical expressions for the ignition probability as a function of load intensities. The predicted results are in good agreement with available experimental measurements. The capability to computationally predict the macroscopic response from material microstructures and basic constituent properties lends itself to the design of new materials and the analysis of existing materials. The authors gratefully acknowledge the support from the Air Force Office of Scientific Research (AFOSR) and the Defense Threat Reduction Agency (DTRA).
Applicability of a panel method, which includes nonlinear effects, to a forward-swept-wing aircraft
NASA Technical Reports Server (NTRS)
Ross, J. C.
1984-01-01
The ability of a lower-order panel method, VSAERO, to accurately predict the lift and pitching moment of a complete forward-swept-wing/canard configuration was investigated. The program can simulate nonlinear effects including boundary-layer displacement thickness, wake roll-up, and, to a limited extent, separated wakes. The predictions were compared with experimental data obtained using a small-scale model in the 7- by 10-Foot Wind Tunnel at NASA Ames Research Center. For the particular configuration under investigation, wake roll-up had only a small effect on the force and moment predictions. The effect of the displacement thickness modeling was to reduce the lift curve slope slightly, thus bringing the predicted lift into good agreement with the measured value. Pitching moment predictions were also improved by the boundary-layer simulation. The separation modeling was found to be sensitive to user inputs, but appears to give a reasonable representation of a separated wake. In general, the nonlinear capabilities of the code were found to improve the agreement with experimental data. The usefulness of the code would be enhanced by improving the reliability of the separated-wake modeling and by the addition of a leading-edge separation model.
van Bokhorst-de van der Schueren, Marian A E; Guaitoli, Patrícia Realino; Jansma, Elise P; de Vet, Henrica C W
2014-02-01
Numerous nutrition screening tools for the hospital setting have been developed. The aim of this systematic review is to study the construct or criterion validity and predictive validity of nutrition screening tools for the general hospital setting. A systematic review of English, French, German, Spanish, Portuguese and Dutch articles identified via MEDLINE, Cinahl and EMBASE (from inception to the 2nd of February 2012). Additional studies were identified by checking reference lists of identified manuscripts. Search terms included key words for malnutrition, screening or assessment instruments, and terms for hospital setting and adults. Data were extracted independently by 2 authors. Only studies expressing the (construct, criterion or predictive) validity of a tool were included. 83 studies (32 screening tools) were identified: 42 studies on construct or criterion validity versus a reference method and 51 studies on predictive validity on outcome (i.e. length of stay, mortality or complications). None of the tools performed consistently well to establish the patients' nutritional status. For the elderly, MNA performed fair to good; for adults, MUST performed fair to good. SGA, NRS-2002 and MUST performed well in predicting outcome in approximately half of the studies reviewed in adults, but not in older patients. Not one single screening or assessment tool is capable of adequate nutrition screening as well as predicting poor nutrition-related outcome. Development of new tools seems redundant and will most probably not lead to new insights. New studies comparing different tools within one patient population are required. Copyright © 2013 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
Severe traumatic head injury: prognostic value of brain stem injuries detected at MRI.
Hilario, A; Ramos, A; Millan, J M; Salvador, E; Gomez, P A; Cicuendez, M; Diez-Lobato, R; Lagares, A
2012-11-01
Traumatic brain injuries represent an important cause of death for young people. The main objectives of this work are to correlate brain stem injuries detected at MR imaging with outcome at 6 months in patients with severe TBI, and to determine which MR imaging findings could be related to a worse prognosis. One hundred and eight patients with severe TBI were studied by MR imaging in the first 30 days after trauma. Brain stem injury was categorized as anterior or posterior, hemorrhagic or nonhemorrhagic, and unilateral or bilateral. Outcome measures were GOSE and Barthel Index 6 months postinjury. The relationship between MR imaging findings of brain stem injuries, outcome, and disability was explored by univariate analysis. Prognostic capability of MR imaging findings was also explored by calculation of sensitivity, specificity, and area under the ROC curve for poor and good outcome. Brain stem lesions were detected in 51 patients, of whom 66% showed a poor outcome, as expressed by the GOSE scale. Bilateral involvement was strongly associated with poor outcome (P < .05). Posterior location showed the best discriminatory capability in terms of outcome (OR 6.8, P < .05) and disability (OR 4.8, P < .01). The addition of nonhemorrhagic and anterior lesions or unilateral injuries showed the highest odds and best discriminatory capacity for good outcome. The prognosis worsens in direct relationship to the extent of traumatic injury. Posterior and bilateral brain stem injuries detected at MR imaging are poor prognostic signs. Nonhemorrhagic injuries showed the highest positive predictive value for good outcome.
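The prognostic-performance figures quoted above (sensitivity, specificity, odds ratios, area under the ROC curve) all derive from the same 2x2 cross-tabulation of an imaging sign against outcome. A minimal sketch, with invented counts (the paper's raw tables are not reproduced in the abstract):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity and specificity of a binary imaging sign for poor outcome."""
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical 2x2 counts: bilateral brain stem injury vs. poor outcome
sens, spec = sensitivity_specificity(tp=20, fn=14, tn=60, fp=14)
print(round(sens, 2), round(spec, 2))  # → 0.59 0.81
```

For a binary sign like this, the ROC curve has a single operating point, so its discriminatory capability is summarized by exactly this pair of numbers.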
The mixing of particle clouds plunging into water
DOE Office of Scientific and Technical Information (OSTI.GOV)
Angelini, S.; Theofanous, T.G.; Yuen, W.W.
This work addresses certain fundamental aspects of the premixing phase of steam explosions. At issue are the multifield interaction aspects under highly transient, multidimensional conditions and in the presence of strong phase changes. They are addressed in an experiment (the MAGICO-2000) involving well-characterized particle clouds mixing with water, and detailed measurements of both external and internal characteristics of the mixing zone. Both cold and hot (up to 1500 °C) particle clouds are considered in conjunction with saturated and subcooled water pools. The PMALPHA code is used as an aid in interpreting the experimental results, and the exercise reveals good predictive capabilities for it.
Experimental investigation of elastic mode control on a model of a transport aircraft
NASA Technical Reports Server (NTRS)
Abramovitz, M.; Heimbaugh, R. M.; Nomura, J. K.; Pearson, R. M.; Shirley, W. A.; Stringham, R. H.; Tescher, E. L.; Zoock, I. E.
1981-01-01
A 4.5 percent DC-10 derivative flexible model with active controls is fabricated, developed, and tested to investigate the ability to suppress flutter and reduce gust loads with actively controlled surfaces. The model is analyzed and tested in both semispan and complete-model configurations. Analytical methods are refined, and control laws are developed and successfully tested on both versions of the model. A 15 to 25 percent increase in flutter speed due to the active system is demonstrated. The capability of an active control system to significantly reduce wing bending moments due to turbulence is demonstrated. Good correlation is obtained between test and analytical prediction.
Yang, Qiang; Arathorn, David W.; Tiruveedhula, Pavan; Vogel, Curtis R.; Roorda, Austin
2010-01-01
We demonstrate an integrated FPGA solution to project highly stabilized, aberration-corrected stimuli directly onto the retina by means of real-time retinal image motion signals in combination with high-speed modulation of a scanning laser. By reducing the latency between target location prediction and stimulus delivery, the stimulus location accuracy, in a subject with good fixation, is improved to 0.15 arcminutes from 0.26 arcminutes in our earlier solution. We also demonstrate that the new FPGA solution is capable of delivering large stabilized stimulus patterns (up to 256x256 pixels) to the retina. PMID:20721171
Evaluation of Regional Extended-Range Prediction for Tropical Waves Using COAMPS®
NASA Astrophysics Data System (ADS)
Hong, X.; Reynolds, C. A.; Doyle, J. D.; May, P. W.; Chen, S.; Flatau, M. K.; O'Neill, L. W.
2014-12-01
The Navy's Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS1) in a two-way coupled mode is used for two-month regional extended-range prediction for the Madden-Julian Oscillation (MJO) and Tropical Cyclone 05 (TC05) that occurred during the DYNAMO period from November to December 2011. Verification and statistics from two experiments with 45-km and 27-km horizontal resolutions indicate that the 27-km run provides a better representation of the three MJO events that occurred during this two-month period, including the two convectively coupled Kelvin waves associated with the second MJO event, as observed. The 27-km run also significantly reduces forecast error after 15 days, reaching a maximum bias reduction of 89% in the third 15-day period due to the well-represented MJO propagation over the Maritime Continent. Correlations between the model forecasts and observations or ECMWF analyses show that the MJO suppressed period is more difficult to predict than the active period. In addition, correlation coefficients for cloud liquid water path (CLWP) and precipitation are relatively low for both cases compared to other variables. The study suggests that a good simulation of TC05 and a good simulation of the Kelvin waves and westerly wind bursts are linked. Further research is needed to investigate the capability in regional extended-range forecasts when the lateral boundary conditions are provided from a long-term global forecast to allow for an assessment of potential operational forecast skill. 1COAMPS is a registered trademark of the U.S. Naval Research Laboratory.
NASA Technical Reports Server (NTRS)
King, James; Nickling, William G.; Gillies, John A.
2005-01-01
The presence of nonerodible elements is well understood to be a reducing factor for soil erosion by wind, but the limits of its protection of the surface and erosion threshold prediction are complicated by the varying geometry, spatial organization, and density of the elements. The predictive capabilities of the most recent models for estimating wind driven particle fluxes are reduced because of the poor representation of the effectiveness of vegetation to reduce wind erosion. Two approaches have been taken to account for roughness effects on sediment transport thresholds. Marticorena and Bergametti (1995) in their dust emission model parameterize the effect of roughness on threshold with the assumption that there is a relationship between roughness density and the aerodynamic roughness length of a surface. Raupach et al. (1993) offer a different approach based on physical modeling of wake development behind individual roughness elements and the partition of the surface stress and the total stress over a roughened surface. A comparison between the models shows the partitioning approach to be a good framework to explain the effect of roughness on entrainment of sediment by wind. Both models provided very good agreement for wind tunnel experiments using solid objects on a nonerodible surface. However, the Marticorena and Bergametti (1995) approach displays a scaling dependency when the difference between the roughness length of the surface and the overall roughness length is too great, while the Raupach et al. (1993) model's predictions perform better owing to the incorporation of the roughness geometry and the alterations to the flow they can cause.
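The Raupach et al. (1993) partition approach discussed above is commonly written as a ratio of threshold friction velocities that grows with roughness density λ: the rougher the surface, the more wind stress the elements absorb and the harder it is to entrain sediment. The sketch below uses the commonly cited form of that ratio; the parameter values (m, β, σ) are typical literature choices, not values fitted in this study.

```python
import math

def threshold_ratio(lam, m=0.5, beta=90.0, sigma=1.0):
    """Ratio of threshold friction velocity over a roughened surface to that
    over the bare surface, from the stress-partition model:
      lam   -- roughness density (frontal area index)
      m     -- accounts for spatial variability of surface stress
      beta  -- ratio of element drag to surface drag coefficients
      sigma -- basal-to-frontal area ratio of the elements
    """
    return math.sqrt((1.0 - m * sigma * lam) * (1.0 + m * beta * lam))
```

By contrast, the Marticorena and Bergametti (1995) parameterization works through the aerodynamic roughness length, which is where the scaling dependency noted in the abstract enters.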
NASA Technical Reports Server (NTRS)
vanderWall, Berend G.; Lim, Joon W.; Smith, Marilyn J.; Jung, Sung N.; Bailly, Joelle; Baeder, James D.; Boyd, D. Douglas, Jr.
2013-01-01
Significant advancements in computational fluid dynamics (CFD) and their coupling with computational structural dynamics (CSD, or comprehensive codes) for rotorcraft applications have been achieved recently. Despite this, CSD codes, with their engineering level of modeling of the rotor blade dynamics, the unsteady sectional aerodynamics and the vortical wake, are still the workhorse for the majority of applications. This is especially true when a large number of parameter variations is to be performed and their impact on performance, structural loads, vibration and noise is to be judged in an approximate yet reliable and as accurate as possible manner. In this article, the capabilities of such codes are evaluated using the HART II International Workshop database, focusing on a typical descent operating condition which includes strong blade-vortex interactions. A companion article addresses the CFD/CSD coupled approach. Three cases are of interest: the baseline case and two cases with 3/rev higher harmonic blade root pitch control (HHC) with different control phases employed. One setting is for minimum blade-vortex interaction noise radiation and the other for minimum vibration generation. The challenge is to correctly predict the wake physics, especially for the cases with HHC, and all the dynamics, aerodynamics, modifications of the wake structure and the aero-acoustics coming with it. It is observed that the comprehensive codes used today have a surprisingly good predictive capability when they appropriately account for all of the physics involved. The minimum requirements to obtain these results are outlined.
NASA Technical Reports Server (NTRS)
vanderWall, Berend G.; Lim, Joon W.; Smith, Marilyn J.; Jung, Sung N.; Bailly, Joelle; Baeder, James D.; Boyd, D. Douglas, Jr.
2012-01-01
Despite significant advancements in computational fluid dynamics and their coupling with computational structural dynamics (CSD, or comprehensive codes) for rotorcraft applications, CSD codes, with their engineering level of modeling of the rotor blade dynamics, the unsteady sectional aerodynamics and the vortical wake, are still the workhorse for the majority of applications. This is especially true when a large number of parameter variations is to be performed and their impact on performance, structural loads, vibration and noise is to be judged in an approximate yet reliable and as accurate as possible manner. In this paper, the capabilities of such codes are evaluated using the HART II International Workshop database, focusing on a typical descent operating condition which includes strong blade-vortex interactions. Three cases are of interest: the baseline case and two cases with 3/rev higher harmonic blade root pitch control (HHC) with different control phases employed. One setting is for minimum blade-vortex interaction noise radiation and the other for minimum vibration generation. The challenge is to correctly predict the wake physics, especially for the cases with HHC, and all the dynamics, aerodynamics, modifications of the wake structure and the aero-acoustics coming with it. It is observed that the comprehensive codes used today have a surprisingly good predictive capability when they appropriately account for all of the physics involved. The minimum requirements to obtain these results are outlined.
In Search of a Time Efficient Approach to Crack and Delamination Growth Predictions in Composites
NASA Technical Reports Server (NTRS)
Krueger, Ronald; Carvalho, Nelson
2016-01-01
Analysis benchmarking was used to assess the accuracy and time efficiency of algorithms suitable for automated delamination growth analysis. First, the Floating Node Method (FNM) was introduced and its combination with a simple exponential growth law (Paris law) and the Virtual Crack Closure Technique (VCCT) was discussed. Implementation of the method into a user element (UEL) in Abaqus/Standard® was also presented. For the assessment of growth prediction capabilities, an existing benchmark case based on the Double Cantilever Beam (DCB) specimen was briefly summarized. Additionally, the development of new benchmark cases based on the Mixed-Mode Bending (MMB) specimen to assess the growth prediction capabilities under mixed-mode I/II conditions was discussed in detail. A comparison was presented in which the benchmark cases were used to assess the existing low-cycle fatigue analysis tool in Abaqus/Standard® in comparison to the FNM-VCCT fatigue growth analysis implementation. The low-cycle fatigue analysis tool in Abaqus/Standard® was able to yield results that were in good agreement with the DCB benchmark example. Results for the MMB benchmark cases, however, only captured the trend correctly. The user element (FNM-VCCT) always yielded results that were in excellent agreement with all benchmark cases, at a fraction of the analysis time. The ability to assess the implementation of two methods in one finite element code illustrated the value of establishing benchmark solutions.
On the prediction of spray angle of liquid-liquid pintle injectors
NASA Astrophysics Data System (ADS)
Cheng, Peng; Li, Qinglian; Xu, Shun; Kang, Zhongtao
2017-09-01
The pintle injector is famous for its capability of deep throttling and low cost. However, the pintle injector has seldom been investigated. To get a good prediction of the spray angle of liquid-liquid pintle injectors, theoretical analysis, numerical simulations and experiments were conducted. Under the hypothesis of incompressible and inviscid flow, a spray angle formula was deduced from the continuity and momentum equations based on a control-volume analysis. The formula was then validated by numerical and experimental data. The results indicate that both geometric and injection parameters affect the total momentum ratio (TMR) and thereby influence the spray angle formed by liquid-liquid pintle injectors. TMR is the pivotal non-dimensional number that dominates the spray angle. Compared with gas-gas pintle injectors, the spray angle formed by liquid-liquid injectors is larger, which benefits from the local high-pressure zone near the pintle wall caused by the impingement of the radial and axial sheets.
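The control-volume argument above relates the spray angle to the ratio of radial to axial momentum fluxes. As an illustrative sketch only: the snippet computes TMR from mass flow rates and velocities and closes the angle with an arctangent of the momentum ratio, a generic momentum-balance estimate, not the specific formula derived in the paper.

```python
import math

def total_momentum_ratio(mdot_radial, v_radial, mdot_axial, v_axial):
    """TMR = (radial momentum flux) / (axial momentum flux)."""
    return (mdot_radial * v_radial) / (mdot_axial * v_axial)

def spray_half_angle_deg(tmr):
    """Illustrative arctangent closure: deflection of the merged sheet
    set by the ratio of radial to axial momentum."""
    return math.degrees(math.atan(tmr))

tmr = total_momentum_ratio(mdot_radial=0.5, v_radial=12.0,
                           mdot_axial=1.0, v_axial=12.0)
```

The qualitative behavior matches the abstract: any geometric or injection change enters the angle only through its effect on TMR.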
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deng, F.; Nehl, T.W.
1998-09-01
Because of its high efficiency and power density, the PM brushless dc motor is a strong candidate for electric and hybrid vehicle propulsion systems. An analytical approach is developed to predict the eddy-current losses in a permanent magnet brushless dc motor caused by inverter high-frequency pulse-width modulation (PWM) switching. The model uses polar coordinates to take curvature effects into account, and is also capable of including the space-harmonic effect of the stator magnetic field and the stator lamination effect on the losses. The model was applied to an existing motor design and was verified with the finite element method. Good agreement was achieved between the two approaches. Hence, the model is expected to be very helpful in predicting PWM switching losses in permanent magnet machine design.
Peng, Henry T; Edginton, Andrea N; Cheung, Bob
2013-10-01
Physiologically based pharmacokinetic (PBPK) models were developed using MATLAB Simulink® and PK-Sim®. We compared the capability and usefulness of these two models by simulating pharmacokinetic changes of midazolam under exercise and heat stress to verify the usefulness of MATLAB Simulink® as a generic PBPK modeling tool. Although both models show good agreement with experimental data obtained under resting conditions, their predictions of pharmacokinetic changes are less accurate under the stressful conditions. However, MATLAB Simulink® may be more flexible for including physiologically based processes such as oral absorption and for simulating various stress parameters such as stress intensity, duration and timing of drug administration to improve model performance. Further work will be conducted to modify algorithms in our generic model developed using MATLAB Simulink® and to investigate pharmacokinetics under other physiological stresses such as trauma. © The Author(s) 2013.
V&V Of CFD Modeling Of The Argonne Bubble Experiment: FY15 Summary Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoyt, Nathaniel C.; Wardle, Kent E.; Bailey, James L.
2015-09-30
In support of the development of accelerator-driven production of the fission product Mo-99, computational fluid dynamics (CFD) simulations of an electron-beam-irradiated, experimental-scale bubble chamber have been conducted in order to aid in the interpretation of existing experimental results, provide additional insights into the physical phenomena, and develop predictive thermal-hydraulic capabilities that can be applied to full-scale target solution vessels. Toward that end, a custom hybrid Eulerian-Eulerian-Lagrangian multiphase solver was developed, and simulations have been performed on high-resolution meshes. Good agreement between experiments and simulations has been achieved, especially with respect to the prediction of the maximum temperature of the uranyl sulfate solution in the experimental vessel. These positive results suggest that the simulation methodology that has been developed will prove suitable to assist in the development of full-scale production hardware.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wass, P. J.; Araujo, H.; Sumner, T.
We present the concept, design and testing of the radiation monitor for LISA Pathfinder. Galactic cosmic rays (GCRs) and solar energetic particles (SEPs) will cause charging of the LISA Pathfinder test masses, producing unwanted disturbances which could be significant during a large solar eruption. A radiation monitor on board LISA Pathfinder, using silicon PIN diodes as particle detectors, will measure the particle flux responsible for charging. It will also be able to record spectral information to identify solar energetic particle events. The design of the monitor was supported by Monte Carlo simulations which allow detailed predictions of the radiation monitor performance. We present these predictions as well as the results of high-energy proton tests carried out at the Paul Scherrer Institute, Switzerland. The tests show good agreement with our simulations and confirm the capability of the radiation monitor to perform well in the space environment, meeting all science requirements.
The Mars Exploration Rover (MER) Transverse Impulse Rocket System (TIRS)
NASA Technical Reports Server (NTRS)
SanMartin, Alejandro Miguel; Bailey, Erik
2005-01-01
In a very short period of time the MER project successfully developed and tested a system, TIRS/DIMES, to improve the probability of success in the presence of large Martian winds. The successful development of TIRS/DIMES played a big role in the landing site selection process by enabling the landing of Spirit in Gusev crater, a site of very high scientific interest but with known high-wind conditions. The performance of TIRS by Spirit at Gusev crater was excellent. The velocity prediction error was small, and Big TIRS was fired, reducing the impact horizontal velocity from approximately 23 meters per second to approximately 11 meters per second, well within the airbag capabilities. The performance of TIRS by Opportunity at Meridiani was good. The velocity prediction error was rather large (approximately 6 meters per second, a less-than-2-sigma value), but TIRS did not fire, which was the correct action.
Recent Progress Towards Predicting Aircraft Ground Handling Performance
NASA Technical Reports Server (NTRS)
Yager, T. J.; White, E. J.
1981-01-01
The significant progress which has been achieved in the development of aircraft ground handling simulation capability is reviewed, and additional improvements in software modeling are identified. The problem associated with providing the necessary simulator input data for adequate modeling of aircraft tire/runway friction behavior is discussed, and efforts to improve this complex model, and hence simulator fidelity, are described. Aircraft braking performance data obtained on several wet runway surfaces are compared to ground vehicle friction measurements and, by use of empirically derived methods, good agreement between actual and estimated aircraft braking friction from ground vehicle data is shown. A relatively new friction measuring device, the friction tester, showed great promise in providing data applicable to aircraft friction performance. Additional research efforts to improve methods of predicting tire friction performance are discussed, including use of an instrumented tire test vehicle to expand the tire friction data bank and a study of surface texture measurement techniques.
Early Estimation of Solar Activity Cycle: Potential Capability and Limits
NASA Technical Reports Server (NTRS)
Kitiashvili, Irina N.; Collins, Nancy S.
2017-01-01
The variable solar magnetic activity known as the 11-year solar cycle has the longest history of solar observations. These cycles dramatically affect conditions in the heliosphere and the Earth's space environment. Our current understanding of the physical processes that make up global solar dynamics and the dynamo that generates the magnetic fields is sketchy, resulting in unrealistic descriptions of the solar cycles in theoretical and numerical models. The absence of long-term observations of solar interior dynamics and photospheric magnetic fields hinders the development of accurate dynamo models and their calibration. In such situations, mathematical data assimilation methods provide an optimal approach for combining the available observational data and their uncertainties with theoretical models in order to estimate the state of the solar dynamo and predict future cycles. In this presentation, we will discuss the implementation and performance of an Ensemble Kalman Filter data assimilation method based on the Parker migratory dynamo model, complemented by the equation of magnetic helicity conservation and long-term sunspot data series. This approach has allowed us to reproduce the general properties of solar cycles and has already demonstrated a good predictive capability for the current cycle, 24. We will discuss further development of this approach, which includes a more sophisticated dynamo model, incorporates synoptic magnetogram data, and employs DART, the Data Assimilation Research Testbed.
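The Ensemble Kalman Filter analysis step mentioned above can be sketched in a few lines. This is a generic stochastic EnKF update with a linear observation operator, not the authors' Parker-dynamo implementation; all array shapes, variable names, and parameter values below are illustrative assumptions.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_op, obs_var, rng):
    """One stochastic EnKF analysis step.

    ensemble : (n_members, n_state) array of prior state vectors
    obs      : (n_obs,) observation vector
    obs_op   : (n_obs, n_state) linear observation operator H
    obs_var  : observation error variance (scalar, diagonal R)
    """
    n, _ = ensemble.shape
    X = ensemble - ensemble.mean(axis=0)           # state anomalies
    HX = ensemble @ obs_op.T                       # ensemble in observation space
    HA = HX - HX.mean(axis=0)                      # observation-space anomalies
    P_hh = HA.T @ HA / (n - 1) + obs_var * np.eye(len(obs))
    P_xh = X.T @ HA / (n - 1)
    K = P_xh @ np.linalg.inv(P_hh)                 # Kalman gain
    # perturbed observations keep the analysis spread statistically consistent
    obs_pert = obs + rng.normal(0.0, np.sqrt(obs_var), (n, len(obs)))
    return ensemble + (obs_pert - HX) @ K.T

rng = np.random.default_rng(0)
prior = rng.normal(0.0, 1.0, (200, 2))             # 200 members, 2-D toy state
H = np.array([[1.0, 0.0]])                         # observe the first component only
posterior = enkf_update(prior, np.array([1.5]), H, 0.25, rng)
# the observed component is pulled toward the observation; the
# unobserved one moves only through sampled cross-covariance
```

With prior variance 1 and observation variance 0.25, the gain is about 0.8, so the posterior mean of the observed component lands near 1.2.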
Nomogram Prediction of Overall Survival After Curative Irradiation for Uterine Cervical Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seo, YoungSeok; Yoo, Seong Yul; Kim, Mi-Sook
Purpose: The purpose of this study was to develop a nomogram capable of predicting the probability of 5-year survival after radical radiotherapy (RT) without chemotherapy for uterine cervical cancer. Methods and Materials: We retrospectively analyzed 549 patients that underwent radical RT for uterine cervical cancer between March 1994 and April 2002 at our institution. Multivariate analysis using Cox proportional hazards regression was performed, and this Cox model was used as the basis for the devised nomogram. The model was internally validated for discrimination and calibration by bootstrap resampling. Results: By multivariate regression analysis, the model showed that age, hemoglobin level before RT, Federation Internationale de Gynecologie Obstetrique (FIGO) stage, maximal tumor diameter, lymph node status, and RT dose at Point A significantly predicted overall survival. The survival prediction model demonstrated good calibration and discrimination. The bootstrap-corrected concordance index was 0.67. The predictive ability of the nomogram proved to be superior to FIGO stage (p = 0.01). Conclusions: The devised nomogram offers a significantly better level of discrimination than the FIGO staging system. In particular, it improves predictions of survival probability and could be useful for counseling patients, choosing treatment modalities and schedules, and designing clinical trials. However, before this nomogram is used clinically, it should be externally validated.
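The bootstrap-corrected concordance index reported above is based on Harrell's c-index. A minimal sketch of the plain (uncorrected) c-index for right-censored survival data, assuming higher risk scores should correspond to earlier failures; the data are made up for illustration:

```python
def concordance_index(times, events, risk):
    """Harrell's c-index: fraction of comparable pairs in which the
    higher-risk patient fails earlier. times: survival times,
    events: 1 = event observed, 0 = censored, risk: model risk score."""
    concordant = ties = comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is usable only if i is observed to fail before j's time
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / comparable

# risks ranked exactly opposite to survival time give perfect discrimination
times  = [2, 4, 6, 8]
events = [1, 1, 1, 0]            # last patient censored
risk   = [0.9, 0.7, 0.4, 0.1]    # earlier failures get higher risk scores
print(concordance_index(times, events, risk))  # → 1.0
```

A value of 0.5 corresponds to random ranking; the 0.67 reported in the abstract sits between random and perfect discrimination.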
Two Machine Learning Approaches for Short-Term Wind Speed Time-Series Prediction.
Ak, Ronay; Fink, Olga; Zio, Enrico
2016-08-01
The increasing liberalization of European electricity markets, the growing proportion of intermittent renewable energy being fed into the energy grids, and also new challenges in the patterns of energy consumption (such as electric mobility) require flexible and intelligent power grids capable of providing efficient, reliable, economical, and sustainable energy production and distribution. From the supplier side, particularly, the integration of renewable energy sources (e.g., wind and solar) into the grid imposes an engineering and economic challenge because of the limited ability to control and dispatch these energy sources due to their intermittent characteristics. Time-series prediction of wind speed for wind power production is a particularly important and challenging task, wherein prediction intervals (PIs) are preferable results of the prediction, rather than point estimates, because they provide information on the confidence in the prediction. In this paper, two different machine learning approaches to assess PIs of time-series predictions are considered and compared: 1) multilayer perceptron neural networks trained with a multiobjective genetic algorithm and 2) extreme learning machines combined with the nearest neighbors approach. The proposed approaches are applied for short-term wind speed prediction from a real data set of hourly wind speed measurements for the region of Regina in Saskatchewan, Canada. Both approaches demonstrate good prediction precision and provide complementary advantages with respect to different evaluation criteria.
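The nearest-neighbors idea for constructing prediction intervals can be illustrated with a deliberately simple sketch: predict with the mean of the k nearest training targets and take the interval from their empirical quantiles. This is only a toy analogue of the paper's extreme-learning-machine-plus-nearest-neighbors method; the data, distance metric, and parameters are synthetic assumptions.

```python
import numpy as np

def knn_prediction_interval(X_train, y_train, x_query, k=20, alpha=0.1):
    """Point prediction plus a (1 - alpha) prediction interval taken from
    the empirical distribution of the k nearest training targets."""
    d = np.abs(X_train - x_query).sum(axis=1)      # L1 distance to the query
    idx = np.argsort(d)[:k]
    neighbors = y_train[idx]
    lo, hi = np.quantile(neighbors, [alpha / 2, 1 - alpha / 2])
    return neighbors.mean(), (lo, hi)

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, (500, 1))                   # e.g. time-of-day feature
y = np.sin(X[:, 0]) + rng.normal(0, 0.2, 500)      # noisy "wind speed" signal
pred, (lo, hi) = knn_prediction_interval(X, y, np.array([3.0]))
# the interval width reflects local target scatter, i.e. prediction confidence
```

Unlike a point estimate, the pair (lo, hi) conveys how much trust to place in the forecast, which is the motivation for PIs stated in the abstract.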
Shah, Ankur J; Donovan, Maureen D
2007-04-20
The purpose of this research was to compare the viscoelastic properties of several neutral and anionic polysaccharide polymers with their mucociliary transport rates (MTR) across explants of ciliated bovine tracheal tissue to identify rheologic parameters capable of predicting the extent of reduction in mucociliary transport. The viscoelastic properties of the polymer gels and gels mixed with mucus were quantified using controlled stress rheometry. In general, the anionic polysaccharides were more efficient at decreasing the mucociliary transport rate than were the neutral polymers, and a concentration threshold, where no further decreases in mucociliary transport occurred with increasing polymer concentration, was observed for several of the neutral polysaccharides. No single rheologic parameter (eta, G', G'', tan delta, G*) was a good predictor of the extent of mucociliary transport reduction, but a combination of the apparent viscosity (eta), tangent of the phase angle (tan delta), and complex modulus (G*) was found to be useful in the identification of formulations capable of decreasing MTR. The relative values of each of the rheologic parameters were unique for each polymer, yet once the relationships between the rheologic parameters and mucociliary transport rate reduction were determined, formulations capable of resisting mucociliary clearance could be rapidly optimized.
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2012-01-01
The development of benchmark examples for quasi-static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for Abaqus/Standard. The example is based on a finite element model of a Double-Cantilever Beam specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, a quasi-static benchmark example was created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall the results are encouraging, but further assessment for mixed-mode delamination is required.
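The virtual crack closure technique underlying these benchmarks reduces, for Mode I, to a one-line formula relating the crack-tip nodal force to the opening displacement just behind the tip. A minimal sketch; the numbers are illustrative, not benchmark values:

```python
def vcct_mode1(nodal_force, opening_disp, da, width):
    """Mode I energy release rate from the virtual crack closure technique:
    G_I = F * dw / (2 * da * b), where F is the nodal force at the crack
    tip, dw the relative opening displacement just behind it, da the
    element length at the delamination front, and b the element width."""
    return nodal_force * opening_disp / (2.0 * da * width)

# illustrative values in N and mm, giving G_I in N/mm
G_I = vcct_mode1(nodal_force=50.0, opening_disp=0.02, da=0.5, width=25.0)
print(G_I)  # → 0.04
```

In the automated propagation analyses described above, a node at the front is released when G (compared against the fracture toughness or a Paris-type onset law) satisfies the growth criterion.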
Development of Benchmark Examples for Static Delamination Propagation and Fatigue Growth Predictions
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2011-01-01
The development of benchmark examples for static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of an End-Notched Flexure (ENF) specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, static benchmark examples were created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during stable delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall, the results are encouraging but further assessment for mixed-mode delamination is required.
Tutorial: Neural networks and their potential application in nuclear power plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uhrig, R.E.
A neural network is a data processing system consisting of a number of simple, highly interconnected processing elements in an architecture inspired by the structure of the cerebral cortex portion of the brain. Hence, neural networks are often capable of doing things which humans or animals do well but which conventional computers often do poorly. Neural networks have emerged in the past few years as an area of unusual opportunity for research, development and application to a variety of real world problems. Indeed, neural networks exhibit characteristics and capabilities not provided by any other technology. Examples include reading Japanese Kanji characters and human handwriting, reading a typewritten manuscript aloud, compensating for alignment errors in robots, interpreting very 'noisy' signals (e.g., electroencephalograms), modeling complex systems that cannot be modelled mathematically, and predicting whether proposed loans will be good or fail. This paper presents a brief tutorial on neural networks and describes research on the potential applications to nuclear power plants.
Productivity and injectivity of horizontal wells. Quarterly report, October 1--December 31, 1993
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fayers, F.J.; Aziz, K.; Hewett, T.A.
1993-03-10
A number of activities have been carried out in the last three months. A list outlining these efforts is presented below, followed by a brief description of each activity in the subsequent sections of this report: Progress is being made on the development of a black oil three-phase simulator which will allow the use of a generalized Voronoi grid in the plane perpendicular to a horizontal well. The available analytical solutions in the literature for calculating productivity indices (Inflow Performance) of horizontal wells have been reviewed. The pseudo-steady state analytic model of Goode and Kuchuk has been applied to an example problem. A general mechanistic two-phase flow model is under development. The model is capable of predicting flow transition boundaries for a pipe at any inclination angle. It also has the capability of determining pressure drops and holdups for all the flow regimes. A large code incorporating all the features of the model has been programmed and is currently being tested.
On the Conditioning of Machine-Learning-Assisted Turbulence Modeling
NASA Astrophysics Data System (ADS)
Wu, Jinlong; Sun, Rui; Wang, Qiqi; Xiao, Heng
2017-11-01
Recently, several researchers have demonstrated that machine learning techniques can be used to improve the RANS-modeled Reynolds stress by training on available databases of high-fidelity simulations. However, obtaining an improved mean velocity field remains an unsolved challenge, restricting the predictive capability of current machine-learning-assisted turbulence modeling approaches. In this work we define a condition number to evaluate the model conditioning of data-driven turbulence modeling approaches, and propose a stability-oriented machine learning framework to model Reynolds stress. Two canonical flows, the flow in a square duct and the flow over periodic hills, are investigated to demonstrate the predictive capability of the proposed framework. The satisfactory prediction of the mean velocity field for both flows demonstrates the predictive capability of the proposed framework for machine-learning-assisted turbulence modeling. By improving the prediction of the mean flow field, the proposed stability-oriented machine learning framework bridges the gap between existing machine-learning-assisted turbulence modeling approaches and the predictive capability demanded of turbulence models in real applications.
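The role of a condition number in this setting can be illustrated with the classical linear-algebra bound: for a linear system A u = tau, the relative error in the solution u is bounded by cond(A) times the relative error in the data tau. The linear setting, the matrix, and all values below are illustrative assumptions, not the paper's actual RANS operator:

```python
import numpy as np

# Toy linearized operator mapping a "Reynolds stress" vector tau to a
# "mean velocity" vector u; one row is scaled down to make A ill-conditioned.
rng = np.random.default_rng(4)
A = np.diag([1.0, 1.0, 1e-3]) @ rng.normal(size=(3, 3))
tau = rng.normal(size=3)
u = np.linalg.solve(A, tau)

d_tau = 1e-6 * rng.normal(size=3)                 # small error in the data
u_pert = np.linalg.solve(A, tau + d_tau)

rel_in = np.linalg.norm(d_tau) / np.linalg.norm(tau)
rel_out = np.linalg.norm(u_pert - u) / np.linalg.norm(u)
# classical bound: rel_out <= cond(A) * rel_in
print(rel_out <= np.linalg.cond(A) * rel_in)  # → True
```

The point mirrored from the abstract: even an accurate Reynolds-stress model can yield a poor mean velocity field if the mapping from stress to velocity is badly conditioned.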
Tavares, A P M; Coelho, M A Z; Agapito, M S M; Coutinho, J A P; Xavier, A M R B
2006-09-01
Experimental design and response surface methodologies were applied to optimize laccase production by Trametes versicolor in a bioreactor. The effects of three factors, initial glucose concentration (0 and 9 g/L), agitation (100 and 180 rpm), and pH (3.0 and 5.0), were evaluated to identify the significant effects and their interactions in laccase production. The pH of the medium was found to be the most important factor, followed by initial glucose concentration and the interaction of both factors. Agitation did not seem to play an important role in laccase production, nor did the agitation x medium pH and agitation x initial glucose concentration interactions. Response surface analysis showed that an initial glucose concentration of 11 g/L and pH controlled at 5.2 were the optimal conditions for laccase production by T. versicolor. Under these conditions, the predicted value for laccase activity was >10,000 U/L, which is in good agreement with the laccase activity obtained experimentally (11,403 U/L). In addition, a mathematical model for the bioprocess was developed. It is shown to provide a good description of the experimental profile observed and to be capable of predicting biomass growth based on secondary process variables.
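The response-surface step can be sketched as fitting a full second-order polynomial in the two significant factors and solving for the stationary point. The data below are synthetic, constructed around the optimum reported in the abstract (11 g/L glucose, pH 5.2); this is not the authors' dataset:

```python
import numpy as np

# synthetic quadratic response with a maximum near glucose = 11 g/L, pH = 5.2
rng = np.random.default_rng(2)
glucose = rng.uniform(0, 15, 60)
ph = rng.uniform(3, 6, 60)
activity = (11000 - 40 * (glucose - 11) ** 2
            - 900 * (ph - 5.2) ** 2 + rng.normal(0, 50, 60))

# second-order response surface: b0 + b1*g + b2*p + b3*g^2 + b4*p^2 + b5*g*p
A = np.column_stack([np.ones_like(glucose), glucose, ph,
                     glucose ** 2, ph ** 2, glucose * ph])
coef, *_ = np.linalg.lstsq(A, activity, rcond=None)

# stationary point: set the gradient of the fitted quadratic to zero
b0, b1, b2, b3, b4, b5 = coef
H = np.array([[2 * b3, b5], [b5, 2 * b4]])
opt = np.linalg.solve(H, [-b1, -b2])
print(opt)  # close to (11, 5.2) for this synthetic surface
```

In a real study the design points would come from a factorial or central composite design rather than random sampling, but the fit-then-solve step is the same.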
Benchmarking test of empirical root water uptake models
NASA Astrophysics Data System (ADS)
dos Santos, Marcos Alex; de Jong van Lier, Quirijn; van Dam, Jos C.; Freire Bezerra, Andre Herman
2017-01-01
Detailed physical models describing root water uptake (RWU) are an important tool for the prediction of RWU and crop transpiration, but the hydraulic parameters involved are hardly ever available, making them less attractive for many studies. Empirical models are more readily used because of their simplicity and the associated lower data requirements. The purpose of this study is to evaluate the capability of some empirical models to mimic the RWU distribution under varying environmental conditions predicted from numerical simulations with a detailed physical model. A review of some empirical models used as sub-models in ecohydrological models is presented, and alternative empirical RWU models are proposed. All these empirical models are analogous to the standard Feddes model, but differ in how RWU is partitioned over depth or how the transpiration reduction function is defined. The parameters of the empirical models are determined by inverse modelling of simulated depth-dependent RWU. The performance of the empirical models and their optimized empirical parameters depends on the scenario. The standard empirical Feddes model only performs well in scenarios with low root length density R, i.e. for scenarios with low RWU compensation. For medium and high R, the Feddes RWU model cannot properly mimic the root uptake dynamics predicted by the physical model. The Jarvis RWU model in combination with the Feddes reduction function (JMf) only provides good predictions for low and medium R scenarios; for high R, it cannot mimic the uptake patterns predicted by the physical model. Incorporating a newly proposed reduction function into the Jarvis model improved RWU predictions. Regarding the ability of the models to predict plant transpiration, all models accounting for compensation show good performance. The Akaike information criterion (AIC) indicates that the Jarvis (2010) model (JMII), with no empirical parameters to be estimated, is the best model. The proposed models are better at predicting RWU patterns similar to those of the physical model, and the statistical indices point to them as the best alternatives for mimicking the RWU predictions of the physical model.
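The Feddes reduction function referred to throughout is a piecewise-linear water-stress factor of the soil water pressure head. A minimal sketch with illustrative (not crop-specific) threshold heads:

```python
def feddes_alpha(h, h1=-0.1, h2=-0.25, h3=-5.0, h4=-80.0):
    """Standard Feddes water-stress reduction factor (0..1) as a function
    of soil water pressure head h (m, more negative = drier). The four
    threshold heads used here are illustrative, not crop-specific."""
    if h >= h1 or h <= h4:
        return 0.0                      # too wet (anoxia) or too dry (wilting)
    if h1 > h > h2:
        return (h1 - h) / (h1 - h2)     # ramp up below the anoxia point
    if h2 >= h >= h3:
        return 1.0                      # unstressed range
    return (h - h4) / (h3 - h4)         # ramp down toward the wilting point

print(feddes_alpha(-1.0))   # → 1.0 (inside the optimal range)
print(feddes_alpha(-80.0))  # → 0.0 (wilting point)
```

In the empirical models compared above, potential uptake per layer is multiplied by this factor (and, in the compensated variants, redistributed to less stressed layers).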
Can future land use change be usefully predicted?
NASA Astrophysics Data System (ADS)
Ramankutty, N.; Coomes, O.
2011-12-01
There has been increasing recognition over the last decade that land use and land cover change is an important driver of global environmental change. Consequently, there have been growing efforts to understand processes of land change from local to global scales, and to develop models to predict future changes in the land. However, we believe that such efforts are hampered by the limited attention paid to the critical points of land change. Here, we present a framework for understanding land use change by distinguishing within-regime land-use dynamics from land-use regime shifts. Illustrative historical examples reveal the significance of land-use regime shifts. We further argue that the land-use literature predominantly demonstrates a good understanding (with predictive power) of within-regime dynamics, while understanding of land-use regime shifts is limited to ex post facto explanations with limited predictive capability. The focus of land use change science needs to be redirected toward studying land-use regime shifts if we are to have any hope of making useful future projections. We present a preliminary framework for understanding land-use regime shifts, using two case studies in Latin America as examples. We finally discuss the implications of our proposal for land change science.
Power prediction in mobile communication systems using an optimal neural-network structure.
Gao, X M; Gao, X Z; Tanskanen, J A; Ovaska, S J
1997-01-01
Presents a novel neural-network-based predictor for received power level prediction in direct sequence code division multiple access (DS/CDMA) systems. The predictor consists of an adaptive linear element (Adaline) followed by a multilayer perceptron (MLP). An important but difficult problem in designing such a cascade predictor is to determine the complexity of the networks. We solve this problem by using the predictive minimum description length (PMDL) principle to select the optimal numbers of input and hidden nodes. This approach results in a predictor with both good noise attenuation and excellent generalization capability. The optimized neural networks are used for predictive filtering of very noisy Rayleigh fading signals with 1.8 GHz carrier frequency. Our results show that the optimal neural predictor can provide smoothed in-phase and quadrature signals with signal-to-noise ratio (SNR) gains of about 12 and 7 dB at the urban mobile speeds of 5 and 50 km/h, respectively. The corresponding power signal SNR gains are about 11 and 5 dB. Therefore, the neural predictor is well suited for power control applications where "delayless" noise attenuation and efficient reduction of fast fading are required.
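The adaptive linear element (Adaline) front end of such a cascade can be sketched as a one-step-ahead predictor trained with the LMS rule. This is a generic illustration, not the PMDL-optimized predictor of the paper; the signal, filter order, and step size are synthetic assumptions:

```python
import numpy as np

def lms_predictor(signal, order=4, mu=0.05):
    """One-step-ahead adaptive linear predictor (Adaline trained by LMS).
    The weight vector adapts continuously, so the predictor can track
    slowly drifting signal statistics without introducing block delay."""
    w = np.zeros(order)
    preds = np.zeros(len(signal))
    for t in range(order, len(signal)):
        x = signal[t - order:t][::-1]          # most recent sample first
        preds[t] = w @ x                       # linear prediction
        w += mu * (signal[t] - preds[t]) * x   # LMS weight update
    return preds

rng = np.random.default_rng(3)
t = np.arange(2000)
clean = np.sin(2 * np.pi * t / 50)             # slowly varying tone (fading-like)
noisy = clean + rng.normal(0, 0.3, len(t))     # measurement noise
pred = lms_predictor(noisy)
# after convergence the predictor tracks the tone while attenuating noise
err = np.mean((pred[1000:] - clean[1000:]) ** 2)
```

A pure sinusoid obeys a two-tap linear recursion exactly, so a four-tap Adaline can predict it well; the residual error mainly reflects the unpredictable noise and the LMS misadjustment.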
NASA Astrophysics Data System (ADS)
Efthimiou, G. C.; Andronopoulos, S.; Bartzis, J. G.; Berbekar, E.; Harms, F.; Leitl, B.
2017-02-01
One of the key issues of recent research on dispersion inside complex urban environments is the ability to predict the individual exposure (maximum dosages) to an airborne material which is released continuously from a point source. The present work addresses the question of whether the computational fluid dynamics (CFD)-Reynolds-averaged Navier-Stokes (RANS) methodology can be used to predict individual exposure for various exposure times. This is feasible by providing the two RANS concentration moments (mean and variance) and a turbulent time scale to a deterministic model. The whole effort is focused on the prediction of individual exposure inside a complex real urban area. The capabilities of the proposed methodology are validated against wind-tunnel data (CUTE experiment). The present simulations were performed 'blindly', i.e. the modeller had limited information about the inlet boundary conditions and the results were kept unknown until the end of the COST Action ES1006. Thus, a high uncertainty in the results was expected. Despite this 'blind' strategy, the general performance of the methodology is good: the validation metrics fulfil the acceptance criteria. The effect of the grid and the turbulence model on the model performance is also examined.
Simulation of Silicon Photomultiplier Signals
NASA Astrophysics Data System (ADS)
Seifert, Stefan; van Dam, Herman T.; Huizenga, Jan; Vinke, Ruud; Dendooven, Peter; Lohner, Herbert; Schaart, Dennis R.
2009-12-01
In a silicon photomultiplier (SiPM), also referred to as multi-pixel photon counter (MPPC), many Geiger-mode avalanche photodiodes (GM-APDs) are connected in parallel so as to combine the photon counting capabilities of each of these so-called microcells into a proportional light sensor. The discharge of a single microcell is relatively well understood and electronic models exist to simulate this process. In this paper we introduce an extended model that is able to simulate the simultaneous discharge of multiple cells. This model is used to predict the SiPM signal in response to fast light pulses as a function of the number of fired cells, taking into account the influence of the input impedance of the SiPM preamplifier. The model predicts that the electronic signal is not proportional to the number of fired cells if the preamplifier input impedance is not zero. This effect becomes more important for SiPMs with lower parasitic capacitance (which otherwise is a favorable property). The model is validated by comparing its predictions to experimental data obtained with two different SiPMs (Hamamatsu S10362-11-25u and Hamamatsu S10362-33-25c) illuminated with ps laser pulses. The experimental results are in good agreement with the model predictions.
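The parallel-microcell picture can be illustrated with an idealized model: the SiPM output is a superposition of identical single-cell pulses, so with an ideal zero-impedance readout the signal charge is exactly proportional to the number of fired cells (the nonproportionality discussed above appears only once a finite preamplifier input impedance is added). The pulse shape and parameter values below are illustrative assumptions, not the Hamamatsu device values:

```python
import numpy as np

def sipm_pulse(t, n_fired, tau_rise=1e-9, tau_fall=15e-9, q_cell=160e-15):
    """Idealized SiPM current pulse: n_fired microcells discharging
    simultaneously, each releasing charge q_cell with a two-exponential
    shape. With an ideal zero-impedance readout the summed signal is
    simply proportional to the number of fired cells."""
    dt = t[1] - t[0]
    shape = np.exp(-t / tau_fall) - np.exp(-t / tau_rise)
    shape /= shape.sum() * dt                  # normalize shape to unit charge
    return n_fired * q_cell * shape

t = np.linspace(0.0, 200e-9, 2000)
dt = t[1] - t[0]
q5 = (sipm_pulse(t, 5) * dt).sum()             # total charge, 5 fired cells
q50 = (sipm_pulse(t, 50) * dt).sum()           # total charge, 50 fired cells
print(q50 / q5)  # → 10.0 (linear in the number of fired cells)
```

The extended model in the paper breaks exactly this linearity: with a nonzero input impedance the cells load each other through the common readout node.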
An English Professor Considers Mathematics.
ERIC Educational Resources Information Center
Duncan, Noreen L.
There is a common belief that people have limited mental capabilities in that they are either good at English or mathematics, but not both. There is also a myth that men are naturally good at math, while women are not. But there are many good mathematicians who also write well. Also, good students appear to be good students, regardless of the…
Investigation of use of space data in watershed hydrology
NASA Technical Reports Server (NTRS)
Blanchard, B. J. (Principal Investigator)
1975-01-01
The author has identified the following significant results. Digital data from the ERTS multispectral scanner were used to investigate the feasibility of identifying differences in watershed runoff capability with spaceborne sensors. Linear combinations of the two visible light bands and a combination of the four visible and near infrared bands were related to a coefficient used in the Soil Conservation Service storm runoff equation. Good relationships were found in two scenes, both with dry surface conditions, over the same watersheds. The relationships defined by both combinations of digital data were tested on an independent set of 10 watersheds and on an additional 22 subwatersheds. Coefficients predicted with the ERTS data proved better than coefficients developed with conventional methods.
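The Soil Conservation Service storm runoff equation mentioned above relates storm rainfall to direct runoff through a retention parameter derived from the curve number. A standard-form sketch (the curve numbers and rainfall depth used here are illustrative):

```python
def scs_runoff(p_inches, curve_number):
    """SCS (Soil Conservation Service) storm runoff equation.
    S is the potential maximum retention and Ia = 0.2*S the standard
    initial abstraction; all depths are in inches.
        Q = (P - Ia)^2 / (P + 0.8*S)  for P > Ia, else 0."""
    s = 1000.0 / curve_number - 10.0
    ia = 0.2 * s
    if p_inches <= ia:
        return 0.0
    return (p_inches - ia) ** 2 / (p_inches + 0.8 * s)

# higher curve numbers (less infiltration capacity) give more runoff
print(round(scs_runoff(4.0, 85), 2))  # → 2.46
print(round(scs_runoff(4.0, 60), 2))  # → 0.76
```

Relating satellite band combinations to the curve-number-type coefficient, as in the study above, lets this equation be applied to ungauged watersheds.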
NASA Technical Reports Server (NTRS)
Lee, H.-W.; Lam, K. S.; Devries, P. L.; George, T. F.
1980-01-01
A new semiclassical decoupling scheme (the trajectory-based decoupling scheme) is introduced in a computational study of vibrational-to-electronic energy transfer for a simple model system that simulates collinear atom-diatom collisions. The probability of energy transfer (P) is calculated quasiclassically using the new scheme as well as quantum mechanically as a function of the atomic electronic-energy separation (lambda), with overall good agreement between the two sets of results. Classical mechanics with the new decoupling scheme is found to be capable of predicting resonance behavior whereas an earlier decoupling scheme (the coordinate-based decoupling scheme) failed. Interference effects are not exhibited in P vs lambda results.
Large-eddy simulation of propeller noise
NASA Astrophysics Data System (ADS)
Keller, Jacob; Mahesh, Krishnan
2016-11-01
We will discuss our ongoing work towards developing the capability to predict far field sound from the large-eddy simulation of propellers. A porous surface Ffowcs-Williams and Hawkings (FW-H) acoustic analogy, with a dynamic endcapping method (Nitzkorski and Mahesh, 2014) is developed for unstructured grids in a rotating frame of reference. The FW-H surface is generated automatically using Delaunay triangulation and is representative of the underlying volume mesh. The approach is validated for tonal trailing edge sound from a NACA 0012 airfoil. LES of flow around a propeller at design advance ratio is compared to experiment and good agreement is obtained. Results for the emitted far field sound will be discussed. This work is supported by ONR.
Evaluation of the three-dimensional parabolic flow computer program SHIP
NASA Technical Reports Server (NTRS)
Pan, Y. S.
1978-01-01
The three-dimensional parabolic flow program SHIP designed for predicting supersonic combustor flow fields is evaluated to determine its capabilities. The mathematical foundation and numerical procedure are reviewed; simplifications are pointed out and commented upon. The program is then evaluated numerically by applying it to several subsonic and supersonic, turbulent, reacting and nonreacting flow problems. Computational results are compared with available experimental or other analytical data. Good agreements are obtained when the simplifications on which the program is based are justified. Limitations of the program and the needs for improvement and extension are pointed out. The present three-dimensional parabolic flow program appears to be potentially useful for the development of supersonic combustors.
An Enriched Shell Element for Delamination Simulation in Composite Laminates
NASA Technical Reports Server (NTRS)
McElroy, Mark
2015-01-01
A formulation is presented for an enriched shell finite element capable of delamination simulation in composite laminates. The element uses an adaptive splitting approach for damage characterization that allows for straightforward low-fidelity model creation and a numerically efficient solution. The Floating Node Method is used in conjunction with the Virtual Crack Closure Technique to predict delamination growth and represent it discretely at an arbitrary ply interface. The enriched element is verified for Mode I delamination simulation using numerical benchmark data. After determining important mesh configuration guidelines for the vicinity of the delamination front in the model, a good correlation was found between the enriched shell element model results and the benchmark data set.
Rahmani, Mashaallah; Kaykhaii, Massoud; Sasani, Mojtaba
2018-01-05
This study aimed to investigate the efficiency of 3A zeolite as a novel adsorbent for the removal of Rhodamine B and Malachite green dyes from water samples. To increase the removal efficiency, the parameters affecting the adsorption process were investigated and optimized by adopting the Taguchi design-of-experiments approach. The percentage contribution of each parameter to the removal of Rhodamine B and Malachite green dyes was determined using ANOVA, which showed that the most effective parameters in the removal of RhB and MG by 3A zeolite are the initial concentration of dye and pH, respectively. Under optimized conditions, the amount predicted by the Taguchi design method and the value obtained experimentally showed good agreement (more than 94.86%). The good adsorption efficiency obtained for the proposed method indicates that 3A zeolite is capable of removing significant amounts of Rhodamine B and Malachite green from environmental water samples.
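In a Taguchi analysis of a removal efficiency, factor settings are typically ranked by a 'larger is better' signal-to-noise ratio. A one-function sketch; the replicate values are made up for illustration, not taken from the study:

```python
import math

def sn_larger_is_better(values):
    """Taguchi signal-to-noise ratio for a 'larger is better' response,
    such as dye-removal efficiency: S/N = -10 * log10(mean(1 / y^2)).
    Higher and more consistent responses give a larger S/N."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in values) / len(values))

# replicate removal efficiencies (%) measured at one orthogonal-array run
print(round(sn_larger_is_better([94.0, 96.0, 95.0]), 2))  # → 39.55
```

The factor level with the highest mean S/N across the orthogonal-array runs is selected, and ANOVA on these ratios gives the percentage contributions reported above.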
Assessment of Spectral Doppler in Preclinical Ultrasound Using a Small-Size Rotating Phantom
Yang, Xin; Sun, Chao; Anderson, Tom; Moran, Carmel M.; Hadoke, Patrick W.F.; Gray, Gillian A.; Hoskins, Peter R.
2013-01-01
Preclinical ultrasound scanners are used to measure blood flow in small animals, but the potential errors in blood velocity measurements have not been quantified. This investigation rectifies this omission through the design and use of phantoms and evaluation of measurement errors for a preclinical ultrasound system (Vevo 770, Visualsonics, Toronto, ON, Canada). A ray model of geometric spectral broadening was used to predict velocity errors. A small-scale rotating phantom, made from tissue-mimicking material, was developed. True and Doppler-measured maximum velocities of the moving targets were compared over a range of angles from 10° to 80°. Results indicate that the maximum velocity was overestimated by up to 158% by spectral Doppler. There was good agreement (<10%) between theoretical velocity errors and measured errors for beam-target angles of 50°–80°. However, for angles of 10°–40°, the agreement was not as good (>50%). The phantom is capable of validating the performance of blood velocity measurement in preclinical ultrasound. PMID:23711503
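The ray model of geometric spectral broadening can be sketched as follows; this assumes the standard form in which the highest Doppler shift comes from the aperture edge, and the half-aperture angle used is illustrative, not the Vevo 770's.

```python
import math

# Assumed ray model: the edge of the aperture sees the target at
# (theta - delta) instead of the nominal beam-target angle theta, so a
# velocity estimated with the nominal angle is overestimated by the
# factor cos(theta - delta) / cos(theta).
def max_velocity_overestimate(theta_deg, half_aperture_deg):
    t = math.radians(theta_deg)
    d = math.radians(half_aperture_deg)
    return math.cos(t - d) / math.cos(t)

# Illustrative half-aperture angle of 10 degrees:
for theta in (10, 40, 60, 80):
    f = max_velocity_overestimate(theta, 10.0)
    print(f"theta={theta:2d} deg  overestimation={100 * (f - 1):5.1f}%")
```

The factor grows rapidly at large beam-target angles, consistent with the large overestimation the abstract reports near 80°.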
NASA Astrophysics Data System (ADS)
Yan, Hui; Wang, K. G.; Jones, Jim E.
2016-06-01
A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, a new coarsening kinetics is found in the region of ultrahigh volume fraction. The parallel implementation is capable of harnessing the greater computing power available from high-performance architectures. The parallelized code enables an increase in the three-dimensional simulation system size up to a 512³ grid cube. Through the parallelized code, practical runtimes can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high-resolution parallel simulations is greatly improved over that obtainable from serial simulations. A detailed performance analysis of speed-up and scalability is presented, showing good scalability that improves with increasing problem size. In addition, a model for predicting runtime is developed, which shows good agreement with actual run times from numerical tests.
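A runtime model of the kind mentioned can be sketched with a simple strong-scaling law; the serial/parallel constants below are invented for illustration and the paper's fitted model may include communication terms as well.

```python
# Minimal strong-scaling runtime model (assumed form):
# T(p) = t_serial + t_parallel / p, for p processors.
def runtime(p, t_serial=2.0, t_parallel=960.0):
    return t_serial + t_parallel / p

def speedup(p):
    return runtime(1) / runtime(p)

for p in (1, 8, 64, 256):
    print(f"p={p:3d}  T={runtime(p):7.2f} s  speedup={speedup(p):6.1f}")
```

The serial term caps the achievable speedup, which is why scalability improves with problem size: a larger grid grows `t_parallel` while `t_serial` stays roughly fixed.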
Development of PRIME for irradiation performance analysis of U-Mo/Al dispersion fuel
NASA Astrophysics Data System (ADS)
Jeong, Gwan Yoon; Kim, Yeon Soo; Jeong, Yong Jin; Park, Jong Man; Sohn, Dong-Seong
2018-04-01
A prediction code for the thermo-mechanical performance of research reactor fuel (PRIME) has been developed, implementing models to analyze the irradiation behavior of U-Mo dispersion fuel. The code is capable of predicting the two-dimensional thermal and mechanical performance of U-Mo dispersion fuel during irradiation. A finite element method was employed to solve the governing equations for thermal and mechanical equilibria. Temperature- and burnup-dependent material properties of the fuel meat constituents and cladding were used. The numerical solution schemes in PRIME were verified by benchmarking against solutions obtained using a commercial finite element analysis program (ABAQUS). The code was validated using irradiation data from the RERTR, HAMP-1, and E-FUTURE tests. The measured irradiation data used in the validation were the interaction layer (IL) thickness and the volume fractions of fuel meat constituents for the thermal analysis, and the profiles of plate thickness change and fuel meat swelling for the mechanical analysis. The prediction results were in good agreement with the measurement data for both thermal and mechanical analyses, confirming the validity of the code.
NASA Technical Reports Server (NTRS)
Schmidt, James F.
1995-01-01
An off-design axial-flow compressor code is presented and is available from COSMIC for predicting the aerodynamic performance maps of fans and compressors. Steady axisymmetric flow is assumed, and the aerodynamic solution reduces to solving the two-dimensional flow field in the meridional plane. A streamline curvature method is used for calculating this flow field outside the blade rows. This code allows for bleed flows, and the first five stators can be reset for each rotational speed, capabilities which are necessary for large multistage compressors. The accuracy of the off-design performance predictions depends upon the validity of the flow loss and deviation correlation models. These empirical correlations for flow loss and deviation are used to model real flow effects, and the off-design code will compute through small reverse-flow regions. The input to this off-design code is fully described, and a user's example case for a two-stage fan is included with complete input and output data sets. Also, a comparison of the off-design code predictions with experimental data is included, which generally shows good agreement.
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2012-01-01
The development of benchmark examples for quasi-static delamination propagation prediction is presented. The example is based on a finite element model of the Mixed-Mode Bending (MMB) specimen for 50% mode II. The benchmarking is demonstrated for Abaqus/Standard, however, the example is independent of the analysis software used and allows the assessment of the automated delamination propagation prediction capability in commercial finite element codes based on the virtual crack closure technique (VCCT). First, a quasi-static benchmark example was created for the specimen. Second, starting from an initially straight front, the delamination was allowed to propagate under quasi-static loading. Third, the load-displacement as well as delamination length versus applied load/displacement relationships from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall, the results are encouraging, but further assessment for mixed-mode delamination fatigue onset and growth is required.
Reasoning and Knowledge Acquisition Framework for 5G Network Analytics.
Sotelo Monge, Marco Antonio; Maestre Vidal, Jorge; García Villalba, Luis Javier
2017-10-21
Autonomic self-management is a key challenge for next-generation networks. This paper proposes an automated analysis framework to infer knowledge in 5G networks with the aim to understand the network status and to predict potential situations that might disrupt the network operability. The framework is based on the Endsley situational awareness model, and integrates automated capabilities for metrics discovery, pattern recognition, prediction techniques and rule-based reasoning to infer anomalous situations in the current operational context. Those situations should then be mitigated, either proactive or reactively, by a more complex decision-making process. The framework is driven by a use case methodology, where the network administrator is able to customize the knowledge inference rules and operational parameters. The proposal has also been instantiated to prove its adaptability to a real use case. To this end, a reference network traffic dataset was used to identify suspicious patterns and to predict the behavior of the monitored data volume. The preliminary results suggest a good level of accuracy on the inference of anomalous traffic volumes based on a simple configuration.
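The rule-based inference over predicted traffic volumes can be sketched minimally; the moving-average rule and thresholds below are assumptions for illustration, not the framework's actual configuration.

```python
# Flag a monitoring window whose traffic volume deviates from the recent
# moving-average baseline by more than a configurable factor.
def anomalous(volumes, window=5, factor=2.0):
    flags = []
    for i in range(window, len(volumes)):
        baseline = sum(volumes[i - window:i]) / window
        flags.append(volumes[i] > factor * baseline)
    return flags

traffic = [100, 104, 98, 101, 99, 102, 97, 310, 101, 100]
print(anomalous(traffic))  # the 310-unit spike is flagged
```

A real deployment would layer such rules on top of the framework's prediction stage, with the window and factor exposed as operator-customizable parameters.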
Predicting individual fusional range from optometric data
NASA Astrophysics Data System (ADS)
Endrikhovski, Serguei; Jin, Elaine; Miller, Michael E.; Ford, Robert W.
2005-03-01
A model was developed to predict the range of disparities that can be fused by an individual user from optometric measurements. This model uses parameters such as dissociated phoria and fusional reserves to calculate an individual user's fusional range (i.e., the disparities that can be fused on stereoscopic displays) when the user views a stereoscopic stimulus from various distances. The model is validated by comparing its output with data from a study in which the individual fusional range of a group of users was quantified while they viewed a stereoscopic display from distances of 0.5, 1.0, and 2.0 meters. Overall, the model provides good data predictions for the majority of the subjects and can be generalized to other viewing conditions. The model may, therefore, be used within a customized stereoscopic system, which would render stereoscopic information in a way that accounts for individual differences in fusional range. Because the comfort of an individual user also depends on the user's ability to fuse stereo images, such a system may, consequently, improve the comfort level and viewing experience for people with different stereoscopic fusional capabilities.
A Continuum Model for the Effect of Dynamic Recrystallization on the Stress–Strain Response.
Kooiker, H; Perdahcıoğlu, E S; van den Boogaard, A H
2018-05-22
Austenitic stainless steels and High-Strength Low-Alloy (HSLA) steels show significant dynamic recovery and dynamic recrystallization (DRX) during hot forming. In order to design optimal and safe hot-formed products, a good understanding and constitutive description of the material behavior is vital. A new continuum model is presented and validated over a wide range of deformation conditions, including high-strain-rate deformation. The model is presented in rate form to allow for the prediction of material behavior in transient process conditions. The proposed model is capable of accurately describing the stress–strain behavior of AISI 316LN under hot forming conditions; the high-strain-rate DRX-induced softening observed during hot torsion of HSLA steel is also accurately predicted. It is shown that the increase in recrystallization rate at high strain rates observed in experiments can be captured by including the elastic energy due to the dynamic stress in the driving pressure for recrystallization. Furthermore, the predicted grain sizes follow the power-law dependence on steady-state stress that is often reported in the literature, and their evolution during hot deformation shows the expected trend.
NASA Astrophysics Data System (ADS)
Augustine, Starrlight; Rosa, Sara; Kooijman, Sebastiaan A. L. M.; Carlotti, François; Poggiale, Jean-Christophe
2014-11-01
Parameters for the standard Dynamic Energy Budget (DEB) model were estimated for the purple mauve stinger, Pelagia noctiluca, using literature data. Overall, the model predictions are in good agreement with data covering the full life-cycle. The parameter set we obtain suggests that P. noctiluca is well adapted to survive long periods of starvation since the predicted maximum reserve capacity is extremely high. Moreover we predict that the reproductive output of larger individuals is relatively insensitive to changes in food level while wet mass and length are. Furthermore, the parameters imply that even if food were scarce (ingestion levels only 14% of the maximum for a given size) an individual would still mature and be able to reproduce. We present detailed model predictions for embryo development and discuss the developmental energetics of the species such as the fact that the metabolism of ephyrae accelerates for several days after birth. Finally we explore a number of concrete testable model predictions which will help to guide future research. The application of DEB theory to the collected data allowed us to conclude that P. noctiluca combines maximizing allocation to reproduction with rather extreme capabilities to survive starvation. The combination of these properties might explain why P. noctiluca is a rapidly growing concern to fisheries and tourism.
Schilirò, Luca; Montrasio, Lorella; Scarascia Mugnozza, Gabriele
2016-11-01
In recent years, physically-based numerical models have frequently been used in the framework of early-warning systems devoted to rainfall-induced landslide hazard monitoring and mitigation. For this reason, in this work we describe the potential of SLIP (Shallow Landslides Instability Prediction), a simplified physically-based model for the analysis of shallow landslide occurrence. In order to test the reliability of this model, a back analysis of landslide events that occurred in the study area (SW of Messina, northeastern Sicily, Italy) on October 1, 2009 was performed. The simulation results were compared with those obtained for the same event using TRIGRS, another well-established model for shallow landslide prediction. Afterwards, a simulation over a 2-year period was performed for the same area, with the aim of evaluating the performance of SLIP as an early-warning tool. The results confirm the good predictive capability of the model, in terms of both spatial and temporal prediction of the instability phenomena. For this reason, we recommend an operating procedure for the real-time definition of shallow landslide triggering scenarios at the catchment scale, based on the use of SLIP calibrated through a specific multi-methodological approach. Copyright © 2016 Elsevier B.V. All rights reserved.
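The limit-equilibrium core that simplified models such as SLIP and TRIGRS build on can be sketched with the classical infinite-slope factor of safety. Parameter values are illustrative; SLIP's actual formulation adds partial-saturation effects and is not reproduced here.

```python
import math

# Infinite-slope factor of safety: ratio of resisting shear strength
# (Mohr-Coulomb with pore pressure u) to the driving shear stress.
def factor_of_safety(slope_deg, z, c=5e3, phi_deg=30.0,
                     gamma=18e3, pore_pressure=0.0):
    """slope_deg: slope angle; z: slip-surface depth [m]; c: cohesion [Pa];
    phi_deg: friction angle; gamma: soil unit weight [N/m^3];
    pore_pressure: u at the slip surface [Pa]."""
    b = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    resisting = c + (gamma * z * math.cos(b) ** 2 - pore_pressure) * math.tan(phi)
    driving = gamma * z * math.sin(b) * math.cos(b)
    return resisting / driving

dry = factor_of_safety(35.0, 1.5)
wet = factor_of_safety(35.0, 1.5, pore_pressure=10e3)
print(f"FS dry = {dry:.2f}, FS after rainfall = {wet:.2f}")
```

Rainfall-driven pore-pressure build-up pushes the factor of safety below 1, which is the triggering condition such models evaluate cell by cell over a catchment.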
NASA Astrophysics Data System (ADS)
Dong, Hancheng; Jin, Xiaoning; Lou, Yangbing; Wang, Changhong
2014-12-01
Lithium-ion batteries are used as the main power source in many electronic and electrical devices. In particular, with the growth in battery-powered electric vehicle development, the lithium-ion battery plays a critical role in the reliability of vehicle systems. In order to provide timely maintenance and replacement of battery systems, it is necessary to develop a reliable and accurate battery health diagnostic that takes a prognostic approach. Therefore, this paper focuses on two main methods to determine a battery's health: (1) Battery State-of-Health (SOH) monitoring and (2) Remaining Useful Life (RUL) prediction. Both of these are calculated by using a filter algorithm known as the Support Vector Regression-Particle Filter (SVR-PF). Models for battery SOH monitoring based on SVR-PF are developed with novel capacity degradation parameters introduced to determine battery health in real time. Moreover, the RUL prediction model is proposed, which is able to provide the RUL value and update the RUL probability distribution to the End-of-Life cycle. Results for both methods are presented, showing that the proposed SOH monitoring and RUL prediction methods have good performance and that the SVR-PF has better monitoring and prediction capability than the standard particle filter (PF).
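The particle-filter half of the SVR-PF can be sketched as follows: a plain particle filter tracks the parameters of an assumed exponential capacity-fade model and extrapolates to the End-of-Life threshold. The degradation model, noise levels and threshold are illustrative, and the SVR stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed capacity-fade model (not the paper's): C_k = a * exp(b * k).
a_true, b_true, eol = 1.00, -0.002, 0.75
cycles = np.arange(120)
meas = a_true * np.exp(b_true * cycles) + rng.normal(0.0, 0.004, cycles.size)

n = 2000
particles = np.column_stack([rng.normal(1.0, 0.05, n),        # a
                             rng.normal(-0.002, 0.0005, n)])  # b
for k, z in zip(cycles, meas):
    particles += rng.normal(0.0, [1e-4, 1e-6], (n, 2))        # roughening
    pred = particles[:, 0] * np.exp(particles[:, 1] * k)
    w = np.exp(-0.5 * ((z - pred) / 0.004) ** 2)              # likelihood
    w /= w.sum()
    particles = particles[rng.choice(n, n, p=w)]              # resample

a_hat, b_hat = particles.mean(axis=0)
k_eol = np.log(eol / a_hat) / b_hat          # cycle where capacity hits EOL
rul = k_eol - cycles[-1]                     # remaining useful life
print(f"a = {a_hat:.3f}, b = {b_hat:.5f}, RUL ~ {rul:.0f} cycles")
```

Propagating the particle cloud rather than a point estimate is what lets the method update the RUL probability distribution, as described in the abstract.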
Governing for the Common Good.
Ruger, Jennifer Prah
2015-12-01
The proper object of global health governance (GHG) should be the common good, ensuring that all people have the opportunity to flourish. A well-organized global society that promotes the common good is to everyone's advantage. Enabling people to flourish includes enabling their ability to be healthy. Thus, we must assess health governance by its effectiveness in enhancing health capabilities. Current GHG fails to support human flourishing, diminishes health capabilities and thus does not serve the common good. The provincial globalism theory of health governance proposes a Global Health Constitution and an accompanying Global Institute of Health and Medicine that together propose to transform health governance. Multiple lines of empirical research suggest that these institutions would be effective, offering the most promising path to a healthier, more just world.
Gabbay, Itay E; Gabbay, Uri
2013-01-01
Excess adverse events may be attributable to poor surgical performance but also to case-mix, which is controlled for through the Standardized Incidence Ratio (SIR). SIR calculations can be complicated, resource-consuming, and unfeasible in some settings. This article suggests a novel method for SIR approximation. In order to evaluate a potential SIR surrogate measure, we predefined acceptance criteria. We developed a new measure, the Approximate Risk Index (ARI). The "Number Needed for Event" (NNE) is the theoretical number of patients needed to "produce" one adverse event. ARI is defined as the quotient of Ge, the group of patients needed for no observed events, by Ga, the total number of patients treated. Our evaluation compared 2500 surgical units and over 3 million heterogeneous-risk surgical patients generated through a computerized simulation. Each surgical unit's data were computed for SIR and ARI to evaluate compliance with the predefined criteria. Approximation was evaluated by correlation analysis and performance-prediction capability by Receiver Operating Characteristic (ROC) analysis. ARI correlates strongly with SIR (r² = 0.87, p < 0.05). ARI prediction of excessive risk revealed an excellent ROC curve (area under the curve > 0.9), with 87% sensitivity and 91% specificity. ARI provides a good approximation of SIR and excellent prediction capability. ARI is simple and cost-effective, as it requires thorough risk evaluation of only the adverse-event patients. ARI can provide a crucial screening and performance-evaluation quality-control tool. The ARI method may suit other clinical and epidemiological settings where a relatively small fraction of the entire population is affected. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.
Global reaction mechanism for the auto-ignition of full boiling range gasoline and kerosene fuels
NASA Astrophysics Data System (ADS)
Vandersickel, A.; Wright, Y. M.; Boulouchos, K.
2013-12-01
Compact reaction schemes capable of predicting auto-ignition are a prerequisite for the development of strategies to control and optimise homogeneous charge compression ignition (HCCI) engines. In particular, for full boiling range fuels exhibiting two-stage ignition, a tremendous demand exists in the engine development community. The present paper therefore meticulously assesses a previous 7-step reaction scheme developed to predict auto-ignition for four hydrocarbon blends and proposes an important extension of the model-constant optimisation procedure, allowing the model to capture not only ignition delays but also the evolutions of representative intermediates and heat release rates for a variety of full boiling range fuels. Additionally, an extensive validation of the latter evolutions against various detailed n-heptane reaction mechanisms from the literature is presented, both for perfectly homogeneous and for non-premixed/stratified HCCI conditions. Finally, the model's potential to simulate the auto-ignition of various full boiling range fuels is demonstrated by means of experimental shock tube data for six strongly differing fuels, containing e.g. up to 46.7% cyclo-alkanes, 20% naphthalenes or complex branched aromatics such as methyl- or ethyl-naphthalene. The good predictive capability observed for each of the validation cases, as well as the successful parameterisation for each of the six fuels, indicates that the model could, in principle, be applied to any hydrocarbon fuel, provided suitable adjustments to the model parameters are carried out. Combined with the optimisation strategy presented, the model therefore constitutes a major step towards the inclusion of real fuel kinetics into full-scale HCCI engine simulations.
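The kind of compact ignition-delay expression such global schemes are calibrated against can be sketched with the common three-Arrhenius form for two-stage fuels. The coefficients below are invented for illustration, not the paper's fitted values.

```python
import math

# Three-Arrhenius ignition-delay correlation often used for two-stage
# fuels: 1/tau = 1/(tau1 + tau2) + 1/tau3, with
# tau_i = A_i * p^(-n_i) * exp(B_i / T). Coefficients are illustrative.
def tau_i(A, n, B, p, T):
    return A * p ** (-n) * math.exp(B / T)

def ignition_delay(p, T):
    t1 = tau_i(4e-8, 1.0, 12000.0, p, T)   # low-temperature chemistry
    t2 = tau_i(2e3, 1.0, -7000.0, p, T)    # intermediate branch
    t3 = tau_i(3e-6, 1.0, 10000.0, p, T)   # high-temperature chemistry
    return 1.0 / (1.0 / (t1 + t2) + 1.0 / t3)

for T in (700, 800, 900, 1000, 1100):
    print(f"T={T:4d} K  tau={1e3 * ignition_delay(40.0, T):8.3f} ms")
```

Because each branch carries its own activation temperature, refitting the A_i, n_i, B_i per fuel is what "suitable adjustments to the model parameters" amounts to in practice.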
SF-FDTD analysis of a predictive physical model for parallel aligned liquid crystal devices
NASA Astrophysics Data System (ADS)
Márquez, Andrés; Francés, Jorge; Martínez, Francisco J.; Gallego, Sergi; Alvarez, Mariela L.; Calzado, Eva M.; Pascual, Inmaculada; Beléndez, Augusto
2017-08-01
Recently we demonstrated a novel and simplified model that enables calculation of the voltage-dependent retardance provided by parallel-aligned liquid crystal on silicon (PA-LCoS) devices for a very wide range of incidence angles and any wavelength in the visible. To our knowledge it represents the most simplified approach still showing predictive capability. Deeper insight into the physics behind the simplified model is necessary to understand whether its parameters are physically meaningful. Since the PA-LCoS is a black box for which we have no information about the physical parameters of the device, we cannot perform this kind of analysis using the experimental retardance measurements. In this work we develop realistic simulations of the nonlinear tilt of the liquid crystal director across the thickness of the liquid crystal layer in PA devices. We consider these profiles to have a sine-like shape, which is a good approximation for typical ranges of applied voltage in commercial PA-LCoS microdisplays. For these simulations we develop a rigorous method based on the split-field finite-difference time-domain (SF-FDTD) technique, which provides realistic retardance values. These values are used as the experimental measurements to which the simplified model is fitted. From this analysis we learn that the simplified model is very robust, providing unambiguous solutions when fitting its parameters. We also learn that two of the parameters in the model are physically meaningful, providing a useful reverse-engineering approach, with predictive capability, to probe the internal characteristics of the PA-LCoS device.
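The retardance of a sine-like tilt profile can be computed directly by integrating the effective birefringence across the cell, which is essentially the quantity the SF-FDTD reference data encode. Indices, thickness and wavelength below are generic illustrative values, not the device's.

```python
import numpy as np

# Director tilt profile assumed in the paper: theta(z) = theta_max * sin(pi*z/d).
no, ne = 1.5, 1.7            # ordinary / extraordinary indices (illustrative)
wavelength = 633e-9          # m
d = 3e-6                     # cell thickness, m (illustrative)

def n_eff(theta):
    # Effective index for the extraordinary wave when the director is
    # tilted by theta out of the substrate plane (propagation along z).
    s, c = np.sin(theta), np.cos(theta)
    return no * ne / np.sqrt(ne**2 * s**2 + no**2 * c**2)

def retardance_deg(theta_max_deg, samples=2001):
    z = np.linspace(0.0, d, samples)
    theta = np.radians(theta_max_deg) * np.sin(np.pi * z / d)
    dn = n_eff(theta) - no
    integral = np.sum(0.5 * (dn[1:] + dn[:-1]) * np.diff(z))  # trapezoid rule
    return 360.0 / wavelength * integral                      # degrees

for tilt in (0, 30, 60, 90):
    print(f"max tilt {tilt:2d} deg -> retardance {retardance_deg(tilt):6.1f} deg")
```

Retardance falls monotonically as the mid-layer tilt grows with applied voltage, which is the voltage-dependent curve the simplified model is fitted to.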
NASA Astrophysics Data System (ADS)
Wang, Yujie; Pan, Rui; Liu, Chang; Chen, Zonghai; Ling, Qiang
2018-01-01
The battery power capability is intimately correlated with the climbing, braking and accelerating performance of electric vehicles. Accurate power capability prediction can not only guarantee safety but also regulate driving behavior and optimize battery energy usage. However, the battery model is highly nonlinear, especially for lithium iron phosphate batteries. Moreover, the hysteresis loop in the open-circuit voltage curve can easily cause large errors in model prediction. In this work, a multi-parameter constraint dynamic estimation method is proposed to predict the battery's continuous-period power capability. A high-fidelity battery model that considers the battery polarization and hysteresis phenomena is presented to approximate the high nonlinearity of the lithium iron phosphate battery. Explicit analyses of power capability with multiple constraints are elaborated; specifically, the state-of-energy is considered in the power capability assessment. Furthermore, to solve the problem of nonlinear system state estimation and to suppress noise interference, a UKF-based state observer is employed for power capability prediction. The performance of the proposed methodology is demonstrated by experiments under different dynamic characterization schedules. The charge and discharge power capabilities of the lithium iron phosphate batteries are quantitatively assessed under different time scales and temperatures.
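The simplest version of a voltage-constrained power-capability calculation can be sketched with an OCV-plus-internal-resistance model; the paper's method adds polarization and hysteresis states and a UKF observer. All parameter values are illustrative.

```python
# Voltage-constrained power limits for a simple OCV + R_int battery model.
def discharge_power_limit(ocv, v_min, r_int):
    # Largest steady discharge power with the terminal voltage held at
    # its lower limit: I = (ocv - v_min) / r_int, P = v_min * I.
    return v_min * (ocv - v_min) / r_int

def charge_power_limit(ocv, v_max, r_int):
    # Analogous limit with the terminal voltage at its upper bound.
    return v_max * (v_max - ocv) / r_int

# Illustrative LiFePO4-like cell values:
p_dis = discharge_power_limit(ocv=3.30, v_min=2.50, r_int=0.010)
p_chg = charge_power_limit(ocv=3.30, v_max=3.65, r_int=0.010)
print(f"discharge limit ~{p_dis:.0f} W, charge limit ~{p_chg:.0f} W")
```

A multi-constraint method takes the minimum of such limits over voltage, current, SOC and state-of-energy bounds, evaluated over the prediction horizon rather than instantaneously.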
Newman, J; Egan, T; Harbourne, N; O'Riordan, D; Jacquier, J C; O'Sullivan, M
2014-08-01
Sensory evaluation of ingredients with a bitter taste can be problematic during the research and development phase of new food products. In this study, 19 dairy protein hydrolysates (DPH) were analysed by an electronic tongue and characterized physicochemically; the data obtained from these methods were correlated with bitterness intensity as scored by a trained sensory panel, and each model was also assessed for its predictive capability. The physicochemical characteristics investigated were the degree of hydrolysis (DH%) and data relating to peptide size and relative hydrophobicity from size exclusion chromatography (SEC) and reverse-phase (RP) HPLC. Partial least squares (PLS) regression was used to construct the prediction models. All PLS regressions had good correlations (0.78 to 0.93), the strongest being the combination of data obtained from SEC and RP-HPLC. However, the PLS model with the strongest predictive power was based on the e-tongue, which had the lowest predicted residual error sum of squares (PRESS) in the study. The results show that the PLS models constructed with the e-tongue and with the combination of SEC and RP-HPLC have the potential to be used for prediction of bitterness, thus reducing the reliance on sensory analysis of DPHs in future food research. Copyright © 2014 Elsevier B.V. All rights reserved.
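A one-component PLS regression of the kind used here can be sketched in plain NumPy (NIPALS-style). The descriptor matrix and bitterness scores below are synthetic stand-ins for the 19 hydrolysates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: 19 samples, 6 instrument descriptors, and "bitterness"
# scores generated from a single latent direction plus noise.
n, p = 19, 6
X = rng.normal(size=(n, p))                    # e-tongue / HPLC descriptors
true_w = np.array([1.0, -0.5, 0.3, 0.0, 0.0, 0.2])
y = X @ true_w + rng.normal(0.0, 0.2, n)       # panel bitterness scores

Xc, yc = X - X.mean(axis=0), y - y.mean()
w = Xc.T @ yc
w /= np.linalg.norm(w)                         # PLS weight vector
t = Xc @ w                                     # latent scores
q = (t @ yc) / (t @ t)                         # regress y on the scores
y_hat = t * q + y.mean()

rss = np.sum((y - y_hat) ** 2)                 # (PRESS would use leave-one-out)
r2 = 1.0 - rss / np.sum((y - y.mean()) ** 2)
print(f"R2 = {r2:.2f}")
```

Computing the residuals under leave-one-out cross-validation instead of in-sample, and summing their squares, would give the PRESS statistic used to compare the models in the abstract.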
Ihmaid, Saleh K; Ahmed, Hany E A; Zayed, Mohamed F; Abadleh, Mohammed M
2016-01-30
The main step in a successful drug discovery pipeline is the identification of small potent compounds that selectively bind to the target of interest with high affinity. However, there is still a shortage of efficient and accurate computational methods capable of studying and hence predicting compound selectivity properties. In this work, we propose an affordable machine learning method to perform compound selectivity classification and prediction. For this purpose, we collected compounds with reported activity and built a selectivity database of 153 cathepsin K and S inhibitors of medicinal interest. This database contains three compound sets: two selective ones (K/S and S/K) and one non-selective one (KS). We subjected this database to the selectivity classification tool 'Emergent Self-Organizing Maps' to explore its capability to differentiate inhibitors selective for one target over the other. The method exhibited good clustering performance for selective ligands, with high accuracy (up to 100%). Among the possibilities, BAPs and MACCS molecular structural fingerprints were used for the classification. The results demonstrate the method's ability to support structure-selectivity relationship interpretation, and selectivity markers were identified for the design of further novel inhibitors with high activity and target selectivity.
Kinetic Modeling of Sunflower Grain Filling and Fatty Acid Biosynthesis
Durruty, Ignacio; Aguirrezábal, Luis A. N.; Echarte, María M.
2016-01-01
Grain growth and oil biosynthesis are complex processes that involve various enzymes placed in different sub-cellular compartments of the grain. In order to understand the mechanisms controlling grain weight and composition, we need mathematical models capable of simulating the dynamic behavior of the main components of the grain during the grain filling stage. In this paper, we present a non-structured mechanistic kinetic model developed for sunflower grains. The model was first calibrated for sunflower hybrid ACA855. The calibrated model was able to predict the theoretical amount of carbohydrate equivalents allocated to the grain, grain growth and the dynamics of the oil and non-oil fraction, while considering maintenance requirements and leaf senescence. Incorporating into the model the serial-parallel nature of fatty acid biosynthesis permitted a good representation of the kinetics of palmitic, stearic, oleic, and linoleic acids production. A sensitivity analysis showed that the relative influence of input parameters changed along grain development. Grain growth was mostly affected by the specific growth parameter (μ′) while fatty acid composition strongly depended on their own maximum specific rate parameters. The model was successfully applied to two additional hybrids (MG2 and DK3820). The proposed model can be the first building block toward the development of a more sophisticated model, capable of predicting the effects of environmental conditions on grain weight and composition, in a comprehensive and quantitative way. PMID:27242809
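The serial pathway structure described for fatty acid biosynthesis can be sketched with a small Euler integration; the rate constants and supply rate below are invented for illustration, not the calibrated values for ACA855.

```python
# Euler integration of a serial first-order pathway of the kind the
# grain-filling model uses: palmitic -> stearic -> oleic -> linoleic.
k = {"pal->ste": 0.30, "ste->ole": 0.50, "ole->lin": 0.10}  # 1/day
supply = 0.8   # palmitic synthesis rate, mg/day (illustrative)

dt, days = 0.01, 40.0
pal = ste = ole = lin = 0.0
for _ in range(int(days / dt)):
    d_pal = supply - k["pal->ste"] * pal
    d_ste = k["pal->ste"] * pal - k["ste->ole"] * ste
    d_ole = k["ste->ole"] * ste - k["ole->lin"] * ole
    d_lin = k["ole->lin"] * ole
    pal += d_pal * dt
    ste += d_ste * dt
    ole += d_ole * dt
    lin += d_lin * dt

total = pal + ste + ole + lin
print(f"pal={pal:.2f} ste={ste:.2f} ole={ole:.2f} lin={lin:.2f} total={total:.2f}")
```

Mass is conserved (the total equals the integrated supply), and the slowest desaturation step sets which acids accumulate; a "serial-parallel" variant would add branches drawing on the same intermediates.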
Set-up and validation of a Delft-FEWS based coastal hazard forecasting system
NASA Astrophysics Data System (ADS)
Valchev, Nikolay; Eftimova, Petya; Andreeva, Nataliya
2017-04-01
European coasts are increasingly threatened by hazards related to low-probability, high-impact hydro-meteorological events. Uncertainty in hazard prediction and in the capability to cope with impacts lies in both future storm patterns and increasing coastal development. Therefore, adaptation to future conditions requires a re-evaluation of coastal disaster risk reduction (DRR) strategies and the introduction of a more efficient mix of prevention, mitigation and preparedness measures. The latter presumes that the development of tools that can manage the complex process of merging data and models and generate products on the current and expected hydro- and morpho-dynamic states of the coasts, such as a forecasting system for flooding and erosion hazards at vulnerable coastal locations (hotspots), is of vital importance. The output of such a system can be of utmost value for coastal stakeholders and the entire coastal community. In response to these challenges, Delft-FEWS provides a state-of-the-art framework for the implementation of such a system, with vast capabilities to trigger the early-warning process. In addition, this framework is highly customizable to the specific requirements of any individual coastal hotspot. Since its release, many Delft-FEWS based forecasting systems for inland flooding have been developed; however, only a limited number of coastal applications have been implemented. In this paper, the set-up of a Delft-FEWS based forecasting system for Varna Bay (Bulgaria) and a coastal hotspot comprising a sandy beach and port infrastructure is presented. It is implemented in the frame of the RISC-KIT project (Resilience-Increasing Strategies for Coasts - toolKIT). The system output generated in hindcast mode is validated against available observations of surge levels and wave and morphodynamic parameters for a sequence of three short-duration, relatively weak storm events that occurred during February 4-12, 2015.
Generally, the models' performance is considered very good, and the results obtained are quite promising for reliable prediction of both boundary conditions and coastal hazards, providing a good basis for estimating onshore impact.
Fusion cross sections measurements with MUSIC
NASA Astrophysics Data System (ADS)
Carnelli, P. F. F.; Fernández Niello, J. O.; Almaraz-Calderon, S.; Rehm, K. E.; Albers, M.; Digiovine, B.; Esbensen, H.; Henderson, D.; Jiang, C. L.; Nusair, O.; Palchan-Hazan, T.; Pardo, R. C.; Ugalde, C.; Paul, M.; Alcorta, M.; Bertone, P. F.; Lai, J.; Marley, S. T.
2014-09-01
The interaction between exotic nuclei plays an important role in understanding the reaction mechanism of fusion processes as well as energy production in stars. With the advent of radioactive beams, new frontiers for fusion reaction studies have become accessible. We have performed the first measurements of the total fusion cross sections in the 10,14,15C + 12C systems using a newly developed active target-detector system (MUSIC). Comparison of the obtained cross sections with theoretical predictions shows good agreement in the energy region accessible with existing radioactive beams. This type of comparison allows us to calibrate the calculations for cases that cannot be studied in the laboratory with current experimental capabilities. The high efficiency of this active detector system will allow future measurements with even more neutron-rich isotopes. This work is supported by the U.S. DOE Office of Nuclear Physics under Contract No. DE-AC02-06CH11357 and the Universidad Nacional de San Martin, Argentina, Grant SJ10/39.
Di Matteo, Francesco; Martino, Margareth; Rea, Roberta; Pandolfi, Monica; Panzera, Francesco; Stigliano, Egidio; Schena, Emiliano; Saccomandi, Paola; Silvestri, Sergio; Pacella, Claudio Maurizio; Breschi, Luca; Perrone, Giuseppe; Coppola, Roberto; Costamagna, Guido
2013-11-01
Laser ablation (LA) with a neodymium-doped yttrium aluminum garnet (Nd:YAG) laser is a minimally invasive approach able to achieve a high rate of complete tissue necrosis. In a previous study we described the feasibility of EUS-guided Nd:YAG pancreas LA performed in vivo in a porcine model. To establish the best laser setting of Nd:YAG lasers for pancreatic tissue ablation. A secondary aim was to investigate the prediction capability of a mathematical model on ablation volume. Ex vivo animal study. Hospital animal laboratory. Explanted pancreatic glands from 60 healthy farm pigs. Laser output powers (OP) of 1.5, 3, 6, 10, 15, and 20 W were supplied. Ten trials for each OP were performed under US guidance on ex vivo healthy porcine pancreatic tissue. Ablation volume (Va) and central carbonization volume (Vc) were measured on histologic specimens as the sum of the lesion areas multiplied by the thickness of each slide. The theoretical model of the laser-tissue interaction was based on the Pennes equation. A circumscribed ablation zone was observed in all histologic specimens. Va values grow with the increase of the OP up to 10 W and reach a plateau between 10 and 20 W. The trend of Vc values rises constantly until 20 W. The theoretical model shows a good agreement with experimental Va and Vc for OP between 1.5 and 10 W. Ex vivo study. Volumes recorded suggest that the best laser OP could be the lowest one to obtain similar Va with smaller Vc in order to avoid the risk of thermal injury to the surrounding tissue. The good agreement between the two models demonstrates the prediction capability of the theoretical model on laser-induced ablation volume in an ex vivo animal model and supports its potential use for estimating the ablation size at different laser OPs. Copyright © 2013 American Society for Gastrointestinal Endoscopy. Published by Mosby, Inc. All rights reserved.
Mormina, Maru
2018-03-01
Science and technology are key to economic and social development, yet the capacity for scientific innovation remains globally unequally distributed. Although a priority for development cooperation, building or developing research capacity is often reduced in practice to promoting knowledge transfers, for example through North-South partnerships. Research capacity building/development tends to focus on developing scientists' technical competencies through training, without parallel investments to develop and sustain the socioeconomic and political structures that facilitate knowledge creation. This, the paper argues, significantly contributes to the scientific divide between developed and developing countries more than any skills shortage. Using Charles Taylor's concept of irreducibly social goods, the paper extends Sen's Capabilities Approach beyond its traditional focus on individual entitlements to present a view of scientific knowledge as a social good and the capability to produce it as a social capability. Expanding this capability requires going beyond current fragmented approaches to research capacity building to holistically strengthen the different social, political and economic structures that make up a nation's innovation system. This has implications for the interpretation of human rights instruments beyond their current focus on access to knowledge and for focusing science policy and global research partnerships to design approaches to capacity building/development beyond individual training/skills building.
A Man-Machine System for Contemporary Counseling Practice: Diagnosis and Prediction.
ERIC Educational Resources Information Center
Roach, Arthur J.
This paper looks at present and future capabilities for diagnosis and prediction in computer-based guidance efforts and reviews the problems and potentials which will accompany the implementation of such capabilities. In addition to necessary procedural refinement in prediction, future developments in computer-based educational and career…
Prognostic value of resident clinical performance ratings.
Williams, Reed G; Dunnington, Gary L
2004-10-01
This study investigated the concurrent and predictive validity of end-of-rotation (EOR) clinical performance ratings. Surgeon EOR ratings of residents were collected and compared with end-of-year (EOY) progress decisions and with EOR and EOY confidential judgments of resident ability to provide patient care without direct supervision. Eighty percent to 85% of EOR ratings were Excellent or Very Good. Five percent or fewer were Fair or Poor. Almost all residents receiving Excellent or Very Good EOR ratings also received positive EOR judgments about ability to provide patient care without direct supervision. Residents rated Fair or Poor received negative EOR judgments about ability to provide patient care without direct supervision. As the cumulative percentage of Good, Fair, and Poor EOR ratings increased, the number of residents promoted without stipulations at the end of the year decreased, and the percentage of faculty members who judged the residents capable of providing effective patient care without direct supervision at the end of the year declined. All residents receiving 40% or more EOR ratings below Very Good had stipulations associated with their promotion. Despite the use of descriptive anchors on the scale, clinical performance ratings have no direct meaning. Their meaning needs to be established in the same manner as is done in setting normal values for diagnostic tests, i.e., by establishing the relationship between EOR ratings and practice outcomes.
A variable capacitance based modeling and power capability predicting method for ultracapacitor
NASA Astrophysics Data System (ADS)
Liu, Chang; Wang, Yujie; Chen, Zonghai; Ling, Qiang
2018-01-01
Methods of accurate modeling and power capability prediction for ultracapacitors are of great significance in the management and application of lithium-ion battery/ultracapacitor hybrid energy storage systems. To overcome the simulation error arising from a constant-capacitance model, an improved ultracapacitor model based on variable capacitance is proposed, in which the main capacitance varies with voltage according to a piecewise linear function. A novel state-of-charge calculation approach is developed accordingly. After that, a multi-constraint power capability prediction method is developed for the ultracapacitor, in which a Kalman-filter-based state observer is designed to track the ultracapacitor's real-time behavior. Finally, experimental results verify the proposed methods. The accuracy of the proposed model is verified by terminal voltage simulation results at different temperatures, and the effectiveness of the designed observer is proved under various test conditions. Additionally, the power capability prediction results for different time scales and temperatures are compared, to study their effects on the ultracapacitor's power capability.
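The voltage-dependent capacitance idea above can be sketched in a few lines. The following is a minimal illustration only, not the authors' model: the piecewise-linear capacitance function, the knee point, and all parameter values are invented for demonstration, and a simple explicit-Euler integration stands in for the paper's state-of-charge calculation.

```python
# Hedged sketch of a variable-capacitance ultracapacitor model.
# All parameter values are illustrative assumptions, not fitted values.

def capacitance(v, c0=50.0, k=10.0, v_knee=1.5, k2=4.0):
    """Piecewise-linear main capacitance C(V) in farads (illustrative)."""
    if v <= v_knee:
        return c0 + k * v
    return c0 + k * v_knee + k2 * (v - v_knee)

def simulate_constant_current(i=1.0, v0=0.0, dt=0.1, steps=100):
    """Integrate dV/dt = I / C(V) with explicit Euler; returns the voltage trace."""
    v, trace = v0, [v0]
    for _ in range(steps):
        v += i * dt / capacitance(v)
        trace.append(v)
    return trace
```

Because C(V) grows with voltage here, each constant-current step raises the terminal voltage by a smaller amount than the last, the qualitative behavior a variable-capacitance model captures and a constant-capacitance model misses.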
Use of experimental airborne infections for monitoring altered host defenses.
Gardner, D E
1982-01-01
The success or failure of the respiratory system in defending itself against airborne infectious agents largely depends upon the efficiency of the pulmonary defenses in maintaining sterility and disposing of unwanted substances. Both specific and nonspecific host defenses cooperate in the removal and inactivation of such agents. Several studies have shown that these defenses are vulnerable to a wide range of environmental agents and that there is a good relationship between pollutant exposure and impaired resistance to pulmonary disease. There are numerous immunological, biochemical and physiological techniques that are routinely used to identify and characterize individual impairments of these defenses. Based on these effects, various hypotheses have been proposed as to what health consequences could be expected. The ultimate test is whether the host, with its compromised defense mechanisms, is still capable of sustaining the total injury and continuing to defend itself against opportunistic pathogens. This paper describes the use of an experimental airborne infectious disease model capable of predicting subtle changes in host defenses at concentrations below which there are any other overt toxicological effects. Such sensitivity is possible because the model measures not just a single "health" parameter, but instead is capable of reflecting the total response caused by the test chemical. PMID:7060549
Assessments of a Turbulence Model Based on Menter's Modification to Rotta's Two-Equation Model
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.
2013-01-01
The main objective of this paper is to construct a turbulence model with a more reliable second equation simulating length scale. In the present paper, we assess the length scale equation based on Menter's modification to Rotta's two-equation model. Rotta shows that a reliable second equation can be formed in an exact transport equation from the turbulent length scale L and kinetic energy. Rotta's equation is well suited for term-by-term modeling and shows some interesting features compared to other approaches. The most important difference is that the formulation leads to a natural inclusion of higher order velocity derivatives into the source terms of the scale equation, which has the potential to enhance the capability of Reynolds-averaged Navier-Stokes (RANS) to simulate unsteady flows. The model is implemented in the PAB3D solver with complete formulation, usage methodology, and validation examples to demonstrate its capabilities. The detailed studies include grid convergence. Near-wall and shear flow cases are documented and compared with experimental and Large Eddy Simulation (LES) data. The results from this formulation are as good as or better than the well-known SST turbulence model and much better than k-epsilon results. Overall, the study provides useful insights into the model's capability in predicting attached and separated flows.
A DRDC Management Accountability Framework
2009-09-01
As a guide to good management practices, the elements of the MAF enclose what is required to make good decisions, focusing on organizational capacity and capability within a department. (Appendix A.2 of the report covers Cultural Theory: Risk, Blame and Good Governance.)
Humor Ability Reveals Intelligence, Predicts Mating Success, and Is Higher in Males
ERIC Educational Resources Information Center
Greengross, Gil; Miller, Geoffrey
2011-01-01
A good sense of humor is sexually attractive, perhaps because it reveals intelligence, creativity, and other "good genes" or "good parent" traits. If so, intelligence should predict humor production ability, which in turn should predict mating success. In this study, 400 university students (200 men and 200 women) completed…
A Deep Space Orbit Determination Software: Overview and Event Prediction Capability
NASA Astrophysics Data System (ADS)
Kim, Youngkwang; Park, Sang-Young; Lee, Eunji; Kim, Minsik
2017-06-01
This paper presents an overview of deep space orbit determination software (DSODS), as well as validation and verification results on its event prediction capabilities. DSODS was developed in the MATLAB object-oriented programming environment to support the Korea Pathfinder Lunar Orbiter (KPLO) mission. DSODS has three major capabilities: celestial event prediction for spacecraft, orbit determination with deep space network (DSN) tracking data, and DSN tracking data simulation. To achieve its functionality requirements, DSODS consists of four modules: orbit propagation (OP), event prediction (EP), data simulation (DS), and orbit determination (OD) modules. This paper explains the highest-level data flows between modules in event prediction, orbit determination, and tracking data simulation processes. Furthermore, to address the event prediction capability of DSODS, this paper introduces the OP and EP modules. The role of the OP module is to handle time and coordinate system conversions, to propagate spacecraft trajectories, and to handle the ephemerides of spacecraft and celestial bodies. Currently, the OP module utilizes the General Mission Analysis Tool (GMAT) as a third-party software component for high-fidelity deep space propagation, as well as time and coordinate system conversions. The role of the EP module is to predict celestial events, including eclipses and ground station visibilities, and this paper presents the functionality requirements of the EP module. The validation and verification results show that, for most cases, event prediction errors were less than 10 milliseconds when compared with flight-proven mission analysis tools such as GMAT and Systems Tool Kit (STK). Thus, we conclude that DSODS is capable of predicting events for the KPLO in real mission applications.
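A ground-station visibility check of the kind the EP module performs can be illustrated with toy spherical-Earth geometry. This is a hedged sketch under assumed conventions (Earth-centered coordinates, a radial local zenith, a simple elevation mask); it is not the DSODS implementation.

```python
import math

def _unit(v):
    """Normalize a 3-vector."""
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def _sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def elevation_deg(station_ecef, sc_ecef):
    """Elevation of the spacecraft above the station's local horizon (degrees),
    assuming a spherical Earth so the local zenith is the radial direction."""
    zen = _unit(station_ecef)                 # local zenith (assumption: spherical Earth)
    los = _unit(_sub(sc_ecef, station_ecef))  # line of sight to the spacecraft
    cosang = sum(a * b for a, b in zip(zen, los))
    return 90.0 - math.degrees(math.acos(max(-1.0, min(1.0, cosang))))

def visible(station_ecef, sc_ecef, mask_deg=10.0):
    """Visibility test against an assumed elevation mask."""
    return elevation_deg(station_ecef, sc_ecef) > mask_deg
```

A spacecraft directly overhead yields 90 degrees of elevation; one on the far side of the Earth yields a negative elevation and fails the mask test.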
A Dual-Plane PIV Study of Turbulent Heat Transfer Flows
NASA Technical Reports Server (NTRS)
Wernet, Mark P.; Wroblewski, Adam C.; Locke, Randy J.
2016-01-01
Thin film cooling is a widely used technique in turbomachinery and rocket propulsion applications, where cool injection air protects a surface from hot combustion gases. The injected air typically has a different velocity and temperature from the free stream combustion flow, yielding a flow field with high turbulence and large temperature differences. These thin film cooling flows provide a good test case for evaluating computational model prediction capabilities. The goal of this work is to provide a database of flow field measurements for validating computational flow prediction models applied to turbulent heat transfer flows. In this work we describe the application of a Dual-Plane Particle Image Velocimetry (PIV) technique in a thin film cooling wind tunnel facility where the injection air stream velocity and temperatures are varied in order to provide benchmark turbulent heat transfer flow field measurements. The Dual-Plane PIV data collected include all three components of velocity and all three components of vorticity, spanning the width of the tunnel at multiple axial measurement planes.
Design, Fabrication and Testing of a Crushable Energy Absorber for a Passive Earth Entry Vehicle
NASA Technical Reports Server (NTRS)
Kellas, Sotiris; Corliss, James M. (Technical Monitor)
2002-01-01
A conceptual study was performed to investigate the impact response of a crushable energy absorber for a passive Earth entry vehicle. The spherical energy-absorbing concept consisted of a foam-filled composite cellular structure capable of omni-directional impact-load attenuation as well as penetration resistance. Five composite cellular samples of hemispherical geometry were fabricated and tested dynamically with impact speeds varying from 30 to 42 meters per second. Theoretical crush load predictions were obtained with the aid of a generalized theory which accounts for the energy dissipated during the folding deformation of the cell-walls. Excellent correlation was obtained between theoretical predictions and experimental tests on characteristic cell-web intersections. Good correlation of theory with experiment was also found to exist for the more complex spherical cellular structures. All preliminary design requirements were met by the cellular structure concept, which exhibited a near-ideal sustained crush-load and approximately 90% crush stroke.
A 4.8 kbps code-excited linear predictive coder
NASA Technical Reports Server (NTRS)
Tremain, Thomas E.; Campbell, Joseph P., Jr.; Welch, Vanoy C.
1988-01-01
A secure voice system, STU-3, capable of providing end-to-end secure voice communications was developed in 1984. The terminal for the new system will be built around the standard LPC-10 voice processor algorithm. While the performance of the present STU-3 processor is considered good, its response to nonspeech sounds such as whistles, coughs and impulse-like noises may not be completely acceptable. Speech in noisy environments also causes problems for the LPC-10 voice algorithm. In addition, there is always a demand for something better. It is hoped that LPC-10's 2.4 kbps voice performance will be complemented by a very high quality speech coder operating at a higher data rate. This new coder is one of a number of candidate algorithms being considered for an upgraded version of the STU-3 in late 1989. This paper considers the problems of designing a code-excited linear predictive (CELP) coder that provides very high quality speech at a 4.8 kbps data rate and can be implemented on today's hardware.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, L.K.; Mohr, D.; Planchon, H.P.
This article discusses a series of successful loss-of-flow-without-scram tests conducted in Experimental Breeder Reactor-II (EBR-II), a metal-fueled, sodium-cooled fast reactor. These May 1985 tests demonstrated the capability of the EBR-II to reduce reactor power passively during a loss of flow and to maintain reactor temperatures within bounds without any reliance on an active safety system. The tests were run from reduced power to ensure that temperatures could be maintained well below the fuel-clad eutectic temperature. Good agreement was found between selected test data and pretest predictions made with the EBR-II system analysis code NATDEMO and the hot channel analysis code HOTCHAN. The article also discusses safety assessments of the tests as well as modifications required on the EBR-II reactor safety system for conducting the tests.
3D Printed Organ Models with Physical Properties of Tissue and Integrated Sensors.
Qiu, Kaiyan; Zhao, Zichen; Haghiashtiani, Ghazaleh; Guo, Shuang-Zhuang; He, Mingyu; Su, Ruitao; Zhu, Zhijie; Bhuiyan, Didarul B; Murugan, Paari; Meng, Fanben; Park, Sung Hyun; Chu, Chih-Chang; Ogle, Brenda M; Saltzman, Daniel A; Konety, Badrinath R; Sweet, Robert M; McAlpine, Michael C
2018-03-01
The design and development of novel methodologies and customized materials to fabricate patient-specific 3D printed organ models with integrated sensing capabilities could yield advances in smart surgical aids for preoperative planning and rehearsal. Here, we demonstrate 3D printed prostate models with physical properties of tissue and integrated soft electronic sensors using custom-formulated polymeric inks. The models show high quantitative fidelity in static and dynamic mechanical properties, optical characteristics, and anatomical geometries to patient tissues and organs. The models offer tissue-mimicking tactile sensation and behavior and thus can be used for the prediction of organ physical behavior under deformation. The prediction results show good agreement with values obtained from simulations. The models also allow the application of surgical and diagnostic tools to their surface and inner channels. Finally, via the conformal integration of 3D printed soft electronic sensors, pressure applied to the models with surgical tools can be quantitatively measured.
Constitutive Equation with Varying Parameters for Superplastic Flow Behavior
NASA Astrophysics Data System (ADS)
Guan, Zhiping; Ren, Mingwen; Jia, Hongjie; Zhao, Po; Ma, Pinkui
2014-03-01
In this study, constitutive equations for superplastic materials with extra-large elongation were investigated through mechanical analysis. From the view of phenomenology, firstly, some traditional empirical constitutive relations were standardized by restricting strain paths and parameter conditions, and the coefficients in these relations were given strict new mechanical definitions. Subsequently, a new, general constitutive equation with varying parameters was theoretically deduced based on the general mechanical equation of state. Superplastic tension test data for Zn-5%Al alloy at 340 °C under strain rates, velocities, and loads were employed to build the new constitutive equation and examine its validity. The analysis indicated that the constitutive equation with varying parameters can characterize superplastic flow behavior in practical superplastic forming with high prediction accuracy and without any restriction on strain path or deformation condition, which is of clear industrial and scientific interest. By contrast, the empirical equations have low prediction capability, due to their constant parameters, and poor applicability, because they are limited to special strain paths or parameter conditions under strict phenomenology.
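For reference, the classic power law sigma = K * (strain rate)^m is the baseline empirical relation commonly fitted to superplastic tension data, and the paper's varying-parameter equation generalizes relations of this kind. The sketch below fits that baseline law by log-linear least squares; the data are synthetic and purely illustrative.

```python
import math

def fit_power_law(strain_rates, stresses):
    """Least-squares fit of log(sigma) = log(K) + m*log(eps_rate),
    returning (K, m). A textbook baseline, not the paper's equation."""
    xs = [math.log(r) for r in strain_rates]
    ys = [math.log(s) for s in stresses]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    m = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    k = math.exp(ybar - m * xbar)
    return k, m

# Synthetic data generated from K = 100, m = 0.5 (illustrative only):
rates = [1e-4, 1e-3, 1e-2]
stresses = [100.0 * r ** 0.5 for r in rates]
k, m = fit_power_law(rates, stresses)
```

On noiseless synthetic data the fit recovers the generating parameters exactly, which is a useful sanity check before applying such a fit to real test data.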
Development and parameter identification of a visco-hyperelastic model for the periodontal ligament.
Huang, Huixiang; Tang, Wencheng; Tan, Qiyan; Yan, Bin
2017-04-01
The present study developed and implemented a new visco-hyperelastic model that is capable of predicting the time-dependent biomechanical behavior of the periodontal ligament. The constitutive model has been implemented into the finite element package ABAQUS by means of a user-defined material subroutine (UMAT). The stress response is decomposed into two constitutive parts in parallel which are a hyperelastic and a time-dependent viscoelastic stress response. In order to identify the model parameters, the indentation equation based on V-W hyperelastic model and the indentation creep model are developed. Then the parameters are determined by fitting them to the corresponding nanoindentation experimental data of the PDL. The nanoindentation experiment was simulated by finite element analysis to validate the visco-hyperelastic model. The simulated results are in good agreement with the experimental data, which demonstrates that the visco-hyperelastic model developed is able to accurately predict the time-dependent mechanical behavior of the PDL. Copyright © 2017 Elsevier Ltd. All rights reserved.
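The time-dependent part of a visco-hyperelastic model is commonly represented by a Prony series. The one-term sketch below illustrates that standard form; the coefficients are invented for demonstration and are not the fitted PDL values from the study.

```python
import math

def relaxation_modulus(t, g_inf=0.4, g1=0.6, tau=2.0):
    """One-term Prony series G(t) = G_inf + G1 * exp(-t/tau).
    Coefficients here are illustrative assumptions, not fitted PDL values."""
    return g_inf + g1 * math.exp(-t / tau)
```

With g_inf + g1 = 1, the normalized modulus starts at 1, decays monotonically, and plateaus at the long-term value g_inf, the qualitative creep/relaxation behavior such models capture.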
Zhang, Guoqing; Sun, Qingyan; Hou, Ying; Hong, Zhanying; Zhang, Jun; Zhao, Liang; Zhang, Hai; Chai, Yifeng
2009-07-01
The purpose of this paper was to study the enantioseparation mechanism of triadimenol compounds by carboxymethylated (CM)-beta-CD mediated CE. All the enantiomers were separated under the same experimental conditions to study the chiral recognition mechanism, using a 30 mM sodium dihydrogen phosphate buffer at pH 2.2 adjusted with phosphoric acid. The inclusion courses between CM-beta-CD and the enantiomers were investigated by means of a molecular docking technique. It was found that there were at least three interaction points (one hydrophobic bond and two hydrogen bonds) involved in the interaction of each enantiomer with the chiral selector. A new mathematical model was built based on the results of molecular mechanics calculations, which could analyze the relationship between the resolution of enantioseparation and the interaction energy in the docking area. Compared with the results of the separation by CE, the established mathematical model demonstrated good capability to predict the chiral separation of triadimenol enantiomers using CM-beta-CD mediated CE.
NASA Technical Reports Server (NTRS)
Duque, Earl P. N.; Johnson, Wayne; vanDam, C. P.; Chao, David D.; Cortes, Regina; Yee, Karen
1999-01-01
Accurate, reliable and robust numerical predictions of wind turbine rotor power remain a challenge to the wind energy industry. The literature reports various methods that compare predictions to experiments. The methods vary from Blade Element Momentum Theory (BEM) and Vortex Lattice (VL) to variants of Reynolds-averaged Navier-Stokes (RaNS). The BEM and VL methods consistently show discrepancies in predicting rotor power at higher wind speeds, mainly due to inadequacies with inboard stall and stall delay models. The RaNS methodologies show promise in predicting blade stall. However, inaccurate rotor vortex wake convection, boundary layer turbulence modeling and grid resolution have limited their accuracy. In addition, the inherently unsteady stalled flow conditions become computationally expensive for even the best endowed research labs. Although numerical power predictions have been compared to experiment, good wind turbine data sufficient for code validation remain scarce. This paper presents experimental data extracted from the IEA Annex XIV download site for the NREL Combined Experiment phase II and phase IV rotor. In addition, the comparisons will show data that has been further reduced into steady wind and zero yaw conditions suitable for comparisons to "steady wind" rotor power predictions. In summary, the paper will present and discuss the capabilities and limitations of the three numerical methods and make available a database of experimental data suitable to help other numerical methods practitioners validate their own work.
Observability during planetary approach navigation
NASA Technical Reports Server (NTRS)
Bishop, Robert H.; Burkhart, P. Daniel; Thurman, Sam W.
1993-01-01
The objective of the research is to develop an analytic technique to predict the relative navigation capability of different Earth-based radio navigation measurements. In particular, the problem is to determine the relative ability of geocentric range and Doppler measurements to detect the effects of the target planet gravitational attraction on the spacecraft during the planetary approach and near-encounter mission phases. A complete solution to the two-dimensional problem has been developed. Relatively simple analytic formulas are obtained for range and Doppler measurements which describe the observability content of the measurement data along the approach trajectories. An observability measure is defined which is based on the observability matrix for nonlinear systems. The results show good agreement between the analytic observability analysis and the computational batch processing method.
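For a linear system, the observability notion underlying such an analysis reduces to the rank of the observability matrix O = [C; CA; ...; CA^(n-1)]. The sketch below (using NumPy as a convenience, an assumption not tied to the paper) illustrates that standard construction, which the paper's nonlinear observability measure generalizes.

```python
import numpy as np

def observability_matrix(A, C):
    """Stack C, CA, CA^2, ..., CA^(n-1) for an n-state linear system."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

def is_observable(A, C):
    """Kalman rank condition: full rank of O means the state is observable."""
    return np.linalg.matrix_rank(observability_matrix(A, C)) == A.shape[0]
```

For a double integrator, a position measurement makes the full state observable, while a velocity-only measurement does not, a simple analogue of asking which tracking data types reveal the trajectory.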
NASA Astrophysics Data System (ADS)
Boyd-Lee, Ashley; King, Julia
1992-07-01
A discrete statistical model of fatigue crack growth in the nickel-base superalloy Waspaloy, quantitative from the start of the short-crack regime to failure, is presented. Instantaneous crack growth rate distributions and persistence-of-arrest distributions are used to compute fatigue lives and worst-case scenarios without extrapolation. The basis of the model is not material specific; it provides an improved method of analyzing crack growth rate data. For Waspaloy, the model shows the importance of good bulk fatigue crack growth resistance in resisting early short fatigue crack growth, and the importance of maximizing crack arrest both by the presence of a proportion of small grains and by maximizing grain boundary corrugation.
NASA Technical Reports Server (NTRS)
Hsu, C.-H.; Lan, C. E.
1985-01-01
Wing rock is one type of lateral-directional instability at high angles of attack. To predict wing rock characteristics and to design airplanes that avoid wing rock, the parameters affecting wing rock characteristics must be known. A new nonlinear aerodynamic model is developed to investigate the main aerodynamic nonlinearities causing wing rock. In the present theory, the Beecham-Titchener asymptotic method is used to derive expressions for the limit-cycle amplitude and frequency of wing rock from nonlinear flight dynamics equations. The resulting expressions are capable of explaining the existence of wing rock for all types of aircraft. Wing rock develops from negative or weakly positive roll damping, and is sustained by nonlinear aerodynamic roll damping. Good agreement between theoretical and experimental results is obtained.
NASA Technical Reports Server (NTRS)
Henry, M. W.; Wolf, H.; Siemers, Paul M., III
1988-01-01
The SEADS pressure data obtained from the Shuttle flight 61-C are analyzed in conjunction with the preflight database. Based on wind tunnel data, the sensitivity of the Shuttle Orbiter stagnation region pressure distribution to angle of attack and Mach number is demonstrated. Comparisons are made between flight and wind tunnel SEADS orifice pressure distributions at several points throughout the re-entry. It is concluded that modified Newtonian theory provides a good tool for the design of a flush air data system, furnishing data for determining orifice locations and transducer sizing. Ground-based wind tunnel facilities are capable of providing the correction factors necessary for the derivation of accurate air data parameters from pressure data.
Chin, P W; Spezi, E; Lewis, D G
2003-08-21
A software solution has been developed to carry out Monte Carlo simulations of portal dosimetry using the BEAMnrc/DOSXYZnrc code at oblique gantry angles. The solution is based on an integrated phantom, whereby the effect of incident beam obliquity was included using geometric transformations. Geometric transformations are accurate within +/- 1 mm and +/- 1 degrees with respect to exact values calculated using trigonometry. An application in portal image prediction of an inhomogeneous phantom demonstrated good agreement with measured data, where the root-mean-square of the difference was under 2% within the field. Thus, we achieved a dose model framework capable of handling arbitrary gantry angles, voxel-by-voxel phantom description and realistic particle transport throughout the geometry.
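The geometric-transformation idea, folding the gantry obliquity into the phantom frame so the beam can be treated as vertical, can be illustrated by a coordinate rotation. The axis choice and sign conventions below are assumptions for demonstration, not the authors' code.

```python
import math

def rotate_about_y(point, gantry_deg):
    """Rotate (x, y, z) by -gantry_deg about the y (couch) axis, so that a
    beam incident at gantry_deg becomes a vertical beam in the new frame.
    Axis and sign conventions are illustrative assumptions."""
    t = math.radians(-gantry_deg)
    x, y, z = point
    return (x * math.cos(t) + z * math.sin(t),
            y,
            -x * math.sin(t) + z * math.cos(t))
```

Applying the inverse rotation to the dose grid afterward recovers the result in the original phantom frame; a zero gantry angle leaves coordinates unchanged.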
Hu, Rui; Yu, Yiqi
2016-09-08
For efficient and accurate temperature predictions of sodium fast reactor structures, a 3-D full-core conjugate heat transfer modeling capability is developed for an advanced system analysis tool, SAM. The hexagon lattice core is modeled with 1-D parallel channels representing the subassembly flow, and 2-D duct walls and inter-assembly gaps. The six sides of the hexagon duct wall and the near-wall coolant region are modeled separately to account for different temperatures and heat transfer between the coolant flow and each side of the duct wall. The Jacobian-Free Newton Krylov (JFNK) solution method is applied to solve the fluid and solid fields simultaneously in a fully coupled fashion. The 3-D full-core conjugate heat transfer modeling capability in SAM has been demonstrated on a verification test problem with 7 fuel assemblies in a hexagon lattice layout. In addition, the SAM simulation results are compared with RANS-based CFD simulations. Very good agreement has been achieved between the results of the two approaches.
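The key ingredient of a JFNK scheme is that the Krylov solver never needs the Jacobian matrix explicitly, only Jacobian-vector products, which can be approximated from residual evaluations alone. The minimal sketch below shows that core finite-difference product; it is a generic illustration, not SAM code.

```python
def jacvec(F, u, v, eps=1e-7):
    """Matrix-free approximation of J(u) @ v for residual F via a forward
    finite difference: J v ~ (F(u + eps*v) - F(u)) / eps. This is the
    operation a Krylov solver (e.g., GMRES) requests inside JFNK."""
    fu = F(u)
    up = [ui + eps * vi for ui, vi in zip(u, v)]
    return [(fp - f0) / eps for fp, f0 in zip(F(up), fu)]
```

Because only F is evaluated, the fluid and solid residuals can be coupled in one F without ever assembling or storing the combined Jacobian.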
Meta-tips for lab-on-fiber optrodes
NASA Astrophysics Data System (ADS)
Principe, M.; Consales, M.; Micco, A.; Crescitelli, A.; Castaldi, G.; Esposito, E.; La Ferrara, V.; Cutolo, A.; Galdi, V.; Cusano, A.
2016-05-01
We realize the first optical-fiber "meta-tip", which integrates a metasurface on the tip of an optical fiber. In our proposed configuration, a Babinet-inverted plasmonic metasurface is fabricated by patterning (via focused ion beam) an array of rectangular aperture nanoantennas in a thin gold film. By spatially modulating the nanoantenna size, we tune their resonances so as to impress abrupt, arbitrary phase variations on the transmitted wavefront. As a proof of principle, we fabricate and characterize several prototypes implementing near-infrared beam steering at various angles. We also explore the limiting case in which surface waves are excited, and the device's capability to work as a refractive-index sensor. Notably, its sensitivity far exceeds that of the corresponding gradient-free plasmonic array, paving the way to the use of metasurfaces for label-free chemical and biological sensing. Our experimental results, in fairly good agreement with numerical predictions, demonstrate the practical feasibility of the meta-tip concept, and set the stage for the integration of metasurfaces, and their exceptional light-manipulation capabilities, into fiber-optic technological platforms within the emerging "lab-on-fiber" paradigm.
The UK Military Experience of Thoracic Injury in the Wars in Iraq and Afghanistan
2013-01-01
investigations including computed tomography (CT), laboratory and blood bank. A Role 4 hospital is a fixed capability in the home nation capable of providing full...not an independent predictor of mortality in our model. Goodness of the logistic regression model fit was demonstrated using a Hosmer and Lemeshow test...of good practice and ethical care; thus we believe the hidden mortality is minimal. It is possible that in some circumstances, the desire to do
Coupling Matched Molecular Pairs with Machine Learning for Virtual Compound Optimization.
Turk, Samo; Merget, Benjamin; Rippmann, Friedrich; Fulle, Simone
2017-12-26
Matched molecular pair (MMP) analyses are widely used in compound optimization projects to gain insights into structure-activity relationships (SAR). The analysis is traditionally done via statistical methods but can also be employed together with machine learning (ML) approaches to extrapolate to novel compounds. The MMP/ML method introduced here combines a fragment-based MMP implementation with different machine learning methods to obtain automated SAR decomposition and prediction. To test the prediction capabilities and model transferability, two different compound optimization scenarios were designed: (1) "new fragments", which occurs when exploring new fragments for a defined compound series, and (2) "new static core and transformations", which resembles, for instance, the identification of a new compound series. Very good results were achieved by all employed machine learning methods, especially for the new fragments case, but overall deep neural network models performed best, allowing reliable predictions also for the new static core and transformations scenario, where comprehensive SAR knowledge of the compound series is missing. Furthermore, we show that models trained on all available data have a higher generalizability compared to models trained on focused series and can extend beyond the chemical space covered in the training data. Thus, coupling MMP with deep neural networks provides a promising approach to making high-quality predictions on various data sets and in different compound optimization scenarios.
Prediction of Frequency for Simulation of Asphalt Mix Fatigue Tests Using MARS and ANN
Ghanizadeh, Ali Reza; Fakhri, Mansour
2014-01-01
Fatigue life of asphalt mixes in laboratory tests is commonly determined by applying a sinusoidal or haversine waveform with a specific frequency. The pavement structure and loading conditions affect the shape and frequency of the tensile response pulses at the bottom of the asphalt layer. This paper introduces two methods for predicting the loading frequency in laboratory asphalt fatigue tests for better simulation of field conditions. Five thousand (5000) four-layered pavement sections were analyzed, and the stress and strain response pulses in both the longitudinal and transverse directions were determined. After fitting the haversine function to the response pulses using the concept of the equal-energy pulse, the effective length of the response pulses was determined. Two methods, Multivariate Adaptive Regression Splines (MARS) and Artificial Neural Networks (ANN), were then employed to predict the effective length (i.e., frequency) of the tensile stress and strain pulses in the longitudinal and transverse directions based on the haversine waveform. It is shown that, under both controlled-stress and controlled-strain modes, both methods (MARS and ANN) are capable of predicting the frequency of loading in HMA fatigue tests with very good accuracy. The ANN method is, however, more accurate than the MARS method. It is furthermore shown that the results of the present study can be generalized to a sinusoidal waveform by a simple equation. PMID:24688400
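The haversine pulse referred to above has the standard form h(t) = A*sin^2(pi*t/T); under one simple reading of the equal-energy-pulse concept, the effective length T can be recovered by matching the pulse energy, as in this sketch on synthetic data (illustrative values, not the paper's computed response histories):

```python
import numpy as np

# Haversine pulse: h(t) = A * sin(pi * t / T)**2 on [0, T].
# Under one simple reading of the equal-energy-pulse concept, T is chosen
# so the haversine has the same peak A and energy E = integral of h(t)**2
# as the computed response pulse; for the haversine that integral equals
# (3/8) * A**2 * T, hence T = 8 * E / (3 * A**2).
t = np.linspace(0.0, 0.1, 1001)
pulse = 120e-6 * np.sin(np.pi * t / 0.1) ** 2   # synthetic strain response pulse

A = pulse.max()
E = np.sum(pulse ** 2) * (t[1] - t[0])          # pulse energy (rectangle rule)
T_eff = 8.0 * E / (3.0 * A ** 2)                # effective pulse length, s
f_hz = 1.0 / T_eff                              # loading frequency, Hz
```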
Wang, Qingzhi; Zhao, Hongxia; Wang, Yan; Xie, Qing; Chen, Jingwen; Quan, Xie
2017-09-08
Organophosphate flame retardants (OPFRs) are ubiquitous in the environment. To better understand and predict their environmental transport and fate, well-defined physicochemical properties are required. Vapor pressures (P) of 14 OPFRs were estimated as a function of temperature (T) by gas chromatography (GC), with 1,1,1-trichloro-2,2-bis(4-chlorophenyl)ethane (p,p'-DDT) used as a reference substance. Their log P GC values and internal energies of phase transfer (Δvap H) ranged from -6.17 to -1.25 and from 74.1 kJ/mol to 122 kJ/mol, respectively. Substitution pattern and molar volume (V M) were found to influence the log P GC values of the OPFRs. The halogenated alkyl-OPFRs had lower log P GC values than aryl- or alkyl-OPFRs, and the larger the molar volume, the smaller the log P GC value. In addition, a quantitative structure-property relationship (QSPR) model of log P GC versus different relative retention times (RRTs) was developed with a high cross-validated value (Q 2 cum) of 0.946, indicating good predictive ability and stability. Therefore, the log P GC values of OPFRs without a standard substance can be predicted from their RRTs on different GC columns.
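The cross-validated Q^2 statistic quoted for the QSPR model is built from leave-one-out prediction residuals; a generic sketch on synthetic log P vs. relative-retention-time data (illustrative numbers and a simple univariate linear model, not the paper's model):

```python
import numpy as np

def q2_loo(x, y):
    """Leave-one-out cross-validated Q^2 for a simple linear model y ~ a*x + b."""
    n = len(x)
    press = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        a, b = np.polyfit(x[mask], y[mask], 1)   # refit without sample i
        press += (y[i] - (a * x[i] + b)) ** 2    # squared prediction residual
    return 1.0 - press / np.sum((y - y.mean()) ** 2)

rng = np.random.default_rng(0)
rrt = np.linspace(0.2, 2.0, 14)                      # relative retention times
logp = -6.0 + 2.5 * rrt + rng.normal(0.0, 0.05, 14)  # synthetic log P values
q2 = q2_loo(rrt, logp)   # near 1 for a strongly linear relationship
```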
Corner Wrinkling at a Square Membrane Due to Symmetric Mechanical Loads
NASA Technical Reports Server (NTRS)
Blandino, Joseph R.; Johnston, John D.; Dharamsi, Urmil K.; Brodeur, Stephen J. (Technical Monitor)
2001-01-01
Thin-film membrane structures are under consideration for use in many future gossamer spacecraft systems. Examples include sunshields for large-aperture telescopes, solar sails, and membrane optics. The development of capabilities for testing and analyzing pre-tensioned, thin-film membrane structures is an important and challenging aspect of gossamer spacecraft technology development. This paper presents results from experimental and computational studies performed to characterize the wrinkling behavior of thin-film membranes under mechanical loading. The test article is a 500 mm square membrane subjected to symmetric corner loads. Data are presented for loads ranging from 0.49 N to 4.91 N. The experimental results show that as the load increases the number of wrinkles increases, while the wrinkle amplitude decreases. The computational model uses a finite element implementation of Stein-Hedgepeth membrane wrinkling theory to predict the behavior of the membrane. Comparisons were made with experimental results for the wrinkle angle and wrinkled region. There was reasonably good agreement between the measured wrinkle angle and the predicted directions of the major principal stresses. The shape of the wrinkle region predicted by the finite element model matches that observed in the experiments; however, the size of the predicted region is smaller than that determined in the experiments.
Wang, Qingzhi; Zhao, Hongxia; Wang, Yan; Xie, Qing; Chen, Jingwen; Quan, Xie
2017-11-01
Organophosphate flame retardants (OPFRs) have attracted wide concern due to their toxicities and ubiquitous occurrence in the environment. In this work, octanol-air partition coefficients (K OA) for 14 OPFRs, including 4 halogenated alkyl-, 5 aryl- and 5 alkyl-OPFRs, were estimated as a function of temperature using a gas chromatographic retention time (GC-RT) method. Their log K OA-GC values and internal energies of phase transfer (Δ OA U/kJ mol -1) ranged from 8.03 to 13.0 and from 69.7 to 149, respectively. Substitution pattern and molar volume (V M) were found to influence the log K OA-GC values of the OPFRs. The halogenated alkyl-OPFRs had higher log K OA-GC values than aryl- or alkyl-OPFRs, and the larger the molar volume, the higher the log K OA-GC value. In addition, a predictive model of log K OA-GC versus different relative retention times (RRTs) was developed with a high cross-validated value (Q 2 (cum)) of 0.951, indicating good predictive ability and stability. Therefore, the log K OA-GC values of the remaining OPFRs can be predicted from their RRTs on different GC columns.
Parrish, Rudolph S.; Smith, Charles N.
1990-01-01
A quantitative method is described for testing whether model predictions fall within a specified factor of true values. The technique is based on classical theory for confidence regions on unknown population parameters and can be related to hypothesis testing in both univariate and multivariate situations. A capability index is defined that can be used as a measure of predictive capability of a model, and its properties are discussed. The testing approach and the capability index should facilitate model validation efforts and permit comparisons among competing models. An example is given for a pesticide leaching model that predicts chemical concentrations in the soil profile.
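One simple univariate reading of the factor-of-true test described above is a Student-t confidence interval on the log prediction/observation ratios, checked against the log of the specified factor (hypothetical data; the paper's multivariate confidence-region formulation is more general):

```python
import math
from statistics import mean, stdev

def within_factor(pred, obs, factor=2.0, t_crit=2.262):
    """Test whether the confidence interval for the mean log(pred/obs)
    ratio lies inside +/- log(factor).

    t_crit is the two-sided Student-t critical value for n - 1 degrees of
    freedom (2.262 for n = 10 at the 95% level).  A minimal univariate
    sketch of testing predictive capability against a factor-of-k criterion.
    """
    r = [math.log(p / o) for p, o in zip(pred, obs)]
    half = t_crit * stdev(r) / math.sqrt(len(r))
    return -math.log(factor) < mean(r) - half and mean(r) + half < math.log(factor)

obs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
pred = [1.1, 1.9, 3.2, 3.8, 5.1, 6.3, 6.8, 8.2, 9.4, 9.7]
ok = within_factor(pred, obs)   # predictions well within a factor of 2
```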
Building a Predictive Capability for Decision-Making that Supports MultiPEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carmichael, Joshua Daniel
Multi-phenomenological explosion monitoring (multiPEM) is a developing science that uses multiple geophysical signatures of explosions to better identify and characterize their sources. MultiPEM researchers seek to integrate explosion signatures together to provide stronger detection, parameter estimation, or screening capabilities between different sources or processes. This talk will address forming a predictive capability for screening waveform explosion signatures to support multiPEM.
NASA Astrophysics Data System (ADS)
Mura, Matteo; Bottalico, Francesca; Giannetti, Francesca; Bertani, Remo; Giannini, Raffaello; Mancini, Marco; Orlandini, Simone; Travaglini, Davide; Chirici, Gherardo
2018-04-01
The spatial prediction of growing stock volume is one of the most frequent applications of remote sensing for supporting the sustainable management of forest ecosystems. For this purpose, data from active or passive sensors are used as predictor variables in combination with measurements taken in the field in sampling plots. The Sentinel-2 (S2) satellites are equipped with a Multi Spectral Instrument (MSI) capable of acquiring 13 bands in the visible and infrared domains with a spatial resolution varying between 10 and 60 m. The present study aimed at evaluating the performance of S2-MSI imagery for estimating the growing stock volume of forest ecosystems. To do so we used 240 plots measured in two study areas in Italy. The imputation was carried out with eight k-Nearest Neighbours (k-NN) methods available in the open-source yaImpute R package. In order to evaluate the S2-MSI performance we repeated the experimental protocol with two other sets of images acquired by two well-known satellites equipped with multispectral instruments: Landsat 8 OLI and the RapidEye scanner. We found that S2 performed better than Landsat in 37.5% of the cases and better than RapidEye in 62.5% of the cases. In one study area the best performance was obtained with Landsat OLI (RMSD = 6.84%) and in the other with S2 (RMSD = 22.94%), both with the k-NN system based on a distance matrix calculated with the Random Forest algorithm. The results confirmed that S2 images are suitable for predicting growing stock volume with good performance (average RMSD for both test areas of less than 19%).
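The RMSD figures quoted are relative root-mean-square differences between imputed and field-measured volumes; a minimal sketch of a plain Euclidean k-NN imputation and the RMSD% computation (toy data; the study used the yaImpute R package, including a Random-Forest-based distance matrix):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    """Plain Euclidean k-NN regression (a simple stand-in for the k-NN
    imputation methods in the yaImpute R package)."""
    preds = []
    for x in X_test:
        nn = np.argsort(np.linalg.norm(X_train - x, axis=1))[:k]
        preds.append(y_train[nn].mean())
    return np.array(preds)

def rmsd_percent(y_true, y_pred):
    """Relative root-mean-square difference, in percent of the mean observation."""
    return 100.0 * np.sqrt(np.mean((y_true - y_pred) ** 2)) / y_true.mean()

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, (240, 4))            # toy spectral predictors
y = 150.0 * X[:, 0] + 50.0 * X[:, 1] + 20.0    # growing stock volume, m3/ha
y_hat = knn_predict(X[:200], y[:200], X[200:])
err = rmsd_percent(y[200:], y_hat)             # relative RMSD, percent
```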
Brackman, Emily H; Morris, Blair W; Andover, Margaret S
2016-01-01
The interpersonal psychological theory of suicide (IPTS) provides a useful framework for considering the relationship between non-suicidal self-injury (NSSI) and suicide. Researchers propose that NSSI increases acquired capability for suicide. We predicted that both NSSI frequency and the IPTS acquired capability construct (decreased fear of death and increased pain tolerance) would separately interact with suicidal ideation to predict suicide attempts. Undergraduate students (N = 113) completed self-report questionnaires, and a subsample (n = 66) also completed a pain sensitivity task. NSSI frequency significantly moderated the association between suicidal ideation and suicide attempts. However, in a separate model, acquired capability did not moderate this relationship. Our understanding of the relationship between suicidal ideation and suicidal behavior can be enhanced by factors associated with NSSI that are distinct from the acquired capability construct.
NASA Technical Reports Server (NTRS)
Maxwell, M. S.
1984-01-01
Present technology allows radiometric monitoring of the Earth, ocean and atmosphere from a geosynchronous platform with good spatial, spectral and temporal resolution. The proposed system could provide a capability for multispectral remote sensing with a 50 m nadir spatial resolution in the visible bands, 250 m in the 4 micron band and 1 km in the 11 micron thermal infrared band. The diffraction-limited telescope has a 1 m aperture, a 10 m focal length (with a shorter focal length in the infrared) and linear and area arrays of detectors. The diffraction-limited resolution applies to scenes of any brightness, and for dark, low-contrast scenes the system's good signal-to-noise ratio contributes to its observation capability. The capabilities of the AGP system are assessed for quantitative observations of ocean scenes. Instrument and ground system configurations are presented and projected sensor capabilities are analyzed.
Zhang, Shu-Xin; Chai, Xin-Sheng; He, Liang
2016-09-16
This work reports a method for the accurate determination of fiber water-retaining capability at process conditions by headspace gas chromatography (HS-GC). The method is based on HS-GC measurement of the water vapor in a set of closed vials, each containing a given amount of pulp with a different amount of added water, ranging from under-saturation to over-saturation. By plotting the equilibrated water vapor signal vs. the amount of water added to the pulp, two different trend lines are observed, and the transition between the lines corresponds to the fiber water-retaining capability. The results showed that the HS-GC method has good measurement precision (much better than the reference method) and good accuracy. The present method can also be used for determining pulp fiber water-retaining capability at process temperatures in both laboratory research and mill applications.
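The water-retaining capability is read off as the intersection of the two vapor-signal trend lines; a sketch of that break-point calculation on synthetic data (slopes, signal levels and saturation point are illustrative, not the paper's measurements):

```python
import numpy as np

# Synthetic HS-GC response: below saturation the equilibrated vapor signal
# rises with water addition; above saturation it plateaus.  The true
# break point here is 1.5 g water per g pulp (illustrative numbers).
water_added = np.linspace(0.0, 4.0, 17)          # g water per g pulp
signal = np.where(water_added < 1.5,
                  10.0 + 40.0 * water_added,      # under-saturated branch
                  10.0 + 40.0 * 1.5)              # over-saturated branch

# Fit each branch separately and intersect the two trend lines.
under = water_added < 1.5
a1, b1 = np.polyfit(water_added[under], signal[under], 1)
a2, b2 = np.polyfit(water_added[~under], signal[~under], 1)
retention = (b2 - b1) / (a1 - a2)   # water-retaining capability, g/g
```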
2007-06-05
whether or not the module is capable of changing from a not-good to a good state. If this is true, the module is associated with a portion of main memory...the module is capable of changing from a good to a not-good state. A false value reflects the ability of good software to protect itself from...qualities of the module. One exception to this rule is that the kernel is corruptible if and only if the hypervisor is not good, since a bad
Space shuttle booster multi-engine base flow analysis
NASA Technical Reports Server (NTRS)
Tang, H. H.; Gardiner, C. R.; Anderson, W. A.; Navickas, J.
1972-01-01
A comprehensive review of currently available techniques pertinent to several prominent aspects of the base thermal problem of the space shuttle booster is given along with a brief review of experimental results. A tractable engineering analysis, capable of predicting the power-on base pressure, base heating, and other base thermal environmental conditions, such as base gas temperature, is presented and used for an analysis of various space shuttle booster configurations. The analysis consists of a rational combination of theoretical treatments of the prominent flow interaction phenomena in the base region. These theories consider jet mixing, plume flow, axisymmetric flow effects, base injection, recirculating flow dynamics, and various modes of heat transfer. Such effects as initial boundary layer expansion at the nozzle lip, reattachment, recompression, choked vent flow, and nonisoenergetic mixing processes are included in the analysis. A unified method was developed and programmed to numerically obtain compatible solutions for the various flow field components in both flight and ground test conditions. Preliminary prediction for a 12-engine space shuttle booster base thermal environment was obtained for a typical trajectory history. Theoretical predictions were also obtained for some clustered-engine experimental conditions. Results indicate good agreement between the data and theoretical predictions.
Analysis of Wind Tunnel Oscillatory Data of the X-31A Aircraft
NASA Technical Reports Server (NTRS)
Smith, Mark S.
1999-01-01
Wind tunnel oscillatory tests in pitch, roll, and yaw were performed on a 19%-scale model of the X-31A aircraft. These tests were used to study the aerodynamic characteristics of the X-31A in response to harmonic oscillations at six frequencies. In-phase and out-of-phase components of the aerodynamic coefficients were obtained over a range of angles of attack from 0 to 90 deg. To account for the effect of frequency on the data, mathematical models with unsteady terms were formulated by use of two different indicial functions. Data from a reduced set of frequencies were used to estimate model parameters, including steady-state static and dynamic stability derivatives. Both models showed good prediction capability and the ability to accurately fit the measured data. Estimated static stability derivatives compared well with those obtained from static wind tunnel tests. The roll and yaw rate derivative estimates were compared with rotary-balance wind tunnel data and theoretical predictions. The estimates and theoretical predictions were in agreement at small angles of attack. The rotary-balance data showed, in general, acceptable agreement with the steady-state derivative estimates.
Heat Transfer Measurements and Predictions on a Power Generation Gas Turbine Blade
NASA Technical Reports Server (NTRS)
Giel, Paul W.; Bunker, Ronald S.; VanFossen, G. James; Boyle, Robert J.
2000-01-01
Detailed heat transfer measurements and predictions are given for a power generation turbine rotor with 129 deg of nominal turning and an axial chord of 137 mm. Data were obtained for a set of four exit Reynolds numbers comprised of the design point of 628,000, -20%, +20%, and +40%. Three ideal exit pressure ratios were examined including the design point of 1.378, -10%, and +10%. Inlet incidence angles of 0 deg and +/-2 deg were also examined. Measurements were made in a linear cascade with highly three-dimensional blade passage flows that resulted from the high flow turning and thick inlet boundary layers. Inlet turbulence was generated with a blown square bar grid. The purpose of the work is the extension of three-dimensional predictive modeling capability for airfoil external heat transfer to engine specific conditions including blade shape, Reynolds numbers, and Mach numbers. Data were obtained by a steady-state technique using a thin-foil heater wrapped around a low thermal conductivity blade. Surface temperatures were measured using calibrated liquid crystals. The results show the effects of strong secondary vortical flows, laminar-to-turbulent transition, and also show good detail in the stagnation region.
A General Method for Predicting Amino Acid Residues Experiencing Hydrogen Exchange
Wang, Boshen; Perez-Rathke, Alan; Li, Renhao; Liang, Jie
2018-01-01
Information on protein hydrogen exchange can help delineate key regions involved in protein-protein interactions and provides important insight towards determining functional roles of genetic variants and their possible mechanisms in disease processes. Previous studies have shown that the degree of hydrogen exchange is affected by hydrogen bond formations, solvent accessibility, proximity to other residues, and experimental conditions. However, a general predictive method for identifying residues capable of hydrogen exchange transferable to a broad set of proteins is lacking. We have developed a machine learning method based on random forest that can predict whether a residue experiences hydrogen exchange. Using data from the Start2Fold database, which contains information on 13,306 residues (3,790 of which experience hydrogen exchange and 9,516 which do not exchange), our method achieves good performance. Specifically, we achieve an overall out-of-bag (OOB) error, an unbiased estimate of the test set error, of 20.3 percent. Using a randomly selected test data set consisting of 500 residues experiencing hydrogen exchange and 500 which do not, our method achieves an accuracy of 0.79, a recall of 0.74, a precision of 0.82, and an F1 score of 0.78.
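The reported F1 score is consistent with the quoted precision and recall, since F1 is their harmonic mean; a quick check (simple arithmetic, not the authors' pipeline):

```python
def f1_score(precision, recall):
    """F1 is the harmonic mean of precision and recall."""
    return 2.0 * precision * recall / (precision + recall)

# Values quoted in the abstract for the 500/500 held-out residue set.
f1 = f1_score(precision=0.82, recall=0.74)   # rounds to the reported 0.78
```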
NASA Technical Reports Server (NTRS)
Suzen, Y. B.; Huang, P. G.
2005-01-01
A transport equation for the intermittency factor is employed to predict transitional flows under the effects of pressure gradients, freestream turbulence intensities, Reynolds number variations, flow separation and reattachment, and unsteady wake-blade interactions, representing diverse operating conditions encountered in low-pressure turbines. The intermittent behaviour of the transitional flows is taken into account and incorporated into the computations by modifying the eddy viscosity, μ_t, with the intermittency factor, γ. Turbulent quantities are predicted by using Menter's two-equation turbulence model (SST). The onset location of transition is obtained from correlations based on boundary-layer momentum thickness, acceleration parameter, and turbulence intensity. The intermittency factor is obtained from a transport model which can produce both the experimentally observed streamwise variation of intermittency and a realistic profile in the cross-stream direction. The intermittency transport model is tested and validated against several well-documented low-pressure turbine experiments ranging from flat-plate cases to unsteady wake-blade interaction experiments. Overall, good agreement between the experimental data and computational results is obtained, illustrating the predictive capabilities of the model and the current intermittency transport modelling approach for transitional flow simulations.
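The intermittency modification described above amounts to scaling the fully turbulent eddy viscosity by the intermittency factor; a minimal sketch (variable names are assumptions, not taken from the paper's solver):

```python
def effective_eddy_viscosity(mu_t, gamma):
    """Scale the fully turbulent eddy viscosity mu_t by the intermittency
    factor gamma (0 = laminar flow, 1 = fully turbulent flow), as in
    gamma-based transition modelling."""
    if not 0.0 <= gamma <= 1.0:
        raise ValueError("intermittency factor must lie in [0, 1]")
    return gamma * mu_t

# In a pre-transitional region with gamma = 0.25, only a quarter of the
# fully turbulent eddy viscosity is applied.
mu_eff = effective_eddy_viscosity(mu_t=1.8e-3, gamma=0.25)
```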
Prediction and validation of blowout limits of co-flowing jet diffusion flames -- effect of dilution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karbasi, M.; Wierzba, I.
1996-10-01
The blowout limits of a co-flowing turbulent methane jet diffusion flame with addition of diluent to either the jet fuel or the surrounding air stream are studied both analytically and experimentally. Helium, nitrogen and carbon dioxide were employed as the diluents. Experiments indicated that an addition of diluents to the jet fuel or surrounding air stream decreased the stability limit of the jet diffusion flames. The strongest effect was observed with carbon dioxide as the diluent, followed by nitrogen and then by helium. A model of extinction based on the recognized criterion of the ratio of the mixing time scale to the characteristic combustion time scale, using experimentally derived correlations, is proposed. It is capable of predicting the large reduction of the jet blowout velocity due to a relatively small increase in the co-flow stream velocity, along with an increase in the concentration of diluent in either the jet fuel or the surrounding air stream. Experiments were carried out to validate the model. The predicted blowout velocities of turbulent jet diffusion flames obtained using this model are in good agreement with the corresponding experimental data.
NASA Astrophysics Data System (ADS)
Ranjan, R.; Menon, S.
2018-04-01
The two-level simulation (TLS) method evolves both the large- and the small-scale fields in a two-scale approach and has shown good predictive capabilities in both isotropic and wall-bounded high Reynolds number (Re) turbulent flows in the past. The sensitivity and ability of this modelling approach to predict fundamental features (such as backscatter, counter-gradient turbulent transport, small-scale vorticity, etc.) seen in high-Re turbulent flows are assessed here by using two direct numerical simulation (DNS) datasets corresponding to forced isotropic turbulence at a Taylor-microscale-based Reynolds number Reλ ≈ 433 and a fully developed turbulent flow in a periodic channel at friction Reynolds number Reτ ≈ 1000. It is shown that TLS captures the dynamics of local co-/counter-gradient transport and backscatter at the requisite scales of interest. These observations are further confirmed through an a posteriori investigation of the flow in a periodic channel at Reτ = 2000. The results reveal that the TLS method can capture both the large- and the small-scale flow physics in a consistent manner, and at a reduced overall cost when compared to the estimated DNS or wall-resolved LES cost.
Gómez-Carracedo, M P; Andrade, J M; Rutledge, D N; Faber, N M
2007-03-07
Selecting the correct dimensionality is critical for obtaining partial least squares (PLS) regression models with good predictive ability. Although calibration and validation sets are best established using experimental designs, industrial laboratories cannot afford such an approach. Typically, samples are collected in a (formally) undesigned way, spread over time, and their measurements are included in routine measurement processes. This makes it hard to evaluate PLS model dimensionality. In this paper, classical criteria (leave-one-out cross-validation and adjusted Wold's criterion) are compared to recently proposed alternatives (smoothed PLS-PoLiSh and a randomization test) to seek out the optimum dimensionality of PLS models. Kerosene (jet fuel) samples were measured by attenuated total reflectance-mid-IR spectrometry and their spectra were used to predict eight important properties determined using reference methods that are time-consuming and prone to analytical errors. The alternative methods were shown to give reliable dimensionality predictions when compared to external validation. By contrast, the simpler methods seemed to be largely affected by the largest changes in the modeling capabilities of the first components.
A Continuum Model for the Effect of Dynamic Recrystallization on the Stress–Strain Response
Perdahcıoğlu, E. S.; van den Boogaard, A. H.
2018-01-01
Austenitic stainless steels and High-Strength Low-Alloy (HSLA) steels show significant dynamic recovery and dynamic recrystallization (DRX) during hot forming. In order to design optimal and safe hot-formed products, a good understanding and constitutive description of the material behavior is vital. A new continuum model is presented and validated over a wide range of deformation conditions, including high-strain-rate deformation. The model is presented in rate form to allow for the prediction of material behavior under transient process conditions. The proposed model is capable of accurately describing the stress–strain behavior of AISI 316LN under hot forming conditions; the high-strain-rate DRX-induced softening observed during hot torsion of HSLA steel is also accurately predicted. It is shown that the increase in recrystallization rate at high strain rates observed in experiments can be captured by including the elastic energy due to the dynamic stress in the driving pressure for recrystallization. Furthermore, the predicted resulting grain sizes follow the power-law dependence on steady-state stress that is often reported in the literature, and their evolution during hot deformation shows the expected trend. PMID:29789492
Development of PRIME for irradiation performance analysis of U-Mo/Al dispersion fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeong, Gwan Yoon; Kim, Yeon Soo; Jeong, Yong Jin
A prediction code for the thermo-mechanical performance of research reactor fuel (PRIME) has been developed, with the implementation of newly developed models to analyze the irradiation behavior of U-Mo dispersion fuel. The code is capable of predicting the two-dimensional thermal and mechanical performance of U-Mo dispersion fuel during irradiation. A finite element method was employed to solve the governing equations for thermal and mechanical equilibria. Temperature- and burnup-dependent material properties of the fuel meat constituents and cladding were used. The numerical solution schemes in PRIME were verified by benchmarking against solutions obtained using a commercial finite element analysis program (ABAQUS). The code was validated using irradiation data from the RERTR, HAMP-1, and E-FUTURE tests. The measured irradiation data used in the validation were the IL thickness and volume fractions of the fuel meat constituents for the thermal analysis, and profiles of the plate thickness changes and fuel meat swelling for the mechanical analysis. The prediction results were in good agreement with the measurement data for both the thermal and mechanical analyses, confirming the validity of the code.
NASA Technical Reports Server (NTRS)
Hollis, Brian R.; Hollingsworth, Kevin E.
2014-01-01
Aeroheating data on mid lift-to-drag ratio entry vehicle configurations have been obtained through hypersonic wind tunnel testing. Vehicles of this class have been proposed for high-mass Mars missions, such as sample return and crewed exploration, for which the conventional sphere-cone entry vehicle geometries of previous Mars missions are insufficient. Several configurations were investigated, including elliptically-blunted cylinders with both circular and elliptical cross sections, biconic geometries based on launch vehicle dual-use shrouds, and parametrically-optimized analytic geometries. Testing was conducted at Mach 6 over a range of Reynolds numbers sufficient to generate laminar, transitional, and turbulent flow. Global aeroheating data were obtained using phosphor thermography. Both stream-wise and cross-flow transition occurred on different configurations. Comparisons were made with laminar and turbulent computational predictions generated with an algebraic turbulence model. Predictions were generally in good agreement in regions of laminar or fully-turbulent flow; however, for transitional cases, the lack of a transition onset prediction capability produced less accurate comparisons. The data obtained in this study are intended to be used for preliminary mission design studies and the development and validation of computational methods.
NASA Astrophysics Data System (ADS)
Kim, Seokpum; Wei, Yaochi; Horie, Yasuyuki; Zhou, Min
2018-05-01
The design of new materials requires establishing macroscopic measures of material performance as functions of microstructure. Traditionally, this process has been an empirical endeavor. An approach to computationally predict the probabilistic ignition thresholds of polymer-bonded explosives (PBXs) using mesoscale simulations is developed. The simulations explicitly account for microstructure, constituent properties, and interfacial responses, and capture the processes responsible for the development of hotspots and damage. The specific mechanisms tracked include viscoelasticity, viscoplasticity, fracture, post-fracture contact, frictional heating, and heat conduction. The probabilistic analysis uses sets of statistically similar microstructure samples to directly mimic relevant experiments for quantification of statistical variations of material behavior due to inherent material heterogeneities. The predicted thresholds and ignition probabilities are expressed in James-type and Walker-Wasley-type relations, leading to explicit analytical expressions for the ignition probability as a function of loading. Specifically, the ignition thresholds corresponding to any given level of ignition probability, along with ignition probability maps, are predicted for PBX 9404 in the loading regime of Up = 200-1200 m/s, where Up is the particle speed. The predicted results are in good agreement with available experimental measurements. A parametric study also shows that binder properties can significantly affect the macroscopic ignition behavior of PBXs. The capability to computationally predict macroscopic engineering material response relations from material microstructures and basic constituent and interfacial properties lends itself to the design of new materials as well as the analysis of existing materials.
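The final step described above, turning go/no-go outcomes into an explicit ignition-probability expression, can be illustrated with a hand-coded logistic fit. Everything below is synthetic: the 700 m/s latent threshold, the sample size, and the plain logistic form (used here instead of the James-type and Walker-Wasley-type relations of the paper) are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical go/no-go ignition outcomes versus particle speed Up (m/s),
# standing in for the statistical runs over microstructure sample sets.
up = rng.uniform(200, 1200, 300)
true_p = 1 / (1 + np.exp(-(up - 700) / 60))      # assumed latent threshold
ign = (rng.uniform(size=up.size) < true_p).astype(float)

# Plain logistic regression P(ignition | Up), fitted by Newton-Raphson
# on a standardized covariate for numerical stability.
z = (up - up.mean()) / up.std()
Xd = np.column_stack([np.ones_like(z), z])
beta = np.zeros(2)
for _ in range(25):
    eta = np.clip(Xd @ beta, -30, 30)            # guard against overflow
    p = 1 / (1 + np.exp(-eta))
    grad = Xd.T @ (ign - p)                      # score of the log-likelihood
    hess = -(Xd * (p * (1 - p))[:, None]).T @ Xd # Hessian (negative definite)
    beta -= np.linalg.solve(hess, grad)

# Particle speed at 50% ignition probability (the predicted threshold).
up50 = float(up.mean() - beta[0] / beta[1] * up.std())
```

With enough runs, `up50` recovers the latent 700 m/s threshold to within sampling error; the same machinery applies when the linear predictor is replaced by a James-type or Walker-Wasley-type loading function.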
Vaporization and Zonal Mixing in Performance Modeling of Advanced LOX-Methane Rockets
NASA Technical Reports Server (NTRS)
Williams, George J., Jr.; Stiegemeier, Benjamin R.
2013-01-01
Initial modeling of LOX-methane reaction control engine (RCE) 100 lbf thrusters and larger 5500 lbf thrusters with the TDK/VIPER code has shown good agreement with sea-level and altitude test data. However, the vaporization and zonal mixing upstream of the compressible flow stage of the models leveraged empirical trends to match the sea-level data. This was necessary in part because the codes are designed primarily to handle the compressible part of the flow (i.e., contraction through expansion) and in part because there was limited data on the thrusters themselves on which to base a rigorous model. A more rigorous model has been developed which includes detailed vaporization trends based on element type and geometry; radial variations in mixture ratio within each of the "zones" associated with elements, and not just between zones of different element types; and, to the extent possible, updated kinetic rates. The Spray Combustion Analysis Program (SCAP) was leveraged to support assumptions in the vaporization trends. Data from both thrusters are revisited, and the model maintains good predictive capability while addressing some of the major limitations of the previous version.
Assessing the detection capability of a dense infrasound network in the southern Korean Peninsula
NASA Astrophysics Data System (ADS)
Che, Il-Young; Le Pichon, Alexis; Kim, Kwangsu; Shin, In-Cheol
2017-08-01
The Korea Infrasound Network (KIN) is a dense seismoacoustic array network consisting of eight small-aperture arrays with an average interarray spacing of ∼100 km. Processing of the KIN historical recordings over 10 yr in the 0.05-5 Hz frequency band shows that the dominant sources of signals are microbaroms and human activities. The number of detections correlates well with the seasonal and daily variability of the stratospheric wind dynamics. The spatiotemporal variability of the KIN detection performance is quantified using a frequency-dependent semi-empirical propagation modelling technique. At a frequency of 1.6 Hz, the average detection thresholds predicted for the region of interest using both the KIN arrays and the International Monitoring System (IMS) infrasound station network are estimated to be 5.6 and 10.0 Pa for two- and three-station coverage, respectively, about three times lower than the thresholds predicted using only the IMS stations. The network performance is significantly enhanced from May to August, with detection thresholds one order of magnitude lower than during the rest of the year due to prevailing steady stratospheric winds. To validate the simulations, the amplitudes of repeated ground-truth surface mining explosions at an open-pit limestone mine were measured over a 19-month period. Focusing on the spatiotemporal variability of the stratospheric winds, which control to first order where infrasound signals are expected to be detected, the predicted detectable signal amplitude at the mine and the detection capability at one KIN array located at a distance of 175 km are found to be in good agreement with the observations from the measurement campaign. The detection threshold is ∼2 Pa in summer and increases up to ∼300 Pa in winter. Compared with the low and stable thresholds in summer, the high temporal variability of the KIN performance is well predicted throughout the year.
Simulations show that the performance of the global infrasound network of the IMS is significantly improved by adding KIN. This study shows the usefulness of dense regional networks to enhance detection capability in regions of interest in the context of future verification of the Comprehensive Nuclear-Test-Ban Treaty.
An, Ji-Yong; You, Zhu-Hong; Meng, Fan-Rong; Xu, Shu-Juan; Wang, Yin
2016-05-18
Protein-Protein Interactions (PPIs) play essential roles in most cellular processes. Knowledge of PPIs is becoming increasingly important, which has prompted the development of technologies capable of discovering large-scale PPIs. Although many high-throughput biological technologies have been proposed to detect PPIs, they have unavoidable shortcomings, including cost, time requirements, and inherently high false-positive and false-negative rates. For these reasons, in silico methods are attracting much attention due to their good performance in predicting PPIs. In this paper, we propose a novel computational method known as RVM-AB that combines the Relevance Vector Machine (RVM) model and Average Blocks (AB) to predict PPIs from protein sequences. The main improvements result from representing protein sequences using the AB feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise using Principal Component Analysis (PCA), and using an RVM-based classifier. We performed five-fold cross-validation experiments on yeast and Helicobacter pylori datasets, and achieved very high accuracies of 92.98% and 95.58% respectively, which are significantly better than those of previous works. In addition, we also obtained good prediction accuracies of 88.31%, 89.46%, 91.08%, 91.55%, and 94.81% on five other independent datasets (C. elegans, M. musculus, H. sapiens, H. pylori, and E. coli) for cross-species prediction. To further evaluate the proposed method, we compared it with the state-of-the-art support vector machine (SVM) classifier on the yeast dataset. The experimental results demonstrate that our RVM-AB method is clearly better than the SVM-based method. The promising experimental results show the efficiency and simplicity of the proposed method, which can serve as an automatic decision support tool.
To facilitate extensive studies for future proteomics research, we developed a freely available web server called RVMAB-PPI in Hypertext Preprocessor (PHP) for predicting PPIs. The web server including source code and the datasets are available at http://219.219.62.123:8888/ppi_ab/.
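The five-fold cross-validation protocol used to score RVM-AB is generic and easy to reproduce. The sketch below is illustrative only: synthetic Gaussian features stand in for the PSSM-based Average Blocks descriptors, and a toy nearest-centroid classifier stands in for the RVM, which is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for PSSM-derived feature vectors: two labeled classes
# (e.g. interacting vs non-interacting protein pairs).
X = np.vstack([rng.normal(0, 1, (100, 20)), rng.normal(1.5, 1, (100, 20))])
y = np.array([0] * 100 + [1] * 100)

def nearest_centroid_accuracy(X_tr, y_tr, X_te, y_te):
    """Toy classifier: assign each test point to the nearest class centroid."""
    centroids = np.stack([X_tr[y_tr == c].mean(axis=0) for c in (0, 1)])
    d = np.linalg.norm(X_te[:, None, :] - centroids[None, :, :], axis=2)
    return float(np.mean(d.argmin(axis=1) == y_te))

# Five-fold cross-validation: shuffle once, then rotate the held-out fold.
idx = rng.permutation(len(y))
folds = np.array_split(idx, 5)
accs = []
for k in range(5):
    test_idx = folds[k]
    train_idx = np.concatenate([folds[j] for j in range(5) if j != k])
    accs.append(nearest_centroid_accuracy(X[train_idx], y[train_idx],
                                          X[test_idx], y[test_idx]))

mean_acc = float(np.mean(accs))   # the figure reported as "CV accuracy"
```

The reported 92.98% and 95.58% accuracies are exactly this kind of fold-averaged figure, computed with the real RVM classifier and real sequence features.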
NASA Technical Reports Server (NTRS)
Schoeberl, Mark; Rychekewkitsch, Michael; Andrucyk, Dennis; McConaughy, Gail; Meeson, Blanche; Hildebrand, Peter; Einaudi, Franco (Technical Monitor)
2000-01-01
NASA's Earth Science Enterprise's long range vision is to enable the development of a national proactive environmental predictive capability through targeted scientific research and technological innovation. Proactive environmental prediction means the prediction of environmental events and their secondary consequences. These consequences range from disasters and disease outbreak to improved food production and reduced transportation, energy and insurance costs. The economic advantage of this predictive capability will greatly outweigh the cost of development. Developing this predictive capability requires a greatly improved understanding of the earth system and the interaction of the various components of that system. It also requires a change in our approach to gathering data about the earth and a change in our current methodology in processing that data including its delivery to the customers. And, most importantly, it requires a renewed partnership between NASA and its sister agencies. We identify six application themes that summarize the potential of proactive environmental prediction. We also identify four technology themes that articulate our approach to implementing proactive environmental prediction.
NASA Astrophysics Data System (ADS)
Liu, J.; Zhang, Q.; Yan, J. D.; Zhong, J.; Fang, M. T. C.
2016-11-01
It is shown that an arc model based on laminar flow cannot satisfactorily predict the voltage of an air arc burning in a supersonic nozzle. The Prandtl mixing length model (PML) and a modified k-epsilon turbulence model (MKE) are used to introduce turbulence-enhanced momentum and energy transport. Arc voltages predicted by these two turbulence models are in good agreement with experiments at a stagnation pressure (P0) of 10 bar. The arc voltages predicted by MKE for P0 = 13 bar and 7 bar are in better agreement with experiments than those predicted by PML. MKE is therefore the preferred turbulence model for an air nozzle arc. There are two peaks in ρCp of air, at 4000 K and 7000 K, due respectively to the dissociation of oxygen and of nitrogen. These peaks produce corresponding peaks in turbulent thermal conductivity, which result in a very broad radial temperature profile and a large arc radius. Thus, turbulence indirectly enhances axial enthalpy transport, which becomes the dominant energy transport process for the overall energy balance of the arc column at high currents. When the current reduces, turbulent thermal conduction gradually becomes dominant. The temperature dependence of ρCp has a decisive influence on the radial temperature profile of a turbulent arc, and thus on the thermal interruption capability of a gas. Comparison of ρCp for air and SF6 shows that ρCp for SF6 peaks below 4000 K. This gives a turbulent SF6 arc a distinctive arc core and a small arc radius, and thus superior arc quenching capability. It is suggested, for the first time, that ρCp provides guidance in the search for a replacement switching gas for SF6.
NASA Astrophysics Data System (ADS)
Al-Abadi, Alaa M.
2017-05-01
In recent years, the delineation of groundwater productivity zones has played an increasingly important role in the sustainable management of groundwater resources throughout the world. In this study, a groundwater productivity index (GWPI) for northeastern Wasit Governorate was delineated using probabilistic frequency ratio (FR) and Shannon's entropy models in a GIS framework. Eight factors believed to influence groundwater occurrence in the study area were selected and used as input data: elevation (m), slope angle (degree), geology, soil, aquifer transmissivity (m2/d), storativity (dimensionless), distance to river (m), and distance to faults (m). In the first step, a borehole location inventory map consisting of 68 boreholes with relatively high yield (>8 l/sec) was prepared; 47 boreholes (70%) were used as training data and the remaining 21 (30%) were used for validation. The predictive capability of each model was determined using the relative operating characteristic technique. The results indicate that the FR model, with a success rate of 87.4% and a prediction rate of 86.9%, performed slightly better than the Shannon's entropy model, with a success rate of 84.4% and a prediction rate of 82.4%. The resultant groundwater productivity index was classified into five classes using the natural breaks classification scheme: very low, low, moderate, high, and very high. The high and very high classes for the FR and Shannon's entropy models occupied 30% (217 km2) and 31% (220 km2) of the area, respectively, indicating low productivity conditions of the aquifer system. Both models were capable of prospecting the GWPI with very good results, but FR was better in terms of success and prediction rates. The results of this study could be helpful for better management of groundwater resources in the study area and give planners and decision makers an opportunity to prepare appropriate groundwater investment plans.
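The frequency ratio statistic behind the FR model is simple to compute: for each class of a conditioning factor, it is the share of training boreholes falling in that class divided by the share of the study area the class occupies. The sketch below uses made-up cell and borehole counts for a single hypothetical factor; the real analysis repeats this over all eight factors and sums the FR values per raster cell.

```python
import numpy as np

# Hypothetical example: one conditioning factor (say, slope angle) binned
# into three classes over a raster, with known training-borehole counts.
cells_in_class = np.array([500, 300, 200])   # raster cells per class
wells_in_class = np.array([10,  25,  12])    # training boreholes per class

pct_wells = wells_in_class / wells_in_class.sum()
pct_area  = cells_in_class / cells_in_class.sum()
fr = pct_wells / pct_area    # FR > 1: the class favors productive wells

# Groundwater productivity index of a cell = sum of the FR values of the
# classes it falls in, over all factors (a single factor is shown here).
cell_class = np.array([1, 0, 2])             # classes of three example cells
gwpi = fr[cell_class]
```

By construction the area-weighted mean of FR is exactly 1, so classes with FR above 1 are those where boreholes are over-represented relative to area.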
Spectrometric Estimation of Total Nitrogen Concentration in Douglas-Fir Foliage
NASA Technical Reports Server (NTRS)
Johnson, Lee F.; Billow, Christine R.; Peterson, David L. (Technical Monitor)
1995-01-01
Spectral measurements of fresh and dehydrated Douglas-fir foliage, from trees cultivated under three fertilization treatments, were acquired with a laboratory spectrophotometer. The slope (first derivative) of the fresh- and dry-leaf absorbance spectra at locations near known protein absorption features was strongly correlated with the total nitrogen (TN) concentration of the foliage samples. Particularly strong correlation was observed between the first-derivative spectra in the 2150-2170 nm region and TN, reaching a local maximum of -0.84 at 2160 nm in the fresh-leaf spectra. Stepwise regression was used to generate calibration equations relating first-derivative spectra from fresh, dry/intact, and dry/ground samples to TN concentration. Standard errors of calibration were 1.52 mg/g (fresh), 1.33 (dry/intact), and 1.20 (dry/ground), with goodness-of-fit 0.94 and greater. Cross-validation was performed with the fresh-leaf dataset to examine the predictive capability of the regression method; standard errors of prediction ranged from 1.47 to 2.37 mg/g across seven different validation sets, prediction goodness-of-fit ranged from 0.85 to 0.94, and wavelength selection was fairly insensitive to the membership of the calibration set. All regressions in this study tended to select wavelengths in the 2100-2350 nm region, with the primary selection in the 2142-2172 nm region. The study provides positive evidence concerning the feasibility of assessing TN status of fresh-leaf samples by spectrometric means. We assert that the ability to extract biochemical information from fresh-leaf spectra is a necessary but insufficient condition for the use of remote sensing in canopy-level biochemical estimation.
Roland, Lauren T.; Kallogjeri, Dorina; Sinks, Belinda C.; Rauch, Steven D.; Shepard, Neil T.; White, Judith A.; Goebel, Joel A.
2015-01-01
Objective: To test the performance of a focused dizziness questionnaire in discriminating between peripheral and non-peripheral causes of vertigo. Study Design: Prospective, multi-center. Setting: Four academic centers with experienced balance specialists. Patients: New dizzy patients. Interventions: A 32-question survey was given to participants. Balance specialists were blinded, and a diagnosis was established for all participating patients within 6 months. Main Outcomes: Multinomial logistic regression was used to evaluate questionnaire performance in predicting final diagnosis and differentiating between peripheral and non-peripheral vertigo. Univariate and multivariable stepwise logistic regression were used to identify questions as significant predictors of the ultimate diagnosis. The c-index was used to evaluate the performance and discriminative power of the multivariable models. Results: 437 patients participated in the study. Eight participants without confirmed diagnoses were excluded and 429 were included in the analysis. Multinomial regression revealed that the model had good overall predictive accuracy of 78.5% for the final diagnosis and 75.5% for differentiating between peripheral and non-peripheral vertigo. Univariate logistic regression identified significant predictors of three main categories of vertigo: peripheral, central, and other. Predictors were entered into forward stepwise multivariable logistic regression. The discriminative power of the final models for peripheral, central, and other causes was considered good, as measured by c-indices of 0.75, 0.70, and 0.78, respectively. Conclusions: This multicenter study demonstrates that a focused dizziness questionnaire can accurately predict diagnosis for patients with chronic/relapsing dizziness referred to outpatient clinics. Additionally, this survey has significant capability to differentiate peripheral from non-peripheral causes of vertigo and may, in the future, serve as a screening tool for specialty referral.
Clinical utility of this questionnaire to guide specialty referral is discussed. PMID:26485598
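The c-index used above to score the multivariable models has a simple interpretation: the probability that a randomly chosen case receives a higher predicted score than a randomly chosen non-case (ties counting one half); for binary outcomes it equals the ROC AUC. A small self-contained sketch, with made-up predicted probabilities and outcomes:

```python
import numpy as np

def c_index(scores, labels):
    """Concordance index: P(score of a random positive > score of a random
    negative), with ties counted as 1/2. Equals ROC AUC for binary labels."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    diff = pos[:, None] - neg[None, :]          # all positive/negative pairs
    return float((np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size)

# Hypothetical predicted probabilities of "peripheral vertigo" vs. outcomes.
p_hat = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2]
truth = [1,   1,   0,   1,   0,   0,   0]
ci = c_index(p_hat, truth)    # 11 of 12 pairs are concordant
```

A c-index of 0.5 is chance-level discrimination and 1.0 is perfect, which is why the reported 0.70-0.78 values are described as good.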
Taylor, Fiona G M; Quirke, Philip; Heald, Richard J; Moran, Brendan; Blomqvist, Lennart; Swift, Ian; Sebag-Montefiore, David J; Tekkis, Paris; Brown, Gina
2011-04-01
To assess local recurrence, disease-free survival, and overall survival in magnetic resonance imaging (MRI)-predicted good prognosis tumors treated by surgery alone. The MERCURY study reported that high-resolution MRI can accurately stage rectal cancer. The routine policy in most centers involved in the MERCURY study was primary surgery alone in MRI-predicted stage II or less and in MRI "good prognosis" stage III with selective avoidance of neoadjuvant therapy. Data were collected prospectively on all patients included in the MERCURY study who were staged as MRI-defined "good" prognosis tumors. "Good" prognosis included MRI-predicted safe circumferential resection margins, with MRI-predicted T2/T3a/T3b (less than 5 mm spread from muscularis propria), regardless of MRI N stage. None received preoperative or postoperative radiotherapy. Overall survival, disease-free survival, and local recurrence were calculated. Of 374 patients followed up in the MERCURY study, 122 (33%) were defined as "good prognosis" stage III or less on MRI. Overall and disease-free survival for all patients with MRI "good prognosis" stage I, II and III disease at 5 years was 68% and 85%, respectively. The local recurrence rate for this series of patients predicted to have a good prognosis tumor on MRI was 3%. The preoperative identification of good prognosis tumors using MRI will allow stratification of patients and better targeting of preoperative therapy. This study confirms the ability of MRI to select patients who are likely to have a good outcome with primary surgery alone.
Predicting turns in proteins with a unified model.
Song, Qi; Li, Tonghua; Cong, Peisheng; Sun, Jiangming; Li, Dapeng; Tang, Shengnan
2012-01-01
Turns are a critical element of the structure of a protein; turns play a crucial role in loops, folds, and interactions. Current prediction methods are well developed for the prediction of individual turn types, including α-turn, β-turn, and γ-turn, etc. However, for further protein structure and function prediction it is necessary to develop a uniform model that can accurately predict all types of turns simultaneously. In this study, we present a novel approach, TurnP, which offers the ability to investigate all the turns in a protein based on a unified model. The main characteristics of TurnP are: (i) using newly exploited features of structural evolution information (secondary structure and shape string of protein) based on structure homologies, (ii) considering all types of turns in a unified model, and (iii) practical capability of accurate prediction of all turns simultaneously for a query. TurnP utilizes predicted secondary structures and predicted shape strings, both predicted with high accuracy by technologies developed by our group. Sequence and structural evolution features, namely the sequence profile, secondary structure profile, and shape string profile, are then generated by sequence and structure alignment. When TurnP was validated on a non-redundant dataset (4,107 entries) by five-fold cross-validation, we achieved an accuracy of 88.8% and a sensitivity of 71.8%, which exceeded state-of-the-art predictors of individual turn types. Newly determined sequences and the EVA and CASP9 datasets were used as independent tests, and the results were outstanding for turn prediction and confirmed the good performance of TurnP for practical applications.
Confronting uncertainty in flood damage predictions
NASA Astrophysics Data System (ADS)
Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Merz, Bruno
2015-04-01
Reliable flood damage models are a prerequisite for practically useful damage estimates. Traditional univariate damage models, such as depth-damage curves, often fail to reproduce the variability of observed flood damage. Innovative multivariate probabilistic modelling approaches are promising means to capture and quantify the uncertainty involved, and thus to improve the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely Bagging Decision Trees and Bayesian Networks. For model evaluation we use empirical damage data available from computer-aided telephone interviews compiled after the floods of 2002, 2005 and 2006 in the Elbe and Danube catchments in Germany. We carry out a split-sample test by sub-setting the damage records: one subset is used to derive the models and the remaining records are used to evaluate predictive performance. Further, we stratify the sample according to catchments, which allows studying model performance in a spatial transfer context. Flood damage estimation is carried out at the scale of individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviations (mean bias), precision (mean absolute error), and reliability, which is represented by the proportion of observations that fall within the 5%-95% quantile predictive interval. The reliability of the probabilistic predictions within validation runs decreases only slightly and achieves very good coverage of observations within the predictive interval. Probabilistic models provide quantitative information about prediction uncertainty, which is crucial to assess the reliability of model predictions and improves the usefulness of model results.
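The reliability measure described above, coverage of observations by a 5%-95% predictive interval, can be illustrated with a toy split-sample experiment. Everything below is synthetic: a linear depth-damage relation with known noise stands in for the empirical damage records, bootstrap-aggregated linear fits stand in for the Bagging Decision Trees, and the 0.1 residual standard deviation is assumed known for simplicity.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for relative-damage data: one predictor (water depth)
# and a noisy relative loss clipped to [0, 1]; not real survey data.
n = 400
depth = rng.uniform(0, 3, n)                      # m
rloss = np.clip(0.2 * depth + rng.normal(0, 0.1, n), 0, 1)

# Split-sample test: fit on one half, evaluate coverage on the other.
tr, te = np.arange(0, 200), np.arange(200, n)

# Bagging of a trivial base learner: each bootstrap replicate fits a 1-D
# least-squares line; ensemble spread plus a residual term approximates
# a 5%-95% predictive interval.
preds = []
for _ in range(200):
    b = rng.integers(0, len(tr), len(tr))         # bootstrap resample
    coef = np.polyfit(depth[tr][b], rloss[tr][b], 1)
    preds.append(np.polyval(coef, depth[te]))
preds = np.array(preds)

resid_sd = 0.1                                    # assumed known noise level
lo = np.quantile(preds, 0.05, axis=0) - 1.64 * resid_sd
hi = np.quantile(preds, 0.95, axis=0) + 1.64 * resid_sd
coverage = float(np.mean((rloss[te] >= lo) & (rloss[te] <= hi)))
```

A well-calibrated 5%-95% interval should cover roughly 90% of held-out observations; large shortfalls signal overconfident predictive distributions.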
Dong, Ni; Huang, Helai; Zheng, Liang
2015-09-01
In zone-level crash prediction, accounting for spatial dependence has become an extensively studied topic. This study proposes a Support Vector Machine (SVM) model to address complex, large, and multi-dimensional spatial data in crash prediction. A Correlation-based Feature Selector (CFS) was applied to evaluate candidate factors possibly related to zonal crash frequency when handling high-dimensional spatial data. To demonstrate the proposed approaches and to compare them with a Bayesian spatial model with a conditional autoregressive prior (CAR), a dataset from Hillsborough County, Florida was employed. The results showed that SVM models accounting for spatial proximity outperform the non-spatial model in terms of model fitting and predictive performance, which indicates the reasonableness of considering cross-zonal spatial correlations. The best predictive capability was associated with the model that considers proximity by centroid distance, uses the RBF kernel, and sets 10% of the whole dataset aside as testing data, which further exhibits the capacity of SVM models for addressing comparatively complex spatial data in regional crash prediction modeling. Moreover, SVM models exhibit better goodness-of-fit than CAR models when the whole dataset is used as the sample. A sensitivity analysis of the centroid-distance-based spatial SVM models was conducted to capture the impacts of explanatory variables on the mean predicted probabilities of crash occurrence. The results conform to the coefficient estimates of the CAR models, which supports the employment of the SVM model as an alternative in regional safety modeling. Copyright © 2015 Elsevier Ltd. All rights reserved.
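A rough sketch of an RBF-kernel spatial model on zonal data follows. It is illustrative only: the zone centroids, covariate, and crash frequencies are synthetic, and kernel ridge regression is used as a simple stand-in for the SVM regression actually employed in the study (both share the RBF kernel; the loss function differs).

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical zonal data: centroid coordinates plus one covariate, with
# crash frequency rising toward a "hot" spatial region (all synthetic).
n = 120
xy = rng.uniform(0, 10, (n, 2))                  # zone centroids (km)
vmt = rng.uniform(1, 5, n)                       # e.g. vehicle-miles traveled
crashes = (3 * np.exp(-np.sum((xy - 5) ** 2, axis=1) / 8)
           + 0.5 * vmt + rng.normal(0, 0.2, n))

# Spatial proximity enters through the centroid coordinates themselves.
X = np.column_stack([xy, vmt])

def rbf_kernel(A, B, gamma=0.3):
    """RBF (Gaussian) kernel matrix between row sets A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

# Kernel ridge regression as a stand-in for the RBF-kernel SVM; 10% of the
# zones are held out for testing, mirroring the split described above.
tr, te = np.arange(0, 108), np.arange(108, n)
K = rbf_kernel(X[tr], X[tr])
alpha = np.linalg.solve(K + 1e-2 * np.eye(len(tr)), crashes[tr])
pred = rbf_kernel(X[te], X[tr]) @ alpha

rmse = float(np.sqrt(np.mean((pred - crashes[te]) ** 2)))
```

Because the kernel is a function of centroid distance, nearby zones automatically share information, which is the mechanism by which such models capture cross-zonal spatial correlation.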
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nedic, Vladimir, E-mail: vnedic@kg.ac.rs; Despotovic, Danijela, E-mail: ddespotovic@kg.ac.rs; Cvetanovic, Slobodan, E-mail: slobodan.cvetanovic@eknfak.ni.ac.rs
2014-11-15
Traffic is the main source of noise in urban environments and significantly affects human mental and physical health and labor productivity. It is therefore very important to model the noise produced by various vehicles. Techniques for traffic noise prediction are mainly based on regression analysis, which generally is not good enough to describe the trends of noise. In this paper the application of artificial neural networks (ANNs) for the prediction of traffic noise is presented. The structure of the traffic flow and the average speed of the traffic flow are chosen as input variables of the neural network. The output variable of the network is the equivalent noise level in the given time period, L_eq. Based on these parameters, the network is modeled, trained, and tested through a comparative analysis of the calculated values and measured levels of traffic noise, using an originally developed user-friendly software package. It is shown that artificial neural networks can be a useful tool for the prediction of noise with sufficient accuracy. In addition, the measured values were also used to calculate the equivalent noise level by means of classical methods, and a comparative analysis is given. The results clearly show that the ANN approach is superior to other statistical methods in traffic noise level prediction. - Highlights: • We proposed an ANN model for prediction of traffic noise. • We developed an originally designed user-friendly software package. • The results are compared with classical statistical methods. • The ANN model shows much better predictive capability.
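A minimal version of the ANN approach described above can be sketched as a one-hidden-layer network trained by batch gradient descent. The traffic-flow structure, speeds, and L_eq values below are synthetic stand-ins generated from a rough logarithmic relation, not measured data, and the tiny hand-coded network is only a sketch of the idea.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic traffic data: flow structure (light/heavy vehicles per hour)
# and average speed; L_eq generated from a rough log relation plus noise.
n = 500
light = rng.uniform(100, 2000, n)
heavy = rng.uniform(10, 300, n)
speed = rng.uniform(20, 90, n)
leq = 10 * np.log10(light + 8 * heavy) + 0.1 * speed + rng.normal(0, 0.5, n)

X = np.column_stack([light, heavy, speed])
X = (X - X.mean(0)) / X.std(0)                   # standardize inputs
y = (leq - leq.mean()) / leq.std()               # standardize target

# One hidden tanh layer, linear output, trained by batch gradient descent.
h = 8
W1 = rng.normal(0, 0.5, (3, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.5, h);      b2 = 0.0
lr = 0.05
for _ in range(2000):
    a = np.tanh(X @ W1 + b1)                     # hidden activations
    out = a @ W2 + b2
    err = out - y
    gW2 = a.T @ err / n; gb2 = err.mean()        # output-layer gradients
    da = np.outer(err, W2) * (1 - a**2)          # backprop through tanh
    gW1 = X.T @ da / n; gb1 = da.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

pred = np.tanh(X @ W1 + b1) @ W2 + b2
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))  # standardized units
```

Predicting y to an RMSE well below 1 (the trivial predict-the-mean score in standardized units) shows the network has captured the nonlinear flow-to-noise relation.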
Antiferromagnetic nano-oscillator in external magnetic fields
NASA Astrophysics Data System (ADS)
Checiński, Jakub; Frankowski, Marek; Stobiecki, Tomasz
2017-11-01
We describe the dynamics of an antiferromagnetic nano-oscillator in an external magnetic field of any given time distribution. The oscillator is powered by a spin current originating from spin-orbit effects in a neighboring heavy metal layer and is capable of emitting a THz signal in the presence of an additional easy-plane anisotropy. We derive an analytical formula describing the interaction between such a system and an external field, which can affect the output signal character. Interactions with magnetic pulses of different shapes, with a sinusoidal magnetic field and with a sequence of rapidly changing magnetic fields are discussed. We also perform numerical simulations based on the Landau-Lifshitz-Gilbert equation with spin-transfer torque effects to verify the obtained results and find a very good quantitative agreement between analytical and numerical predictions.
Strange and non-strange particle production in antiproton-nucleus collisions in the UrQMD model
NASA Astrophysics Data System (ADS)
Limphirat, Ayut; Kobdaj, Chinorat; Bleicher, Marcus; Yan, Yupeng; Stöcker, Horst
2009-06-01
The capabilities of the ultra-relativistic quantum molecular dynamics (UrQMD) model in describing antiproton-nucleus collisions are presented. The model provides a good description of the experimental data on multiplicities, transverse momentum distributions and rapidity distributions in antiproton-nucleus collisions. Special emphasis is put on the comparison of strange particles in reactions with nuclear targets ranging from 7Li, 12C, 32S, 64Cu to 131Xe because of the important role of strangeness for the exploration of hypernuclei at PANDA-FAIR. The production of the double strange baryons Ξ− and anti-Ξ+, which may be used to produce double Λ hypernuclei, is predicted in this work for the reactions p̄ + 24Mg, 64Cu and 197Au.
A new analytical compact model for two-dimensional finger photodiodes
NASA Astrophysics Data System (ADS)
Naeve, T.; Hohenbild, M.; Seegebrecht, P.
2008-02-01
A new physically based circuit simulation model for finger photodiodes is proposed. The approach is based on the solution of the transport and continuity equations for generated carriers within the two-dimensional structure. As an example we present results for a diode consisting of N+ fingers located in a P-well on top of an N-type buried layer integrated in a P-type silicon substrate (N+/PW/NBL/Psub finger photodiode). The model is capable of predicting the sensitivity of the diode very accurately over a wide spectral range. The structure under consideration was fabricated in an industrial 0.6 μm BiCMOS process. The good agreement of the simulated sensitivity data with measurements and numerical simulations demonstrates the high quality of our model.
SSME thrust chamber simulation using Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Przekwas, A. J.; Singhal, A. K.; Tam, L. T.
1984-01-01
The capability of the PHOENICS fluid dynamics code in predicting two-dimensional, compressible, and reacting flow in the combustion chamber and nozzle of the space shuttle main engine (SSME) was evaluated. A non-orthogonal body fitted coordinate system was used to represent the nozzle geometry. The Navier-Stokes equations were solved for the entire nozzle with a turbulence model. The wall boundary conditions were calculated based on the wall functions which account for pressure gradients. Results of the demonstration test case reveal all expected features of the transonic nozzle flows. Of particular interest are the locations of normal and barrel shocks, and regions of highest temperature gradients. Calculated performance (global) parameters such as thrust chamber flow rate, thrust, and specific impulse are also in good agreement with available data.
Abilities of helium immobilization by the UO2 surface using the “ab initio” method
NASA Astrophysics Data System (ADS)
Dąbrowski, Ludwik; Szuta, Marcin
2016-09-01
We present density functional theory calculation results for uranium dioxide crystals with a helium atom incorporated in the octahedral sites of a nano-scale superficial layer of a UO2 fuel element. In order to quantify the capability of helium immobilization, we propose a quantum model of adsorption and desorption, which we compare with the classical Langmuir model. Significant differences between the models persist over a wide temperature range, including high temperatures of the order of 1000 K. Using the proposed quantum isotherms, it was established that the octahedral positions near the metal surface are good traps for helium atoms, while at temperatures close to 1089 K the model predicts an intensive release of helium, which is consistent with the experimental results.
Hill, K W; Bitter, M L; Scott, S D; Ince-Cushman, A; Reinke, M; Rice, J E; Beiersdorfer, P; Gu, M-F; Lee, S G; Broennimann, Ch; Eikenberry, E F
2008-10-01
A new spatially resolving x-ray crystal spectrometer capable of measuring continuous spatial profiles of high resolution spectra (λ/Δλ > 6000) of He-like and H-like Ar Kα lines with good spatial (approximately 1 cm) and temporal (approximately 10 ms) resolution has been installed on the Alcator C-Mod tokamak. Two spherically bent crystals image the spectra onto four two-dimensional Pilatus II pixel detectors. Tomographic inversion enables inference of the local line emissivity, ion temperature (Ti), and toroidal plasma rotation velocity (vφ) from the line Doppler widths and shifts. The data analysis techniques, Ti and vφ profiles, analysis of fusion-neutron background, and predictions of performance on other tokamaks, including ITER, will be presented.
Groenendijk, Piet; Heinen, Marius; Klammler, Gernot; Fank, Johann; Kupfersberger, Hans; Pisinaras, Vassilios; Gemitzi, Alexandra; Peña-Haro, Salvador; García-Prats, Alberto; Pulido-Velazquez, Manuel; Perego, Alessia; Acutis, Marco; Trevisan, Marco
2014-11-15
The agricultural sector faces the challenge of ensuring food security without an excessive burden on the environment. Simulation models provide excellent instruments for researchers to gain more insight into relevant processes and best agricultural practices, and provide planners with tools for decision-making support. The extent to which models are capable of reliable extrapolation and prediction is important for exploring new farming systems or assessing the impacts of future land and climate changes. A performance assessment was conducted by testing six detailed state-of-the-art models for simulation of nitrate leaching (ARMOSA, COUPMODEL, DAISY, EPIC, SIMWASER/STOTRASIM, SWAP/ANIMO) against lysimeter data of the Wagna experimental field station in Eastern Austria, where the soil is highly vulnerable to nitrate leaching. Three consecutive phases were distinguished to gain insight into the predictive power of the models: 1) a blind test for 2005-2008 in which only soil hydraulic characteristics, meteorological data and information about the agricultural management were accessible; 2) a calibration for the same period in which essential information on field observations was additionally available to the modellers; and 3) a validation for 2009-2011 with the corresponding type of data available as for the blind test. A set of statistical metrics (mean absolute error, root mean squared error, index of agreement, model efficiency, root relative squared error, Pearson's linear correlation coefficient) was applied for testing the results and comparing the models. None of the models performed well for all of the statistical metrics. Models designed for nitrate leaching in high-input farming systems had difficulties in accurately predicting leaching in low-input farming systems that are strongly influenced by the retention of nitrogen in catch crops and nitrogen fixation by legumes. An accurate calibration does not guarantee a good predictive power of the model. 
Nevertheless, all models were able to identify years and crops with high and low leaching rates. Copyright © 2014 Elsevier B.V. All rights reserved.
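The six goodness-of-fit metrics listed in this abstract can be sketched as follows, using their commonly published definitions (Willmott's index of agreement, Nash-Sutcliffe model efficiency); the exact variants used by the authors should be checked against the paper.

```python
import numpy as np

def metrics(obs, sim):
    """Standard goodness-of-fit metrics for simulated vs. observed series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    e = sim - obs
    mae = np.abs(e).mean()                              # mean absolute error
    rmse = np.sqrt((e ** 2).mean())                     # root mean squared error
    # Index of agreement (Willmott)
    d = 1 - (e ** 2).sum() / ((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2).sum()
    # Model efficiency (Nash-Sutcliffe)
    nse = 1 - (e ** 2).sum() / ((obs - obs.mean()) ** 2).sum()
    # Root relative squared error
    rrse = np.sqrt((e ** 2).sum() / ((obs - obs.mean()) ** 2).sum())
    r = np.corrcoef(obs, sim)[0, 1]                     # Pearson's correlation
    return dict(MAE=mae, RMSE=rmse, d=d, NSE=nse, RRSE=rrse, r=r)

print(metrics([10, 20, 30, 40], [12, 18, 33, 39]))
```

A perfect simulation yields MAE = RMSE = RRSE = 0 and d = NSE = r = 1; a model no better than the observed mean gives NSE = 0, which is one reason "no model performed well on all metrics" is a meaningful finding.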
A Quantitative Model of Expert Transcription Typing
1993-03-08
side of pure psychology, several researchers have argued that transcription typing is a particularly good activity for the study of human skilled...phenomenon with a quantitative METT prediction. The first, quick and dirty analysis gives a good prediction of the copy span, in fact, it is even...typing, it should be demonstrated that the mechanism of the model does not get in the way of good predictions. If situations occur where the entire
Analysis of the impact of safeguards criteria
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mullen, M.F.; Reardon, P.T.
As part of the US Program of Technical Assistance to IAEA Safeguards, the Pacific Northwest Laboratory (PNL) was asked to assist in developing and demonstrating a model for assessing the impact of setting criteria for the application of IAEA safeguards. This report presents the results of PNL's work on the task. The report is in three parts. The first explains the technical approach and methodology. The second contains an example application of the methodology. The third presents the conclusions of the study. PNL used the model and computer programs developed as part of Task C.5 (Estimation of Inspection Efforts) of the Program of Technical Assistance. The example application of the methodology involves low-enriched uranium conversion and fuel fabrication facilities. The effects of variations in seven parameters are considered: false alarm probability, goal probability of detection, detection goal quantity, the plant operator's measurement capability, the inspector's variables measurement capability, the inspector's attributes measurement capability, and annual plant throughput. Among the key results and conclusions of the analysis are the following: the variables with the greatest impact on the probability of detection are the inspector's measurement capability, the goal quantity, and the throughput; the variables with the greatest impact on inspection costs are the throughput, the goal quantity, and the goal probability of detection; there are important interactions between variables, that is, the effects of a given variable often depend on the level or value of some other variable, and with the methodology used in this study these interactions can be quantitatively analyzed; and reasonably good approximate prediction equations can be developed using the methodology described here.
Prediction of circulation control performance characteristics for Super STOL and STOL applications
NASA Astrophysics Data System (ADS)
Naqvi, Messam Abbas
The rapid growth of air travel during the last three decades has resulted in runway congestion at major airports. The current airport infrastructure will not be able to support the rapid growth trends expected in the next decade. Changes or upgrades in infrastructure alone will not satisfy the growth requirements, and new airplane concepts such as the NASA-proposed Super Short Takeoff and Landing and Extremely Short Takeoff & Landing (ESTOL) aircraft are being vigorously pursued. Aircraft noise pollution during takeoff and landing is another serious concern, and efforts are aimed at reducing the airframe noise produced by conventional high lift devices. Circulation control technology has the prospect of being a good alternative for resolving both of the aforesaid issues. Circulation control airfoils are not only capable of producing very high values of lift (Cl values in excess of 8.0) at zero degrees angle of attack, but also eliminate the noise generated by conventional high lift devices, their associated weight penalty, and their complex operation and storage. This ensures not only the short takeoff and landing distances but also a minimal acoustic signature in accordance with FAA requirements. Circulation control relies on an emanating wall jet to independently control the circulation and lift on an airfoil. Unlike conventional airfoils, where the rear stagnation point is located at the sharp trailing edge, circulation control airfoils possess a round trailing edge, so the rear stagnation point is free to move. The location of the rear stagnation point is controlled by the blown jet momentum. This provides a secondary control, the jet momentum, with which the lift generated can be controlled, rather than the incidence (angle of attack) that is the only available control for conventional airfoils. 
The use of circulation control, despite its promising potential, has been limited to research applications due to the lack of a simple prediction capability. This research effort was focused on the creation of a rapid prediction capability for circulation control aerodynamic characteristics which could provide designers with rapid performance estimates for design space exploration. A morphological matrix was created with the available set of options which could be chosen to create this prediction capability, ranging from purely analytical physics-based modeling to high fidelity CFD codes. Based on the available constraints and desired accuracy, meta-models were created around the two-dimensional circulation control performance results computed using the Navier-Stokes equations (Computational Fluid Dynamics). DSS2, a two-dimensional RANS code written by Professor Lakshmi Sankar, was utilized for the circulation control airfoil characteristics. The CFD code was first applied to the NCCR 1510-7607N airfoil to validate the model against available experimental results. It was then applied to compute the results of a fractional factorial design of experiments array. Metamodels were formulated by fitting neural networks to the results obtained from the design of experiments. Additional validation runs were performed to validate the model predictions. Metamodels are not only capable of rapid performance prediction, but also help reveal the trends of the responses with respect to the control variables and capture the complex interactions between control variables. Quantitative as well as qualitative assessments of the results were performed by computation of aerodynamic forces and moments and by flow field visualizations. Wing characteristics in three dimensions were obtained by integration over the whole wing using Prandtl's wing theory. The baseline Super STOL configuration [3] was then analyzed with the application of circulation control technology. 
The desired values of lift and drag needed to achieve the target takeoff and landing performance were compared with the optimal configurations obtained by the model. The same optimal configurations were then subjected to Super STOL cruise conditions to perform a trade-off analysis between takeoff and cruise performance. Supercritical airfoils modified for circulation control were also thoroughly analyzed for takeoff and cruise performance and may constitute a viable option for Super STOL and STOL designs. The prediction capability produced by this research effort can be integrated with the current conceptual aircraft modeling and simulation framework. The prediction tool is applicable within the selected ranges of each variable, but the methodology and formulation scheme adopted can be applied to any other design space exploration.
Sun, Gang; Hoff, Steven J; Zelle, Brian C; Nelson, Minda A
2008-12-01
It is vital to forecast gas and particle matter concentrations and emission rates (GPCER) from livestock production facilities to assess the impact of airborne pollutants on human health, the ecological environment, and global warming. Modeling source air quality is a complex process because of abundant nonlinear interactions between GPCER and other factors. The objective of this study was to introduce statistical methods and a radial basis function (RBF) neural network to predict daily source air quality in Iowa swine deep-pit finishing buildings. The results show that four variables (outdoor and indoor temperature, animal units, and ventilation rates) were identified as relatively important model inputs using statistical methods. It was further demonstrated that only two factors, the environment factor and the animal factor, were capable of explaining more than 94% of the total variability after performing principal component analysis. The introduction of fewer, uncorrelated variables to the neural network would reduce the model structure complexity, minimize computation cost, and eliminate model overfitting problems. The obtained results of the RBF network prediction were in good agreement with the actual measurements, with values of the correlation coefficient between 0.741 and 0.995 and very low values of the systemic performance indexes for all the models. The good results indicated that the RBF network could be trained to model these highly nonlinear relationships. Thus, RBF neural network technology combined with multivariate statistical methods is a promising tool for air pollutant emissions modeling.
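The two-step pipeline described here, principal component analysis to compress correlated inputs followed by an RBF network on the leading components, can be sketched as follows. The data and variable roles are synthetic stand-ins, not the study's measurements.

```python
import numpy as np

# Synthetic stand-ins for the 4 screened inputs (outdoor/indoor temperature,
# animal units, ventilation rate); the two temperatures are made correlated.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=100)
y = np.sin(X[:, 0]) + 0.5 * X[:, 2]            # invented emission response

# Step 1: PCA via SVD of the centered data matrix
Xc = X - X.mean(0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / (s**2).sum()                # variance explained per PC
Z = Xc @ Vt[:2].T                              # scores on the first 2 PCs

# Step 2: RBF network, Gaussian units on training points, linear readout
def rbf(A, B, width=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width**2))

Phi = rbf(Z, Z)
coef = np.linalg.solve(Phi + 1e-6 * np.eye(len(Z)), y)   # ridge-stabilized
fit_rmse = np.sqrt(np.mean((Phi @ coef - y) ** 2))
print("variance explained by 2 PCs: %.2f" % explained[:2].sum())
print("training RMSE: %.3f" % fit_rmse)
```

Feeding only the leading components into the network is what shrinks the model structure and reduces overfitting risk, the benefit the abstract emphasizes.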
African Union: Towards Good Governance, Peace and Security
2012-03-15
While good governance, peace and security remain individual state responsibilities, they constitute the main pillars of Africa’s political stability and...African national leadership to drive this process.2 Political stability itself requires strong state institutional capabilities to ensure continuity
NASA Astrophysics Data System (ADS)
Pham, Binh Thai; Tien Bui, Dieu; Pourghasemi, Hamid Reza; Indra, Prakash; Dholakia, M. B.
2017-04-01
The objective of this study is to compare the prediction performance of three techniques, Functional Trees (FT), Multilayer Perceptron Neural Networks (MLP Neural Nets), and Naïve Bayes (NB), for landslide susceptibility assessment in the Uttarakhand Area (India). Firstly, a landslide inventory map with 430 landslide locations in the study area was constructed from various sources. Landslide locations were then randomly split into two parts: (i) 70 % of the landslide locations used for training the models and (ii) 30 % employed for the validation process. Secondly, a total of eleven landslide conditioning factors, including slope angle, slope aspect, elevation, curvature, lithology, soil, land cover, distance to roads, distance to lineaments, distance to rivers, and rainfall, were used in the analysis to elucidate the spatial relationship between these factors and landslide occurrences. A Linear Support Vector Machine (LSVM) feature selection algorithm was employed to assess the prediction capability of these conditioning factors for the landslide models. Subsequently, the NB, MLP Neural Nets, and FT models were constructed using the training dataset. Finally, success rate and prediction rate curves were employed to validate and compare the predictive capability of the three models. Overall, all three models performed very well for landslide susceptibility assessment. Of these, the MLP Neural Nets and FT models had almost the same predictive capability, with the MLP Neural Nets (AUC = 0.850) slightly better than the FT model (AUC = 0.849). The NB model (AUC = 0.838) had the lowest predictive capability. Landslide susceptibility maps were finally developed using the three models. These maps would be helpful to planners and engineers for development activities and land-use planning.
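The AUC statistic used to rank the models above (0.850 vs. 0.849 vs. 0.838) can be computed directly from susceptibility scores via the rank-sum (Mann-Whitney) identity, AUC = P(score of a landslide cell > score of a non-landslide cell). A minimal sketch, with toy scores:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via pairwise comparisons; ties count 0.5."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

print(auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # → 1.0 (perfect ranking)
```

An AUC of 0.5 corresponds to random ranking, so differences like 0.850 vs. 0.849 are marginal and, in practice, should be accompanied by a confidence interval.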
New smoke predictions for Alaska in NOAA’s National Air Quality Forecast Capability
NASA Astrophysics Data System (ADS)
Davidson, P. M.; Ruminski, M.; Draxler, R.; Kondragunta, S.; Zeng, J.; Rolph, G.; Stajner, I.; Manikin, G.
2009-12-01
Smoke from wildfire is an important component of fine particle pollution, which is responsible for tens of thousands of premature deaths each year in the US. In Alaska, wildfire smoke is the leading cause of poor air quality in summer. Smoke forecast guidance helps air quality forecasters and the public take steps to limit exposure to airborne particulate matter. A new smoke forecast guidance tool, built by a cross-NOAA team, leverages efforts of NOAA's partners at the USFS on wildfire emissions information and with the EPA in coordinating with state/local air quality forecasters. Required operational deployment criteria, in the categories of objective verification, subjective feedback, and production readiness, have been demonstrated in experimental testing during 2008-2009, for addition to the operational products in NOAA's National Air Quality Forecast Capability. The Alaska smoke forecast tool is an adaptation of NOAA's smoke predictions implemented operationally for the lower 48 states (CONUS) in 2007. The tool integrates satellite information on the location of wildfires with weather (North American mesoscale model) and smoke dispersion (HYSPLIT) models to produce daily predictions of smoke transport for Alaska, in binary and graphical formats. Hour-by-hour predictions at 12 km grid resolution of smoke at the surface and in the column are provided each day by 13 UTC, extending through midnight the next day. Forecast accuracy and reliability are monitored against benchmark criteria. While wildfire activity in the CONUS is year-round, the intense wildfire activity in Alaska is limited to the summer. Initial experimental testing during summer 2008 was hindered by unusually limited wildfire activity and very cloudy conditions. In contrast, heavier than average wildfire activity during summer 2009 provided a representative basis (more than 60 days of wildfire smoke) for demonstrating the required prediction accuracy. 
A new satellite observation product was developed for routine near-real time verification of these predictions. The footprint of the predicted smoke from identified fires is verified with satellite observations of the spatial extent of smoke aerosols (5km resolution). Based on geostationary aerosol optical depth measurements that provide good time resolution of the horizontal spatial extent of the plumes, these observations do not yield quantitative concentrations of smoke particles at the surface. Predicted surface smoke concentrations are consistent with the limited number of in situ observations of total fine particle mass from all sources; however they are much higher than predicted for most CONUS fires. To assess uncertainty associated with fire emissions estimates, sensitivity analyses are in progress.
Scarlata, Simone; Palermo, Patrizio; Candoli, Piero; Tofani, Ariela; Petitti, Tommasangelo; Corbetta, Lorenzo
2017-04-01
Linear endobronchial ultrasound transbronchial needle aspiration (EBUS-TBNA) represents a pivotal innovation in interventional pulmonology; determining the best approach to guarantee systematic and efficient training is expected to become a main issue in the forthcoming years. Virtual reality simulators have been proposed as potential EBUS-TBNA training instruments, to avoid unskilled beginners practicing directly in real-life settings. A validated and perfected simulation program could be used before allowing beginners to practice on patients. Our goal was to test the reliability of the EBUS-Skills and Task Assessment Tool (STAT) and its subscores for measuring the competence of experienced bronchoscopists approaching EBUS-guided TBNA, using only the virtual reality simulator as both a training and an assessment tool. Fifteen experienced bronchoscopists, with poor or no experience in EBUS-TBNA, participated in this study. They were all administered the Italian version of the EBUS-STAT evaluation tool, during a high-fidelity virtual reality simulation. This was followed by a single 7-hour theoretical and practical (on simulators) session on EBUS-TBNA, at the end of which their skills were reassessed by EBUS-STAT. An overall, significant improvement in EBUS-TBNA skills was observed, thereby confirming that (a) virtual reality simulation can facilitate practical learning among practitioners, and (b) EBUS-STAT is capable of detecting these improvements. The test's overall ability to detect differences was negatively influenced by the minimal variation of the scores relating to items 1 and 2, was not influenced by the training, and improved significantly when the 2 items were not considered. Apart from these 2 items, all the remaining subscores were equally capable of revealing improvements in the learner. 
Lastly, we found that trainees with presimulation EBUS-STAT scores above 79 did not show any significant improvement after virtual reality training, suggesting that this score represents a cutoff value capable of predicting the likelihood that simulation can be beneficial. Virtual reality simulation is capable of providing a practical learning tool for practitioners with previous experience in flexible bronchoscopy, and the EBUS-STAT questionnaire is capable of detecting these changes. A pretraining EBUS-STAT score below 79 is a good indicator of those candidates who will benefit from the simulation training. Further studies are needed to verify whether a modified version of the questionnaire would be capable of improving its performance among experienced bronchoscopists.
A Review of Crashworthiness of Composite Aircraft Structures
1990-02-01
proprietary, or other reasons. Details on the availability of these publications may be obtained from: Graphics Section, National Research Council Canada...bottoming out, good energy-absorbing and load-limiting ability, good post-crushing structural integrity and no significant load rate sensitivity. In a... good energy absorption capability under compressive loadings. However, under tensile or bending conditions, structural integrity may be lost at initial
What do we gain with Probabilistic Flood Loss Models?
NASA Astrophysics Data System (ADS)
Schroeter, K.; Kreibich, H.; Vogel, K.; Merz, B.; Lüdtke, S.
2015-12-01
The reliability of flood loss models is a prerequisite for their practical usefulness. Oftentimes, traditional univariate damage models such as depth-damage curves fail to reproduce the variability of observed flood damage. Innovative multivariate probabilistic modelling approaches are promising means to capture and quantify the uncertainty involved and thus to improve the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely Bagging Decision Trees and Bayesian Networks, and traditional stage-damage functions which are cast in a probabilistic framework. For model evaluation we use empirical damage data which are available from computer-aided telephone interviews that were compiled after the floods of 2002, 2005, 2006 and 2013 in the Elbe and Danube catchments in Germany. We carry out a split sample test by sub-setting the damage records. One sub-set is used to derive the models and the remaining records are used to evaluate the predictive performance of the model. Further, we stratify the sample according to catchments, which allows studying model performance in a spatial transfer context. Flood damage estimation is carried out on the scale of the individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviations (mean bias), precision (mean absolute error) as well as reliability, which is represented by the proportion of observations that fall within the 5%-95% quantile predictive interval. The reliability of the probabilistic predictions within validation runs decreases only slightly and achieves a very good coverage of observations within the predictive interval. Probabilistic models provide quantitative information about prediction uncertainty, which is crucial to assess the reliability of model predictions and improves the usefulness of model results.
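The reliability measure used in this study, the share of observed damages falling inside the central 5%-95% predictive interval, can be sketched as follows. The per-building predictive distributions here are synthetic draws standing in for, e.g., the output of bagged decision trees.

```python
import numpy as np

rng = np.random.default_rng(7)
n_buildings = 500
# Hypothetical predictive samples of relative damage (200 draws per building)
pred_samples = rng.beta(2, 8, size=(n_buildings, 200))
observed = rng.beta(2, 8, size=n_buildings)      # observed relative damage

lo = np.quantile(pred_samples, 0.05, axis=1)     # 5% predictive quantile
hi = np.quantile(pred_samples, 0.95, axis=1)     # 95% predictive quantile
coverage = np.mean((observed >= lo) & (observed <= hi))
print(f"coverage of 5-95% interval: {coverage:.2f}")
```

A well-calibrated model should cover roughly 90% of the observations; substantially lower coverage indicates overconfident predictive intervals.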
Singh, Kunwar P; Gupta, Shikha; Rai, Premanjali
2013-09-01
The research aims to develop global modeling tools capable of categorizing structurally diverse chemicals into various toxicity classes according to the EEC and European Community directives, and of predicting their acute toxicity in fathead minnow using a set of selected molecular descriptors. Accordingly, artificial intelligence based classification and regression models, such as probabilistic neural networks (PNN), generalized regression neural networks (GRNN), multilayer perceptron neural networks (MLPN), radial basis function neural networks (RBFN), support vector machines (SVM), gene expression programming (GEP), and decision trees (DT), were constructed using the experimental toxicity data. Diversity and non-linearity in the chemicals' data were tested using the Tanimoto similarity index and Brock-Dechert-Scheinkman statistics. Predictive and generalization abilities of the various models constructed here were compared using several statistical parameters. The PNN and GRNN models performed relatively better than MLPN, RBFN, SVM, GEP, and DT. In both two- and four-category classifications, PNN yielded a considerably high accuracy of classification in the training (95.85 percent and 90.07 percent) and validation data (91.30 percent and 86.96 percent), respectively. GRNN rendered a high correlation between the measured and model-predicted -log LC50 values for both the training (0.929) and validation (0.910) data, and low prediction errors (RMSE) of 0.52 and 0.49 for the two sets. Efficiency of the selected PNN and GRNN models in predicting acute toxicity of new chemicals was adequately validated using external datasets of different fish species (fathead minnow, bluegill, trout, and guppy). The PNN and GRNN models showed good predictive and generalization abilities and can be used as tools for predicting the toxicities of structurally diverse chemical compounds. Copyright © 2013 Elsevier Inc. All rights reserved.
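A GRNN, the best-performing regression model above, is essentially a Nadaraya-Watson kernel estimator with Gaussian pattern units. A minimal sketch on synthetic descriptor data (the descriptors and response are invented, not the paper's LC50 dataset):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_new, sigma=0.5):
    """GRNN prediction: kernel-weighted average of the training targets."""
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))        # pattern-layer activations
    return (w @ y_train) / w.sum(axis=1)      # summation/output layers

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(100, 2))         # stand-ins for 2 descriptors
y = X[:, 0] ** 2 - X[:, 1]                    # synthetic toxicity response
Xq = rng.uniform(-1.5, 1.5, size=(30, 2))     # "new chemicals"
pred = grnn_predict(X, y, Xq, sigma=0.3)
rmse = np.sqrt(np.mean((pred - (Xq[:, 0] ** 2 - Xq[:, 1])) ** 2))
print(f"hold-out RMSE: {rmse:.3f}")
```

The single smoothing parameter sigma is the only quantity to tune, which is one reason GRNNs train quickly and generalize robustly on modest datasets.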
NASA Astrophysics Data System (ADS)
Chen, Wei; Pourghasemi, Hamid Reza; Panahi, Mahdi; Kornejady, Aiding; Wang, Jiale; Xie, Xiaoshen; Cao, Shubo
2017-11-01
The spatial prediction of landslide susceptibility is an important prerequisite for the analysis of landslide hazards and risks in any area. This research uses three data mining techniques, namely an adaptive neuro-fuzzy inference system combined with frequency ratio (ANFIS-FR), a generalized additive model (GAM), and a support vector machine (SVM), for landslide susceptibility mapping in Hanyuan County, China. In the first step, in accordance with a review of the previous literature, twelve conditioning factors, including slope aspect, altitude, slope angle, topographic wetness index (TWI), plan curvature, profile curvature, distance to rivers, distance to faults, distance to roads, land use, normalized difference vegetation index (NDVI), and lithology, were selected. In the second step, a collinearity test and correlation analysis between the conditioning factors and landslides were applied. In the third step, we used the three methods, ANFIS-FR, GAM, and SVM, for landslide susceptibility modeling. Subsequently, their accuracy was validated using a receiver operating characteristic curve. The results showed that all three models have good prediction capabilities, while the SVM model has the highest prediction rate of 0.875, followed by the ANFIS-FR and GAM models with prediction rates of 0.851 and 0.846, respectively. Thus, the landslide susceptibility maps produced for the study area can be applied for management of hazards and risks in landslide-prone Hanyuan County.
Specification and Prediction of the Radiation Environment Using Data Assimilative VERB code
NASA Astrophysics Data System (ADS)
Shprits, Yuri; Kellerman, Adam
2016-07-01
We discuss how data assimilation can be used for the reconstruction of long-term evolution, benchmarking of physics-based codes, and improved nowcasting and forecasting of the radiation belts and ring current. We also discuss advanced data assimilation methods such as parameter estimation and smoothing. We present a number of data assimilation applications using the VERB 3D code. The 3D data assimilative VERB allows us to blend together data from GOES, RBSP A and RBSP B. 1) The model with data assimilation allows us to propagate data to different pitch angles, energies, and L-shells and blend them together with the physics-based VERB code in an optimal way. We illustrate how to use this capability for the analysis of previous events and for obtaining a global and statistical view of the system. 2) The model predictions strongly depend on the initial conditions that are set up for the model; the model is only as good as the initial conditions it uses. To produce the best possible initial conditions, data from different sources (GOES, RBSP A, B, and our empirical model predictions based on ACE) are blended together in an optimal way by means of data assimilation, as described above. The resulting initial conditions do not have gaps, which allows us to make more accurate predictions. A real-time prediction framework operating on our website, based on GOES, RBSP A, B and ACE data and 3D VERB, is presented and discussed.
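The "optimal blending" step at the heart of such data assimilation can be illustrated with a toy scalar Kalman filter. The real VERB system assimilates multi-satellite data into a 3D diffusion code; this sketch only shows how a model forecast and a noisy observation are weighted by their error variances.

```python
import numpy as np

rng = np.random.default_rng(5)
truth = 1.0              # unknown true state (e.g. phase space density)
x, P = 0.0, 1.0          # model estimate and its error variance
Q, R = 0.01, 0.2         # model-error and observation-error variances
for _ in range(50):
    P = P + Q                        # forecast: persistence, error grows
    obs = truth + rng.normal(0, np.sqrt(R))  # noisy satellite observation
    K = P / (P + R)                  # Kalman gain: trust data vs. model
    x = x + K * (obs - x)            # analysis: optimally blended state
    P = (1 - K) * P                  # updated (reduced) error variance
print(f"estimate {x:.2f} (truth {truth})")
```

The gain K automatically weights observations more when the model error variance is large, which is exactly why assimilated initial conditions "do not have gaps": wherever data are missing, the filter falls back on the model forecast.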
Pollock, Benjamin D; Hu, Tian; Chen, Wei; Harville, Emily W; Li, Shengxu; Webber, Larry S; Fonseca, Vivian; Bazzano, Lydia A
2017-01-01
To evaluate several adult diabetes risk calculation tools for predicting the development of incident diabetes and pre-diabetes in a bi-racial, young adult population. Surveys beginning in young adulthood (baseline age ≥18) and continuing across multiple decades for 2122 participants of the Bogalusa Heart Study were used to test the associations of five well-known adult diabetes risk scores with incident diabetes and pre-diabetes using separate Cox models for each risk score. Racial differences were tested within each model. Predictive utility and discrimination were determined for each risk score using the Net Reclassification Index (NRI) and Harrell's c-statistic. All risk scores were strongly associated (p<.0001) with incident diabetes and pre-diabetes. The Wilson model indicated greater risk of diabetes for blacks versus whites with equivalent risk scores (HR=1.59; 95% CI 1.11-2.28; p=.01). C-statistics for the diabetes risk models ranged from 0.79 to 0.83. Non-event NRIs indicated high specificity (non-event NRIs: 76%-88%), but poor sensitivity (event NRIs: -23% to -3%). Five diabetes risk scores established in middle-aged, racially homogeneous adult populations are generally applicable to younger adults with good specificity but poor sensitivity. The addition of race to these models did not result in greater predictive capabilities. A more sensitive risk score to predict diabetes in younger adults is needed. Copyright © 2017 Elsevier Inc. All rights reserved.
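Harrell's c-statistic used above measures, among usable pairs, how often the subject with the higher risk score fails earlier. A minimal sketch (the risk scores, follow-up times, and events are hypothetical, and censoring is handled only in the simplest pairwise way):

```python
def harrell_c(risk, time, event):
    """Harrell's concordance: fraction of usable pairs ordered correctly.
    A pair (i, j) is usable if subject i had an event before time[j]."""
    conc = ties = usable = 0
    n = len(risk)
    for i in range(n):
        for j in range(n):
            if event[i] and time[i] < time[j]:
                usable += 1
                if risk[i] > risk[j]:
                    conc += 1
                elif risk[i] == risk[j]:
                    ties += 1          # tied risk counts half
    return (conc + 0.5 * ties) / usable

# Hypothetical risk scores, follow-up years, and diabetes events (1) / censoring (0)
risk  = [0.8, 0.5, 0.3, 0.1]
time  = [2.0, 5.0, 8.0, 10.0]
event = [1,   1,   0,   0]
c = harrell_c(risk, time, event)
```

A c-statistic of 0.5 is chance ordering; the 0.79 to 0.83 range reported above indicates good discrimination.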
NASA Astrophysics Data System (ADS)
Izhari, F.; Dhany, H. W.; Zarlis, M.; Sutarman
2018-03-01
The age of 4-6 years is a critical window for optimizing child development, particularly psychomotor development. Psychomotor ability is broad and difficult to monitor, yet it has meaningful value for a child's life because it directly affects behavior and actions. Predicting a child's ability level from psychomotor indicators is therefore a relevant problem. This analysis uses the backpropagation method with an artificial neural network to predict children's psychomotor ability, achieving a mean squared error (MSE) of 0.001 at the end of training. The predictions classify children aged 4-6 years into excellent, good, good enough, and less good psychomotor ability levels, with 30% of the children showing a good level.
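A minimal backpropagation network of the kind described, in numpy; the feature vectors, targets, architecture, and learning rate are all hypothetical stand-ins, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical training set: psychomotor feature vectors -> ability score in [0, 1]
X = np.array([[0.2, 0.1], [0.9, 0.8], [0.3, 0.7], [0.8, 0.2]])
y = np.array([[0.1], [0.9], [0.5], [0.6]])

# 2-3-1 network with sigmoid activations throughout
W1 = rng.normal(scale=0.5, size=(2, 3)); b1 = np.zeros(3)
W2 = rng.normal(scale=0.5, size=(3, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

mse_history = []
lr = 0.5
for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = out - y
    mse_history.append(float((err ** 2).mean()))
    # backward pass: propagate the squared-error gradient through both layers
    d_out = err * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)
```

Training is stopped in practice when the MSE reaches a threshold such as the 0.001 reported above.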
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedrichs, D.R.; Cole, C.R.; Arnett, R.C.
The Hanford Pathline Calculational Program (HPCP) is a numerical model developed to predict the movement of fluid particles from one location to another within the Hanford or similar groundwater systems. As such, it can be considered a simple transport model wherein only advective changes are considered. Application of the numerical HPCP to test cases for which semianalytical results are obtainable showed that, with reasonable time steps and grid spacing, HPCP gives good agreement with the semianalytical solution. The accuracy of the HPCP results is most sensitive in areas near steep or rapidly changing potential gradients and may require finer grid spacing in those areas than for the groundwater system as a whole. Initial applications of HPCP to the Hanford groundwater flow regime show that significant differences (improvements) in the predictions of fluid particle movement are obtainable with the pathline approach (changing groundwater potential or water table surface) as opposed to the streamline approach (unchanging potential or water table surface) used in past Hanford groundwater analyses. This report documents the capability developed for estimating groundwater travel times from the Hanford high-level waste areas to the Columbia River at different water table levels.
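Advective pathline tracking of the kind HPCP performs can be sketched with forward-Euler particle advection through a velocity field; the field below is a hypothetical uniform flow, not Hanford data:

```python
import numpy as np

def trace_pathline(velocity, start, dt, n_steps):
    """Advect a fluid particle through a (possibly time-varying) velocity field
    with forward-Euler steps; returns all visited positions."""
    path = [np.asarray(start, dtype=float)]
    t = 0.0
    for _ in range(n_steps):
        v = np.asarray(velocity(path[-1], t), dtype=float)
        path.append(path[-1] + dt * v)
        t += dt
    return np.array(path)

# Hypothetical steady uniform flow: 1 m/day east, 0.5 m/day north
uniform = lambda pos, t: (1.0, 0.5)
path = trace_pathline(uniform, start=(0.0, 0.0), dt=1.0, n_steps=10)
```

Passing a time-dependent `velocity` function corresponds to the pathline approach (changing potential field); freezing it in time recovers the streamline approach the abstract contrasts it with.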
Multi-Scale Computational Modeling of Two-Phased Metal Using GMC Method
NASA Technical Reports Server (NTRS)
Moghaddam, Masoud Ghorbani; Achuthan, A.; Bednarcyk, B. A.; Arnold, S. M.; Pineda, E. J.
2014-01-01
A multi-scale computational model for determining plastic behavior in two-phased CMSX-4 Ni-based superalloys is developed on a finite element analysis (FEA) framework employing a crystal plasticity constitutive model that can capture the microstructural scale stress field. The generalized method of cells (GMC) micromechanics model is used for homogenizing the local field quantities. First, stand-alone GMC is validated by analyzing a repeating unit cell (RUC) as a two-phased sample with 72.9% volume fraction of gamma'-precipitate in the gamma-matrix phase and comparing the results with those predicted by FEA models incorporating the same crystal plasticity constitutive model. The global stress-strain behavior and the local field quantity distributions predicted by GMC demonstrated good agreement with FEA. High computational savings, at the expense of some accuracy in the components of local tensor field quantities, were obtained with GMC. Finally, the capability of the developed multi-scale model linking FEA and GMC to solve real-life-sized structures is demonstrated by analyzing an engine disc component and determining the microstructural scale details of the field quantities.
Minimum-dissipation scalar transport model for large-eddy simulation of turbulent flows
NASA Astrophysics Data System (ADS)
Abkar, Mahdi; Bae, Hyun J.; Moin, Parviz
2016-08-01
Minimum-dissipation models are a simple alternative to the Smagorinsky-type approaches to parametrize the subfilter turbulent fluxes in large-eddy simulation. A recently derived model of this type for subfilter stress tensor is the anisotropic minimum-dissipation (AMD) model [Rozema et al., Phys. Fluids 27, 085107 (2015), 10.1063/1.4928700], which has many desirable properties. It is more cost effective than the dynamic Smagorinsky model, it appropriately switches off in laminar and transitional flows, and it is consistent with the exact subfilter stress tensor on both isotropic and anisotropic grids. In this study, an extension of this approach to modeling the subfilter scalar flux is proposed. The performance of the AMD model is tested in the simulation of a high-Reynolds-number rough-wall boundary-layer flow with a constant and uniform surface scalar flux. The simulation results obtained from the AMD model show good agreement with well-established empirical correlations and theoretical predictions of the resolved flow statistics. In particular, the AMD model is capable of accurately predicting the expected surface-layer similarity profiles and power spectra for both velocity and scalar concentration.
Li, Yanpeng; Li, Xiang; Wang, Hongqiang; Chen, Yiping; Zhuang, Zhaowen; Cheng, Yongqiang; Deng, Bin; Wang, Liandong; Zeng, Yonghu; Gao, Lei
2014-01-01
This paper offers a compact framework for carrying out performance evaluation of an automatic target recognition (ATR) system: (a) a standard description of the ATR system's output is suggested, a quantity to indicate the operating condition is presented based on the principle of feature extraction in pattern recognition, and a series of indexes to assess the output in different aspects are developed with the application of statistics; (b) performance of the ATR system is interpreted by a quality factor based on knowledge of engineering mathematics; (c) through a novel utility called "context-probability" estimation, proposed based on probability theory, performance prediction for an ATR system is realized. The simulation result shows that the performance of an ATR system can be accounted for and forecasted by the above-mentioned measures. Compared to existing technologies, the novel method can offer more objective performance conclusions for an ATR system. These conclusions may be helpful in knowing the practical capability of the tested ATR system. At the same time, the generalization performance of the proposed method is good. PMID:24967605
Monitoring multiple components in vinegar fermentation using Raman spectroscopy.
Uysal, Reyhan Selin; Soykut, Esra Acar; Boyaci, Ismail Hakki; Topcu, Ali
2013-12-15
In this study, the utility of Raman spectroscopy (RS) with chemometric methods for quantification of multiple components in the fermentation process was investigated. Vinegar, the product of a two-stage fermentation, was used as a model, and glucose and fructose consumption, ethanol production and consumption, and acetic acid production were followed using RS and the partial least squares (PLS) method. Calibration of the PLS method was performed using model solutions. The prediction capability of the method was then investigated with both model and real samples. HPLC was used as a reference method. Comparison of RS-PLS with HPLC showed good correlations between predicted and actual sample values for glucose (R(2)=0.973), fructose (R(2)=0.988), ethanol (R(2)=0.996) and acetic acid (R(2)=0.983). In conclusion, a combination of RS with chemometric methods can be applied to monitor multiple components of the fermentation process from start to finish with a single measurement in a short time. Copyright © 2013 Elsevier Ltd. All rights reserved.
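The calibrate-then-predict workflow can be sketched as follows. Note two loud assumptions: ordinary least squares stands in for PLS (a real PLS implementation projects onto latent variables first), and the "spectra" are synthetic linear mixtures, not Raman data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic spectra: each component contributes a fixed spectral signature
signatures = rng.random((3, 20))          # hypothetical glucose/ethanol/acetic acid
conc_train = rng.random((30, 3))          # known concentrations of model solutions
spectra_train = conc_train @ signatures   # noise-free linear (Beer-Lambert-like) mixing

# Calibration: regress concentrations on spectra (least squares stands in for PLS)
B, *_ = np.linalg.lstsq(spectra_train, conc_train, rcond=None)

# Prediction for an unseen sample measured only spectrally
conc_true = np.array([[0.4, 0.2, 0.7]])
conc_pred = (conc_true @ signatures) @ B
```

In the noise-free linear case the calibration recovers the unseen concentrations exactly; with real spectra, PLS's latent-variable projection is what keeps the regression stable against collinear, noisy channels.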
3D Structure of the Inverse Karman Vortex Street in the Wake of a Flapping Foil
NASA Astrophysics Data System (ADS)
Bozkurttas, Meliha; Mittal, Rajat; Dong, Haibo
2004-11-01
Flapping foils are being considered for lift generation and/or propulsion in Micro Aerial Vehicles (MAVs) and Autonomous Underwater Vehicles (AUVs). In the present study, a DNS/LES solver that is capable of simulating these flows in all their complexity will be used. The flow around a NACA 0012 foil undergoing pitch oscillation at a chord Reynolds number of 12600 has been investigated, and the comparison of mean thrust coefficient results with the experiment has indicated significant under-prediction of the thrust, although a good match is observed with a 2D RANS calculation. This discrepancy could be related to the absence of 3D effects in both numerical simulations. Although this conclusion has also been reached in other studies, the details of the physical mechanism that leads to inaccurate prediction of surface pressure, and ultimately of thrust force, for pitching and heaving flapping foils have not yet been clarified. In this study, the streamwise (secondary) vortical structures in the inverse Karman vortex street generated in the wake of a thrust-producing flapping foil will be studied.
A multimedia fate and chemical transport modeling system for pesticides: II. Model evaluation
NASA Astrophysics Data System (ADS)
Li, Rong; Scholtz, M. Trevor; Yang, Fuquan; Sloan, James J.
2011-07-01
Pesticides have adverse health effects and can be transported over long distances to contaminate sensitive ecosystems. To address problems caused by environmental pesticides we developed a multimedia multi-pollutant modeling system, and here we present an evaluation of the model by comparing modeled results against measurements. The modeled toxaphene air concentrations for two sites, in Louisiana (LA) and Michigan (MI), are in good agreement with measurements (average concentrations agree to within a factor of 2). Because the residue inventory showed no soil residues at these two sites, resulting in no emissions, the concentrations must be caused by transport; the good agreement between the modeled and measured concentrations suggests that the model simulates atmospheric transport accurately. Compared to the LA and MI sites, the measured air concentrations at two other sites having toxaphene soil residues leading to emissions, in Indiana and Arkansas, showed more pronounced seasonal variability (higher in warmer months); this pattern was also captured by the model. The model-predicted toxaphene concentration fraction on particles (0.5-5%) agrees well with measurement-based estimates (3% or 6%). There is also good agreement between modeled and measured dry (1:1) and wet (within a factor of less than 2) depositions in Lake Ontario. Additionally this study identified erroneous soil residue data around a site in Texas in a published US toxaphene residue inventory, which led to very low modeled air concentrations at this site. Except for the erroneous soil residue data around this site, the good agreement between the modeled and observed results implies that both the US and Mexican toxaphene soil residue inventories are reasonably good. This agreement also suggests that the modeling system is capable of simulating the important physical and chemical processes in the multimedia compartments.
High Fidelity Ion Beam Simulation of High Dose Neutron Irradiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Was, Gary; Wirth, Brian; Motta, Arthur
The objective of this proposal is to demonstrate the capability to predict the evolution of microstructure and properties of structural materials in-reactor and at high doses, using ion irradiation as a surrogate for reactor irradiations. "Properties" includes both physical properties (irradiated microstructure) and the mechanical properties of the material. Demonstration of the capability to predict properties has two components. One is ion irradiation of a set of alloys to yield an irradiated microstructure and corresponding mechanical behavior that are substantially the same as results from neutron exposure in the appropriate reactor environment. Second is the capability to predict the irradiated microstructure and corresponding mechanical behavior on the basis of improved models, validated against both ion and reactor irradiations and verified against ion irradiations. Taken together, achievement of these objectives will yield an enhanced capability for simulating the behavior of materials in reactor irradiations.
NASA Technical Reports Server (NTRS)
Burkhardt, Z.; Ramachandran, N.; Majumdar, A.
2017-01-01
Fluid transient analysis is important for the design of spacecraft propulsion systems to ensure structural stability of the system in the event of sudden closing or opening of a valve. The Generalized Fluid System Simulation Program (GFSSP), a general-purpose flow network code developed at NASA/MSFC, is capable of simulating pressure surge due to sudden opening or closing of a valve when thermodynamic properties of the real fluid are available for the entire range of simulation. Specifically, GFSSP needs an accurate representation of the pressure-density relationship in order to predict pressure surge during a fluid transient. Unfortunately, the available thermodynamic property programs, such as REFPROP, GASP, or GASPAK, do not provide the thermodynamic properties of monomethylhydrazine (MMH). This paper will illustrate the process used for building a customized table of properties of state variables from available properties and speed of sound that is required by GFSSP for simulation. Good agreement was found between the simulations and measured data. This method can be adopted for modeling flow networks and systems with other fluids whose properties are not known in detail in order to obtain general technical insight. Rigorous code validation of this approach will be done and reported at a future date.
NASA Astrophysics Data System (ADS)
Donoval, Daniel; Vrbicky, Andrej; Marek, Juraj; Chvala, Ales; Beno, Peter
2008-06-01
High-voltage power MOSFETs have been widely used in switching-mode power supply circuits as output drivers for industrial and automotive electronic control systems. However, as the device size is reduced, the energy handling capability is becoming a very important issue to be addressed, together with the trade-off between the series on-resistance RON and breakdown voltage VBR. The unclamped inductive switching (UIS) condition represents the circuit switching operation for evaluating the "ruggedness", which characterizes the device capability to handle high avalanche currents during the applied stress. In this paper we present an experimental method which modifies the standard UIS test and allows extraction of the maximum device temperature after the applied standard stress pulse vanishes. A corresponding analysis and non-destructive prediction of the ruggedness of power DMOSFET devices, supported by advanced 2-D mixed-mode electro-thermal device and circuit simulation under UIS conditions using calibrated physical models, is also provided. The results of numerical simulation are in very good correlation with experimental characteristics and contribute to their physical interpretation by identifying the mechanism of heat generation, the heat source location, and the continuous temperature evolution.
Mechanism Underlying the Nucleobase-Distinguishing Ability of Benzopyridopyrimidine (BPP).
Kochman, Michał A; Bil, Andrzej; Miller, R J Dwayne
2017-11-02
Benzopyridopyrimidine (BPP) is a fluorescent nucleobase analogue capable of forming base pairs with adenine (A) and guanine (G) at different sites. When incorporated into oligodeoxynucleotides, it is capable of differentiating between the two purine nucleobases by virtue of the fact that its fluorescence is largely quenched when it is base-paired to guanine, whereas base-pairing to adenine causes only a slight reduction of the fluorescence quantum yield. In the present article, the photophysics of BPP is investigated through computer simulations. BPP is found to be a good charge acceptor, as demonstrated by its positive and appreciably large electron affinity. The selective quenching process is attributed to charge transfer (CT) from the purine nucleobase, which is predicted to be efficient in the BPP-G base pair, but essentially inoperative in the BPP-A base pair. The CT process owes its high selectivity to a combination of two factors: the ionization potential of guanine is lower than that of adenine, and less obviously, the site occupied by guanine enables a greater stabilization of the CT state through electrostatic interactions than the one occupied by adenine. The case of BPP illustrates that molecular recognition via hydrogen bonding can enhance the selectivity of photoinduced CT processes.
Analytical determination of propeller performance degradation due to ice accretion
NASA Technical Reports Server (NTRS)
Miller, T. L.
1986-01-01
A computer code has been developed which is capable of computing propeller performance for clean, glaze, or rime iced propeller configurations, thereby providing a mechanism for determining the degree of performance degradation which results from a given icing encounter. The inviscid, incompressible flow field at each specified propeller radial location is first computed using the Theodorsen transformation method of conformal mapping. A droplet trajectory computation then calculates droplet impingement points and airfoil collection efficiency for each radial location, at which point several user-selectable empirical correlations are available for determining the aerodynamic penalties which arise due to the ice accretion. Propeller performance is finally computed using strip analysis for either the clean or iced propeller. In the iced mode, the differential thrust and torque coefficient equations are modified by the drag and lift coefficient increments due to ice to obtain the appropriate iced values. Comparison with available experimental propeller icing data shows good agreement in several cases. The code's capability to properly predict iced thrust coefficient, power coefficient, and propeller efficiency is shown to be dependent on the choice of empirical correlation employed as well as proper specification of radial icing extent.
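The strip-analysis step, with icing modeled as a lift decrement and drag increment, can be sketched as below. The blade geometry, section coefficients, and icing increments are hypothetical, and induced inflow is neglected, so this is a structural sketch rather than the report's method:

```python
import math

def strip_thrust(stations, rho, omega, v_inf, n_blades, d_cl=0.0, d_cd=0.0):
    """Strip (blade-element) analysis: integrate sectional thrust along the blade.
    Icing is modeled, as in the text, by decrementing Cl and incrementing Cd."""
    dT = []
    for r, chord, cl, cd in stations:
        phi = math.atan2(v_inf, omega * r)     # inflow angle at this station
        w2 = v_inf**2 + (omega * r)**2         # resultant velocity squared
        cl_eff, cd_eff = cl - d_cl, cd + d_cd
        dT.append(0.5 * rho * w2 * chord * n_blades *
                  (cl_eff * math.cos(phi) - cd_eff * math.sin(phi)))
    # trapezoidal integration over radius
    thrust = 0.0
    for (r0, *_), (r1, *_), t0, t1 in zip(stations, stations[1:], dT, dT[1:]):
        thrust += 0.5 * (t0 + t1) * (r1 - r0)
    return thrust

# Hypothetical 3-bladed propeller: (radius m, chord m, Cl, Cd) at each station
stations = [(0.3, 0.15, 0.6, 0.02), (0.6, 0.14, 0.55, 0.02), (0.9, 0.10, 0.5, 0.02)]
clean = strip_thrust(stations, rho=1.225, omega=150.0, v_inf=50.0, n_blades=3)
iced = strip_thrust(stations, rho=1.225, omega=150.0, v_inf=50.0, n_blades=3,
                    d_cl=0.1, d_cd=0.03)
```

The iced thrust is necessarily lower than the clean value, and the size of the drop depends directly on the empirical Cl/Cd increments, which is why the abstract notes sensitivity to the chosen correlation.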
Capabilities for Constrained Military Operations
2016-12-01
capabilities that have low technology risk and accomplish all of this on a short timeline. I fully endorse all of the recommendations contained in... for the U.S. to address such conflicts. The good news is that the DoD can prevail with inexpensive capabilities that have low technology risk and on a... future actions. The Study took a three-pronged approach to countering potential adversaries' strategies for waging long-term campaigns for...
Inhalable Ipratropium Bromide Particle Engineering with Multicriteria Optimization.
Vinjamuri, Bhavani Prasad; Haware, Rahul V; Stagner, William C
2017-08-01
Spray-dried ipratropium bromide (IPB) microspheres for oral inhalation were engineered using Quality by Design. Interrogating the interplay of material properties, process parameters, and critical product quality attributes enabled rational product design. A 2^(7-3) screening design exhibited the Maillard reaction between L-leucine (LL) and lactose at studied outlet temperatures (OT) >130°C. A response surface custom design was used in conjunction with multicriteria optimization to determine the operating design space to achieve inhalable microparticles. Statistically significant predictive models were developed for volume median diameter (p = 0.0001, adjusted R² = 0.9938), span (p = 0.0278, adjusted R² = 0.7912), yield (p = 0.0020, adjusted R² = 0.9320), and OT (p = 0.0082, adjusted R² = 0.8768). An independent verification batch confirmed the model's predictive capability. The predicted and actual values were in good agreement. Particle size and span were 3.32 ± 0.09 μm and 1.71 ± 0.18, which were 4.7 and 5.3% higher than the predicted values. The process yield was 50.3%, compared to the predicted value of 65.3%. The OT was 100°C versus the predicted value of 105°C. The label strength of IPB microparticles was 99.0 to 105.9% w/w, suggesting that enrichment occurred during the spray-drying process. The present study can be utilized to initiate the design of the first commercial IPB dry powder inhaler.
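The adjusted R² values quoted above penalize R² for the number of model terms, which matters in small designed experiments. A sketch with hypothetical observed and fitted median diameters:

```python
def adjusted_r2(y, y_hat, n_predictors):
    """Adjusted R^2 = 1 - (1 - R^2)(n - 1)/(n - p - 1)."""
    n = len(y)
    y_bar = sum(y) / n
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))
    ss_tot = sum((yi - y_bar) ** 2 for yi in y)
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_predictors - 1)

# Hypothetical fitted vs. observed median diameters (um) from 8 runs, 2 model terms
y     = [3.1, 3.4, 2.9, 3.8, 3.2, 3.6, 3.0, 3.5]
y_hat = [3.0, 3.5, 3.0, 3.7, 3.2, 3.5, 3.1, 3.6]
r2_adj = adjusted_r2(y, y_hat, n_predictors=2)
```

Adding terms always raises plain R², but adjusted R² only rises when a term explains more than its share of noise, which is why the paper reports the adjusted form.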
Comparing multiple statistical methods for inverse prediction in nuclear forensics applications
Lewis, John R.; Zhang, Adah; Anderson-Cook, Christine Michaela
2017-10-29
Forensic science seeks to predict source characteristics using measured observables. Statistically, this objective can be thought of as an inverse problem where interest is in the unknown source characteristics or factors (X) of some underlying causal model producing the observables or responses (Y = g(X) + error). Here, this paper reviews several statistical methods for use in inverse problems and demonstrates that comparing results from multiple methods can be used to assess predictive capability. Motivation for assessing inverse predictions comes from the desired application to historical and future experiments involving nuclear material production for forensics research, in which inverse predictions, along with an assessment of predictive capability, are desired.
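A minimal instance of inverse prediction (classical calibration): fit the forward model Y = g(X) + error, then invert it for a new observable. The linear g and all data below are hypothetical illustrations, not the paper's methods:

```python
import numpy as np

# Hypothetical forensics-style data: source factor x drives an observable y
rng = np.random.default_rng(2)
x_train = np.linspace(0.0, 10.0, 50)
y_train = 2.0 + 3.0 * x_train + rng.normal(0.0, 0.1, size=50)

# Forward fit: estimate g as a line (slope b, intercept a)
b, a = np.polyfit(x_train, y_train, 1)

def inverse_predict(y_new):
    """Classical calibration: invert the fitted forward model for x."""
    return (y_new - a) / b

x_hat = inverse_predict(17.0)   # the true x generating y = 17 would be 5
```

The paper's point is that different inverse methods (classical inversion, Bayesian approaches, and so on) can disagree, and the spread of their answers is itself evidence about predictive capability.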
NASA Astrophysics Data System (ADS)
Melchiorre, C.; Castellanos Abella, E. A.; van Westen, C. J.; Matteucci, M.
2011-04-01
This paper describes a procedure for landslide susceptibility assessment based on artificial neural networks, and focuses on the estimation of the prediction capability, robustness, and sensitivity of susceptibility models. The study is carried out in the Guantanamo Province of Cuba, where 186 landslides were mapped using photo-interpretation. Twelve conditioning factors were mapped including geomorphology, geology, soils, landuse, slope angle, slope direction, internal relief, drainage density, distance from roads and faults, rainfall intensity, and ground peak acceleration. A methodology was used that subdivided the database in 3 subsets. A training set was used for updating the weights. A validation set was used to stop the training procedure when the network started losing generalization capability, and a test set was used to calculate the performance of the network. A 10-fold cross-validation was performed in order to show that the results are repeatable. The prediction capability, the robustness analysis, and the sensitivity analysis were tested on 10 mutually exclusive datasets. The results show that by means of artificial neural networks it is possible to obtain models with high prediction capability and high robustness, and that an exploration of the effect of the individual variables is possible, even if they are considered as a black-box model.
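The mutually exclusive subsets used above can be generated as in this sketch of a 10-fold split; the fold-assignment logic is generic, not the authors' code:

```python
import numpy as np

def k_fold_indices(n_samples, k, seed=0):
    """Shuffle sample indices and split them into k mutually exclusive folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    return np.array_split(idx, k)

folds = k_fold_indices(n_samples=186, k=10)   # 186 mapped landslides, 10-fold CV
# In each round, one fold is held out as the test set and the rest
# are further split into training and early-stopping validation sets.
test_fold = folds[0]
train_val = np.concatenate(folds[1:])
```

Repeating the train/validate/test cycle over all 10 folds is what lets the authors report prediction capability and robustness as distributions rather than single numbers.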
A publicly available toxicogenomics capability for supporting predictive toxicology and meta-analysis depends on availability of gene expression data for chemical treatment scenarios, the ability to locate and aggregate such information by chemical, and broad data coverage within...
Good Laboratory Practices of Materials Testing at NASA White Sands Test Facility
NASA Technical Reports Server (NTRS)
Hirsch, David; Williams, James H.
2005-01-01
An approach to good laboratory practices of materials testing at NASA White Sands Test Facility is presented. The contents include: 1) Current approach; 2) Data analysis; and 3) Improvements sought by WSTF to enhance the diagnostic capability of existing methods.
Development of thermal models of footwear using finite element analysis.
Covill, D; Guan, Z W; Bailey, M; Raval, H
2011-03-01
Thermal comfort is increasingly becoming a crucial factor to be considered in footwear design. The climate inside a shoe is controlled by thermal and moisture conditions and is crucial to attain comfort. Research undertaken has shown that thermal conditions play a dominant role in shoe climate. Development of thermal models that are capable of predicting in-shoe temperature distributions is an effective way forward to undertake extensive parametric studies to assist optimized design. In this paper, two-dimensional and three-dimensional thermal models of in-shoe climate were developed using finite element analysis through commercial code Abaqus. The thermal material properties of the upper shoe, sole, and air were considered. Dry heat flux from the foot was calculated on the basis of typical blood flow in the arteries on the foot. Using the thermal models developed, in-shoe temperatures were predicted to cover various locations for controlled ambient temperatures of 15, 25, and 35 degrees C respectively. The predicted temperatures were compared with multipoint measured temperatures through microsensor technology. Reasonably good correlation was obtained, with averaged errors of 6, 2, and 1.5 per cent, based on the averaged in-shoe temperature for the above three ambient temperatures. The models can be further used to help design shoes with optimized thermal comfort.
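A far simpler stand-in for the finite element models: steady 1-D conduction through stacked layers gives an in-shoe temperature profile analytically. The layer thicknesses and conductivities below are hypothetical, not the paper's material data:

```python
def series_conduction(t_foot, t_ambient, layers):
    """Steady 1-D conduction through stacked layers (thickness m, conductivity W/mK).
    Returns the heat flux (W/m^2) and the temperature at each layer interface."""
    resistances = [d / k for d, k in layers]   # R = thickness / conductivity
    q = (t_foot - t_ambient) / sum(resistances)
    temps = [t_foot]
    for r in resistances:
        temps.append(temps[-1] - q * r)        # temperature drop across each layer
    return q, temps

# Hypothetical sock + shoe-upper layers between a 33 C foot and 15 C ambient air
q, temps = series_conduction(33.0, 15.0, [(0.002, 0.05), (0.004, 0.08)])
```

The full 2-D/3-D FEA models are needed because real in-shoe geometry, trapped air, and the spatially varying foot heat flux break this simple series assumption.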
Methods to improve traffic flow and noise exposure estimation on minor roads.
Morley, David W; Gulliver, John
2016-09-01
Address-level estimates of exposure to road traffic noise for epidemiological studies depend on obtaining data on annual average daily traffic (AADT) flows that are both accurate and have good geographical coverage. National agencies often have reliable traffic count data for major roads, but for residential areas served by minor roads, especially at national scale, such information is often not available or incomplete. Here we present a method to predict AADT at the national scale for minor roads, using a routing algorithm within a geographical information system (GIS) to rank roads by importance based on simulated journeys through the road network. From a training set of known minor road AADT, routing importance is used to predict AADT on all UK minor roads in a regression model along with the road class, urban or rural location, and AADT on the nearest major road. Validation with both independent traffic counts and noise measurements shows that this method gives a considerable improvement in noise prediction capability when compared to models that do not give adequate consideration to minor road variability (Spearman's rho increases from 0.46 to 0.72). This has significance for epidemiological cohort studies attempting to link noise exposure to adverse health outcomes. Copyright © 2016 Elsevier Ltd. All rights reserved.
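The regression step can be sketched as a log-linear fit of AADT on the predictors named above. The road data and predictor values are hypothetical, and the real model also includes road class:

```python
import numpy as np

# Hypothetical training roads: [routing importance, urban (1/0), nearest major-road AADT]
X = np.array([
    [10, 1, 20000], [3, 0, 5000], [8, 1, 15000],
    [1, 0, 3000],  [6, 1, 12000], [2, 0, 4000],
], dtype=float)
aadt = np.array([4000, 900, 3200, 400, 2500, 600], dtype=float)

# Design matrix with an intercept column; fit log(AADT) by least squares
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, np.log(aadt), rcond=None)

def predict_aadt(importance, urban, major_aadt):
    """Predict minor-road AADT for unseen predictor values."""
    x = np.array([1.0, importance, urban, major_aadt])
    return float(np.exp(x @ coef))

est = predict_aadt(5, 1, 10000)
```

Fitting in log space keeps predictions positive and reflects that traffic flows span orders of magnitude between quiet cul-de-sacs and busy minor through-routes.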
Using Data Assimilation Methods for Prediction of Solar Activity
NASA Technical Reports Server (NTRS)
Kitiashvili, Irina N.; Collins, Nancy S.
2017-01-01
The variable solar magnetic activity known as the 11-year solar cycle has the longest history of solar observations. These cycles dramatically affect conditions in the heliosphere and the Earth's space environment. Our current understanding of the physical processes that make up global solar dynamics and the dynamo that generates the magnetic fields is sketchy, resulting in unrealistic descriptions of the solar cycles in theoretical and numerical models. The absence of long-term observations of solar interior dynamics and photospheric magnetic fields hinders development of accurate dynamo models and their calibration. In such situations, mathematical data assimilation methods provide an optimal approach for combining the available observational data and their uncertainties with theoretical models in order to estimate the state of the solar dynamo and predict future cycles. In this presentation, we will discuss the implementation and performance of an Ensemble Kalman Filter data assimilation method based on the Parker migratory dynamo model, complemented by the equation of magnetic helicity conservation and long-term sunspot data series. This approach has allowed us to reproduce the general properties of solar cycles and has already demonstrated a good predictive capability for the current cycle, 24. We will discuss further development of this approach, which includes a more sophisticated dynamo model and synoptic magnetogram data, and which employs the DART Data Assimilation Research Testbed.
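The Ensemble Kalman Filter analysis step can be sketched for a directly observed scalar state; the ensemble values and observation below are hypothetical sunspot numbers, not the dynamo model's state vector:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_var, seed=0):
    """EnKF analysis step for a scalar state observed directly: each member is
    nudged toward a perturbed copy of the observation by the Kalman gain."""
    rng = np.random.default_rng(seed)
    p = np.var(ensemble, ddof=1)                  # forecast spread from the ensemble
    gain = p / (p + obs_var)
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=len(ensemble))
    return ensemble + gain * (perturbed - ensemble)

# Hypothetical ensemble of dynamo-model forecasts vs. one sunspot observation
ensemble = np.array([80.0, 95.0, 110.0, 100.0, 90.0])
analysis = enkf_update(ensemble, obs=120.0, obs_var=25.0)
```

Estimating the forecast covariance from the ensemble itself is what lets the EnKF work with a nonlinear dynamo model where no analytic covariance propagation is available.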
Creep and stress relaxation modeling of polycrystalline ceramic fibers
NASA Technical Reports Server (NTRS)
Dicarlo, James A.; Morscher, Gregory N.
1994-01-01
A variety of high performance polycrystalline ceramic fibers are currently being considered as reinforcement for high temperature ceramic matrix composites. However, under mechanical loading above 800 C, these fibers display creep related instabilities which can result in detrimental changes in composite dimensions, strength, and internal stress distributions. As a first step toward understanding these effects, this study examines the validity of a mechanism-based empirical model which describes primary stage tensile creep and stress relaxation of polycrystalline ceramic fibers as independent functions of time, temperature, and applied stress or strain. To verify these functional dependencies, a simple bend test is used to measure stress relaxation for four types of commercial ceramic fibers for which direct tensile creep data are available. These fibers include both nonoxide (SCS-6, Nicalon) and oxide (PRD-166, FP) compositions. The results of the Bend Stress Relaxation (BSR) test not only confirm the stress, time, and temperature dependencies predicted by the model, but also allow measurement of model empirical parameters for the four fiber types. In addition, comparison of model tensile creep predictions based on the BSR test results with the literature data shows good agreement, supporting both the predictive capability of the model and the use of the BSR test as a simple method for parameter determination for other fibers.
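A model with separable dependences on time, temperature, and stress can be sketched as a power-law/Arrhenius form, with the time exponent recovered from a log-log fit. All parameter values below are hypothetical, not the paper's fitted constants:

```python
import numpy as np

def primary_creep(t, stress, temp, A=1e-3, n=2.0, p=0.4, Q=100e3, R=8.314):
    """Empirical primary-stage creep strain as separable functions of time (h),
    applied stress (MPa), and temperature (K); all parameters hypothetical."""
    return A * stress**n * t**p * np.exp(-Q / (R * temp))

# Recover the time exponent p from synthetic "measurements" via a log-log fit,
# which is how separable dependencies can be checked one variable at a time.
times = np.array([1.0, 10.0, 100.0, 1000.0])
strains = primary_creep(times, stress=100.0, temp=1400.0)
slope, _ = np.polyfit(np.log(times), np.log(strains), 1)
```

Because the dependencies are separable, each exponent (time, stress) and the activation energy can be extracted from a straight-line fit with the other variables held fixed, which is what makes the BSR test a practical parameter-determination tool.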
Predictors of future success in otolaryngology residency applicants.
Chole, Richard A; Ogden, M Allison
2012-08-01
To evaluate the information available about otolaryngology residency applicants for factors that may predict future success as an otolaryngologist. Retrospective review of residency applications; survey of resident graduates and otolaryngology clinical faculty. Otolaryngology residency program. Otolaryngology program graduates from 2001 to 2010 and current clinical faculty from Barnes-Jewish Hospital/Washington University School of Medicine. Overall ratings of the otolaryngology graduates by clinical faculty (on a 5-point scale) were compared with the resident application attributes that might predict success. The application factors studied were United States Medical Licensing Examination Step 1 score, Alpha Omega Alpha Honor Medical Society election, medical school grades, letters of recommendation, rank of the medical school, extracurricular activities, residency interview, and experience as an acting intern. Forty-six graduates were included in the study. The overall faculty rating of the residents showed good interrater reliability. The objective factors, letters of recommendation, experience as an acting intern, and musical excellence showed no correlation with higher faculty rating. Rank of the medical school and faculty interview weakly correlated with faculty rating. Having excelled in a team sport correlated with higher faculty rating. Many of the application factors typically used during otolaryngology residency candidate selection may not be predictive of future capabilities as a clinician. Prior excellence in a team sport may suggest continued success in the health care team.
Creep and stress relaxation modeling of polycrystalline ceramic fibers
NASA Technical Reports Server (NTRS)
Dicarlo, James A.; Morscher, Gregory N.
1991-01-01
A variety of high performance polycrystalline ceramic fibers are currently being considered as reinforcement for high temperature ceramic matrix composites. However, under mechanical loading above 800 C, these fibers display creep-related instabilities which can result in detrimental changes in composite dimensions, strength, and internal stress distributions. As a first step toward understanding these effects, this study examines the validity of a mechanism-based empirical model which describes primary stage tensile creep and stress relaxation of polycrystalline ceramic fibers as independent functions of time, temperature, and applied stress or strain. To verify these functional dependencies, a simple bend test is used to measure stress relaxation for four types of commercial ceramic fibers for which direct tensile creep data are available. These fibers include both nonoxide (SCS-6, Nicalon) and oxide (PRD-166, FP) compositions. The results of the bend stress relaxation (BSR) test not only confirm the stress, time, and temperature dependencies predicted by the model but also allow measurement of model empirical parameters for the four fiber types. In addition, comparison of model predictions and BSR test results with the literature tensile creep data shows good agreement, supporting both the predictive capability of the model and the use of the BSR test as a simple method for parameter determination for other fibers.
Dunn, Aaron; Dingreville, Remi; Capolungo, Laurent
2015-11-27
A hierarchical methodology is introduced to predict the effects of radiation damage and irradiation conditions on the yield stress and internal stress heterogeneity developments in polycrystalline α-Fe. Simulations of defect accumulation under displacement cascade damage conditions are performed using spatially resolved stochastic cluster dynamics. The resulting void and dislocation loop concentrations and average sizes are then input into a crystal plasticity formulation that accounts for the change in critical resolved shear stress due to the presence of radiation-induced defects. The simulated polycrystalline tensile tests show a good match to experimental hardening data over a wide range of irradiation doses. With this capability, stress heterogeneity development and the effect of dose rate on hardening are investigated. The model predicts increased hardening at higher dose rates for low total doses. By contrast, at doses above 10^-2 dpa, when cascade overlap becomes significant, the model does not predict significantly different hardening for different dose rates. In conclusion, the development of such a model enables simulation of radiation damage accumulation and associated hardening without relying on experimental data as an input, under a wide range of irradiation conditions such as dose, dose rate, and temperature.
A correlational approach to predicting operator status
NASA Technical Reports Server (NTRS)
Shingledecker, Clark A.
1988-01-01
This paper discusses a research approach for identifying and validating candidate physiological and behavioral parameters which can be used to predict the performance capabilities of aircrew and other system operators. In this methodology, concurrent and advance correlations are computed between predictor values and criterion performance measures. Continuous performance and sleep loss are used as stressors to promote performance variation. Preliminary data are presented which suggest dependence of prediction capability on the resource allocation policy of the operator.
Health Capability: Conceptualization and Operationalization
2010-01-01
Current theoretical approaches to bioethics and public health ethics propose varied justifications as the basis for health care and public health, yet none captures a fundamental reality: people seek good health and the ability to pursue it. Existing models do not effectively address these twin goals. The approach I espouse captures both of these orientations through a concept here called health capability. Conceptually, health capability illuminates the conditions that affect health and one's ability to make health choices. By respecting the health consequences individuals face and their health agency, health capability offers promise for finding a balance between paternalism and autonomy. I offer a conceptual model of health capability and present a health capability profile to identify and address health capability gaps. PMID:19965570
NOAA Climate Program Office Contributions to National ESPC
NASA Astrophysics Data System (ADS)
Higgins, W.; Huang, J.; Mariotti, A.; Archambault, H. M.; Barrie, D.; Lucas, S. E.; Mathis, J. T.; Legler, D. M.; Pulwarty, R. S.; Nierenberg, C.; Jones, H.; Cortinas, J. V., Jr.; Carman, J.
2016-12-01
NOAA is one of five federal agencies (DOD, DOE, NASA, NOAA, and NSF) that signed an updated charter in 2016 to partner on the National Earth System Prediction Capability (ESPC). Situated within NOAA's Office of Oceanic and Atmospheric Research (OAR), NOAA Climate Program Office (CPO) programs contribute significantly to the National ESPC goals and activities. This presentation will provide an overview of CPO contributions to National ESPC. First, we will discuss selected CPO research and transition activities that directly benefit the ESPC coupled model prediction capability, including: the North American Multi-Model Ensemble (NMME) seasonal prediction system; the Subseasonal Experiment (SubX) project to test real-time subseasonal ensemble prediction systems; and improvements to the NOAA operational Climate Forecast System (CFS), including software infrastructure and data assimilation. Next, we will show how CPO's foundational research activities are advancing future ESPC capabilities. Highlights will include: the Tropical Pacific Observing System (TPOS), which provides the basis for predicting climate on subseasonal to decadal timescales; Subseasonal-to-Seasonal (S2S) processes and predictability studies to improve understanding, modeling, and prediction of the MJO; an Arctic Research Program to address urgent needs for advancing monitoring and prediction capabilities in this major area of concern; and advances towards building an experimental multi-decadal prediction system through studies of the Atlantic Meridional Overturning Circulation (AMOC). Finally, CPO has embraced Integrated Information Systems (IISs) that build on the innovation of programs such as the National Integrated Drought Information System (NIDIS) to develop and deliver end-to-end environmental information for key societal challenges (e.g., extreme heat, coastal flooding). These contributions will help the National ESPC better understand and address societal needs and decision support requirements.
USM3D Analysis of Low Boom Configuration
NASA Technical Reports Server (NTRS)
Carter, Melissa B.; Campbell, Richard L.; Nayani, Sudheer N.
2011-01-01
In the past few years, considerable improvement was made in NASA's in-house boom prediction capability. As part of this improved capability, the USM3D Navier-Stokes flow solver, when combined with a suitable unstructured grid, went from accurately predicting boom signatures at 1 body length to 10 body lengths. Since that time, the research emphasis has shifted from analysis to the design of supersonic configurations with boom signature mitigation. In order to design an aircraft, the techniques for accurately predicting boom and drag need to be determined. This paper compares CFD results with the wind tunnel experimental results conducted on a Gulfstream reduced boom and drag configuration. Two different wind-tunnel models were designed and tested for drag and boom data. The goal of this study was to assess USM3D's capability for predicting both boom and drag characteristics. Overall, USM3D coupled with a grid that was sheared and stretched was able to reasonably predict the boom signature. The computational drag polar matched the experimental results for lift coefficients above 0.1, despite some mismatch in the predicted lift-curve slope.
NASA Astrophysics Data System (ADS)
Li, Zhi'ang; Wang, Jianlin; Liu, Min; Chen, Tong; Chen, Jifang; Ge, Wen; Fu, Zhengping; Peng, Ranran; Zhai, Xiaofang; Lu, Yalin
2018-04-01
Residues of organic dyes in industrial effluents cause severe water system pollution. Although several methods, such as biodegradation and activated carbon adsorption, are available for treating these effluents before their discharge into waterbodies, secondary pollution by adsorbents and degradation products remains an issue. Therefore, new materials should be identified to solve this problem. In this work, CoFe2O4-SiO2 core-shell structures were synthesized using an improved Stöber method by coating mesoporous silica onto CoFe2O4 nanoparticles. The specific surface areas of the synthesized particles range from 30 m2/g to 150 m2/g and vary according to the dosage of tetraethoxysilane. Such core-shell nanoparticles have the following advantages for treating dye-laden industrial effluents: good adsorption capability, above-room-temperature magnetic recycling capability, and heat-enduring stability. In adsorption tests with methylene blue, a typical dye, the core-shell-structured particles show a good adsorption capability of approximately 33 mg/L. The particles are easily and completely collected by magnets, owing to the magnetic property of the CoFe2O4 core. Heat treatment can burn out the adsorbed dyes, and good adsorption performance is sustained even after several heat-treatment cycles. This property overcomes the common problem of particles with an Fe3O4 core, whereby Fe3O4 is oxidized to nonmagnetic α-Fe2O3 at the burning temperature. We also designed a miniature effluent-treating pipeline, which demonstrates the material's application potential.
Predicting Story Goodness Performance from Cognitive Measures Following Traumatic Brain Injury
ERIC Educational Resources Information Center
Le, Karen; Coelho, Carl; Mozeiko, Jennifer; Krueger, Frank; Grafman, Jordan
2012-01-01
Purpose: This study examined the prediction of performance on measures of the Story Goodness Index (SGI; Le, Coelho, Mozeiko, & Grafman, 2011) from executive function (EF) and memory measures following traumatic brain injury (TBI). It was hypothesized that EF and memory measures would significantly predict SGI outcomes. Method: One hundred…
Development of clinical decision rules to predict recurrent shock in dengue
2013-01-01
Introduction Mortality from dengue infection is mostly due to shock. Among dengue patients with shock, approximately 30% have recurrent shock that requires a treatment change. Here, we report the development of a clinical rule for use during a patient's first shock episode to predict a recurrent shock episode. Methods The study was conducted at the Center for Preventive Medicine in Vinh Long province and the Children's Hospital No. 2 in Ho Chi Minh City, Vietnam. We included 444 dengue patients with shock, 126 of whom had recurrent shock (28%). Univariate and multivariate analyses and a preprocessing method were used to evaluate and select 14 clinical and laboratory signs recorded at shock onset. Five variables (admission day, purpura/ecchymosis, ascites/pleural effusion, blood platelet count, and pulse pressure) were finally trained and validated by a 10-fold cross-validation strategy with 10 repetitions, using a logistic regression model. Results The results showed that shorter admission day (fewer days prior to admission), purpura/ecchymosis, ascites/pleural effusion, low platelet count, and narrow pulse pressure were independently associated with recurrent shock. Our logistic prediction model was capable of predicting recurrent shock when compared to the null method (P < 0.05) and was not outperformed by other prediction models. Our final scoring rule provided relatively good accuracy (AUC, 0.73; sensitivity and specificity, 68%). Score points derived from the logistic prediction model revealed identical accuracy, with AUCs at 0.73. Using a cutoff value greater than −154.5, our simple scoring rule showed a sensitivity of 68.3% and a specificity of 68.2%. Conclusions Our simple clinical rule is not meant to replace clinical judgment, but to help clinicians predict recurrent shock during a patient's first dengue shock episode. PMID:24295509
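The validation strategy described above (a logistic regression model scored by repeated 10-fold cross-validation) can be sketched as follows. The data here are a synthetic stand-in for the five predictors, not the dengue cohort, and the class balance merely mimics the reported 28% recurrence rate.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Synthetic stand-in for the five predictors (admission day,
# purpura/ecchymosis, ascites/pleural effusion, platelet count,
# pulse pressure), with roughly 28% positive cases.
X, y = make_classification(n_samples=444, n_features=5, n_informative=4,
                           n_redundant=0, weights=[0.72, 0.28],
                           random_state=1)

model = LogisticRegression(max_iter=1000)
# 10-fold cross-validation, repeated 10 times, scored by AUC
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=1)
aucs = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"mean AUC over 10x10-fold CV: {aucs.mean():.2f}")
```

Stratified folds keep the minority (recurrent-shock) fraction stable across splits, which matters when the positive class is under 30% of the sample.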
Diagnostic Capability of Spectral Domain Optical Coherence Tomography for Glaucoma
Wu, Huijuan; de Boer, Johannes F.; Chen, Teresa C.
2012-01-01
Purpose To determine the diagnostic capability of spectral domain optical coherence tomography (OCT) in glaucoma patients with visual field (VF) defects. Design Prospective, cross-sectional study. Methods Setting Participants were recruited from a university hospital clinic. Study Population One eye of 85 normal subjects and 61 glaucoma patients [with average VF mean deviation (MD) of -9.61 ± 8.76 dB] were randomly selected for the study. A subgroup of the glaucoma patients with early VF defects was calculated separately. Observation Procedures Spectralis OCT circular scans were performed to obtain peripapillary retinal nerve fiber layer (RNFL) thicknesses. The RNFL diagnostic parameters based on the normative database were used alone or in combination for identifying glaucomatous RNFL thinning. Main Outcome Measures To evaluate diagnostic performance, calculations included areas under the receiver operating characteristic curve (AROC), sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio, and negative likelihood ratio. Results Overall RNFL thickness had the highest AROC value (0.952 for all patients, 0.895 for the early glaucoma subgroup). For all patients, the highest sensitivity (98.4%, CI 96.3-100%) was achieved by using two criteria: ≥1 RNFL sectors being abnormal at the < 5% level, and overall classification of borderline or outside normal limits, with specificities of 88.9% (CI 84.0-94.0%) and 87.1% (CI 81.6-92.5%) respectively for these two criteria. Conclusions Statistical parameters for evaluating the diagnostic performance of the Spectralis spectral domain OCT were good for early perimetric glaucoma and excellent for moderately-advanced perimetric glaucoma. PMID:22265147
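All of the diagnostic measures reported above derive from a single 2x2 confusion table. A minimal sketch, with hypothetical counts chosen only to be consistent with the reported 98.4% sensitivity and 87.1% specificity (the actual per-criterion tables are not reproduced in the abstract):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-accuracy measures from a 2x2 table."""
    sens = tp / (tp + fn)           # sensitivity (true positive rate)
    spec = tn / (tn + fp)           # specificity (true negative rate)
    ppv = tp / (tp + fp)            # positive predictive value
    npv = tn / (tn + fn)            # negative predictive value
    lr_pos = sens / (1 - spec)      # positive likelihood ratio
    lr_neg = (1 - sens) / spec      # negative likelihood ratio
    return {"sensitivity": sens, "specificity": spec, "PPV": ppv,
            "NPV": npv, "LR+": lr_pos, "LR-": lr_neg}

# Hypothetical counts: 60 of 61 glaucoma eyes flagged,
# 11 of 85 normal eyes falsely flagged.
m = diagnostic_metrics(tp=60, fp=11, fn=1, tn=74)
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the glaucoma prevalence in the study sample, so they do not transfer directly to a screening population.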
Mechanical behaviour of TWIP steel under shear loading
NASA Astrophysics Data System (ADS)
Vincze, G.; Butuc, M. C.; Barlat, F.
2016-08-01
Twinning-induced plasticity (TWIP) steels are very good candidates for automotive industry applications because they potentially offer large energy absorption before failure, owing to their exceptional strain hardening capability and high strength. However, their behaviour is drastically influenced by the loading conditions. In this work, the mechanical behaviour of a TWIP steel sheet sample was investigated at room temperature under monotonic and reverse simple shear loading. It was shown that all the expected features of load reversal, such as the Bauschinger effect, transient strain hardening with a high rate, and permanent softening, depend on the prestrain level. This is in agreement with the fact that these effects, which occur during reloading, are related to the rearrangement of the dislocation structure induced during the predeformation. The homogeneous anisotropic hardening (HAH) approach proposed by Barlat et al. (2011) [1] was successfully employed to predict the experimental results.
Measurements and Calculations of Halfraum Radiation Drives at the Omega Laser
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacLaren, S A; Back, C A; Hammer, J H
2005-01-06
Thin-walled gold halfraums are a common choice for producing x-ray drives in experiments at high-power laser facilities. At the Omega Laser, we use 10 kJ of laser energy in a two-pulse sequence to generate halfraum drive temperatures of 160-190 eV for ~3 ns. This type of drive is well characterized and reproducible, with characterization of the drive radiation temperature typically performed using the Dante diagnostic. Additionally, calibrated Photoconductive Diamond Detectors (PCDs) are used to measure the drive when it is desirable to utilize the Dante elsewhere in the experiment. Measurements of halfraum drives from both Dante and PCDs are compared with calculations, with good agreement. This agreement lends the calculations a predictive capability in designing further experiments utilizing halfraum drives.
Laminar Heating Validation of the OVERFLOW Code
NASA Technical Reports Server (NTRS)
Lillard, Randolph P.; Dries, Kevin M.
2005-01-01
OVERFLOW, a structured finite-difference code, was applied to the solution of hypersonic laminar flow over several configurations assuming perfect gas chemistry. By testing OVERFLOW's capabilities over several configurations encompassing a variety of flow physics, a validated laminar heating capability was produced. Configurations tested were a flat plate at 0 degrees incidence, a sphere, a compression ramp, and the X-38 re-entry vehicle. This variety of test cases shows the ability of the code to predict boundary layer flow, stagnation heating, laminar separation with re-attachment heating, and complex flow over a three-dimensional body. In addition, grid resolution studies were performed to give recommendations for the correct number of off-body points to be applied to generic problems and for wall-spacing values to capture heat transfer and skin friction. Numerical results show good agreement with the test data for all the configurations.
Gao, Li; Shigeta, Kazuki; Vazquez-Guardado, Abraham; Progler, Christopher J; Bogart, Gregory R; Rogers, John A; Chanda, Debashis
2014-06-24
We report advances in materials, designs, and fabrication schemes for large-area negative index metamaterials (NIMs) in multilayer "fishnet" layouts that offer negative index behavior at wavelengths into the visible regime. A simple nanoimprinting scheme capable of implementation using standard, widely available tools followed by a subtractive, physical liftoff step provides an enabling route for the fabrication. Computational analysis of reflection and transmission measurements suggests that the resulting structures offer negative index of refraction that spans both the visible wavelength range (529-720 nm) and the telecommunication band (1.35-1.6 μm). The data reveal that these large (>75 cm(2)) imprinted NIMs have predictable behaviors, good spatial uniformity in properties, and figures of merit as high as 4.3 in the visible range.
An Assessment of Current Fan Noise Prediction Capability
NASA Technical Reports Server (NTRS)
Envia, Edmane; Woodward, Richard P.; Elliott, David M.; Fite, E. Brian; Hughes, Christopher E.; Podboy, Gary G.; Sutliff, Daniel L.
2008-01-01
In this paper, the results of an extensive assessment exercise carried out to establish the current state of the art for predicting fan noise at NASA are presented. Representative codes in the empirical, analytical, and computational categories were exercised and assessed against a set of benchmark acoustic data obtained from wind tunnel tests of three model scale fans. The chosen codes were ANOPP, representing an empirical capability, RSI, representing an analytical capability, and LINFLUX, representing a computational aeroacoustics capability. The selected benchmark fans cover a wide range of fan pressure ratios and fan tip speeds, and are representative of modern turbofan engine designs. The assessment results indicate that the ANOPP code can predict fan noise spectrum to within 4 dB of the measurement uncertainty band on a third-octave basis for the low and moderate tip speed fans except at extreme aft emission angles. The RSI code can predict fan broadband noise spectrum to within 1.5 dB of the experimental uncertainty band provided the rotor-only contribution is taken into account. The LINFLUX code can predict interaction tone power levels to within experimental uncertainties at low and moderate fan tip speeds, but could deviate by as much as 6.5 dB outside the experimental uncertainty band at the highest tip speeds in some cases.
1979-06-11
Research has been conducted into the use of diamond as a TWT helix support material to increase the average output power capability of broadband high-frequency traveling-wave tubes. Because the unifilar helix is the one TWT circuit capable of broadband operation with good efficiency, methods to increase its power dissipation capability are of particular interest.
Predictive value and construct validity of the work functioning screener-healthcare (WFS-H).
Boezeman, Edwin J; Nieuwenhuijsen, Karen; Sluiter, Judith K
2016-05-25
To test the predictive value and convergent construct validity of a 6-item work functioning screener (WFS-H). Healthcare workers (249 nurses) completed a questionnaire containing the work functioning screener (WFS-H) and a work functioning instrument (NWFQ) measuring the following: cognitive aspects of task execution and general incidents, avoidance behavior, conflicts and irritation with colleagues, impaired contact with patients and their family, and level of energy and motivation. Productivity and mental health were also measured. Negative and positive predictive values, AUC values, and sensitivity and specificity were calculated to examine the predictive value of the screener. Correlation analysis was used to examine the construct validity. The screener had good predictive value: a negative screener score is a strong indicator of work functioning not hindered by mental health problems (negative predictive values: 94%-98%; positive predictive values: 21%-36%; AUC: .64-.82; sensitivity: 42%-76%; specificity: 85%-87%). The screener has good construct validity, shown by moderate but significant (p < .001) associations with productivity (r = .51), mental health (r = .48), and distress (r = .47). The screener (WFS-H) had good predictive value and good construct validity. Its score offers occupational health professionals a helpful preliminary insight into the work functioning of healthcare workers.
Breast cancer prognosis by combinatorial analysis of gene expression data.
Alexe, Gabriela; Alexe, Sorin; Axelrod, David E; Bonates, Tibérius O; Lozina, Irina I; Reiss, Michael; Hammer, Peter L
2006-01-01
The potential of applying data analysis tools to microarray data for diagnosis and prognosis is illustrated on the recent breast cancer dataset of van 't Veer and coworkers. We re-examine that dataset using the novel technique of logical analysis of data (LAD), with the double objective of discovering patterns characteristic for cases with good or poor outcome, using them for accurate and justifiable predictions; and deriving novel information about the role of genes, the existence of special classes of cases, and other factors. Data were analyzed using the combinatorics and optimization-based method of LAD, recently shown to provide highly accurate diagnostic and prognostic systems in cardiology, cancer proteomics, hematology, pulmonology, and other disciplines. LAD identified a subset of 17 of the 25,000 genes, capable of fully distinguishing between patients with poor, respectively good prognoses. An extensive list of 'patterns' or 'combinatorial biomarkers' (that is, combinations of genes and limitations on their expression levels) was generated, and 40 patterns were used to create a prognostic system, shown to have 100% and 92.9% weighted accuracy on the training and test sets, respectively. The prognostic system uses fewer genes than other methods, and has similar or better accuracy than those reported in other studies. Out of the 17 genes identified by LAD, three (respectively, five) were shown to play a significant role in determining poor (respectively, good) prognosis. Two new classes of patients (described by similar sets of covering patterns, gene expression ranges, and clinical features) were discovered. As a by-product of the study, it is shown that the training and the test sets of van 't Veer have differing characteristics. 
The study shows that LAD provides an accurate and fully explanatory prognostic system for breast cancer using genomic data (that is, a system that, in addition to predicting good or poor prognosis, provides an individualized explanation of the reasons for that prognosis for each patient). Moreover, the LAD model provides valuable insights into the roles of individual and combinatorial biomarkers, allows the discovery of new classes of patients, and generates a vast library of biomedical research hypotheses.
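The LAD notion of a "pattern" — a conjunction of interval conditions on a few genes that covers many cases of one outcome class and few of the other — can be made concrete with a toy example. The data below are synthetic, not the van 't Veer dataset, and the single-gene pattern is purely illustrative.

```python
import numpy as np

def covers(pattern, x):
    """True if sample x satisfies every interval condition in the pattern."""
    return all(lo <= x[j] <= hi for j, (lo, hi) in pattern.items())

def coverage(pattern, X, y, label):
    """Fraction of cases of the given class covered by the pattern."""
    hits = [covers(pattern, row) for row, lab in zip(X, y) if lab == label]
    return sum(hits) / max(1, len(hits))

rng = np.random.default_rng(2)
# Toy expression matrix: 40 cases x 5 genes; class 1 has gene 0 upregulated
X = rng.normal(0.0, 1.0, size=(40, 5))
y = np.array([0] * 20 + [1] * 20)
X[y == 1, 0] += 3.0

pattern = {0: (1.5, np.inf)}      # "gene 0 expression above 1.5"
pos_cov = coverage(pattern, X, y, label=1)
neg_cov = coverage(pattern, X, y, label=0)
```

A LAD model aggregates many such patterns; a case is classified by which class's patterns cover it, which is what makes each prediction individually explainable.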
Fire spread probabilities for experimental beds composed of mixedwood boreal forest fuels
M.B. Dickinson; E.A. Johnson; R. Artiaga
2013-01-01
Although fuel characteristics are assumed to have an important impact on fire regimes through their effects on extinction dynamics, limited capabilities exist for predicting whether a fire will spread in mixedwood boreal forest surface fuels. To improve predictive capabilities, we conducted 347 no-wind, laboratory test burns in surface fuels collected from the mixed-...
Evaluating the habitat capability model for Merriam's turkeys
Mark A. Rumble; Stanley H. Anderson
1995-01-01
Habitat capability (HABCAP) models for wildlife assist land managers in predicting the consequences of their management decisions. Models must be tested and refined prior to using them in management planning. We tested the predicted patterns of habitat selection of the R2 HABCAP model using observed patterns of habitats selected by radio-marked Merriam's turkey (
Collective action problem in heterogeneous groups
Gavrilets, Sergey
2015-01-01
I review the theoretical and experimental literature on the collective action problem in groups whose members differ in various characteristics affecting individual costs, benefits and preferences in collective actions. I focus on evolutionary models that predict how individual efforts and fitnesses, group efforts and the amount of produced collective goods depend on the group's size and heterogeneity, as well as on the benefit and cost functions and parameters. I consider collective actions that aim to overcome the challenges from nature or win competition with neighbouring groups of conspecifics. I show that the largest contributors towards production of collective goods will typically be group members with the highest stake in it or for whom the effort is least costly, or those who have the largest capability or initial endowment. Under some conditions, such group members end up with smaller net pay-offs than the rest of the group. That is, they effectively behave as altruists. With weak nonlinearity in benefit and cost functions, the group effort typically decreases with group size and increases with within-group heterogeneity. With strong nonlinearity in benefit and cost functions, these patterns are reversed. I discuss the implications of theoretical results for animal behaviour, human origins and psychology. PMID:26503689
Recursive feature selection with significant variables of support vectors.
Tsai, Chen-An; Huang, Chien-Hsun; Chang, Ching-Wei; Chen, Chun-Houh
2012-01-01
The development of DNA microarrays allows researchers to screen thousands of genes simultaneously and helps determine high- and low-expression-level genes in normal and disease tissues. Selecting relevant genes for cancer classification is an important issue. Most gene selection methods use univariate ranking criteria and arbitrarily choose a threshold for selecting genes. However, the parameter setting may not be compatible with the selected classification algorithms. In this paper, we propose a new gene selection method (SVM-t) based on the use of t-statistics embedded in a support vector machine. We compared the performance to two similar SVM-based methods: SVM recursive feature elimination (SVMRFE) and recursive support vector machine (RSVM). The three methods were compared based on extensive simulation experiments and analyses of two published microarray datasets. In the simulation experiments, we found that the proposed method is more robust in selecting informative genes than SVMRFE and RSVM and capable of attaining good classification performance when the variations of informative and noninformative genes are different. In the analysis of the two microarray datasets, the proposed method yields better performance in identifying fewer genes with good prediction accuracy, compared to SVMRFE and RSVM.
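The SVM-RFE baseline that SVM-t is compared against is available off the shelf. A minimal sketch on synthetic "microarray" data follows; the SVM-t t-statistic embedding itself is not reproduced here, and all sizes are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# Synthetic "microarray": 100 samples, 500 genes, few truly informative
X, y = make_classification(n_samples=100, n_features=500, n_informative=10,
                           n_redundant=0, random_state=0)

# SVM-RFE: repeatedly fit a linear SVM and drop the 10% of remaining
# genes with the smallest |weight| until 20 genes are left.
selector = RFE(SVC(kernel="linear"), n_features_to_select=20, step=0.1)
selector.fit(X, y)
selected = selector.get_support(indices=True)
```

Because elimination is multivariate (driven by the jointly fitted SVM weights rather than one-gene-at-a-time statistics), RFE can keep genes that only discriminate in combination, which univariate ranking criteria miss.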
Vector Adaptive/Predictive Encoding Of Speech
NASA Technical Reports Server (NTRS)
Chen, Juin-Hwey; Gersho, Allen
1989-01-01
Vector adaptive/predictive technique for digital encoding of speech signals yields decoded speech of very good quality after transmission at a coding rate of 9.6 kb/s, and of reasonably good quality at 4.8 kb/s. Requires 3 to 4 million multiplications and additions per second. Combines advantages of adaptive/predictive coding and of code-excited linear prediction, which yields speech of high quality but requires 600 million multiplications and additions per second at an encoding rate of 4.8 kb/s. Vector adaptive/predictive coding technique bridges gaps in performance and complexity between adaptive/predictive coding and code-excited linear prediction.
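The core of any predictive speech coder is a short-term linear predictor whose residual is cheaper to encode than the raw signal. A minimal sketch of such a predictor (autocorrelation method with the Levinson-Durbin recursion, applied to a synthetic signal) is shown below; this illustrates linear prediction generally, not the specific vector adaptive/predictive algorithm of the abstract.

```python
import numpy as np

def lpc(x, order):
    """Linear-prediction coefficients by the autocorrelation method
    (Levinson-Durbin recursion). The predictor is
    x_hat[n] = sum_j a[j] * x[n-1-j]; also returns the final error power."""
    r = np.array([x[:len(x) - k] @ x[k:] for k in range(order + 1)])
    a = np.zeros(order)
    err = r[0]
    for i in range(order):
        acc = r[i + 1] - sum(a[j] * r[i - j] for j in range(i))
        k = acc / err                     # reflection coefficient
        new_a = a.copy()
        for j in range(i):
            new_a[j] = a[j] - k * a[i - 1 - j]
        new_a[i] = k
        a = new_a
        err *= (1.0 - k * k)
    return a, err

# Synthetic AR(2) "speech-like" signal: x[n] = 1.3 x[n-1] - 0.6 x[n-2] + e[n]
rng = np.random.default_rng(0)
e = rng.normal(size=2000)
x = np.zeros(2000)
for n in range(2, 2000):
    x[n] = 1.3 * x[n - 1] - 0.6 * x[n - 2] + e[n]

a, _ = lpc(x, 2)
pred = a[0] * x[1:-1] + a[1] * x[:-2]    # one-step prediction of x[2:]
resid = x[2:] - pred                     # residual to be quantized/encoded
```

The residual carries far less energy than the signal, which is exactly the headroom a predictive coder spends on lower bit rates.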
Assessment of CFD capability for prediction of hypersonic shock interactions
NASA Astrophysics Data System (ADS)
Knight, Doyle; Longo, José; Drikakis, Dimitris; Gaitonde, Datta; Lani, Andrea; Nompelis, Ioannis; Reimann, Bodo; Walpot, Louis
2012-01-01
The aerothermodynamic loadings associated with shock wave boundary layer interactions (shock interactions) must be carefully considered in the design of hypersonic air vehicles. The capability of Computational Fluid Dynamics (CFD) software to accurately predict hypersonic shock wave laminar boundary layer interactions is examined. A series of independent computations performed by researchers in the US and Europe are presented for two generic configurations (double cone and cylinder) and compared with experimental data. The results illustrate the current capabilities and limitations of modern CFD methods for these flows.
Control Architecture for Robotic Agent Command and Sensing
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance; Aghazarian, Hrand; Estlin, Tara; Gaines, Daniel
2008-01-01
Control Architecture for Robotic Agent Command and Sensing (CARACaS) is a recent product of a continuing effort to develop architectures for controlling either a single autonomous robotic vehicle or multiple cooperating but otherwise autonomous robotic vehicles. CARACaS is potentially applicable to diverse robotic systems that could include aircraft, spacecraft, ground vehicles, surface water vessels, and/or underwater vessels. CARACaS includes an integral combination of three coupled agents: a dynamic planning engine, a behavior engine, and a perception engine. The perception and dynamic planning engines are also coupled with a memory in the form of a world model. CARACaS is intended to satisfy the need for two major capabilities essential for proper functioning of an autonomous robotic system: a capability for deterministic reaction to unanticipated occurrences and a capability for re-planning in the face of changing goals, conditions, or resources. The behavior engine incorporates the multi-agent control architecture, called CAMPOUT, described in "An Architecture for Controlling Multiple Robots" (NPO-30345), NASA Tech Briefs, Vol. 28, No. 11 (November 2004), page 65. CAMPOUT is used to develop behavior-composition and -coordination mechanisms. Real-time process-algebra operators are used to compose a behavior network for any given mission scenario. These operators afford a capability for producing a formally correct kernel of behaviors that guarantees predictable performance. By use of a method based on multi-objective decision theory (MODT), recommendations from multiple behaviors are combined to form a set of control actions that represents their consensus. In this approach, all behaviors contribute simultaneously to the control of the robotic system in a cooperative rather than a competitive manner.
This approach guarantees a solution that is good enough with respect to resolution of complex, possibly conflicting goals within the constraints of the mission to be accomplished by the vehicle(s).
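The consensus idea can be illustrated with a toy weighted-voting scheme. This is only a sketch of blending behavior recommendations; the actual MODT machinery in CARACaS/CAMPOUT is more elaborate, and the behaviors, actions, and weights below are hypothetical.

```python
def consensus_action(recommendations, weights):
    """Blend per-behavior action preferences into one control action.

    recommendations: one row of preference scores (0..1) per behavior,
    one column per candidate action; weights: per-behavior priorities.
    Returns the index of the action with the highest weighted score.
    """
    n_actions = len(recommendations[0])
    scores = [sum(w * rec[a] for w, rec in zip(weights, recommendations))
              for a in range(n_actions)]
    return scores.index(max(scores))

# Two behaviors vote over three candidate actions: goal seeking prefers
# action 0, obstacle avoidance prefers action 2.
prefs = [[0.9, 0.5, 0.1],   # goal-seeking behavior
         [0.0, 0.4, 1.0]]   # obstacle-avoidance behavior
print(consensus_action(prefs, weights=[1.0, 2.0]))  # safety weighted higher -> 2
```

Note that both behaviors contribute to every action's score (cooperative blending); neither behavior can simply veto the other, which is the point the abstract makes.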
The Superior Lambert Algorithm
NASA Astrophysics Data System (ADS)
der, G.
2011-09-01
Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, to name just a few applications. Due to the significance of the Lambert problem in astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today, including improved speed, accuracy, robustness, and multi-revolution capability, as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves three lines of sight captured by optical sensors or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19).
Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most numerical integration methods.
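The Laguerre iteration mentioned above can be sketched for a generic scalar equation. This is not the lambert2 implementation: Kepler's equation merely stands in for the extended Godal time equation, and n = 5 is one conventional choice of the fixed degree parameter in orbit-determination work.

```python
import math

def laguerre_root(f, df, d2f, x0, n=5, tol=1e-12, max_iter=50):
    """Generalized Laguerre iteration for a scalar root of f(x) = 0.

    The sign in the denominator is chosen to maximize its magnitude,
    keeping steps small; abs() under the root guards against a negative
    discriminant. Both tweaks are what make the method robust in practice.
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        f1, f2 = df(x), d2f(x)
        disc = math.sqrt(abs((n - 1) ** 2 * f1 ** 2 - n * (n - 1) * fx * f2))
        denom = f1 + disc if abs(f1 + disc) > abs(f1 - disc) else f1 - disc
        x -= n * fx / denom
    return x

# Kepler's equation M = E - e*sin(E) as a stand-in test case: solve for E.
e, M = 0.7, 1.0
E = laguerre_root(lambda E: E - e * math.sin(E) - M,
                  lambda E: 1 - e * math.cos(E),
                  lambda E: e * math.sin(E),
                  x0=M)
print(abs(E - e * math.sin(E) - M) < 1e-10)  # True: converged
```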
Analysis of the Salvation Army World Service Offices Disaster Relief Capabilities
2017-03-01
AOR based primarily on their financial revenues, since revenue is a prerequisite enabling mechanism for the delivery of goods and services . The...are from government sources, whereas contributions include cash and dollar value of in-kind services and goods . Investment revenues largely consist of...taking action among those most in need of assistance offers a compelling and admirable example of the good a religious organization can accomplish
The Dilution of Field Artillery Capabilities
2008-02-01
December 2007, 139. 5William H. McMichael, “Mullen gets earful: More dwell needed; 12 months ‘ not good enough,’ young captains tell Joint Chiefs...Mullen gets earful: More dwell needed; 12 months ‘ not good enough,’ young captains tell Joint Chiefs chairman.” The Army Times. 5 November 2007
Hou, Qi; Bing, Zhi-Tong; Hu, Cheng; Li, Mao-Yin; Yang, Ke-Hu; Mo, Zu; Xie, Xiang-Wei; Liao, Ji-Lin; Lu, Yan; Horie, Shigeo; Lou, Ming-Wu
2018-06-01
Prostate cancer (PCa) is the most commonly diagnosed cancer in males in the Western world. Although prostate-specific antigen (PSA) has been widely used as a biomarker for PCa diagnosis, its results can be controversial. Therefore, new biomarkers are needed to enhance the clinical management of PCa. From publicly available microarray data, differentially expressed genes (DEGs) were identified by meta-analysis with RankProd. A genetic-algorithm-optimized artificial neural network (GA-ANN) was introduced to establish a diagnostic prediction model and to filter candidate genes. The diagnostic and prognostic capability of the prediction model and candidate genes were investigated in both GEO and TCGA datasets. Candidate genes were further validated by qPCR, Western blot, and tissue microarray. By RankProd meta-analysis, 2306 significantly up- and 1311 down-regulated probes were found in microarray data from 133 cases and 30 controls. The overall accuracy rate of the PCa diagnostic prediction model, consisting of a 15-gene signature, reached up to 100% in both the training and test datasets. The prediction model also showed good results for the diagnosis (AUC = 0.953) and prognosis (AUC for 5-year overall survival = 0.808) of PCa in the TCGA database. The expression levels of three genes, FABP5, C1QTNF3 and LPHN3, were validated by qPCR. High C1QTNF3 expression was further validated in PCa tissue by Western blot and tissue microarray. In the GEO datasets, C1QTNF3 was a good predictor for the diagnosis of PCa (GSE6956: AUC = 0.791; GSE8218: AUC = 0.868; GSE26910: AUC = 0.972). In the TCGA database, C1QTNF3 was significantly associated with PCa patient recurrence-free survival (P < .001, AUC = 0.57). In this study, we have developed a diagnostic and prognostic prediction model for PCa. C1QTNF3 was revealed as a promising biomarker for PCa.
This approach can be applied to other high-throughput data from different platforms for the discovery of oncogenes or biomarkers in different kinds of diseases. Copyright © 2018. Published by Elsevier B.V.
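The AUC values quoted above are rank statistics, and an AUC can be computed directly as a Mann-Whitney probability without tracing an ROC curve. The scores and labels below are invented for illustration.

```python
def roc_auc(scores, labels):
    """AUC as the Mann-Whitney statistic: the probability that a randomly
    chosen positive case scores higher than a randomly chosen negative
    case (ties count half)."""
    pos = [s for s, lab in zip(scores, labels) if lab == 1]
    neg = [s for s, lab in zip(scores, labels) if lab == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A marker that mostly separates cases (1) from controls (0):
auc = roc_auc([0.9, 0.8, 0.7, 0.4, 0.3, 0.2], [1, 1, 0, 1, 0, 0])
print(round(auc, 3))  # 0.889
```

An AUC of 0.5 (as for recurrence-free survival above, 0.57) means the marker barely outperforms random ranking, which is why the same gene can be a strong diagnostic but weak prognostic predictor.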
2018-01-01
Background Sen’s capability approach is underspecified; one decision left to those operationalising the approach is how to identify sets of relevant and important capabilities. Sen has suggested that lists be developed for specific policy or research objectives through a process of public reasoning and discussion. Robeyns offers further guidance in support of Sen’s position, suggesting that lists should be explicit, discussed and defended; methods be openly scrutinised; lists be considered both in terms of what is ideal and what is practical (‘generality’); and that lists be exhaustive. Here, the principles suggested by Robeyns are operationalised to facilitate external scrutiny of a list of capabilities identified for use in the evaluation of supportive end of life care. Methods This work started with an existing list of seven capabilities (the ICECAP-SCM), identified as being necessary for a person to experience a good death. Semi-structured qualitative interviews were conducted with 20 experts in economics, psychology, ethics and palliative care, to facilitate external scrutiny of the developed list. Interviews were recorded, transcribed and analysed using constant comparison. Results The seven capabilities were found to encompass concepts identified as important by expert stakeholders (to be exhaustive) and the measure was considered feasible for use with patients receiving care at the end of life. Conclusion The rigorous development of lists of capabilities using both initial participatory approaches with affected population groups, and subsequent assessment by experts, strengthens their democratic basis and may encourage their use in policy contexts. PMID:29466414
Enhancing Pro-Public-Good Professionalism in Technical Studies
ERIC Educational Resources Information Center
Boni-Aristizábal, Alejandra; Calabuig-Tormo, Carola
2016-01-01
In a university environment dominated by a traditional way of understanding knowledge, we argue that it is possible and necessary to foster capabilities among engineering students. Capabilities are understood as reasoned and substantive freedoms to lead the kind of life that people value, within a framework of respect for the core values of human…
NASA Technical Reports Server (NTRS)
Schubert, Siegfried
2011-01-01
Drought is fundamentally the result of an extended period of reduced precipitation lasting anywhere from a few weeks to decades or even longer. As such, addressing drought predictability and prediction in a changing climate requires foremost that we make progress on the ability to predict precipitation anomalies on subseasonal and longer time scales. From the perspective of the users of drought forecasts and information, however, drought is most directly viewed through its impacts (e.g., on soil moisture, streamflow, crop yields). As such, the question of the predictability of drought must extend to those quantities as well. In order to make progress on these issues, the WCRP drought information group (DIG), with the support of WCRP, the Catalan Institute of Climate Sciences, the La Caixa Foundation, the National Aeronautics and Space Administration, the National Oceanic and Atmospheric Administration, and the National Science Foundation, has organized a workshop to focus on: (1) user requirements for drought prediction information on sub-seasonal to centennial time scales; (2) current understanding of the mechanisms and predictability of drought on those time scales; (3) current drought prediction/projection capabilities on those time scales; and (4) advancing regional drought prediction capabilities for the variables and scales most relevant to user needs. This introductory talk provides an overview of these goals, and outlines the occurrence and mechanisms of drought world-wide.
Equation of state of Mo from shock compression experiments on preheated samples
NASA Astrophysics Data System (ADS)
Fat'yanov, O. V.; Asimow, P. D.
2017-03-01
We present a reanalysis of reported Hugoniot data for Mo, including both experiments shocked from ambient temperature (T) and those preheated to 1673 K, using the most general methods of least-squares fitting to constrain the Grüneisen model. This updated Mie-Grüneisen equation of state (EOS) is used to construct a family of maximum-likelihood Hugoniots of Mo from initial temperatures of 298 to 2350 K and a parameterization valid over this range. We adopted a single linear function at each initial temperature over the entire range of particle velocities considered. Total uncertainties of all the EOS parameters and correlation coefficients for these uncertainties are given. The improved predictive capabilities of our EOS for Mo are confirmed by (1) better agreement between calculated bulk sound speeds and published measurements along the principal Hugoniot, (2) good agreement between our Grüneisen data and three reported high-pressure γ(V) functions obtained from shock compression of porous samples, and (3) very good agreement between our 1 bar Grüneisen values and γ(T) at ambient pressure recalculated from reported experimental data on the adiabatic bulk modulus Ks(T). Our analysis shows that an EOS constructed from shock-compression data allows a much more accurate prediction of γ(T) values at 1 bar than those based on static compression measurements or first-principles calculations. Published calibrations of the Mie-Grüneisen EOS for Mo using static compression measurements only do not reproduce even low-pressure asymptotic values of γ(T) at 1 bar, where the most accurate experimental data are available.
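A linear Us-up fit determines the principal Hugoniot through the Rankine-Hugoniot momentum equation P = ρ0·Us·up. The coefficients below are round numbers of the right magnitude for Mo, chosen for illustration only, not the paper's fitted values.

```python
def hugoniot_pressure(up, rho0, c0, s):
    """Pressure (Pa) on the principal Hugoniot from a linear shock-velocity
    fit Us = c0 + s*up and the momentum jump condition P = rho0*Us*up."""
    us = c0 + s * up
    return rho0 * us * up

# Illustrative Mo-like values in SI units (assumptions, not fitted values):
P = hugoniot_pressure(up=1000.0, rho0=10220.0, c0=5140.0, s=1.26)
print(round(P / 1e9, 1))  # 65.4 (GPa)
```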
NASA Technical Reports Server (NTRS)
Liou, J. C.
2012-01-01
Presentation outlne: (1) The NASA Orbital Debris (OD) Engineering Model -- A mathematical model capable of predicting OD impact risks for the ISS and other critical space assets (2) The NASA OD Evolutionary Model -- A physical model capable of predicting future debris environment based on user-specified scenarios (3) The NASA Standard Satellite Breakup Model -- A model describing the outcome of a satellite breakup (explosion or collision)
THETRIS: A MICRO-SCALE TEMPERATURE AND GAS RELEASE MODEL FOR TRISO FUEL
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. Ortensi; A.M. Ougouag
2011-12-01
The dominating mechanism in the passive safety of gas-cooled, graphite-moderated, high-temperature reactors (HTRs) is the Doppler feedback effect. These reactor designs are fueled with sub-millimeter-sized kernels formed into TRISO particles that are embedded in a graphite matrix. The best spatial and temporal representation of the feedback effect is obtained from an accurate approximation of the fuel temperature. Most accident scenarios in HTRs are characterized by large time constants and slow changes in the fuel and moderator temperature fields. In these situations a meso-scale (pebble and compact scale) solution provides a good approximation of the fuel temperature. Micro-scale models are necessary in order to obtain accurate predictions in faster transients or when parameters internal to the TRISO are needed. Since these coated particles constitute one of the fundamental design barriers for the release of fission products, it becomes important to understand the transient behavior inside this containment system. An explicit TRISO fuel temperature model named THETRIS has been developed and incorporated into the CYNOD-THERMIX-KONVEK suite of coupled codes. The code includes gas release models that provide a simple predictive capability for the internal pressure during transients. The new model yields results similar to those obtained with other micro-scale fuel models, but with the added capability to analyze gas release, internal pressure buildup, and the effects of a gap in the TRISO. The analyses show the instances in which the micro-scale models improve the predictions of the fuel temperature and Doppler feedback. In addition, a sensitivity study of the potential effects on the transient behavior of high-temperature reactors due to the presence of a gap is included. Although the formation of a gap occurs under special conditions, its consequences on the dynamic behavior of the reactor can cause unexpected responses during fast transients.
Nevertheless, the strong Doppler feedback forces the reactor to quickly stabilize.
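The simplest closure a gas-release model can use for internal pressure is the ideal-gas law over the particle's free volume. This sketch only illustrates the kind of calculation involved, not the THETRIS model itself, and every number below is hypothetical.

```python
R = 8.314  # universal gas constant, J/(mol*K)

def internal_pressure(moles_gas, temperature_k, void_volume_m3):
    """Ideal-gas estimate of pressure from released fission gas
    accumulating in a TRISO particle's free (void + gap) volume."""
    return moles_gas * R * temperature_k / void_volume_m3

# Hypothetical: 1e-10 mol of released gas in a 1e-13 m^3 void at 1500 K.
P = internal_pressure(1e-10, 1500.0, 1e-13)
print(round(P / 1e6, 2))  # 12.47 (MPa)
```

The proportionality to temperature is why a transient temperature spike directly drives pressure buildup on the coating layers.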
Lightning Strike Induced Damage Mechanisms of Carbon Fiber Composites
NASA Astrophysics Data System (ADS)
Kawakami, Hirohide
Composite materials have wide application in the aerospace, automotive, and other transportation industries because of their superior structural and weight performance. Since carbon fiber reinforced polymer composites possess a much lower electrical conductivity than the traditional metallic materials used for aircraft structures, serious concern about damage resistance/tolerance against lightning has been raised. The main task of this study is to clarify the lightning damage mechanism of carbon fiber reinforced epoxy polymer composites to help further development of lightning strike protection. Research on lightning damage to carbon fiber reinforced polymer composites is quite challenging, and little study has been available until now. In order to tackle this issue, a building-block approach was employed. The research started with the development of supporting technologies such as a current impulse generator to simulate a lightning strike in a laboratory. Then, fundamental electrical properties and fracture behavior of CFRPs exposed to high- and low-level current impulses were investigated using simple coupon specimens, followed by extensive parametric investigations covering different prepreg materials frequently used in the aerospace industry, various stacking sequences, different lightning intensities, and lightning current waveforms. The results revealed that the thermal resistance capability of the polymer matrix was one of the most influential parameters for the lightning damage resistance of CFRPs. Based on the experimental findings, a semi-empirical analysis model for predicting the extent of lightning damage was established. The model was fitted to experimental data to determine empirical parameters and then showed a good capability to provide reliable predictions for other test conditions and materials. Finally, structural-element-level lightning tests were performed to explore more practical situations.
Specifically, filled-hole CFRP plates and patch-repaired CFRP plates were selected as structural elements likely to be susceptible to lightning event. This study forms a solid foundation for the understanding of lightning damage mechanism of CFRPs, and become an important first step toward building a practical damage prediction tool of lighting event.
NASA Astrophysics Data System (ADS)
Wu, Yenan; Zhong, Ping-an; Xu, Bin; Zhu, Feilin; Fu, Jisi
2017-06-01
Using climate models with high simulation performance to predict future climate change increases the reliability of the results. In this paper, six global climate models selected from the Coupled Model Intercomparison Project Phase 5 (CMIP5) under the Representative Concentration Pathway (RCP) 4.5 scenario were compared with measured data during the baseline period (1960-2000) to evaluate their simulation performance for precipitation. Since the results of single climate models are often biased and highly uncertain, we examined a back-propagation (BP) neural network and the arithmetic mean method for assembling the precipitation of multiple models. The delta method was used to calibrate the results of the single models and of the multimodel ensemble built by the arithmetic mean method (MME-AM) during the validation period (2001-2010) and the prediction period (2011-2100). We then used the single models and multimodel ensembles to predict the future precipitation process and its spatial distribution. The results show that the BNU-ESM model performs best among the single models. The multimodel ensemble assembled by the BP neural network (MME-BP) simulates the annual average precipitation process well, with a deterministic coefficient of 0.814 during the validation period. The simulation capability for the spatial distribution of precipitation ranks as: calibrated MME-AM > MME-BP > calibrated BNU-ESM. The future precipitation predicted by all models tends to increase over time. The average increase amplitude by season ranks as: winter > spring > summer > autumn. These findings can provide useful information for decision makers developing climate-related disaster mitigation plans.
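The multiplicative form of the delta method and an arithmetic-mean ensemble can be sketched as follows. The numbers are invented, and the paper's implementation details (spatial fields, monthly resolution, the BP network) are omitted; this is only the bias-correction and averaging skeleton.

```python
def delta_correct(obs_baseline, model_baseline, model_future):
    """Multiplicative delta-change correction for precipitation: scale the
    observed baseline by the model's simulated relative change."""
    change = (sum(model_future) / len(model_future)) / \
             (sum(model_baseline) / len(model_baseline))
    return [o * change for o in obs_baseline]

def ensemble_mean(*members):
    """Arithmetic multi-model ensemble (the MME-AM of the abstract)."""
    return [sum(vals) / len(members) for vals in zip(*members)]

obs = [80.0, 120.0, 100.0]                          # observed precip, mm
m1_base, m1_fut = [70, 110, 90], [77, 121, 99]      # model 1: +10%
m2_base, m2_fut = [90, 130, 110], [99, 143, 121]    # model 2: +10%
mme_fut = ensemble_mean(delta_correct(obs, m1_base, m1_fut),
                        delta_correct(obs, m2_base, m2_fut))
print([round(v) for v in mme_fut])  # [88, 132, 110]
```

The correction deliberately discards each model's absolute bias and keeps only its relative change signal, which is why both biased models above agree after calibration.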
Data assimilation for real-time prediction and reanalysis
NASA Astrophysics Data System (ADS)
Shprits, Y.; Kellerman, A. C.; Podladchikova, T.; Kondrashov, D. A.; Ghil, M.
2015-12-01
We discuss how data assimilation can be used for the analysis of individual satellite anomalies, for reconstructing the long-term evolution of the radiation belts for use in specification models, and for improving nowcasting and forecasting of the radiation belts. We also discuss advanced data assimilation methods such as parameter estimation and smoothing. The 3D data-assimilative VERB code allows us to blend together data from GOES, RBSP A, and RBSP B. A real-time prediction framework operating on our website, based on GOES, RBSP A/B, and ACE data and the 3D VERB code, is presented and discussed. In this paper we present a number of applications of data assimilation with the VERB 3D code. 1) The model with data assimilation propagates data to different pitch angles, energies, and L-shells and blends them together with the physics-based VERB code in an optimal way. We illustrate how we use this capability for the analysis of previous events and for obtaining a global and statistical view of the system. 2) The model predictions strongly depend on the initial conditions supplied to the model; the model is only as good as the initial conditions it uses. To produce the best possible initial condition, data from different sources (GOES, RBSP A/B, and our empirical model predictions based on ACE) are blended together in an optimal way by means of data assimilation as described above. The resulting initial condition has no gaps, which allows us to make more accurate predictions.
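The "optimal blending" referred to above is, at its core, a Kalman analysis step; a scalar version shows how a model forecast and an observation are weighted by their respective uncertainties. The numbers are illustrative, not radiation-belt data.

```python
def kalman_update(x_f, P_f, y, R, H=1.0):
    """Scalar Kalman analysis step: blend a model forecast x_f (error
    variance P_f) with an observation y (error variance R) via gain K."""
    K = P_f * H / (H * P_f * H + R)         # more weight to the less uncertain side
    x_a = x_f + K * (y - H * x_f)           # analysis state
    P_a = (1.0 - K * H) * P_f               # analysis variance shrinks
    return x_a, P_a

# Uncertain forecast of (log) electron flux vs. a precise measurement:
x_a, P_a = kalman_update(x_f=4.0, P_f=1.0, y=5.0, R=0.25)
print(round(x_a, 3), round(P_a, 3))  # 4.8 0.2
```

Repeating this update across satellites and grid points is what fills the gaps in the initial condition: wherever no observation exists, the analysis simply falls back on the physics-based forecast.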
Sun, Wan; O'Dwyer, Peter J; Finn, Richard S; Ruiz-Garcia, Ana; Shapiro, Geoffrey I; Schwartz, Gary K; DeMichele, Angela; Wang, Diane
2017-09-01
Neutropenia is the most commonly reported hematologic toxicity following treatment with palbociclib, a cyclin-dependent kinase 4/6 inhibitor approved for metastatic breast cancer. Using data from 185 advanced cancer patients receiving palbociclib in 3 clinical trials, a pharmacokinetic-pharmacodynamic model was developed to describe the time course of absolute neutrophil count (ANC) and quantify the exposure-response relationship for neutropenia. These analyses help in understanding neutropenia associated with palbociclib and its comparison with chemotherapy-induced neutropenia. In the model, palbociclib plasma concentration was related to its antiproliferative effect on precursor cells through drug-related parameters (ie, maximum estimated drug effect and concentration corresponding to 50% of the maximum effect), and neutrophil physiology was mimicked through system-related parameters (ie, mean transit time, baseline ANC, and feedback parameter). Sex and baseline albumin level were significant covariates for baseline ANC. It was demonstrated by different model evaluation approaches (eg, prediction-corrected visual predictive check and standardized visual predictive check) that the final model adequately described longitudinal ANC with good predictive capability. The established model suggested that higher palbociclib exposure was associated with lower longitudinal neutrophil counts. The ANC nadir was reached approximately 21 days after palbociclib treatment initiation. Consistent with their mechanisms of action, neutropenia associated with palbociclib (cytostatic) was rapidly reversible and noncumulative, with a notably weaker antiproliferative effect on precursor cells relative to chemotherapies (cytotoxic). This pharmacokinetic-pharmacodynamic model aids in predicting neutropenia and optimizing dosing for future palbociclib trials with different dosing regimen combinations. © 2017, The American College of Clinical Pharmacology.
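Models of this kind typically follow the Friberg transit-compartment structure: a drug-inhibited proliferating pool, a chain of maturation compartments producing the delay, and feedback from circulating counts. The sketch below is a generic Euler simulation of that structure with invented parameters; it is not the fitted palbociclib model from the paper.

```python
def simulate_anc(conc, emax, ec50, circ0=5.0, mtt=96.0, gamma=0.2,
                 days=28, dt=0.1):
    """Euler simulation (time in hours) of a Friberg-type transit model:
    drug inhibits proliferation; three transit compartments delay the
    effect; circulating cells feed back on the proliferation rate."""
    ktr = 4.0 / mtt                     # transit rate constant
    prol = t1 = t2 = t3 = circ = circ0  # start at steady state
    anc, t = [], 0.0
    while t < days * 24:
        e = emax * conc(t) / (ec50 + conc(t))   # inhibitory Emax drug effect
        fb = (circ0 / circ) ** gamma            # feedback from circulating cells
        prol += dt * ktr * prol * ((1.0 - e) * fb - 1.0)
        t1 += dt * ktr * (prol - t1)
        t2 += dt * ktr * (t1 - t2)
        t3 += dt * ktr * (t2 - t3)
        circ += dt * ktr * (t3 - circ)
        anc.append(circ)
        t += dt
    return anc

# Constant exposure for 21 days on, 7 days off (hypothetical values):
profile = simulate_anc(conc=lambda t: 100.0 if t < 21 * 24 else 0.0,
                       emax=1.0, ec50=100.0)
print(min(profile) < 5.0)  # True: ANC dips below baseline, then recovers
```

The mean transit time is what produces the delayed nadir after treatment starts, and the feedback term is what makes the suppression reversible rather than cumulative, matching the behavior the abstract describes for a cytostatic agent.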
NASA Astrophysics Data System (ADS)
Di Lorenzo, R.; Ingarao, G.; Fonti, V.
2007-05-01
The crucial task in the prevention of ductile fracture is the availability of a tool for the prediction of such defect occurrence. The technical literature presents a wide investigation on this topic and many contributions have been given by many authors following different approaches. The main class of approaches regards the development of fracture criteria: generally, such criteria are expressed by determining a critical value of a damage function which depends on stress and strain paths: ductile fracture is assumed to occur when such critical value is reached during the analysed process. There is a relevant drawback related to the utilization of ductile fracture criteria; in fact each criterion usually has good performances in the prediction of fracture for particular stress - strain paths, i.e. it works very well for certain processes but may provide no good results for other processes. On the other hand, the approaches based on damage mechanics formulation are very effective from a theoretical point of view but they are very complex and their proper calibration is quite difficult. In this paper, two different approaches are investigated to predict fracture occurrence in cold forming operations. The final aim of the proposed method is the achievement of a tool which has a general reliability i.e. it is able to predict fracture for different forming processes. The proposed approach represents a step forward within a research project focused on the utilization of innovative predictive tools for ductile fracture. The paper presents a comparison between an artificial neural network design procedure and an approach based on statistical tools; both the approaches were aimed to predict fracture occurrence/absence basing on a set of stress and strain paths data. The proposed approach is based on the utilization of experimental data available, for a given material, on fracture occurrence in different processes. 
More in detail, the approach consists in the analysis of experimental tests in which fracture occurs followed by the numerical simulations of such processes in order to track the stress-strain paths in the workpiece region where fracture is expected. Such data are utilized to build up a proper data set which was utilized both to train an artificial neural network and to perform a statistical analysis aimed to predict fracture occurrence. The developed statistical tool is properly designed and optimized and is able to recognize the fracture occurrence. The reliability and predictive capability of the statistical method were compared with the ones obtained from an artificial neural network developed to predict fracture occurrence. Moreover, the approach is validated also in forming processes characterized by a complex fracture mechanics.
Provocative work experiences predict the acquired capability for suicide in physicians.
Fink-Miller, Erin L
2015-09-30
The interpersonal psychological theory of suicidal behavior (IPTS) offers a potential means to explain suicide in physicians. The IPTS posits three necessary and sufficient precursors to death by suicide: thwarted belongingness, perceived burdensomeness, and acquired capability. The present study sought to examine whether provocative work experiences unique to physicians (e.g., placing sutures, withdrawing life support) would predict levels of acquired capability, while controlling for gender and painful and provocative experiences outside the work environment. Data were obtained from 376 of 7723 recruited physicians. Study measures included the Acquired Capability for Suicide Scale, the Interpersonal Needs Questionnaire, the Painful and Provocative Events Scale, and the Life Events Scale-Medical Doctors Version. Painful and provocative events outside of work predicted acquired capability (β = 0.23, t = 3.82, p < 0.001, f² = 0.09), as did provocative work experiences (β = 0.12, t = 2.05, p < 0.05, f² = 0.07). This represents the first study assessing the potential impact of unique work experiences on suicidality in physicians. Limitations include over-representation of Caucasian participants, limited representation from various specialties of medicine, and lack of information regarding individual differences. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
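The effect sizes reported above are Cohen's f², which for a hierarchical regression compares the variance explained with and without the predictor of interest. The R² values in the example are invented for illustration.

```python
def cohens_f2(r2_full, r2_reduced=0.0):
    """Cohen's f^2: variance uniquely explained by the added predictor(s),
    scaled by the variance left unexplained by the full model."""
    return (r2_full - r2_reduced) / (1.0 - r2_full)

# A predictor raising R^2 from .20 to .28 gives a small-to-medium effect:
print(round(cohens_f2(0.28, 0.20), 3))  # 0.111
```

By Cohen's conventional benchmarks (0.02 small, 0.15 medium, 0.35 large), the f² values of 0.09 and 0.07 reported in the abstract are small-to-medium effects.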
Comprehensive Micromechanics-Analysis Code - Version 4.0
NASA Technical Reports Server (NTRS)
Arnold, S. M.; Bednarcyk, B. A.
2005-01-01
Version 4.0 of the Micromechanics Analysis Code With Generalized Method of Cells (MAC/GMC) has been developed as an improved means of computational simulation of advanced composite materials. The previous version of MAC/GMC was described in "Comprehensive Micromechanics-Analysis Code" (LEW-16870), NASA Tech Briefs, Vol. 24, No. 6 (June 2000), page 38. To recapitulate: MAC/GMC is a computer program that predicts the elastic and inelastic thermomechanical responses of continuous and discontinuous composite materials with arbitrary internal microstructures and reinforcement shapes. The predictive capability of MAC/GMC rests on a model known as the generalized method of cells (GMC) - a continuum-based model of micromechanics that provides closed-form expressions for the macroscopic response of a composite material in terms of the properties, sizes, shapes, and responses of the individual constituents or phases that make up the material. Enhancements in version 4.0 include a capability for modeling thermomechanically and electromagnetically coupled ("smart") materials; a more-accurate (high-fidelity) version of the GMC; a capability to simulate discontinuous plies within a laminate; additional constitutive models of materials; expanded yield-surface-analysis capabilities; and expanded failure-analysis and life-prediction capabilities on both the microscopic and macroscopic scales.
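For contrast with GMC's closed-form expressions, the simplest closed-form micromechanics estimate is the Voigt (isostrain) rule of mixtures; GMC goes far beyond this, but the spirit of predicting macroscopic response from constituent properties is the same. The constituent values below are typical textbook numbers, not MAC/GMC inputs.

```python
def voigt_modulus(volume_fractions, moduli):
    """Voigt (isostrain) rule-of-mixtures estimate of a composite's
    stiffness along the fiber direction: a volume-weighted average."""
    return sum(f * m for f, m in zip(volume_fractions, moduli))

# 60% carbon fiber (~230 GPa) in epoxy (~3.5 GPa), typical textbook values:
E1 = voigt_modulus([0.6, 0.4], [230.0, 3.5])
print(round(E1, 1))  # 139.4 (GPa)
```

The Voigt estimate is an upper bound and says nothing about inelasticity, microstructure geometry, or damage, which is precisely the gap models like GMC are built to fill.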
Hochard, Kevin D; Heym, Nadja; Townsend, Ellen
2017-06-01
Heightened arousal significantly interacts with acquired capability to predict suicidality. We explore this interaction with insomnia and nightmares independently of waking state arousal symptoms, and test predictions of the Interpersonal Theory of Suicide (IPTS) and Escape Theory in relation to these sleep arousal symptoms. Findings from our e-survey (n = 540) supported the IPTS over models of Suicide as Escape. Sleep-specific measurements of arousal (insomnia and nightmares) showed no main effect, yet interacted with acquired capability to predict increased suicidality. The explained variance in suicidality by the interaction (1%-2%) using sleep-specific measures was comparable to variance explained by interactions previously reported in the literature using measurements composed of a mix of waking and sleep state arousal symptoms. Similarly, when entrapment (inability to escape) was included in models, main effects of sleep symptoms arousal were not detected yet interacted with entrapment to predict suicidality. We discuss findings in relation to treatment options suggesting that sleep-specific interventions be considered for the long-term management of at-risk individuals. © 2016 The American Association of Suicidology.
Sakaguchi, Hitoshi; Ryan, Cindy; Ovigne, Jean-Marc; Schroeder, Klaus R; Ashikaga, Takao
2010-09-01
Regulatory policies in Europe prohibited the testing of cosmetic ingredients in animals for a number of toxicological endpoints. Currently no validated non-animal test methods exist for skin sensitization. Evaluation of changes in cell-surface marker expression in dendritic cell (DC)-surrogate cell lines represents one non-animal approach. The human Cell Line Activation Test (h-CLAT) examines the level of CD86 and CD54 expression on the surface of THP-1 cells, a human monocytic leukemia cell line, following 24 h of chemical exposure. To examine protocol transferability, between-lab reproducibility, and predictive capacity, the h-CLAT has been evaluated by five independent laboratories in several ring trials (RTs) coordinated by the European Cosmetics Association (COLIPA). The results of the first and second RTs demonstrated that the protocol was transferable and generally had good between-lab reproducibility and predictivity, but there were some false-negative data. To improve performance, the protocol and prediction model were modified. Using the modified prediction model on the first and second RTs, accuracy was improved. However, about 15% of the outcomes were not correctly identified, which exposes some of the limitations of the assay. For the chemicals evaluated, the limitation may be due to a chemical being a weak allergen or having low solubility (e.g., alpha-hexylcinnamaldehyde). The third RT evaluated the modified prediction model and satisfactory results were obtained. From the RT data, the feasibility of utilizing cell lines as surrogate DCs in the development of in vitro skin sensitization methods shows promise. The data also support initiating formal pre-validation of the h-CLAT in order to fully understand the capabilities and limitations of the assay. Copyright 2010 Elsevier Ltd. All rights reserved.
Sensitivity assessment of freshwater macroinvertebrates to pesticides using biological traits.
Ippolito, A; Todeschini, R; Vighi, M
2012-03-01
Assessing the sensitivity of different species to chemicals is one of the key points in predicting the effects of toxic compounds in the environment. Trait-based prediction methods have proved to be extremely efficient for assessing the sensitivity of macroinvertebrates toward compounds with non-specific toxicity (narcotics). Nevertheless, predicting the sensitivity of organisms toward compounds with specific toxicity is much more complex, since it depends on the mode of action of the chemical. The aim of this work was to predict the sensitivity of several freshwater macroinvertebrates toward three classes of plant protection products: organophosphates, carbamates and pyrethroids. Two databases were built: one with sensitivity data (retrieved, evaluated and selected from the U.S. Environmental Protection Agency ECOTOX database) and the other with biological traits. Aside from the "traditional" traits usually considered in ecological analysis (i.e. body size, respiration technique, feeding habits, etc.), multivariate analysis was used to relate the sensitivity of organisms to other characteristics which may be involved in the process of intoxication. Results confirmed that, besides traditional biological traits related to uptake capability (e.g. body size and body shape), some traits more related to particular metabolic characteristics or patterns have good predictive capacity for the sensitivity to these kinds of toxic substances. For example, behavioral complexity, assumed as an indicator of nervous system complexity, proved to be an important predictor of sensitivity towards these compounds. These results confirm the need for more complex traits to predict effects of highly specific substances. One key point for achieving a complete mechanistic understanding of the process is the choice of traits, whose role in the discrimination of sensitivity should be clearly interpretable, and not only statistically significant.
ECOSAR model performance with a large test set of industrial chemicals.
Reuschenbach, Peter; Silvani, Maurizio; Dammann, Martina; Warnecke, Dietmar; Knacker, Thomas
2008-05-01
The widely used ECOSAR computer programme for QSAR prediction of chemical toxicity towards aquatic organisms was evaluated by using large data sets of industrial chemicals with varying molecular structures. Experimentally derived toxicity data covering acute effects on fish, Daphnia and green algae growth inhibition of in total more than 1,000 randomly selected substances were compared to the prediction results of the ECOSAR programme in order (1) to assess the capability of ECOSAR to correctly classify the chemicals into defined classes of aquatic toxicity according to rules of EU regulation and (2) to determine the number of correct predictions within tolerance factors from 2 to 1,000. Regarding ecotoxicity classification, 65% (fish), 52% (Daphnia) and 49% (algae) of the substances were correctly predicted into the classes "not harmful", "harmful", "toxic" and "very toxic". At all trophic levels about 20% of the chemicals were underestimated in their toxicity. The class of "not harmful" substances (experimental LC/EC50 > 100 mg/l) represents nearly half of the whole data set. The percentages for correct predictions of toxic effects on fish, Daphnia and algae growth inhibition were 69%, 64% and 60%, respectively, when a tolerance factor of 10 was allowed. Focussing on those experimental results which were verified by analytically measured concentrations, the predictability for Daphnia and algae toxicity was improved by approximately three percentage points, whereas for fish no improvement was determined. The calculated correlation coefficients demonstrated poor correlation when the complete data set was taken, but showed good results for some of the ECOSAR chemical classes. The results are discussed in the context of literature data on the performance of ECOSAR and other QSAR models.
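The tolerance-factor evaluation described above can be sketched in a few lines: a prediction counts as correct when predicted and experimental LC/EC50 values differ by no more than a given factor. The function below is a hypothetical illustration, not the authors' code, and the toxicity values are invented for the example.

```python
def within_tolerance(predicted, experimental, factor):
    """Fraction of predictions whose ratio to the experimental value
    (larger over smaller) does not exceed the tolerance factor."""
    hits = sum(
        1 for p, e in zip(predicted, experimental)
        if max(p, e) / min(p, e) <= factor
    )
    return hits / len(predicted)

# Illustrative LC/EC50 values in mg/l (not from the study)
experimental = [0.5, 3.0, 12.0, 150.0, 800.0]
predicted = [1.2, 40.0, 10.0, 90.0, 5000.0]

print(within_tolerance(predicted, experimental, 10))  # prints 0.8
```

With a tolerance factor of 10, four of the five invented pairs fall within a factor of ten of each other, giving 0.8, analogous to the 60-69% figures reported for the real data set.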
Cloud-Based Numerical Weather Prediction for Near Real-Time Forecasting and Disaster Response
NASA Technical Reports Server (NTRS)
Molthan, Andrew; Case, Jonathan; Venners, Jason; Schroeder, Richard; Checchi, Milton; Zavodsky, Bradley; Limaye, Ashutosh; O'Brien, Raymond
2015-01-01
The use of cloud computing resources continues to grow within the public and private sector components of the weather enterprise as users become more familiar with cloud-computing concepts, and competition among service providers continues to reduce costs and other barriers to entry. Cloud resources can also provide capabilities similar to high-performance computing environments, supporting multi-node systems required for near real-time, regional weather predictions. Referred to as "Infrastructure as a Service", or IaaS, the use of cloud-based computing hardware in an on-demand payment system allows for rapid deployment of a modeling system in environments lacking access to a large, supercomputing infrastructure. Use of IaaS capabilities to support regional weather prediction may be of particular interest to developing countries that have not yet established large supercomputing resources, but would otherwise benefit from a regional weather forecasting capability. Recently, collaborators from NASA Marshall Space Flight Center and Ames Research Center have developed a scripted, on-demand capability for launching the NOAA/NWS Science and Training Resource Center (STRC) Environmental Modeling System (EMS), which includes pre-compiled binaries of the latest version of the Weather Research and Forecasting (WRF) model. The WRF-EMS provides scripting for downloading appropriate initial and boundary conditions from global models, along with higher-resolution vegetation, land surface, and sea surface temperature data sets provided by the NASA Short-term Prediction Research and Transition (SPoRT) Center. This presentation will provide an overview of the modeling system capabilities and benchmarks performed on the Amazon Elastic Compute Cloud (EC2) environment. 
In addition, the presentation will discuss future opportunities to deploy the system in support of weather prediction in developing countries supported by NASA's SERVIR Project, which provides capacity building activities in environmental monitoring and prediction across a growing number of regional hubs throughout the world. Capacity-building applications that extend numerical weather prediction to developing countries are intended to provide near real-time applications to benefit public health, safety, and economic interests, but may have a greater impact during disaster events by providing a source for local predictions of weather-related hazards, or impacts that local weather events may have during the recovery phase.
NASA Astrophysics Data System (ADS)
Chen, Zhaohui
During the past decade, the search for better electrode materials for Li-ion batteries has been of great commercial interest, especially since Li-ion technology has become a major rechargeable battery technology with a market value of US$3 billion per year. This thesis focuses on improving two positive electrode materials: one is a traditional positive electrode material, LiCoO2; the other is a new positive electrode material, LiFePO4. Cho et al. reported that coating LiCoO2 with oxides can improve the capacity retention of LiCoO2 cycled to 4.4 V. The study of coatings in this thesis confirms this effect and shows that further improvement (30% higher energy density than that used in a commercial cell with excellent capacity retention) can be obtained. An in-situ XRD study proves that the mechanism of the improvement in capacity retention by coating proposed by Cho et al. is incorrect. Further experiments identify the suppression of impedance growth in the cell as the key reason for the improvement caused by coating. Based on this, other methods to improve the energy density of LiCoO2, without sacrificing capacity retention, are also developed. Using an XRD study, the structure of the phase between the O3-phase Li1-xCoO2 (x > 0.5) and the O1-phase CoO2 was measured experimentally for the first time. XRD results confirmed the prediction of an H1-3 phase by Ceder's group. Apparently, because of the structural changes between the O3 phase and the H1-3 phase, good capacity retention cannot be attained for cycling LiCoO2 to 4.6 V with respect to Li metal. An effort was also made to reduce the carbon content in a LiFePO4/C composite without sacrificing its rate capability. It was found that about 3% carbon by weight maintains both a good rate capability and a high pellet density for the composite.
The Contribution of Vocational Students' Learning Discipline, Motivation and Learning Results
ERIC Educational Resources Information Center
Yussi; Syaad; Purnomo
2017-01-01
A good vocational high school prepares students to develop the capability of working independently, demonstrating a professional attitude at work, and being productive, all of which require good learning results. The learning results serve as the yardstick of students' success. The purpose of this article is to find out the…
Applications of LANCE Data at SPoRT
NASA Technical Reports Server (NTRS)
Molthan, Andrew
2014-01-01
Short-term Prediction Research and Transition (SPoRT) Center. Mission: apply NASA and NOAA measurement systems and unique Earth science research to improve the accuracy of short-term weather prediction at the regional/local scale. Goals: evaluate and assess the utility of NASA and NOAA Earth science data, products, and unique research capabilities to address operational weather forecast problems; provide an environment which enables the development and testing of new capabilities to improve short-term weather forecasts on a regional scale; and help ensure successful transition of new capabilities to operational weather entities for the benefit of society.
Aircraft noise prediction program user's manual
NASA Technical Reports Server (NTRS)
Gillian, R. E.
1982-01-01
The Aircraft Noise Prediction Program (ANOPP) predicts aircraft noise with the best methods available. This manual is designed to give the user an understanding of the capabilities of ANOPP and to show how to formulate problems and obtain solutions by using these capabilities. Sections within the manual document basic ANOPP concepts, ANOPP usage, ANOPP functional modules, the ANOPP control statement procedure library, and the ANOPP permanent data base. Appendixes to the manual include information on preparing job decks for the operating systems in use, error diagnostics and recovery techniques, and a glossary of ANOPP terms.
Evaluation of a habitat capability model for nongame birds in the Black Hills, South Dakota
Todd R. Mills; Mark A. Rumble; Lester D. Flake
1996-01-01
Habitat models, used to predict consequences of land management decisions on wildlife, can have considerable economic effect on management decisions. The Black Hills National Forest uses such a habitat capability model (HABCAP), but its accuracy is largely unknown. We tested this model's predictive accuracy for nongame birds in 13 vegetative structural stages of...
Design of the Next Generation Aircraft Noise Prediction Program: ANOPP2
NASA Technical Reports Server (NTRS)
Lopes, Leonard V., Dr.; Burley, Casey L.
2011-01-01
The requirements, constraints, and design of NASA's next generation Aircraft NOise Prediction Program (ANOPP2) are introduced. Similar to its predecessor (ANOPP), ANOPP2 provides the U.S. Government with an independent aircraft system noise prediction capability that can be used as a stand-alone program or within larger trade studies that include performance, emissions, and fuel burn. The ANOPP2 framework is designed to facilitate the combination of acoustic approaches of varying fidelity for the analysis of noise from conventional and unconventional aircraft. ANOPP2 integrates noise prediction and propagation methods, including those found in ANOPP, into a unified system that is compatible for use within general aircraft analysis software. The design of the system is described in terms of its functionality and capability to perform predictions accounting for distributed sources, installation effects, and propagation through a non-uniform atmosphere including refraction and the influence of terrain. The philosophy of mixed fidelity noise prediction through the use of nested Ffowcs Williams and Hawkings surfaces is presented and specific issues associated with its implementation are identified. Demonstrations for a conventional twin-aisle and an unconventional hybrid wing body aircraft configuration are presented to show the feasibility and capabilities of the system. Isolated model-scale jet noise predictions are also presented using high-fidelity and reduced order models, further demonstrating ANOPP2's ability to provide predictions for model-scale test configurations.
Acoustic Prediction State of the Art Assessment
NASA Technical Reports Server (NTRS)
Dahl, Milo D.
2007-01-01
The acoustic assessment task for both the Subsonic Fixed Wing and the Supersonic projects under NASA's Fundamental Aeronautics Program was designed to assess the current state-of-the-art in noise prediction capability and to establish baselines for gauging future progress. The documentation of our current capabilities included quantifying the differences between predictions of noise from computer codes and measurements of noise from experimental tests. Quantifying the accuracy of both the computed and experimental results further enhanced the credibility of the assessment. This presentation gives sample results from codes representative of NASA's capabilities in aircraft noise prediction both for systems and components. These include semi-empirical, statistical, analytical, and numerical codes. System level results are shown for both aircraft and engines. Component level results are shown for a landing gear prototype, for fan broadband noise, for jet noise from a subsonic round nozzle, and for propulsion airframe aeroacoustic interactions. Additional results are shown for modeling of the acoustic behavior of duct acoustic lining and the attenuation of sound in lined ducts with flow.
Vorstenbosch, Joshua; Islur, Avi
2017-06-01
Breast augmentation is among the most frequently performed cosmetic plastic surgeries. Providing patients with "realistic" 3D simulations of breast augmentation outcomes is becoming increasingly common. Until recently, such programs were costly and required significant equipment, training, and office space. New simple user-friendly cloud-based programs have been developed, but to date there remains a paucity of objective evidence comparing these 3D simulations with the post-operative outcomes. To determine the aesthetic similarity between pre-operative 3D simulation generated by Crisalix and real post-operative outcomes. A retrospective review of 20 patients receiving bilateral breast augmentation was conducted comparing 6-month post-operative outcomes with 3D simulation using Crisalix software. Similarities between post-operative and simulated images were measured by three attending plastic surgeons and ten plastic surgery residents using a series of parameters. Assessment reveals similarity between the 3D simulation and 6-month post-operative images for overall appearance, breast height, breast width, breast volume, breast projection, and nipple correction. Crisalix software generated more representative simulations for symmetric breasts than for tuberous or ptotic breasts. Comparison of overall aesthetic outcome to simulation showed that the post-operative outcome was more appealing for the symmetric and tuberous breasts and less appealing for the ptotic breasts. Our data suggest that Crisalix offers a good overall 3D simulated image of post-operative breast augmentation outcomes. Improvements to the simulation of the post-operative outcomes for ptotic and tuberous breasts would result in greater predictive capabilities of Crisalix. Collectively, Crisalix offers good predictive simulations for symmetric breasts. This journal requires that authors assign a level of evidence to each article. 
For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors http://www.springer.com/00266 .
Progressive Damage and Failure Analysis of Composite Laminates
NASA Astrophysics Data System (ADS)
Joseph, Ashith P. K.
Composite materials are widely used in various industries for making structural parts due to their higher strength-to-weight ratio, better fatigue life, corrosion resistance, and material property tailorability. To fully exploit the capability of composites, it is necessary to know the load carrying capacity of the parts made of them. Unlike metals, composites are orthotropic in nature and fail in a complex manner under various loading conditions, which makes them hard to analyze. The lack of reliable and efficient failure analysis tools for composites has led industries to rely more on coupon and component level testing to estimate the design space. Due to the complex failure mechanisms, composite materials require a very large number of coupon level tests to fully characterize their behavior. This makes the entire testing process very time consuming and costly. The alternative is to use virtual testing tools which can predict the complex failure mechanisms accurately. This reduces the cost to the associated computational expense, yielding significant savings. Some of the most desired features in a virtual testing tool are: (1) Accurate representation of failure mechanisms: failure progression predicted by the virtual tool must match that observed in experiments. A tool has to be assessed based on the mechanisms it can capture. (2) Computational efficiency: the greatest advantages of virtual tools are the savings in time and money, and hence computational efficiency is one of the most needed features. (3) Applicability to a wide range of problems: structural parts are subjected to a variety of loading conditions, including static, dynamic, and fatigue conditions. A good virtual testing tool should be able to make good predictions for all these different loading conditions. The aim of this PhD thesis is to develop a computational tool which can model the progressive failure of composite laminates under different quasi-static loading conditions.
The analysis tool is validated by comparing the simulations against experiments for a selected number of quasi-static loading cases.
Development of a jet pump-assisted arterial heat pipe
NASA Technical Reports Server (NTRS)
Bienert, W. B.; Ducao, A. S.; Trimmer, D. S.
1977-01-01
The development of a jet pump assisted arterial heat pipe is described. The concept utilizes a built-in capillary driven jet pump to remove vapor and gas from the artery and to prime it. The continuous pumping action also prevents depriming during operation of the heat pipe. The concept is applicable to fixed conductance and gas loaded variable conductance heat pipes. A theoretical model for the jet pump assisted arterial heat pipe is presented. The model was used to design a prototype for laboratory demonstration. The 1.2 m long heat pipe was designed to transport 500 watts and to prime at an adverse elevation of up to 1.3 cm. The test results were in good agreement with the theoretical predictions. The heat pipe carried as much as 540 watts and was able to prime up to 1.9 cm. Introduction of a considerable amount of noncondensible gas had no adverse effect on the priming capability.
Facing the future: Memory as an evolved system for planning future acts
Klein, Stanley B.; Robertson, Theresa E.; Delton, Andrew W.
2013-01-01
All organisms capable of long-term memory are necessarily oriented toward the future. We propose that one of the most important adaptive functions of long-term episodic memory is to store information about the past in the service of planning for the personal future. Because a system should have especially efficient performance when engaged in a task that makes maximal use of its evolved machinery, we predicted that future-oriented planning would result in especially good memory relative to other memory tasks. We tested recall performance of a word list, using encoding tasks with different temporal perspectives (e.g., past, future) but a similar context. Consistent with our hypothesis, future-oriented encoding produced superior recall. We discuss these findings in light of their implications for the thesis that memory evolved to enable its possessor to anticipate and respond to future contingencies that cannot be known with certainty. PMID:19966234
The investigation of a compact auto-connected wire-wrapped pulsed transformer
NASA Astrophysics Data System (ADS)
Wang, Yuwei; Zhang, Jiande; Chen, Dongqun; Cao, Shengguang; Li, Da; Zhang, Tianyang
2012-05-01
For the power conditioning circuit used to deliver power efficiently from flux compression generator (FCG) to the load with high impedance, an air-cored and wire-wrapped transformer convenient in coaxial connection to the other parts is investigated. To reduce the size and enhance the performance, an auto-connection is adopted. A fast and simple model is used to calculate the electrical parameters of the transformer. To evaluate the high voltage capability, the voltages across turns and the electric field distribution in the transformer are investigated. The calculated and the measured electrical parameters of the transformer show good agreements. And the safe operating voltage is predicted to exceed 500 kV. In the preliminary experiments, the transformer is tested in a power conditioning circuit with a capacitive power supply. It is demonstrated that the output voltage of the transformer reaches -342 kV under the input voltage of -81 kV.
Development of advanced high-temperature heat flux sensors. Phase 2: Verification testing
NASA Technical Reports Server (NTRS)
Atkinson, W. H.; Cyr, M. A.; Strange, R. R.
1985-01-01
A two-phase program is conducted to develop heat flux sensors capable of making heat flux measurements throughout the hot section of gas turbine engines. In Phase 1, three types of heat flux sensors are selected; embedded thermocouple, laminated, and Gardon gauge sensors. A demonstration of the ability of these sensors to operate in an actual engine environment is reported. A segmented liner of each of two combustors being used in the Broad Specification Fuels Combustor program is instrumented with the three types of heat flux sensors then tested in a high pressure combustor rig. Radiometer probes are also used to measure the radiant heat loads to more fully characterize the combustor environment. Test results show the heat flux sensors to be in good agreement with radiometer probes and the predicted data trends. In general, heat flux sensors have strong potential for use in combustor development programs.
NASA Astrophysics Data System (ADS)
Rahimi, H.; Hartvelt, M.; Peinke, J.; Schepers, J. G.
2016-09-01
The aim of this work is to investigate the capabilities of current engineering tools based on Blade Element Momentum (BEM) and free vortex wake codes for the prediction of key aerodynamic parameters of wind turbines in yawed flow. The axial induction factor and aerodynamic loads of three wind turbines (NREL VI, AVATAR and INNWIND.EU) were investigated using wind tunnel measurements and numerical simulations at 0 and 30 degrees of yaw. Results indicated that for axial conditions there is good agreement between all codes in terms of mean values of aerodynamic parameters; in yawed flow, however, significant deviations were observed. This was due to unsteady phenomena such as the advancing and retreating blade effect and the skewed wake effect. These deviations were most visible in the variation of aerodynamic parameters with rotor azimuth angle for sections at the root and tip, where the skewed wake effect plays a major role.
Sui, Sai; Ma, Hua; Lv, Yueguang; Wang, Jiafu; Li, Zhiqiang; Zhang, Jieqiu; Xu, Zhuo; Qu, Shaobo
2018-01-22
Arbitrary control of electromagnetic waves remains a significant challenge, although it promises many important applications. Here, we propose a fast optimization method for designing a wideband metasurface without using the Pancharatnam-Berry (PB) phase, whose elements are non-absorptive and provide a predictable, smooth wideband phase shift. In our design method, the metasurface is composed of low-Q-factor resonant elements and is optimized by a genetic algorithm and a nonlinear fitting method, with the advantage that far-field scattering patterns can be quickly synthesized from the hybrid array patterns. To validate the design method, a wideband low-radar-cross-section metasurface is demonstrated, showing the feasibility and good performance of wideband RCS reduction. This work reveals an opportunity for metasurfaces in the effective manipulation of microwaves, together with a flexible and fast optimal design method.
NASA Technical Reports Server (NTRS)
Gea, L. M.; Vicker, D.
2006-01-01
The primary objective of this paper is to demonstrate the capability of computational fluid dynamics (CFD) to simulate a very complicated flow field encountered during the space shuttle ascent. The flow field features nozzle plumes from booster separation motor (BSM) and reaction control system (RCS) jets with a supersonic incoming cross flow at speed of Mach 4. The overset Navier-Stokes code OVERFLOW, was used to simulate the flow field surrounding the entire space shuttle launch vehicle (SSLV) with high geometric fidelity. The variable gamma option was chosen due to the high temperature nature of nozzle flows and different plume species. CFD predicted Mach contours are in good agreement with the schlieren photos from wind tunnel test. Flow fields are discussed in detail and the results are used to support the debris analysis for the space shuttle Return To Flight (RTF) task.
NASA Astrophysics Data System (ADS)
Zhai, Mengting; Chen, Yan; Li, Jing; Zhou, Jun
2017-12-01
The molecular electronegativity distance vector (MEDV-13) was used to describe the molecular structure of benzyl ether diamidine derivatives in this paper. Based on MEDV-13, a three-parameter (M3, M15, M47) QSAR model of insecticidal activity (pIC50) for 60 benzyl ether diamidine derivatives was constructed by leaps-and-bounds regression (LBR). The traditional correlation coefficient (R) and the cross-validation correlation coefficient (RCV) were 0.975 and 0.971, respectively. The robustness of the regression model was validated by the jackknife method; the correlation coefficients R were between 0.971 and 0.983. Meanwhile, the independent variables in the model were tested and showed no autocorrelation. The regression results indicate that the model has good robustness and predictive capability. The research provides theoretical guidance for the development of a new generation of efficient, low-toxicity anti-African-trypanosomiasis drugs.
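The two quality measures quoted in this abstract, a conventional R from the full fit and a cross-validated RCV from held-out predictions, can be illustrated with a small sketch. This is not the authors' code: the descriptors, coefficients, and data below are synthetic stand-ins, and leave-one-out cross-validation is used as one common way to obtain an RCV-style statistic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60                                    # 60 derivatives, as in the study
X = rng.normal(size=(n, 3))               # stand-ins for three MEDV-13 descriptors
y = X @ np.array([1.5, -0.8, 0.4]) + 0.1 * rng.normal(size=n)  # synthetic pIC50

def fit_predict(X_train, y_train, X_test):
    """Ordinary least squares with an intercept term."""
    A = np.column_stack([np.ones(len(X_train)), X_train])
    coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    return np.column_stack([np.ones(len(X_test)), X_test]) @ coef

# Conventional R: correlation between fitted and observed values
y_fit = fit_predict(X, y, X)
R = np.corrcoef(y_fit, y)[0, 1]

# Cross-validated R: each observation predicted by a model trained without it
y_loo = np.array([
    fit_predict(np.delete(X, i, 0), np.delete(y, i), X[i:i+1])[0]
    for i in range(n)
])
R_cv = np.corrcoef(y_loo, y)[0, 1]

print(round(R, 3), round(R_cv, 3))
```

A model whose R and RCV stay close, as with the 0.975 and 0.971 reported above, suggests the fit is not driven by overfitting to particular compounds.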
Interferometric imaging using Si3N4 photonic integrated circuits for a SPIDER imager.
Su, Tiehui; Liu, Guangyao; Badham, Katherine E; Thurman, Samuel T; Kendrick, Richard L; Duncan, Alan; Wuchenich, Danielle; Ogden, Chad; Chriqui, Guy; Feng, Shaoqi; Chun, Jaeyi; Lai, Weicheng; Yoo, S J B
2018-05-14
This paper reports the design, fabrication, and experimental demonstration of a silicon nitride photonic integrated circuit (PIC). The PIC is capable of conducting one-dimensional interferometric imaging with twelve baselines near λ = 1100-1600 nm. The PIC consists of twelve waveguide pairs, each leading to a multi-mode interferometer (MMI) that forms broadband interference fringes for each corresponding pair of waveguides. An 18-channel arrayed waveguide grating (AWG) then separates the combined signal into 18 signals of different wavelengths. A total of 103 sets of fringes are collected by the detector array at the output of the PIC. We keep the optical path difference (OPD) of each interferometer baseline to within 1 µm to maximize the visibility of the interference measurement. We also constructed a testbed to use the PIC for two-dimensional complex visibility measurement with various targets. The experiment shows reconstructed images in good agreement with theoretical predictions.
CFD Modeling of Helium Pressurant Effects on Cryogenic Tank Pressure Rise Rates in Normal Gravity
NASA Technical Reports Server (NTRS)
Grayson, Gary; Lopez, Alfredo; Chandler, Frank; Hastings, Leon; Hedayat, Ali; Brethour, James
2007-01-01
A recently developed computational fluid dynamics modeling capability for cryogenic tanks is used to simulate both self-pressurization from external heating and also depressurization from thermodynamic vent operation. Axisymmetric models using a modified version of the commercially available FLOW-3D software are used to simulate actual physical tests. The models assume an incompressible liquid phase with density that is a function of temperature only. A fully compressible formulation is used for the ullage gas mixture that contains both condensable vapor and a noncondensable gas component. The tests, conducted at the NASA Marshall Space Flight Center, include both liquid hydrogen and nitrogen in tanks with ullage gas mixtures of each liquid's vapor and helium. Pressure and temperature predictions from the model are compared to sensor measurements from the tests and a good agreement is achieved. This further establishes the accuracy of the developed FLOW-3D based modeling approach for cryogenic systems.
Orthorexia nervosa: validation of a diagnosis questionnaire.
Donini, L M; Marsili, D; Graziani, M P; Imbriale, M; Cannella, C
2005-06-01
To validate a questionnaire for the diagnosis of orthorexia nervosa, an eating disorder defined as a "maniacal obsession for healthy food", 525 subjects were enrolled. They were then randomized into two samples (a sample of 404 subjects for the construction of the ORTO-15 test for the diagnosis of orthorexia; a sample of 121 subjects for the validation of the test). The ORTO-15 questionnaire, validated for the diagnosis of orthorexia, is made up of 15 multiple-choice items. The proposed test (ORTO-15) showed good predictive capability at a threshold value of 40 (efficacy 73.8%, sensitivity 55.6% and specificity 75.8%), also on verification with a control sample. However, it is limited in identifying the obsessive component of the disorder. For this reason we maintain that further investigation is necessary and that new questions useful for the evaluation of obsessive-compulsive behavior should be added to the ORTO-15 questionnaire.
NASA Technical Reports Server (NTRS)
Mehta, Manish; Seaford, Mark; Kovarik, Brian; Dufrene, Aaron; Solly, Nathan
2014-01-01
The ATA-002 Technical Team has successfully designed, developed, tested and assessed the SLS Pathfinder propulsion systems for the Main Base Heating Test Program. Major outcomes of the Pathfinder Test Program: reached 90% of full-scale chamber pressure; achieved all engine/motor design parameter requirements; reached steady plume flow behavior in less than 35 msec; maintained steady chamber pressure for 60 to 100 msec during engine/motor operation; obtained model engine/motor performance similar to the full-scale SLS system; mitigated nozzle throat and combustor thermal erosion; and collected test data in good agreement with numerical prediction codes. The next phase of the ATA-002 Test Program covers design and development of the SLS OML for the Main Base Heating Test, refinement of the BSRM design to optimize performance, and refinement of the CS-REM design to increase robustness. MSFC Aerosciences and CUBRC have the capability to develop sub-scale propulsion systems to meet desired performance requirements for short-duration testing.
Simulation of the radiation from the hot spot of an X-pinch
NASA Astrophysics Data System (ADS)
Oreshkin, V. I.; Artyomov, A. P.; Chaikovsky, S. A.; Oreshkin, E. V.; Rousskikh, A. G.
2017-01-01
The results of X-pinch experiments performed using a small-sized pulse generator are analyzed. The generator, capable of producing a 200-kA, 180-ns current, was loaded with an X-pinch made of four 35-μm-diameter aluminum wires. The analysis consists of a one-dimensional radiation magnetohydrodynamic simulation of the formation of a hot spot in an X-pinch, taking into account the outflow of material from the neck region. The radiation loss and the ion species composition of the pinch plasma are calculated based on a stationary collisional-radiative model, including balance equations for the populations of individual levels. With this model, good agreement between simulation predictions and experimental data has been achieved: the experimental and the calculated radiation power and pulse duration differ by no more than twofold. It has been shown that the x-ray pulse is formed in the radiative collapse region, near its boundary.
Wang, Peifang; Liu, Cui; Yao, Yu; Wang, Chao; Wang, Teng; Yuan, Ye; Hou, Jun
2017-05-01
To assess the capabilities of different techniques in predicting cadmium (Cd) bioavailability in Cd-contaminated soils with the addition of Zn, one in situ technique (diffusive gradients in thin films; DGT) was compared with soil solution concentration and four widely used single-step extraction methods (acetic acid, EDTA, sodium acetate and CaCl2). Wheat and maize were selected as test species. The results demonstrated that soils polluted with Cd alone inhibited the growth of wheat and maize significantly compared with control plants; the shoot and root biomasses of both plants dropped significantly (P < 0.05). The addition of Zn exhibited a strong antagonism to the physiological toxicity induced by Cd. The Pearson correlation coefficient showed positive correlations (P < 0.01, R > 0.9) between Cd concentrations in the two plants and Cd bioavailability indicated by each method in soils. Consequently, the results indicated that the DGT technique can be regarded as a good predictor of Cd bioavailability to plants, comparable to soil solution concentration and the four single-step extraction methods. Because the DGT technique offers in situ data, it is expected to be widely used in more areas.
NASA Astrophysics Data System (ADS)
Mehdipour, R.; Baniamerian, Z.; Delauré, Y.
2016-05-01
An accurate knowledge of heat transfer and temperature distribution in vehicle engines is essential for good management of heat transfer performance in combustion engines. This may be achieved by numerical simulation of flow through the engine cooling passages, but the task becomes particularly challenging when boiling occurs, and neglecting two-phase flow processes in the simulation results in significant inaccuracy in the predictions. In this study a three-dimensional numerical model is proposed using Fluent 6.3 to simulate heat transfer of fluid flowing through channels of conventional size. Results of the present theoretical and numerical model are then compared with empirical results. For high fluid flow velocities, the departure between experimental and numerical results is about 9%, while for lower velocity conditions the model inaccuracy increases to 18%. One of the outstanding capabilities of the present model, besides its ability to simulate two-phase fluid flow and heat transfer in three dimensions, is the prediction of the location of bubble formation and condensation, which can be a key issue in the evaluation of engine performance and thermal stresses.
Computational Analysis of Advanced Shape-Memory Alloy Devices Through a Robust Modeling Framework
NASA Astrophysics Data System (ADS)
Scalet, Giulia; Conti, Michele; Auricchio, Ferdinando
2017-06-01
Shape-memory alloys (SMA) provide significant advantages in various industrial fields, but their manufacturing and commercialization are currently hindered, mainly by poor knowledge of the material behavior and the lack of standards for its mechanical characterization. SMA products are usually developed by trial-and-error testing to address specific design requirements, increasing costs and time. The development of simulation tools offers a possible solution to assist engineers and designers and allows a better understanding of SMA transformation phenomena. Accordingly, the purpose of the present paper is to numerically analyze and predict the response of spring-like actuators and septal occluders, industrial components exploiting the shape-memory and pseudoelastic properties of SMAs, respectively. The methodology includes two main stages: the implementation of the three-dimensional phenomenological model known as the Souza-Auricchio model, and the finite element modeling of the device. The steps of each stage, such as parameter identification and model generalizations, are discussed. Validation results are presented through a comparison with the results of an experimental campaign. The framework demonstrates good prediction capabilities and makes it possible to reduce the number of experimental tests in the future.
A frequency-domain approach to improve ANNs generalization quality via proper initialization.
Chaari, Majdi; Fekih, Afef; Seibi, Abdennour C; Hmida, Jalel Ben
2018-08-01
The ability to train a network without memorizing the input/output data, thereby allowing a good predictive performance when applied to unseen data, is paramount in ANN applications. In this paper, we propose a frequency-domain approach to evaluate the network initialization in terms of quality of training, i.e., generalization capabilities. As an alternative to the conventional time-domain methods, the proposed approach eliminates the approximate nature of network validation using an excess of unseen data. The benefits of the proposed approach are demonstrated using two numerical examples, where two trained networks performed similarly on the training and the validation data sets, yet they revealed a significant difference in prediction accuracy when tested using a different data set. This observation is of utmost importance in modeling applications requiring a high degree of accuracy. The efficiency of the proposed approach is further demonstrated on a real-world problem, where unlike other initialization methods, a more conclusive assessment of generalization is achieved. On the practical front, subtle methodological and implementational facets are addressed to ensure reproducibility and pinpoint the limitations of the proposed approach. Copyright © 2018 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zecevic, Miroslav; Lebensohn, Ricardo A.; McCabe, Rodney J.
In this paper, the recently established methodology that uses known algorithmic expressions for the second moments of the stress field in the grains of a polycrystalline aggregate to calculate average fluctuations of lattice rotation rates, and the associated average intragranular misorientation distributions, within the mean-field viscoplastic self-consistent (VPSC) formulation is extended to solve the coupled problem of the effect of intragranular misorientations on stress and rotation-rate fluctuations. In turn, these coupled expressions are used to formulate and implement a grain fragmentation (GF) model in VPSC. Case studies, including tension and plane-strain compression of face-centered cubic polycrystals, are used to illustrate the capabilities of the new model. GF-VPSC predictions of intragranular misorientation distributions and texture evolution are compared with experiments and full-field numerical simulations, showing good agreement. In particular, the inclusion of misorientation spreads reduced the intensity of the deformed texture and thus improved the texture predictions. Moreover, considering that intragranular misorientations act as driving forces for recrystallization, the new GF-VPSC formulation is shown to enable computationally efficient modeling of microstructure evolution during deformation and recrystallization.
Coelho, Antonio Augusto Rodrigues
2016-01-01
This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system in which membership functions act as kernel functions of an interpolator. Conjunction of membership functions in a unitary hypercube space enables multivariable interpolation in N dimensions. Since membership functions act as interpolation kernels, the choice of membership function determines the interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline or Lanczos interpolator, to name a few. The proposed interpolator is presented as a solution to the modeling problem of static nonlinearities, since it is capable of modeling both a function and its inverse. Three case studies from the literature are presented: a single-input single-output (SISO) system, a MISO system and a MIMO system. Good results are obtained for performance metrics such as set-point tracking, control variation and robustness. The results demonstrate the applicability of the proposed method to modeling Hammerstein nonlinearities and their inverse functions for implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC). PMID:27657723
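The idea of membership functions acting as interpolation kernels can be sketched in one dimension: normalized triangular memberships over a uniform grid reproduce piecewise-linear interpolation, one of the behaviors the abstract lists. This is an illustrative reconstruction, not the authors' FLHI implementation; the grid and sample values are hypothetical.

```python
# Hypothetical 1-D sketch: triangular fuzzy memberships as interpolation
# kernels.  A Takagi-Sugeno-style weighted average of grid values with
# normalized triangular memberships yields piecewise-linear interpolation.
def triangular_membership(x, center, width):
    """Triangular fuzzy membership centered at `center`, support `width`."""
    return max(0.0, 1.0 - abs(x - center) / width)

def fuzzy_interpolate(x, grid, values):
    """Weighted average of grid values using membership-function kernels."""
    width = grid[1] - grid[0]                 # uniform grid spacing assumed
    weights = [triangular_membership(x, c, width) for c in grid]
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total

grid = [0.0, 1.0, 2.0, 3.0]
values = [0.0, 2.0, 1.0, 4.0]                 # samples of a static nonlinearity
y = fuzzy_interpolate(1.5, grid, values)      # midway between 2.0 and 1.0
```

Swapping the triangular kernel for a cubic or Lanczos kernel would change the interpolation characteristics accordingly, which is the design freedom the paper exploits.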
Hou, X; Chen, X; Zhang, M; Yan, A
2016-01-01
Plasmodium falciparum, the most fatal parasite that causes malaria, is responsible for over one million deaths per year. P. falciparum dihydroorotate dehydrogenase (PfDHODH) has been validated as a promising drug development target for antimalarial therapy since it catalyzes the rate-limiting step of DNA and RNA biosynthesis. In this study, we investigated the quantitative structure-activity relationships (QSAR) of the antimalarial activity of PfDHODH inhibitors by generating four computational models using multilinear regression (MLR) and a support vector machine (SVM), based on a dataset of 255 PfDHODH inhibitors. All the models display good prediction quality, with a leave-one-out q2 > 0.66, a correlation coefficient (r) > 0.85 on both training and test sets, and a mean square error (MSE) < 0.32 on training sets and < 0.37 on test sets. The study indicated that hydrogen bonding ability, atom polarizabilities and ring complexity are the predominant factors for the inhibitors' antimalarial activity. The models are capable of predicting inhibitors' antimalarial activity, and the molecular descriptors used to build them could be helpful in the development of new antimalarial drugs.
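The leave-one-out q2 statistic quoted above is defined as q2 = 1 - PRESS/TSS, where each prediction comes from a model refit without that sample. A minimal sketch for a single-descriptor linear model follows; the data are made up for illustration and are not the paper's 255-compound dataset.

```python
# Illustrative leave-one-out q^2 for a simple linear regression QSAR model.
# q^2 = 1 - PRESS / TSS, with PRESS accumulated from models refit without
# the held-out sample.  Data below are hypothetical.
def fit_slr(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def loo_q2(xs, ys):
    """Leave-one-out cross-validated q^2."""
    my = sum(ys) / len(ys)
    press = 0.0
    for i in range(len(xs)):
        a, b = fit_slr(xs[:i] + xs[i+1:], ys[:i] + ys[i+1:])
        press += (ys[i] - (a + b * xs[i])) ** 2
    tss = sum((y - my) ** 2 for y in ys)
    return 1.0 - press / tss

# Nearly linear descriptor/activity data: q^2 should be high
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.1, 1.1, 1.9, 3.2, 3.9, 5.1]
q2 = loo_q2(xs, ys)
```

For a well-fitting model q2 approaches 1; the paper's threshold of q2 > 0.66 is a common acceptance criterion for QSAR models.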
NASA Astrophysics Data System (ADS)
Tsamopoulos, John; Varchanis, Stylianos; Dimakopoulos, Yiannis
2017-11-01
Blood plasma is a dilute aqueous solution that contains proteins and hormones such as fibrinogen, cholesterol, etc. Many studies have assumed that it behaves rheologically like a Newtonian fluid; however, more recent experimental observations (Brust et al., 2013) suggest that it exhibits significant viscoelastic effects. Understanding plasma's rheology is of crucial importance, as it is well known that deviations of plasma's shear viscosity from physiological values can indicate serious diseases. In addition, the viscoelastic character of the blood solvent should be taken into consideration, as it can have a great impact on hemodynamics, especially in very narrow or stenotic microvessels. We investigate the capability of the e-PTT model, a widely used constitutive model for macromolecular solutions, to predict inhomogeneous flows of plasma in 1) a capillary breakup extensional rheometer (CABER), using a 2D axisymmetric model, and 2) a microfluidic contraction-expansion device, solving the full 3D transient governing equations. Although we use a single-mode approximation, the results are in very good agreement with the experiments, because they predict important features of blood plasma's flow, such as the bead-on-a-string formation in CABER and elongational thinning in the 3D flow. LIMMAT Foundation.
Development of a Corrosion Sensor for AN Aircraft Vehicle Health Monitoring System
NASA Astrophysics Data System (ADS)
Scott, D. A.; Price, D. C.; Edwards, G. C.; Batten, A. B.; Kolmeder, J.; Muster, T. H.; Corrigan, P.; Cole, I. S.
2010-02-01
A Rayleigh-wave-based sensor has been developed to measure corrosion damage in aircraft. This sensor forms an important part of a corrosion monitoring system being developed for a major aircraft manufacturer. This system measures the corrosion rate at the location of its sensors, and through a model predicts the corrosion rates in nearby places on an aircraft into which no sensors can be placed. In order to calibrate this model, which yields corrosion rates rather than the accumulated effect, an absolute measure of the damage is required. In this paper the development of a surface wave sensor capable of measuring accumulated damage will be described in detail. This sensor allows the system to measure material loss due to corrosion regardless of the possible loss of historical corrosion rate data, and can provide, at any stage, a benchmark for the predictive model that would allow a good estimate of the accumulated corrosion damage in similar locations on an aircraft. This system may obviate the need for costly inspection of difficult-to-access places in aircraft, where presently the only way to check for corrosion is by periodic dismantling and reassembly.
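The distinction the abstract draws, between corrosion-rate sensors and a sensor that measures accumulated damage, amounts to integrating a rate history over time. A minimal sketch of that bookkeeping follows; the sensor readings and units are hypothetical, not values from the described system.

```python
# Illustrative sketch (hypothetical values): accumulating material loss by
# integrating a measured corrosion-rate series over time.  The surface-wave
# sensor described above measures this accumulated quantity directly,
# providing a benchmark for rate-based predictive models.
def accumulated_loss(times_h, rates_um_per_h):
    """Trapezoidal integration of corrosion rate -> total thickness loss (um)."""
    loss = 0.0
    for i in range(1, len(times_h)):
        dt = times_h[i] - times_h[i - 1]
        loss += 0.5 * (rates_um_per_h[i] + rates_um_per_h[i - 1]) * dt
    return loss

times = [0.0, 10.0, 20.0, 30.0]            # elapsed hours
rates = [0.2, 0.4, 0.4, 0.2]               # um/h, hypothetical readings
total = accumulated_loss(times, rates)     # total material loss in um
```

If historical rate data are lost, the integral cannot be reconstructed, which is why an absolute damage measurement is needed to re-anchor the predictive model.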
An improved kinetics approach to describe the physical stability of amorphous solid dispersions.
Yang, Jiao; Grey, Kristin; Doney, John
2010-01-15
The recrystallization of amorphous solid dispersions may lead to a loss in dissolution rate and consequently reduce bioavailability. The purpose of this work is to understand the factors governing the recrystallization of amorphous drug-polymer solid dispersions and to develop a kinetics model capable of accurately predicting their physical stability. Recrystallization kinetics was measured using differential scanning calorimetry for initially amorphous efavirenz-polyvinylpyrrolidone solid dispersions stored at controlled temperature and relative humidity. The experimental measurements were fitted by a new kinetic model to estimate the recrystallization rate constant and the microscopic geometry of crystal growth. The new kinetics model was used to illustrate the governing factors of amorphous solid dispersion stability. Temperature was found to affect efavirenz recrystallization in an Arrhenius manner, while the recrystallization rate constant increased linearly with relative humidity. Polymer content strongly inhibited the recrystallization process by increasing the crystallization activation energy and decreasing the equilibrium crystallinity. The new kinetic model was validated by the good agreement between model fits and experimental measurements. A small increase in polyvinylpyrrolidone resulted in substantial stability enhancement of the efavirenz amorphous solid dispersion. The newly established kinetics model provided more accurate predictions than the Avrami equation.
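The classical Avrami baseline the new model is compared against, combined with the Arrhenius temperature dependence the study reports for the rate constant, can be sketched as follows. The parameter values are hypothetical placeholders, not the fitted efavirenz-polyvinylpyrrolidone parameters.

```python
# Sketch of the Avrami crystallization equation with an Arrhenius rate
# constant, the baseline the paper's new model improves upon.
# Parameter values below are hypothetical, for illustration only.
import math

R_GAS = 8.314  # J/(mol K)

def rate_constant(A, Ea, T):
    """Arrhenius temperature dependence of the recrystallization rate."""
    return A * math.exp(-Ea / (R_GAS * T))

def avrami_crystallinity(t, k, n):
    """Fraction recrystallized after time t: X = 1 - exp(-(k t)^n)."""
    return 1.0 - math.exp(-((k * t) ** n))

k = rate_constant(A=1.0e5, Ea=8.0e4, T=313.0)  # hypothetical 40 degC storage
x1 = avrami_crystallinity(1.0e8, k, n=3.0)
x2 = avrami_crystallinity(2.0e8, k, n=3.0)      # crystallinity grows with time
```

Raising the activation energy Ea (as added polymer does, per the abstract) shrinks k and slows the predicted recrystallization, which is the stabilizing mechanism the study identifies.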
The melting point of lithium: an orbital-free first-principles molecular dynamics study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Mohan; Hung, Linda; Huang, Chen
2013-08-25
The melting point of liquid lithium near zero pressure is studied with large-scale orbital-free first-principles molecular dynamics (OF-FPMD) in the isobaric-isothermal ensemble. We adopt the Wang-Govind-Carter (WGC) functional as the kinetic energy density functional (KEDF) and construct a bulk-derived local pseudopotential (BLPS) for Li. Our simulations employ both the ‘heat-until-melts’ method and the coexistence method. We predict 465 K as an upper bound of the melting point of Li from the ‘heat-until-melts’ method, while we predict 434 K as the melting point of Li from the coexistence method. These values compare well with the experimental melting point of 453 K at zero pressure. Furthermore, we calculate several important properties of liquid Li, including the diffusion coefficients, pair distribution functions, static structure factors, and compressibilities of Li at 470 K and 725 K in the canonical ensemble. These theoretically obtained results show good agreement with known experimental results, suggesting that OF-FPMD using a non-local KEDF and a BLPS is capable of accurately describing liquid metals.
The Generation, Radiation and Prediction of Supersonic Jet Noise. Volume 1
1978-10-01
standard, Gaussian correlation function model can yield a good noise spectrum prediction (at 90°), but the corresponding axial source distributions do not...forms for the turbulence cross-correlation function. Good agreement was obtained between measured and calculated far-field noise spectra. However, the...complementary error function profile (3.63) was found to provide a good fit to the axial velocity distribution for a wide range of Mach numbers in the initial
NASA Astrophysics Data System (ADS)
Alvarez-Llamas, C.; Pisonero, J.; Bordel, N.
2016-09-01
Direct solid determination of trace amounts of fluorine using Laser-Induced Breakdown Spectroscopy (LIBS) is a challenging task due to the low excitation efficiency of this element. Several strategies have been developed to improve the detection capabilities, including the use of LIBS in a He atmosphere to enhance the signal-to-background ratios of F atomic emission lines. An alternative method is based on the detection of the molecular compounds that fluorine forms in the LIBS plasma. In this work, the detection of CaF molecular emission bands is investigated to improve the analytical capabilities of atmospheric-air LIBS for the determination of fluorine traces in solid samples. In particular, Cu matrix samples containing different fluorine concentrations (between 50 and 600 μg/g) and variable amounts of Ca are used to demonstrate the linear relationship between the CaF emission signal and F concentration. Limits of detection for fluorine are improved by more than one order of magnitude using CaF emission bands rather than F atomic lines in atmospheric-air LIBS. Furthermore, a toothpaste powder sample is used to validate the analytical method. Good agreement is observed between the nominal and the predicted fluorine mass content.
ρ-VOF: An interface sharpening method for gas-liquid flow simulation
NASA Astrophysics Data System (ADS)
Wang, Jiantao; Liu, Gang; Jiang, Xiong; Mou, Bin
2018-05-01
The simulation of compressible gas-liquid flow remains an open problem. Popular methods are either confined to the incompressible flow regime or inevitably smear the free interface. A new finite volume method for compressible two-phase flow simulation is contributed for this subject. First, the “heterogeneous equilibrium” assumption is introduced for the control volume; by employing free-interface reconstruction technology, the distribution of each component in the control volume is obtained. Next, the AUSM+-up (advection upstream splitting method) scheme is employed to calculate the convective and pressure fluxes, with the contact discontinuity characteristic taken into account, followed by an update of the whole flow field. The new method features a density-based formulation and the interface reconstruction technology of VOF (volume of fluid), so we name it the “ρ-VOF method”. Inheriting from the AUSM family and VOF, ρ-VOF behaves as an all-speed method, capable of simulating shocks in gas-liquid flow while preserving the sharpness of the free interface. A gas-liquid shock tube is simulated to evaluate the method; good agreement is obtained between the predicted results and those of the cited literature, and a sharper free interface is identified. These results establish the capability and validity of the ρ-VOF method for compressible gas-liquid flow simulation.
Numerical Simulation of Flow in a Whirling Annular Seal and Comparison with Experiments
NASA Technical Reports Server (NTRS)
Athavale, M. M.; Hendricks, R. C.; Steinetz, B. M.
1995-01-01
The turbulent flow field in a simulated annular seal with a large clearance/radius ratio (0.015) and a whirling rotor was simulated using an advanced 3D CFD code, SCISEAL. A circular whirl orbit with synchronous whirl was imposed on the rotor center. The flow field was rendered quasi-steady by a transformation to a rotating frame. The standard k-epsilon model with wall functions was used to treat the turbulence. Experimentally measured values of flow parameters were used to specify the seal inlet and exit boundary conditions. The computed flow field, in terms of velocity and pressure, is compared with the experimental measurements inside the seal. The agreement between the numerical results and the corrected experimental data is fair to good. The capability of current advanced CFD methodology to analyze this complex flow field is demonstrated, and the methodology can be extended to other whirl frequencies. Half- (or sub-) synchronous (fluid-film unstable motion) and synchronous (rotor centrifugal force unbalance) whirls are the most unstable whirl modes in turbomachinery seals, and the code's capability of simulating the flows in steady as well as whirling seals will prove extremely useful in the design, analysis, and performance prediction of annular and other types of seals.
Hu, Jingwen; Klinich, Kathleen D; Reed, Matthew P; Kokkolaras, Michael; Rupp, Jonathan D
2012-06-01
In motor-vehicle crashes, young school-aged children restrained by vehicle seat belt systems often suffer from abdominal injuries due to submarining. However, the current anthropomorphic test device, so-called "crash dummy", is not adequate for proper simulation of submarining. In this study, a modified Hybrid-III six-year-old dummy model capable of simulating and predicting submarining was developed using MADYMO (TNO Automotive Safety Solutions). The model incorporated improved pelvis and abdomen geometry and properties previously tested in a modified physical dummy. The model was calibrated and validated against four sled tests under two test conditions with and without submarining using a multi-objective optimization method. A sensitivity analysis using this validated child dummy model showed that dummy knee excursion, torso rotation angle, and the difference between head and knee excursions were good predictors for submarining status. It was also shown that restraint system design variables, such as lap belt angle, D-ring height, and seat coefficient of friction (COF), may have opposite effects on head and abdomen injury risks; therefore child dummies and dummy models capable of simulating submarining are crucial for future restraint system design optimization for young school-aged children. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.
A review and analysis of neural networks for classification of remotely sensed multispectral imagery
NASA Technical Reports Server (NTRS)
Paola, Justin D.; Schowengerdt, Robert A.
1993-01-01
A literature survey and analysis of the use of neural networks for the classification of remotely sensed multispectral imagery is presented. As part of a brief mathematical review, the backpropagation algorithm, which is the most common method of training multi-layer networks, is discussed with an emphasis on its application to pattern recognition. The analysis is divided into five aspects of neural network classification: (1) input data preprocessing, structure, and encoding; (2) output encoding and extraction of classes; (3) network architecture; (4) training algorithms; and (5) comparisons to conventional classifiers. The advantages of the neural network method over traditional classifiers are its non-parametric nature, arbitrary decision boundary capabilities, easy adaptation to different types of data and input structures, fuzzy output values that can enhance classification, and good generalization for use with multiple images. The disadvantages of the method are slow training time, inconsistent results due to random initial weights, and the requirement of obscure initialization values (e.g., learning rate and hidden layer size). Possible techniques for ameliorating these problems are discussed. It is concluded that, although the neural network method has several unique capabilities, it will become a useful tool in remote sensing only if it is made faster, more predictable, and easier to use.
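The backpropagation training loop the review centers on can be sketched in a few lines for a one-hidden-layer network: a forward pass, an output delta, hidden deltas, and gradient-descent weight updates. The two-band "pixel" data below are invented for illustration and do not come from any real multispectral scene.

```python
# Minimal self-contained backpropagation sketch for a one-hidden-layer
# network classifying toy two-band "pixels" (hypothetical data).
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
# 2 spectral bands in, 3 hidden units, 1 output class probability
w1 = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(3)]
w2 = [random.uniform(-0.5, 0.5) for _ in range(3)]

data = [([0.1, 0.9], 0.0), ([0.2, 0.8], 0.0),   # hypothetical "water" pixels
        ([0.9, 0.2], 1.0), ([0.8, 0.1], 1.0)]   # hypothetical "soil" pixels

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    return h, sigmoid(sum(w * hi for w, hi in zip(w2, h)))

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

loss_before = total_loss()
lr = 0.5
for _ in range(200):                        # plain stochastic backprop
    for x, t in data:
        h, y = forward(x)
        dy = (y - t) * y * (1.0 - y)        # output-layer delta
        dh = [dy * w2[j] * h[j] * (1.0 - h[j]) for j in range(3)]
        for j in range(3):
            w2[j] -= lr * dy * h[j]
            for i in range(2):
                w1[j][i] -= lr * dh[j] * x[i]
loss_after = total_loss()
```

The random initial weights illustrate one of the drawbacks the review notes: different seeds yield different trained networks and can give inconsistent results.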
Development of Benchmark Examples for Delamination Onset and Fatigue Growth Prediction
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2011-01-01
An approach for assessing the delamination propagation and growth capabilities in commercial finite element codes was developed and demonstrated for the Virtual Crack Closure Technique (VCCT) implementations in ABAQUS. The Double Cantilever Beam (DCB) specimen was chosen as an example. First, benchmark results to assess delamination propagation capabilities under static loading were created using models simulating specimens with different delamination lengths. For each delamination length modeled, the load and displacement at the load point were monitored. The mixed-mode strain energy release rate components were calculated along the delamination front across the width of the specimen. A failure index was calculated by correlating the results with the mixed-mode failure criterion of the graphite/epoxy material. The calculated critical loads and critical displacements for delamination onset for each delamination length modeled were used as a benchmark. The load/displacement relationship computed during automatic propagation should closely match the benchmark case. Second, starting from an initially straight front, the delamination was allowed to propagate based on the algorithms implemented in the commercial finite element software. The load-displacement relationship obtained from the propagation analysis results and the benchmark results were compared. Good agreement could be achieved by selecting the appropriate input parameters, which were determined in an iterative procedure.
A nanoporous MXene film enables flexible supercapacitors with high energy storage.
Fan, Zhimin; Wang, Youshan; Xie, Zhimin; Xu, Xueqing; Yuan, Yin; Cheng, Zhongjun; Liu, Yuyan
2018-05-14
MXene films are attractive for use in advanced supercapacitor electrodes on account of their ultrahigh density and pseudocapacitive charge storage mechanism in sulfuric acid. However, the self-restacking of MXene nanosheets severely affects their rate capability and mass loading. Herein, a free-standing and flexible modified nanoporous MXene film is fabricated by incorporating Fe(OH)3 nanoparticles with diameters of 3-5 nm into MXene films and then dissolving the Fe(OH)3 nanoparticles, followed by low calcination at 200 °C, resulting in highly interconnected nanopore channels that promote efficient ion transport without compromising ultrahigh density. As a result, the modified nanoporous MXene film presents an attractive volumetric capacitance (1142 F cm-3 at 0.5 A g-1) and good rate capability (828 F cm-3 at 20 A g-1). Furthermore, it still displays a high volumetric capacitance of 749 F cm-3 and good flexibility even at a high mass loading of 11.2 mg cm-2. Therefore, this flexible and free-standing nanoporous MXene film is a promising electrode material for flexible, portable and compact storage devices. This study provides an efficient material design for flexible energy storage devices possessing high volumetric capacitance and good rate capability even at a high mass loading.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, Ba Nghiep; Holbery, Jim; Smith, Mark T.
2006-11-30
This report describes the status of current process modeling approaches to predict the behavior and flow of fiber-filled thermoplastics under injection molding conditions. Previously, models have been developed to simulate the injection molding of short-fiber thermoplastics, and an as-formed composite part or component can then be predicted that contains a microstructure resulting from the constituents' material properties and characteristics as well as the processing parameters. Our objective is to assess these models in order to determine their capabilities and limitations, and the developments needed for long-fiber injection-molded thermoplastics (LFTs). First, the concentration regimes are summarized to facilitate the understanding of the different types of fiber-fiber interaction that can occur for a given fiber volume fraction. After the formulation of the fiber suspension flow problem and the simplification leading to the Hele-Shaw approach, the interaction mechanisms are discussed. Next, the establishment of the rheological constitutive equation is presented, reflecting the coupled flow/orientation nature of the problem. The decoupled flow/orientation approach is also discussed, which constitutes a good simplification for many applications involving flows in thin cavities. Finally, before outlining the necessary developments for LFTs, some applications of the current orientation model and the so-called modified Folgar-Tucker model are illustrated through fiber orientation predictions for selected LFT samples.
NASA Astrophysics Data System (ADS)
Cai, Jun; Wang, Kuaishe; Han, Yingying
2016-03-01
True stress and true strain values obtained from isothermal compression tests over a wide temperature range from 1,073 to 1,323 K and a strain rate range from 0.001 to 1 s-1 were employed to establish constitutive equations based on the Johnson-Cook, modified Zerilli-Armstrong (ZA) and strain-compensated Arrhenius-type models, respectively, to predict the high-temperature flow behavior of Ti-6Al-4V alloy in the α + β phase. Furthermore, a comparative study has been made on the capability of the three models to represent the elevated-temperature flow behavior of Ti-6Al-4V alloy. Suitability of the three models was evaluated by comparing both the correlation coefficient R and the average absolute relative error (AARE). The results showed that the Johnson-Cook model is inadequate to provide a good description of the flow behavior of Ti-6Al-4V alloy in the α + β phase domain, while the predicted values of the modified ZA model and the strain-compensated Arrhenius-type model agree well with the experimental values except under some deformation conditions. Meanwhile, the modified ZA model could track the deformation behavior more accurately than the other models throughout the entire temperature and strain rate range.
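The two suitability measures used in the comparison above have standard definitions; the sketch below is illustrative only (the flow-stress values are invented, not the paper's data):

```python
import numpy as np

def correlation_r(exp, pred):
    """Pearson correlation coefficient R between experimental and predicted values."""
    return np.corrcoef(np.asarray(exp, float), np.asarray(pred, float))[0, 1]

def aare(exp, pred):
    """Average absolute relative error (AARE), in percent."""
    exp, pred = np.asarray(exp, float), np.asarray(pred, float)
    return 100.0 * np.mean(np.abs((exp - pred) / exp))

# Illustrative flow-stress values (MPa) only, not the paper's data
sigma_exp  = [120.0, 95.0, 80.0, 60.0]
sigma_pred = [118.0, 99.0, 77.0, 63.0]
r, err = correlation_r(sigma_exp, sigma_pred), aare(sigma_exp, sigma_pred)
```

A high R with a low AARE indicates both good correlation and small per-point error, which is why the paper reports both.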
Degrande, G; Lombaert, G
2001-09-01
In Krylov's analytical prediction model, the free field vibration response during the passage of a train is written as the superposition of the effect of all sleeper forces, using Lamb's approximate solution for the Green's function of a halfspace. When this formulation is extended with the Green's functions of a layered soil, considerable computational effort is required if these Green's functions are needed in a wide range of source-receiver distances and frequencies. It is demonstrated in this paper how the free field response can alternatively be computed, using the dynamic reciprocity theorem, applied to moving loads. The formulation is based on the response of the soil due to the moving load distribution for a single axle load. The equations are written in the wave-number-frequency domain, accounting for the invariance of the geometry in the direction of the track. The approach allows for a very efficient calculation of the free field vibration response, distinguishing the quasistatic contribution from the effect of the sleeper passage frequency and its higher harmonics. The methodology is validated by means of in situ vibration measurements during the passage of a Thalys high-speed train on the track between Brussels and Paris. It is shown that the model has good predictive capabilities in the near field at low and high frequencies, but underestimates the response in the midfrequency band.
NASA Technical Reports Server (NTRS)
Bardina, J. E.
1994-01-01
A new computationally efficient 3-D compressible Reynolds-averaged implicit Navier-Stokes method with advanced two-equation turbulence models for high-speed flows is presented. All convective terms are modeled using an entropy-satisfying higher-order Total Variation Diminishing (TVD) scheme based on implicit upwind flux-difference split approximations and an arithmetic averaging procedure of primitive variables. This method combines the best features of data management and computational efficiency of space marching procedures with the generality and stability of time-dependent Navier-Stokes procedures to solve flows with mixed supersonic and subsonic zones, including streamwise separated flows. Its robust stability derives from a combination of conservative implicit upwind flux-difference splitting with Roe's property U to provide accurate shock capturing capability that non-conservative schemes do not guarantee, an alternating symmetric Gauss-Seidel 'method of planes' relaxation procedure coupled with a three-dimensional two-factor diagonal-dominant approximate factorization scheme, TVD flux limiters of higher-order flux differences satisfying realizability, and well-posed characteristic-based implicit boundary-point approximations consistent with the local characteristics domain of dependence. The efficiency of the method is greatly increased with Newton-Raphson acceleration, which allows convergence in essentially one forward sweep for supersonic flows. The method is verified by comparing with experiment and other Navier-Stokes methods. Here, results of adiabatic and cooled flat plate flows, compression corner flow, and 3-D hypersonic shock-wave/turbulent boundary layer interaction flows are presented. The robust 3-D method achieves a computational efficiency improvement of at least one order of magnitude over the CNS Navier-Stokes code.
It provides cost-effective aerodynamic predictions in agreement with experiment, and the capability of predicting complex flow structures in complex geometries with good accuracy.
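As a minimal illustration of the TVD flux-limiting idea described above, the sketch below applies the classic minmod limiter to cell-average data; this is a generic textbook limiter, not necessarily the specific higher-order limiter used in the code described:

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: picks the smaller-magnitude slope, or zero across an extremum."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

# Limited slopes for a 1-D field of cell averages (illustrative values)
u = np.array([0.0, 1.0, 2.0, 3.0, 2.0])
du_left  = u[1:-1] - u[:-2]   # backward differences
du_right = u[2:]   - u[1:-1]  # forward differences
slopes = minmod(du_left, du_right)
```

Zeroing the slope at local extrema is what prevents the spurious oscillations near shocks that unlimited higher-order schemes produce.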
Diagnostic capability of spectral-domain optical coherence tomography for glaucoma.
Wu, Huijuan; de Boer, Johannes F; Chen, Teresa C
2012-05-01
To determine the diagnostic capability of spectral-domain optical coherence tomography in glaucoma patients with visual field defects. Prospective, cross-sectional study. Participants were recruited from a university hospital clinic. One eye of 85 normal subjects and 61 glaucoma patients with average visual field mean deviation of -9.61 ± 8.76 dB was selected randomly for the study. A subgroup of the glaucoma patients with early visual field defects was calculated separately. Spectralis optical coherence tomography (Heidelberg Engineering, Inc) circular scans were performed to obtain peripapillary retinal nerve fiber layer (RNFL) thicknesses. The RNFL diagnostic parameters based on the normative database were used alone or in combination for identifying glaucomatous RNFL thinning. To evaluate diagnostic performance, calculations included areas under the receiver operating characteristic curve, sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio, and negative likelihood ratio. Overall RNFL thickness had the highest area under the receiver operating characteristic curve values: 0.952 for all patients and 0.895 for the early glaucoma subgroup. For all patients, the highest sensitivity (98.4%; 95% confidence interval, 96.3% to 100%) was achieved by using 2 criteria: ≥ 1 RNFL sectors being abnormal at the < 5% level and overall classification of borderline or outside normal limits, with specificities of 88.9% (95% confidence interval, 84.0% to 94.0%) and 87.1% (95% confidence interval, 81.6% to 92.5%), respectively, for these 2 criteria. Statistical parameters for evaluating the diagnostic performance of the Spectralis spectral-domain optical coherence tomography were good for early perimetric glaucoma and were excellent for moderately advanced perimetric glaucoma. Copyright © 2012 Elsevier Inc. All rights reserved.
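The diagnostic-performance statistics reported in this abstract all derive from a 2x2 confusion matrix; a minimal sketch with invented counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Diagnostic-performance measures from a 2x2 confusion matrix."""
    sens = tp / (tp + fn)          # sensitivity (true positive rate)
    spec = tn / (tn + fp)          # specificity (true negative rate)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "PPV": tp / (tp + fp),     # positive predictive value
        "NPV": tn / (tn + fn),     # negative predictive value
        "LR+": sens / (1.0 - spec),  # positive likelihood ratio
        "LR-": (1.0 - sens) / spec,  # negative likelihood ratio
    }

# Invented counts for a glaucoma-vs-normal classification (not the study's data)
m = diagnostic_metrics(tp=60, fp=10, fn=1, tn=75)
```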
PLS modelling of structure—activity relationships of catechol O-methyltransferase inhibitors
NASA Astrophysics Data System (ADS)
Lotta, Timo; Taskinen, Jyrki; Bäckström, Reijo; Nissinen, Erkki
1992-06-01
Quantitative structure-activity analysis was carried out for in vitro inhibition of rat brain soluble catechol O-methyltransferase by a series (N=99) of 1,5-substituted-3,4-dihydroxybenzenes using computational chemistry and multivariate PLS modelling of data sets. The molecular structural descriptors (N=19) associated with the electronics of the catecholic ring and sizes of substituents were derived theoretically. For the whole set of molecules two separate PLS models have to be used. A PLS model with two significant (crossvalidated) model dimensions describing 82.2% of the variance in inhibition activity data was capable of predicting all molecules except those having the largest R1 substituent or having a large R5 substituent compared to the NO2 group. The other PLS model with three significant (crossvalidated) model dimensions described 83.3% of the variance in inhibition activity data. This model could not handle compounds having a small R5 substituent, compared to the NO2 group, or the largest R1 substituent. The predictive capability of these PLS models was good. The models reveal that inhibition activity is nonlinearly related to the size of the R5 substituent. The analysis of the PLS models also shows that the binding affinity is greatly dependent on the electronic nature of both R1 and R5 substituents. The electron-withdrawing nature of the substituents enhances inhibition activity. In addition, the size of the R1 substituent and its lipophilicity are important in the binding of inhibitors. The size of the R1 substituent has an upper limit. On the other hand, ionized R1 substituents decrease inhibition activity.
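A single-response PLS model of the kind described above can be sketched with the standard NIPALS PLS1 algorithm; the descriptors and activities below are synthetic stand-ins, not the 99-molecule data set:

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal NIPALS PLS1: returns regression coefficients for centered data."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    Xk = X.copy()
    Ws, Ps, qs = [], [], []
    for _ in range(n_components):
        w = Xk.T @ y
        w /= np.linalg.norm(w)        # weight vector
        t = Xk @ w                    # scores
        p = Xk.T @ t / (t @ t)        # X loadings
        q = (y @ t) / (t @ t)         # y loading
        Xk = Xk - np.outer(t, p)      # deflate X
        Ws.append(w); Ps.append(p); qs.append(q)
    W, P, q = np.array(Ws).T, np.array(Ps).T, np.array(qs)
    return W @ np.linalg.solve(P.T @ W, q)   # coefficients in original X space

# Synthetic stand-in for 99 molecules x 19 descriptors (not the real data set)
rng = np.random.default_rng(1)
X = rng.normal(size=(99, 19))
y = 0.5 * (X @ rng.normal(size=19))
B = pls1_fit(X, y, n_components=2)
y_hat = (X - X.mean(axis=0)) @ B + y.mean()
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

In practice the number of components (the "significant model dimensions" above) is chosen by cross-validation rather than by in-sample fit.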
NASA Astrophysics Data System (ADS)
Srivastava, Prashant K.; Petropoulos, George P.; Gupta, Manika; Islam, Tanvir
2015-04-01
Soil Moisture Deficit (SMD) is a key variable in the water and energy exchanges that occur at the land-surface/atmosphere interface. Monitoring SMD supports irrigation scheduling, allowing a suitable quantity of water to be applied at the proper time. Past studies have found that land surface temperature (LST) has a strong relation to SMD; LST can be estimated from MODIS or from a numerical weather prediction model such as WRF (Weather Research and Forecasting model). Given the importance of SMD, this work focused on the application of an Artificial Neural Network (ANN), evaluating its capabilities for SMD estimation using the LST data estimated from MODIS and the WRF mesoscale model. The benchmark SMD estimated from the Probability Distribution Model (PDM) over the Brue catchment, Southwest of England, U.K., is used for all the calibration and validation experiments. The performances between observed and simulated SMD are assessed in terms of the Nash-Sutcliffe Efficiency (NSE), the Root Mean Square Error (RMSE) and the percentage of bias (%Bias). The application of the ANN confirmed a high capability of WRF and MODIS LST for prediction of SMD. Performance during the ANN calibration and validation showed a good agreement between benchmark and estimated SMD with MODIS LST information, with significantly higher performance than WRF-simulated LST. The work presented is the first comprehensive application of LST from MODIS and the WRF mesoscale model for hydrological SMD estimation, particularly for a maritime climate. More studies in this direction are recommended to the hydro-meteorological community, so that useful information will be accumulated in the technical literature for different geographical locations and climatic conditions. Keywords: WRF, Land Surface Temperature, MODIS satellite, Soil Moisture Deficit, Neural Network
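The three goodness-of-fit measures named above have standard definitions; a minimal sketch (note that sign conventions for %Bias vary between authors):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit, 0 no better than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    """Root Mean Square Error."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def pbias(obs, sim):
    """Percentage bias of simulated relative to observed totals."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(100.0 * np.sum(sim - obs) / np.sum(obs))
```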
Predictors of treatment failure in young patients undergoing in vitro fertilization.
Jacobs, Marni B; Klonoff-Cohen, Hillary; Agarwal, Sanjay; Kritz-Silverstein, Donna; Lindsay, Suzanne; Garzo, V Gabriel
2016-08-01
The purpose of the study was to evaluate whether routinely collected clinical factors can predict in vitro fertilization (IVF) failure among young, "good prognosis" patients predominantly with secondary infertility who are less than 35 years of age. Using de-identified clinic records, 414 women <35 years undergoing their first autologous IVF cycle were identified. Logistic regression was used to identify patient-driven clinical factors routinely collected during fertility treatment that could be used to model predicted probability of cycle failure. One hundred ninety-seven patients with both primary and secondary infertility had a failed IVF cycle, and 217 with secondary infertility had a successful live birth. None of the women with primary infertility had a successful live birth. The significant predictors for IVF cycle failure among young patients were fewer previous live births, history of biochemical pregnancies or spontaneous abortions, lower baseline antral follicle count, higher total gonadotropin dose, unknown infertility diagnosis, and lack of at least one fair to good quality embryo. The full model showed good predictive value (c = 0.885) for estimating risk of cycle failure; at ≥80 % predicted probability of failure, sensitivity = 55.4 %, specificity = 97.5 %, positive predictive value = 95.4 %, and negative predictive value = 69.8 %. If this predictive model is validated in future studies, it could be beneficial for predicting IVF failure in good prognosis women under the age of 35 years.
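The predicted probability of cycle failure from a fitted logistic model, and the study's ≥80% cutoff, can be sketched as follows; the intercept, coefficients, and patient covariates are hypothetical placeholders, not the model actually fitted in the study:

```python
import numpy as np

def predicted_failure_prob(x, intercept, coefs):
    """Logistic-model probability: p = 1 / (1 + exp(-(b0 + x . b)))."""
    return 1.0 / (1.0 + np.exp(-(intercept + np.dot(x, coefs))))

# Hypothetical coefficients and (scaled) patient covariates, not the fitted model
intercept = -1.2
coefs = np.array([0.8, 0.6, -0.9])   # e.g. prior losses, gonadotropin dose, good embryo
x = np.array([1.0, 2.0, 0.0])
p_fail = predicted_failure_prob(x, intercept, coefs)
high_risk = p_fail >= 0.80           # the study's >=80% predicted-probability threshold
```

Raising the threshold trades sensitivity for positive predictive value, which is why the study's 80% cutoff gives a high PPV (95.4%) but only moderate sensitivity (55.4%).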
Material Stream Strategy for Lithium and Inorganics (U)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Safarik, Douglas Joseph; Dunn, Paul Stanton; Korzekwa, Deniece Rochelle
Design Agency Responsibilities: Manufacturing Support to meet Stockpile Stewardship goals for maintaining the nuclear stockpile through experimental and predictive modeling capability. Development and maintenance of Manufacturing Science expertise to assess material specifications and performance boundaries, and their relationship to processing parameters. Production Engineering Evaluations with competence in design requirements, material specifications, and manufacturing controls. Maintenance and enhancement of Aging Science expertise to support Stockpile Stewardship predictive science capability.
Progress in Finite Element Modeling of the Lower Extremities
2015-06-01
bending and subsequent injury, e.g., the distal tibia motion results in bending of the tibia rather than the tibia rotating about the knee joint...layers, rich anisotropy, and wide variability. Developing a model for predictive injury capability, therefore, needs to be versatile and flexible to... injury capability presents many challenges, the first of which is identifying the types of conditions where injury prediction is needed. Our focus
All-Data Approach to Assessing Financial Capability in People with Psychiatric Disabilities
Lazar, Christina M.; Black, Anne C.; McMahon, Thomas J.; Rosenheck, Robert A.; Ries, Richard; Ames, Donna; Rosen, Marc I.
2015-01-01
The goal of this project was to develop an evidence-based method to assess the ability of disabled persons to manage federal disability payments. This paper describes the development of the FISCAL (Financial Incapability Structured Clinical Assessment done Longitudinally) measure of financial capability. The FISCAL was developed by an iterative process of literature review, pilot testing, and expert consultation. Independent assessors used the FISCAL to rate the financial capability of 118 participants (57% female, 57% Caucasian) who: received Social Security disability payments, had recently been treated in acute care facilities for psychiatric disorders, and who did not have representative payees or conservators. Altogether, 48% of participants were determined financially incapable by the FISCAL, of whom 60% were incapable due to unmet basic needs, 91% were incapable due to spending that harmed them (e.g. on illicit drugs or alcohol), 56% were incapable due to both unmet needs and harmful spending, and 5% were incapable due to contextual factors. As expected, incapable individuals scored higher on a measure of money mismanagement (p < .001) compared to capable individuals. Inter-rater reliability for FISCAL capability determinations was very good (Kappa = .77) and inter-rater agreement was 89%. In this population the FISCAL had construct validity; ratings demonstrated good reliability and correlated with a related measure. Potentially, the FISCAL can be used to validate other measures of capability and to help understand how people on limited incomes manage their funds. PMID:26146947
All-data approach to assessing financial capability in people with psychiatric disabilities.
Lazar, Christina M; Black, Anne C; McMahon, Thomas J; Rosenheck, Robert A; Ries, Richard; Ames, Donna; Rosen, Marc I
2016-04-01
The goal of this project was to develop an evidence-based method to assess the ability of disabled persons to manage federal disability payments. This article describes the development of the Financial Incapability Structured Clinical Assessment done Longitudinally (FISCAL) measure of financial capability. The FISCAL was developed by an iterative process of literature review, pilot testing, and expert consultation. Independent assessors used the FISCAL to rate the financial capability of 118 participants (57% female, 58% Caucasian) who received Social Security disability payments, had recently been treated in acute care facilities for psychiatric disorders, and who did not have representative payees or conservators. Altogether, 48% of participants were determined financially incapable by the FISCAL, of whom 60% were incapable because of unmet basic needs, 91% were incapable because of spending that harmed them (e.g., on illicit drugs or alcohol), 56% were incapable because of both unmet needs and harmful spending, and 5% were incapable because of contextual factors. As expected, incapable individuals scored higher on a measure of money mismanagement (p < .001) compared with capable individuals. Interrater reliability for FISCAL capability determinations was very good (κ = .77) and interrater agreement was 89%. In this population, the FISCAL had construct validity; ratings demonstrated good reliability and correlated with a related measure. Potentially, the FISCAL can be used to validate other measures of capability and to help understand how people on limited incomes manage their funds. (c) 2016 APA, all rights reserved.
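The interrater-reliability statistic reported above (κ = .77) is Cohen's kappa, which corrects raw agreement for agreement expected by chance; a minimal sketch with invented ratings, not the study's data:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters corrected for chance."""
    n = len(rater_a)
    assert n == len(rater_b)
    categories = sorted(set(rater_a) | set(rater_b))
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_chance = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                   for c in categories)
    return (p_obs - p_chance) / (1.0 - p_chance)

# Invented capable(1)/incapable(0) determinations, not the study's ratings
rater1 = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
rater2 = [1, 1, 0, 1, 1, 0, 1, 1, 0, 0]
kappa = cohens_kappa(rater1, rater2)
```

Note how kappa (0.8 here) is lower than the raw 90% agreement, mirroring the study's κ = .77 against 89% agreement.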
NASA Astrophysics Data System (ADS)
Zanino, R.; Bonifetto, R.; Brighenti, A.; Isono, T.; Ozeki, H.; Savoldi, L.
2018-07-01
The ITER toroidal field insert (TFI) coil is a single-layer Nb3Sn solenoid tested in 2016-2017 at the National Institutes for Quantum and Radiological Science and Technology (former JAEA) in Naka, Japan. The TFI, the last in a series of ITER insert coils, was tested in operating conditions relevant for the actual ITER TF coils, inserting it in the borehole of the central solenoid model coil, which provided the background magnetic field. In this paper, we consider the five quench propagation tests that were performed using one or two inductive heaters (IHs) as drivers; out of these, three used just one IH but with increasing delay times, up to 7.5 s, between the quench detection and the TFI current dump. The results of the 4C code prediction of the quench propagation up to the current dump are presented first, based on simulations performed before the tests. We then describe the experimental results, showing good reproducibility. Finally, we compare the 4C code predictions with the measurements, confirming the 4C code capability to accurately predict the quench propagation, and the evolution of total and local voltages, as well as of the hot spot temperature. To the best of our knowledge, such a predictive validation exercise is performed here for the first time for the quench of a Nb3Sn coil. Discrepancies between prediction and measurement are found in the evolution of the jacket temperatures, in the He pressurization and quench acceleration in the late phase of the transient before the dump, as well as in the early evolution of the inlet and outlet He mass flow rate. Based on the lessons learned in the predictive exercise, the model is then refined to try and improve a posteriori (i.e. in interpretive, as opposed to predictive mode) the agreement between simulation and experiment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nuzzo, Ralph G.; Rogers, John A.; Menard, Etienne
The invention provides methods and devices for fabricating printable semiconductor elements and assembling printable semiconductor elements onto substrate surfaces. Methods, devices and device components of the present invention are capable of generating a wide range of flexible electronic and optoelectronic devices and arrays of devices on substrates comprising polymeric materials. The present invention also provides stretchable semiconductor structures and stretchable electronic devices capable of good performance in stretched configurations.
ERIC Educational Resources Information Center
Hannon, Cliona; Faas, Daniel; O'Sullivan, Katriona
2017-01-01
Widening participation programmes aim to increase the progression of students from low socio-economic status (SES) groups to higher education. This research proposes that the human capabilities approach is a good justice-based framework within which to consider the social and cultural capital processes that impact upon the educational capabilities…
A Process for Assessing NASA's Capability in Aircraft Noise Prediction Technology
NASA Technical Reports Server (NTRS)
Dahl, Milo D.
2008-01-01
An acoustic assessment is being conducted by NASA that has been designed to assess the current state of the art in NASA's capability to predict aircraft-related noise and to establish baselines for gauging future progress in the field. The process for determining NASA's current capabilities includes quantifying the differences between noise predictions and measurements of noise from experimental tests. The computed noise predictions are being obtained from semi-empirical, analytical, statistical, and numerical codes. In addition, errors and uncertainties are being identified and quantified both in the predictions and in the measured data to further enhance the credibility of the assessment. Since the assessment project has not been fully completed, this paper contains preliminary results, based on the contributions of many researchers, and shows a select sample of the types of results obtained regarding the prediction of aircraft noise at both the system and component levels. The system level results are for engines and aircraft. The component level results are for fan broadband noise, for jet noise from a variety of nozzles, and for airframe noise from flaps and landing gear parts. There are also sample results for sound attenuation in lined ducts with flow and the behavior of acoustic lining in ducts.
Predicting U.S. food demand in the 20th century: a new look at system dynamics
NASA Astrophysics Data System (ADS)
Moorthy, Mukund; Cellier, Francois E.; LaFrance, Jeffrey T.
1998-08-01
The paper describes a new methodology for predicting the behavior of macroeconomic variables. The approach is based on System Dynamics and Fuzzy Inductive Reasoning. A four-layer pseudo-hierarchical model is proposed. The bottom layer makes predictions about population dynamics, age distributions among the populace, as well as demographics. The second layer makes predictions about the general state of the economy, including such variables as inflation and unemployment. The third layer makes predictions about the demand for certain goods or services, such as milk products, used cars, mobile telephones, or internet services. The fourth and top layer makes predictions about the supply of such goods and services, in terms of their prices. Each layer can be influenced by control variables whose values are only determined at higher levels. In this sense, the model is not strictly hierarchical. For example, the demand for goods at level three depends on the prices of these goods, which are only determined at level four. Yet, the prices are themselves influenced by the expected demand. The methodology is exemplified by means of a macroeconomic model that makes predictions about US food demand during the 20th century.
Huang, Yajun; Ding, Xiaokang; Qi, Yu; Yu, Bingran; Xu, Fu-Jian
2016-11-01
There is an increasing demand for multifunctional materials with good antibacterial activity, biocompatibility and drug/gene delivery capability for next-generation biomedical applications. To this end, in this work a series of hydroxyl-rich hyperbranched polyaminoglycosides of gentamicin, tobramycin, and neomycin (HP, and SS-HP with redox-responsive disulfide bonds) were readily synthesized via ring-opening reactions in a one-pot manner. Both HP and SS-HP exhibit high antibacterial activity toward Escherichia coli and Staphylococcus aureus. Meanwhile, the hemolysis assay of the above materials shows good biocompatibility. Moreover, SS-HPs show excellent gene transfection efficiency in vitro due to the breakdown of reduction-responsive disulfide bonds. In an in vivo anti-tumor assay, the SS-HP/p53 complexes exhibited a potent capability to inhibit the growth of tumors. This study provides a promising approach for the design of next-generation multifunctional biomedical materials. Copyright © 2016 Elsevier Ltd. All rights reserved.
Artificial neural network model for ozone concentration estimation and Monte Carlo analysis
NASA Astrophysics Data System (ADS)
Gao, Meng; Yin, Liting; Ning, Jicai
2018-07-01
Air pollution in the urban atmosphere directly affects public health; therefore, it is essential to predict air pollutant concentrations. Air quality is a complex function of emissions, meteorology and topography, and artificial neural networks (ANNs) provide a sound framework for relating these variables. In this study, we investigated the feasibility of using an ANN model with meteorological parameters as input variables to predict ozone concentration in the urban area of Jinan, a metropolis in Northern China. We first found that the architecture of the network of neurons had little effect on the predicting capability of the ANN model. A parsimonious ANN model with 6 routinely monitored meteorological parameters and one temporal covariate (the category of day, i.e. working day, legal holiday and regular weekend) as input variables was identified, where the 7 input variables were selected following the forward selection procedure. Compared with the benchmarking ANN model with 9 meteorological and photochemical parameters as input variables, the predicting capability of the parsimonious ANN model was acceptable. Its predicting capability was also verified in terms of the warning success ratio during the pollution episodes. Finally, uncertainty and sensitivity analyses were also performed based on Monte Carlo simulations (MCS). It was concluded that the ANN could properly predict the ambient ozone level. Maximum temperature, atmospheric pressure, sunshine duration and maximum wind speed were identified as the predominant input variables significantly influencing the prediction of ambient ozone concentrations.
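The forward selection procedure mentioned above can be sketched generically: greedily add whichever candidate input most improves a fit criterion. The sketch below scores candidates with the in-sample R² of a linear fit purely for illustration; the study scored ANN models and would use a validation criterion, and all data here are synthetic:

```python
import numpy as np

def r2_linear(X, y):
    """In-sample R^2 of an ordinary least-squares fit with intercept."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def forward_select(X, y, k):
    """Greedily pick k columns of X, each time adding the best R^2 improver."""
    chosen = []
    for _ in range(k):
        best = max((j for j in range(X.shape[1]) if j not in chosen),
                   key=lambda j: r2_linear(X[:, chosen + [j]], y))
        chosen.append(best)
    return chosen

# Synthetic stand-in: 9 candidate inputs, only columns 3 and 7 carry signal
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 9))
y = 2.0 * X[:, 3] - X[:, 7] + 0.1 * rng.normal(size=200)
selected = forward_select(X, y, k=2)
```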
Harnessing atomistic simulations to predict the rate at which dislocations overcome obstacles
NASA Astrophysics Data System (ADS)
Saroukhani, S.; Nguyen, L. D.; Leung, K. W. K.; Singh, C. V.; Warner, D. H.
2016-05-01
Predicting the rate at which dislocations overcome obstacles is key to understanding the microscopic features that govern the plastic flow of modern alloys. In this spirit, the current manuscript examines the rate at which an edge dislocation overcomes an obstacle in aluminum. Predictions were made using different popular variants of Harmonic Transition State Theory (HTST) and compared to those of direct Molecular Dynamics (MD) simulations. The HTST predictions were found to be grossly inaccurate due to the large entropy barrier associated with the dislocation-obstacle interaction. Considering the importance of finite temperature effects, the utility of the Finite Temperature String (FTS) method was then explored. While this approach was found capable of identifying a prominent reaction tube, it was not capable of computing the free energy profile along the tube. Lastly, the utility of the Transition Interface Sampling (TIS) approach was explored, which does not need a free energy profile and is known to be less reliant on the choice of reaction coordinate. The TIS approach was found capable of accurately predicting the rate, relative to direct MD simulations. This finding was utilized to examine the temperature and load dependence of the dislocation-obstacle interaction in a simple periodic cell configuration. An attractive rate prediction approach combining TST and simple continuum models is identified, and the strain rate sensitivity of individual dislocation obstacle interactions is predicted.
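The HTST rate expression whose limitations the abstract discusses has the Arrhenius-like Vineyard form k = ν₀ exp(−ΔE/k_B T); a minimal sketch with illustrative numbers (the entropic contribution the paper highlights as problematic is folded into the prefactor here):

```python
import math

KB_EV_PER_K = 8.617333262e-5  # Boltzmann constant in eV/K

def htst_rate(prefactor_hz, barrier_ev, temperature_k):
    """Harmonic TST escape rate, k = nu0 * exp(-dE / (kB * T))."""
    return prefactor_hz * math.exp(-barrier_ev / (KB_EV_PER_K * temperature_k))

# Illustrative numbers: ~1e13 Hz attempt frequency, 0.5 eV barrier, room temperature
k_300 = htst_rate(1e13, 0.5, 300.0)
```

The paper's point is that when the entropy barrier is large, no constant prefactor of this form captures the true rate, which motivates the sampling-based TIS approach.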
ERIC Educational Resources Information Center
Parry, Malcolm
1998-01-01
Explains a novel way of approaching centripetal force: theory is used to predict an orbital period at which a toy train will topple from a circular track. The demonstration has elements of prediction (a criterion for a good model) and suspense (a criterion for a good demonstration). The demonstration proved useful in undergraduate physics and…
The Coastal Ocean Prediction Systems program: Understanding and managing our coastal ocean
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eden, H.F.; Mooers, C.N.K.
1990-06-01
The goal of COPS is to couple a program of regular observations to numerical models, through techniques of data assimilation, in order to provide a predictive capability for the US coastal ocean including the Great Lakes, estuaries, and the entire Exclusive Economic Zone (EEZ). The objectives of the program include: determining the predictability of the coastal ocean and the processes that govern the predictability; developing efficient prediction systems for the coastal ocean based on the assimilation of real-time observations into numerical models; and coupling the predictive systems for the physical behavior of the coastal ocean to predictive systems for biological, chemical, and geological processes to achieve an interdisciplinary capability. COPS will provide the basis for effective monitoring and prediction of coastal ocean conditions by optimizing the use of increased scientific understanding, improved observations, advanced computer models, and computer graphics to make the best possible estimates of sea level, currents, temperatures, salinities, and other properties of entire coastal regions.
Reid, John Michael; Dai, Dingwei; Delmonte, Susanna; Counsell, Carl; Phillips, Stephen J; MacLeod, Mary Joan
2017-05-01
Physicians are often asked to prognosticate soon after a patient presents with stroke. This study aimed to compare two outcome prediction scores (the Five Simple Variables [FSV] score and the PLAN [Preadmission comorbidities, Level of consciousness, Age, and focal Neurologic deficit] score) with informal prediction by physicians. Demographic and clinical variables were prospectively collected from consecutive patients hospitalised with acute ischaemic or haemorrhagic stroke (2012-13). In-person or telephone follow-up at 6 months established vital and functional status (modified Rankin score [mRS]). The area under the receiver operating characteristic curve (AUC) was used to establish prediction score performance. Five hundred and seventy-five patients were included; 46% female, median age 76 years, 88% ischaemic stroke. Six months after stroke, 47% of patients had a good outcome (alive and independent, mRS 0-2) and 26% a devastating outcome (dead or severely dependent, mRS 5-6). The FSV and PLAN scores were superior to physician prediction (AUCs of 0.823-0.863 versus 0.773-0.805, P < 0.0001) for good and devastating outcomes. The FSV score was superior to the PLAN score for predicting good outcomes and vice versa for devastating outcomes (P < 0.001). Outcome prediction was more accurate for those with later presentations (>24 hours from onset). The FSV and PLAN scores are validated in this population for outcome prediction after both ischaemic and haemorrhagic stroke. The FSV score is the least complex of all developed scores and can assist outcome prediction by physicians. © The Author 2016. Published by Oxford University Press on behalf of the British Geriatrics Society. All rights reserved. For permissions, please email: journals.permissions@oup.com
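The AUC values compared above can be computed as the Mann-Whitney probability that a randomly chosen good-outcome patient receives a more favorable score than a randomly chosen poor-outcome patient; a minimal sketch with invented scores, not the study's data:

```python
def auc_mann_whitney(scores_pos, scores_neg):
    """AUC as the Mann-Whitney statistic: P(score_pos > score_neg), ties counted half."""
    pairs = [(p, n) for p in scores_pos for n in scores_neg]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return wins / len(pairs)

# Invented prognostic scores: higher score = predicted good outcome
good_outcome = [0.9, 0.8, 0.4]
poor_outcome = [0.3, 0.5]
auc = auc_mann_whitney(good_outcome, poor_outcome)
```

An AUC of 0.5 means no discrimination; values in the study's 0.82-0.86 range indicate the scores rank most patient pairs correctly.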
Thermal niche estimators and the capability of poor dispersal species to cope with climate change
NASA Astrophysics Data System (ADS)
Sánchez-Fernández, David; Rizzo, Valeria; Cieslak, Alexandra; Faille, Arnaud; Fresneda, Javier; Ribera, Ignacio
2016-03-01
For management strategies in the context of global warming, accurate predictions of species response are mandatory. To date, however, most predictions are based on niche (bioclimatic) models that usually overlook biotic interactions, behavioral adjustments, or adaptive evolution, and assume that species can disperse freely without constraints. The deep subterranean environment minimises these uncertainties, as it is simple, homogeneous, and has constant environmental conditions. It is thus an ideal model system for studying the effect of global change on species with poor dispersal capabilities. We assess the potential fate of a lineage of troglobitic beetles under global change predictions using different approaches to estimate their thermal niche: bioclimatic models, rates of thermal niche change estimated from a molecular phylogeny, and data from physiological studies. Using bioclimatic models, at most 60% of the species were predicted to have suitable conditions in 2080. Considering the rates of thermal niche change did not improve this prediction. However, physiological data suggest that subterranean species have a broad thermal tolerance, allowing them to withstand temperatures never experienced through their evolutionary history. These results stress the need for experimental approaches to assess the capability of poor dispersal species to cope with temperatures outside those they currently experience.
A Stochastic-entropic Approach to Detect Persistent Low-temperature Volcanogenic Thermal Anomalies
NASA Astrophysics Data System (ADS)
Pieri, D. C.; Baxter, S.
2011-12-01
Eruption prediction is a chancy, idiosyncratic affair, as volcanoes often manifest waxing and/or waning pre-eruption emission, geodetic, and seismic behavior that is unsystematic. Thus, fundamental to increased prediction accuracy and precision are good and frequent assessments of the time-series behavior of relevant precursor geophysical, geochemical, and geological phenomena, especially when volcanoes become restless. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), in orbit since 1999 on the NASA Terra Earth Observing System satellite, is an important capability for detection of thermal eruption precursors (even subtle ones) and increased passive gas emissions. The unique combination of ASTER high-spatial-resolution multi-spectral thermal IR imaging data (90 m/pixel; 5 bands in the 8-12 µm region) with simultaneous visible and near-IR imaging data and stereo-photogrammetric capabilities makes it a particularly useful tool for detecting thermal precursors. The JPL ASTER Volcano Archive, consisting of 80,000+ ASTER volcano images, allows systematic analysis of (a) baseline thermal emissions for 1550+ volcanoes, (b) important aspects of the time-dependent thermal variability, and (c) the limits of detection of temporal dynamics of eruption precursors. We are analyzing a catalog of the magnitude, frequency, and distribution of ASTER-documented volcano thermal signatures, compiled from 2000 onward, at 90 m/pixel. Low-contrast thermal anomalies of relatively low apparent absolute temperature (e.g., summit lakes, fumarolically altered areas, geysers, very small sub-pixel hotspots), for which the signal-to-noise ratio may be marginal (e.g., scene confusion due to clouds, water and water vapor, fumarolic emissions, variegated ground emissivity, and their combinations), are particularly important to discern and monitor.
We have developed a technique to detect persistent hotspots that takes into account in-scene observed pixel joint frequency distributions over time, temperature contrast, and Shannon entropy. Preliminary analyses of Fogo Volcano and Yellowstone hotspots, among others, indicate that this is a very sensitive technique with good potential to be applied over the entire ASTER global night-time archive. We will discuss our progress in creating the global thermal anomaly catalog as well as algorithm approach and results. This work was carried out at the Jet Propulsion Laboratory of the California Institute of Technology under contract to NASA.
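The Shannon-entropy ingredient of the detection technique can be sketched as follows. The binning range and the pixel temperatures are invented, and this toy omits the joint-frequency-over-time and temperature-contrast terms of the full method; it only shows that a scene containing hot sub-populations carries higher histogram entropy than a thermally uniform one:

```python
import math
from collections import Counter

def shannon_entropy(values, bins=8, lo=270.0, hi=310.0):
    """Shannon entropy (bits) of a histogram of pixel temperatures (K).
    Values outside [lo, hi] are clamped into the edge bins."""
    width = (hi - lo) / bins
    idx = [min(bins - 1, max(0, int((v - lo) / width))) for v in values]
    counts = Counter(idx)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

background = [280.0, 281.0, 279.5, 280.5, 280.2, 279.8]   # uniform scene
hotspot    = [280.0, 281.0, 279.5, 305.0, 306.0, 280.2]   # fumarole pixels
print(shannon_entropy(background) < shannon_entropy(hotspot))   # -> True
```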
Airport Noise Prediction Model -- MOD 7
DOT National Transportation Integrated Search
1978-07-01
The MOD 7 Airport Noise Prediction Model is fully operational. The language used is Fortran, and it has been run on several different computer systems. Its capabilities include prediction of noise levels for single parameter changes, for multiple cha...
Evaluating Rapid Models for High-Throughput Exposure Forecasting (SOT)
High throughput exposure screening models can provide quantitative predictions for thousands of chemicals; however these predictions must be systematically evaluated for predictive ability. Without the capability to make quantitative, albeit uncertain, forecasts of exposure, the ...
In silico prediction of pharmaceutical degradation pathways: a benchmarking study.
Kleinman, Mark H; Baertschi, Steven W; Alsante, Karen M; Reid, Darren L; Mowery, Mark D; Shimanovich, Roman; Foti, Chris; Smith, William K; Reynolds, Dan W; Nefliu, Marcela; Ott, Martin A
2014-11-03
Zeneth is a new software application capable of predicting degradation products derived from small molecule active pharmaceutical ingredients. This study was aimed at understanding the current status of Zeneth's predictive capabilities and assessing gaps in predictivity. Using data from 27 small molecule drug substances from five pharmaceutical companies, the evolution of Zeneth predictions through knowledge base development since 2009 was evaluated. The experimentally observed degradation products from forced degradation, accelerated, and long-term stability studies were compared to Zeneth predictions. Steady progress in predictive performance was observed as the knowledge bases grew and were refined. Over the course of the development covered within this evaluation, the ability of Zeneth to predict experimentally observed degradants increased from 31% to 54%. In particular, gaps in predictivity were noted in the areas of epimerizations, N-dealkylation of N-alkylheteroaromatic compounds, photochemical decarboxylations, and electrocyclic reactions. The results of this study show that knowledge base development efforts have increased the ability of Zeneth to predict relevant degradation products and aid pharmaceutical research. This study has also provided valuable information to help guide further improvements to Zeneth and its knowledge base.
Landscape capability predicts upland game bird abundance and occurrence
Loman, Zachary G.; Blomberg, Erik J.; DeLuca, William; Harrison, Daniel J.; Loftin, Cyndy; Wood, Petra B.
2017-01-01
Landscape capability (LC) models are a spatial tool with potential applications in conservation planning. We used survey data to validate LC models as predictors of occurrence and abundance at broad and fine scales for American woodcock (Scolopax minor) and ruffed grouse (Bonasa umbellus). Landscape capability models were reliable predictors of occurrence but were less indicative of relative abundance at route (11.5–14.6 km) and point scales (0.5–1 km). As predictors of occurrence, LC models had high sensitivity (0.71–0.93) and were accurate (0.71–0.88) and precise (0.88 and 0.92 for woodcock and grouse, respectively). Models did not predict point-scale abundance independent of the ability to predict occurrence of either species. The LC models are useful predictors of patterns of occurrences in the northeastern United States, but they have limited utility as predictors of fine-scale or route-specific abundances.
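The occurrence metrics quoted above (sensitivity, accuracy, precision) follow directly from a 2x2 confusion table; a sketch with invented counts, not the woodcock/grouse survey data:

```python
# Sketch: occurrence-prediction metrics from a hypothetical 2x2
# confusion table (tp/fp/fn/tn counts are invented for illustration).

def classification_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),            # occupied sites detected
        "precision":   tp / (tp + fp),            # predicted-present that were present
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
    }

m = classification_metrics(tp=71, fp=9, fn=14, tn=26)
print({k: round(v, 2) for k, v in m.items()})
```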
NASA Technical Reports Server (NTRS)
Harris, Charles E.; Starnes, James H., Jr.; Newman, James C., Jr.
1995-01-01
NASA is developing a 'tool box' that includes a number of advanced structural analysis computer codes which, taken together, represent the comprehensive fracture mechanics capability required to predict the onset of widespread fatigue damage. These structural analysis tools have complementary and specialized capabilities ranging from a finite-element-based stress-analysis code for two- and three-dimensional built-up structures with cracks to a fatigue and fracture analysis code that uses stress-intensity factors and material-property data found in 'look-up' tables or from equations. NASA is conducting critical experiments necessary to verify the predictive capabilities of the codes, and these tests represent a first step in the technology-validation and industry-acceptance processes. NASA has established cooperative programs with aircraft manufacturers to facilitate the comprehensive transfer of this technology by making these advanced structural analysis codes available to industry.
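The abstract does not give the codes' internals; as a hedged sketch of the kind of stress-intensity-factor-based fatigue analysis described, here is a textbook Paris-law crack-growth integration with illustrative constants (C, m, and the geometry factor Y are invented, not taken from any NASA code):

```python
import math

# Sketch: cycles for a crack to grow from a0 to af under constant
# stress range, integrating the Paris law da/dN = C * (dK)^m with
# dK = Y * dsigma * sqrt(pi * a). All constants are illustrative.

def cycles_to_grow(a0, af, dsigma, C=1e-11, m=3.0, Y=1.12, da=1e-5):
    """Integrate crack length a (metres) from a0 to af in steps da,
    accumulating the cycle count for each step."""
    a, n = a0, 0.0
    while a < af:
        dK = Y * dsigma * math.sqrt(math.pi * a)   # MPa*sqrt(m)
        n += da / (C * dK ** m)                    # cycles spent on this step
        a += da
    return n

print(f"{cycles_to_grow(0.001, 0.01, dsigma=100.0):.3e} cycles")
```

A longer initial crack consumes fewer remaining cycles, which is why stress-intensity solutions for built-up structure (the finite-element side of the tool box) feed directly into the fatigue-life side.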
Middha, Sushil Kumar; Goyal, Arvind Kumar; Faizan, Syed Ahmed; Sanghamitra, Nethramurthy; Basistha, Bharat Chandra; Usha, Talambedu
2013-11-01
Type 2 diabetes is an inevitably progressive disease with irreversible beta cell failure. Glycogen synthase kinase and Glucokinase, two important enzymes with diverse biological actions in carbohydrate metabolism, are promising targets for developing novel antidiabetic drugs. A combinatorial structure-based molecular docking and pharmacophore modelling study was performed with compounds of Hippophae salicifolia and H. rhamnoides as inhibitors. Docking with Discovery Studio 3.5 revealed that two compounds from H. salicifolia, viz. Lutein D and an analogue of Zeaxanthin, and two compounds from H. rhamnoides, viz. Isorhamnetin-3-rhamnoside and Isorhamnetin-7-glucoside, bind significantly to the GSK-3 beta receptor and play a role in its inhibition; whereas for Glucokinase only one compound, common to both plants (vitamin C), had binding characteristics capable of activation. The results help in understanding the type of interactions that occur between the ligands and the receptors. Toxicity predictions revealed that none of the compounds had hepatotoxic effects, and all had good absorption as well as solubility characteristics. The compounds showed neither plasma protein binding nor the ability to cross the blood-brain barrier. Further in vivo and in vitro studies need to be performed to prove that these compounds can be used effectively as antidiabetic drugs.
Van Bavel, Jay J.; Packer, Dominic J.; Haas, Ingrid Johnsen; Cunningham, William A.
2012-01-01
Over the past decade, intuitionist models of morality have challenged the view that moral reasoning is the sole or even primary means by which moral judgments are made. Rather, intuitionist models posit that certain situations automatically elicit moral intuitions, which guide moral judgments. We present three experiments showing that evaluations are also susceptible to the influence of moral versus non-moral construal. We had participants make moral evaluations (rating whether actions were morally good or bad) or non-moral evaluations (rating whether actions were pragmatically or hedonically good or bad) of a wide variety of actions. As predicted, moral evaluations were faster, more extreme, and more strongly associated with universal prescriptions—the belief that absolutely nobody or everybody should engage in an action—than non-moral (pragmatic or hedonic) evaluations of the same actions. Further, we show that people are capable of flexibly shifting from moral to non-moral evaluations on a trial-by-trial basis. Taken together, these experiments provide evidence that moral versus non-moral construal has an important influence on evaluation and suggests that effects of construal are highly flexible. We discuss the implications of these experiments for models of moral judgment and decision-making. PMID:23209557
Performance of FFT methods in local gravity field modelling
NASA Technical Reports Server (NTRS)
Forsberg, Rene; Solheim, Dag
1989-01-01
Fast Fourier transform (FFT) methods provide a fast and efficient means of processing large amounts of gravity or geoid data in local gravity field modelling. The FFT methods, however, have a number of theoretical and practical limitations, especially the use of a flat-earth approximation and the requirement for gridded data. In spite of this, the methods often yield excellent results in practice when compared to more rigorous (and computationally expensive) methods, such as least-squares collocation. The good performance of the FFT methods illustrates that the theoretical approximations are offset by the capability of taking into account more data in larger areas, which is especially important for geoid predictions. For best results, good data gridding algorithms are essential; in practice, truncated collocation approaches may be used. For large areas at high latitudes, the gridding must be done using suitable map projections such as UTM, to avoid trivial errors caused by the meridian convergence. The FFT methods are compared to ground truth data in New Mexico (xi, eta from delta g), Scandinavia (N from delta g, where the geoid fits to 15 cm over 2000 km), and areas of the Atlantic (delta g from satellite altimetry using Wiener filtering). In all cases the FFT methods yield results comparable or superior to other methods.
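The gridded-data requirement buys speed because a planar convolution of gridded anomalies with an integration kernel becomes a pointwise product in the frequency domain. A sketch with a Gaussian stand-in kernel (not the Stokes or Vening-Meinesz kernels used in practice), assuming NumPy is available:

```python
import numpy as np

# Sketch: FFT evaluation of a planar convolution on a grid. The
# Gaussian kernel is an illustrative stand-in for the geodetic
# integration kernels; the "anomalies" are random synthetic data.

def fft_convolve(grid, kernel):
    """Cyclic convolution of two equal-size grids via the 2-D FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(grid) * np.fft.fft2(kernel)))

n = 64
y, x = np.mgrid[0:n, 0:n]
anomalies = np.random.default_rng(0).normal(size=(n, n))   # gridded dg
kernel = np.exp(-((x - n // 2) ** 2 + (y - n // 2) ** 2) / 50.0)
kernel /= kernel.sum()                                     # unit total weight

smoothed = fft_convolve(anomalies, kernel)
print(smoothed.shape, float(np.std(smoothed)) < float(np.std(anomalies)))
```

Both FFTs and the inverse cost O(n^2 log n), versus O(n^4) for direct summation over the same grid, which is the practical advantage the abstract points to.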
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baird, Benjamin; Loebick, Codruta; Roychoudhury, Subir
During Phase I, both experimental evaluation and computational validation of an advanced Spouted Bed Reactor (SBR) approach for biomass and coal combustion were completed. All Phase I objectives were met and some were exceeded. Comprehensive insight into SBR operation was achieved via design, fabrication, and testing of a small demonstration unit with pulverized coal and biomass as feedstock at the University of Connecticut (UCONN). A scale-up and optimization tool for the next generation of coal and biomass co-firing for reducing GHG emissions was also developed. The predictive model was implemented with DOE's MFIX computational model and was observed to accurately mimic even unsteady behavior. An updated Spouted Bed Reactor was fabricated, based on model feedback, and experimentally displayed near-ideal behavior. This predictive capability, based upon first principles and experimental correlation, allows realistic simulation of mixed fuel combustion in these newly proposed power boiler designs. Compared to a conventional fluidized bed, the SBR facilitates good mixing of coal and biomass, with relative insensitivity to particle size and density, resulting in improved combustion efficiency. Experimental data with mixed coal and biomass fuels demonstrated complete oxidation at temperatures as low as 500°C. This avoids NOx formation and residual carbon in the waste ash. Operation at stoichiometric conditions without requiring cooling or sintering of the carrier was also observed. Oxygen-blown operation was tested and indicated good performance. This highlighted the possibility of operating the SBR at a wide range of conditions suitable for power generation and partial oxidation byproducts. It also supports the possibility of implementing chemical looping (for readily capturing CO2 and SOx).
Muller, Matthew P; McGeer, Allison J; Hassan, Kazi; Marshall, John; Christian, Michael
2010-03-05
The demand for inpatient medical services increases during influenza season. A scoring system capable of identifying influenza patients at low risk of death or ICU admission could help clinicians make hospital admission decisions. Hospitalized patients with laboratory-confirmed influenza were identified over 3 influenza seasons at 25 Ontario hospitals. Each patient was assigned a score for 6 pneumonia severity and 2 sepsis scores using the first data available following their registration in the emergency room. In-hospital mortality and ICU admission were the outcomes. Score performance was assessed using the area under the receiver operating characteristic curve (AUC) and the sensitivity and specificity for identifying low-risk patients (risk of outcome <5%). The cohort consisted of 607 adult patients. Mean age was 76 years, 12% of patients died (71/607), and 9% required ICU care (55/607). None of the scores examined demonstrated good discriminatory ability (AUC ≥ 0.80). The Pneumonia Severity Index (AUC 0.78, 95% CI 0.72-0.83) and the Mortality in Emergency Department Sepsis score (AUC 0.77, 95% CI 0.71-0.83) demonstrated fair predictive ability (AUC ≥ 0.70) for in-hospital mortality. The best predictor of ICU admission was SMART-COP (AUC 0.73, 95% CI 0.67-0.79). All other scores were poor predictors (AUC < 0.70) of either outcome. If patients classified as low risk for in-hospital mortality using the PSI had been discharged, 35% of admissions would have been avoided. None of the scores studied were good predictors of in-hospital mortality or ICU admission. The PSI and MEDS scores were fair predictors of death and, if these results are validated, their use could reduce influenza admission rates significantly.
Stock and option portfolio using fuzzy logic approach
NASA Astrophysics Data System (ADS)
Sumarti, Novriana; Wahyudi, Nanang
2014-03-01
Fuzzy Logic in decision-making processes has been widely implemented in various industrial problems. It is a theory of imprecision and uncertainty that is not based on probability theory. Fuzzy Logic adds degrees of truth between absolute true and absolute false. It starts with and builds on a set of human-language rules supplied by the user, which the fuzzy system converts to their mathematical equivalents. This simplifies the job of the system designer and the computer, and results in much more accurate representations of the way systems behave in the real world. In this paper we examine the decision-making process of stock and option trading using MACD (Moving Average Convergence Divergence) technical analysis and option pricing with a Fuzzy Logic approach. MACD technical analysis predicts the trends of the underlying stock price, such as bearish (going downward), bullish (going upward), and sideways. Using the Fuzzy C-Means technique and a Mamdani Fuzzy Inference System, we define decision rules such that when the value of MACD is high the decision is "Strong Sell", and when the value of MACD is low the decision is "Strong Buy". We also implement a fuzzification of the Black-Scholes option-pricing formula. The stock and option methods are implemented on a portfolio of one stock and its options. Even though the values of the input data, such as interest rates, stock price, and its volatility, cannot be obtained accurately, these fuzzy methods give a belief degree for the calculated Black-Scholes price, so we can make decisions on option trading. The results show the good capability of the methods in predicting stock price trends. The performance of the simulated portfolio for a particular period of time also shows a good return.
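The MACD-to-decision pipeline described above can be sketched as follows. The membership functions are toy substitutes for the paper's Fuzzy C-Means clustering and Mamdani inference, and the price series is synthetic:

```python
# Sketch: MACD histogram from exponential moving averages, then a crude
# fuzzy membership mapping it to a trading decision (toy rules, not the
# paper's Fuzzy C-Means / Mamdani system).

def ema(prices, span):
    k = 2.0 / (span + 1)
    out = [prices[0]]
    for p in prices[1:]:
        out.append(k * p + (1 - k) * out[-1])
    return out

def macd(prices, fast=12, slow=26, signal=9):
    line = [f - s for f, s in zip(ema(prices, fast), ema(prices, slow))]
    return [m - s for m, s in zip(line, ema(line, signal))]  # histogram

def decision(h, strong=1.0):
    """Membership degrees in Buy/Sell sets from the MACD histogram h:
    very negative -> buy, very positive -> sell (the rule stated above)."""
    buy = min(1.0, max(0.0, -h / strong))
    sell = min(1.0, max(0.0, h / strong))
    return "Strong Buy" if buy > 0.5 else "Strong Sell" if sell > 0.5 else "Hold"

prices = [100.0] * 20 + [100.0 - 2.0 * t for t in range(1, 21)]  # sharp decline
print(decision(macd(prices)[-1]))
```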
NASA Astrophysics Data System (ADS)
Mogaji, Kehinde Anthony; Omobude, Osayande Bright
2017-12-01
Modeling of groundwater potentiality zones is a vital scheme for effective management of groundwater resources. This study developed a new multi-criteria decision-making algorithm for groundwater potentiality modeling by modifying the standard GOD model. The developed model, christened the GODT model, was applied to assess groundwater potential in a multi-faceted crystalline geologic terrain in southwestern Nigeria, using four unified groundwater potential conditioning factors, namely Groundwater hydraulic confinement (G), aquifer Overlying strata resistivity (O), Depth to water table (D), and Thickness of aquifer (T), derived from the geophysical data acquired and interpreted in the area. With the developed model algorithm, the GIS-produced G, O, D, and T maps were synthesized to estimate groundwater potential index (GWPI) values for the area. The estimated GWPI values were processed in a GIS environment to produce a groundwater potential prediction index (GPPI) map, which demarcates the area into four potential zones. The produced GODT model-based GPPI map was validated through application of both a correlation technique and a spatial attribute comparative scheme (SACS). The performance of the GODT model was compared with that of the standard analytic hierarchy process (AHP) model. The correlation technique established a regression coefficient of 89% for the GODT modeling algorithm compared with 84% for the AHP model. The SACS validation results for the GODT and AHP models were 72.5% and 65%, respectively. The overall results indicate that both models have good capability for predicting groundwater potential zones, with the GIS-based GODT model as a good alternative. The GPPI maps produced in this study can form part of a decision-making model for environmental planning and groundwater management in the area.
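The GWPI synthesis step amounts to a per-cell overlay of rated factor maps; a sketch with invented equal weights and linear combination (not the actual GODT ratings or its combination rule):

```python
# Sketch: combining normalised G, O, D, T factor ratings for one grid
# cell into a potential index, then binning into a potential class.
# Weights, ratings, and class cut-offs are all illustrative.

WEIGHTS = {"G": 0.25, "O": 0.25, "D": 0.25, "T": 0.25}

def gwpi(cell):
    """Weighted linear combination of normalised factor ratings (0-1)."""
    return sum(WEIGHTS[f] * cell[f] for f in WEIGHTS)

def potential_class(index):
    if index >= 0.75: return "high"
    if index >= 0.50: return "moderate"
    if index >= 0.25: return "low"
    return "very low"

cell = {"G": 0.8, "O": 0.6, "D": 0.7, "T": 0.9}   # one grid cell's ratings
print(round(gwpi(cell), 2), potential_class(gwpi(cell)))   # -> 0.75 high
```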
NASA Astrophysics Data System (ADS)
Christiansen, Rasmus E.; Sigmund, Ole
2016-09-01
This Letter reports on the experimental validation of a two-dimensional acoustic hyperbolic metamaterial slab optimized to exhibit negative refractive behavior. The slab was designed using a topology optimization based systematic design method allowing for tailoring the refractive behavior. The experimental results confirm the predicted refractive capability as well as the predicted transmission at an interface. The study simultaneously provides an estimate of the attenuation inside the slab stemming from the boundary layer effects, insight which can be utilized in the further design of the metamaterial slabs. The capability of tailoring the refractive behavior opens possibilities for different applications. For instance, a slab exhibiting zero refraction across a wide angular range is capable of funneling acoustic energy through it, while a material exhibiting the negative refractive behavior across a wide angular range provides lensing and collimating capabilities.
The NASA Severe Thunderstorm Observations and Regional Modeling (NASA STORM) Project
NASA Technical Reports Server (NTRS)
Schultz, Christopher J.; Gatlin, Patrick N.; Lang, Timothy J.; Srikishen, Jayanthi; Case, Jonathan L.; Molthan, Andrew L.; Zavodsky, Bradley T.; Bailey, Jeffrey; Blakeslee, Richard J.; Jedlovec, Gary J.
2016-01-01
The NASA Severe Thunderstorm Observations and Regional Modeling (NASA STORM) project enhanced NASA's severe weather research capabilities, building upon existing Earth Science expertise at NASA Marshall Space Flight Center (MSFC). During this project, MSFC extended NASA's ground-based lightning detection capacity to include a readily deployable lightning mapping array (LMA). NASA STORM also enabled NASA's Short-term Prediction and Research Transition (SPoRT) to add convection-allowing ensemble modeling to its portfolio of regional numerical weather prediction (NWP) capabilities. As a part of NASA STORM, MSFC developed new open-source capabilities for analyzing and displaying weather radar observations integrated from both research and operational networks. These accomplishments are a step towards enhancing NASA's capabilities for studying severe weather and position it for future NASA-related severe storm field campaigns.
NASA Astrophysics Data System (ADS)
Nowak, W.; Schöniger, A.; Wöhling, T.; Illman, W. A.
2016-12-01
Model-based decision support requires justifiable models with good predictive capabilities. This, in turn, calls for a fine adjustment between predictive accuracy (small systematic model bias that can be achieved with rather complex models), and predictive precision (small predictive uncertainties that can be achieved with simpler models with fewer parameters). The implied complexity/simplicity trade-off depends on the availability of informative data for calibration. If not available, additional data collection can be planned through optimal experimental design. We present a model justifiability analysis that can compare models of vastly different complexity. It rests on Bayesian model averaging (BMA) to investigate the complexity/performance trade-off dependent on data availability. Then, we disentangle the complexity component from the performance component. We achieve this by replacing actually observed data by realizations of synthetic data predicted by the models. This results in a "model confusion matrix". Based on this matrix, the modeler can identify the maximum model complexity that can be justified by the available (or planned) amount and type of data. As a side product, the matrix quantifies model (dis-)similarity. We apply this analysis to aquifer characterization via hydraulic tomography, comparing four models with a vastly different number of parameters (from a homogeneous model to geostatistical random fields). As a testing scenario, we consider hydraulic tomography data. Using subsets of these data, we determine model justifiability as a function of data set size. The test case shows that geostatistical parameterization requires a substantial amount of hydraulic tomography data to be justified, while a zonation-based model can be justified with more limited data set sizes. The actual model performance (as opposed to model justifiability), however, depends strongly on the quality of prior geological information.
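The "model confusion matrix" idea can be sketched with toy Gaussian models in place of aquifer models; model evidence is approximated with BIC rather than full marginal likelihoods, and all data are synthetic:

```python
import math, random

# Sketch: generate synthetic data from each candidate model in turn and
# see which model Bayesian model averaging credits for those data. The
# models are a "simple" Gaussian with fixed mean (0 parameters) and a
# "complex" one with a fitted mean (1 parameter); BIC penalises the
# extra parameter, standing in for the marginal likelihood.

random.seed(1)

def log_lik(data, mu, sigma=1.0):
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in data)

def bma_weights(data, models):
    """Posterior model weights: equal priors, BIC-approximated evidence."""
    n = len(data)
    scores = [log_lik(data, fit(data)) - 0.5 * k * math.log(n)
              for fit, k in models]
    mx = max(scores)
    ws = [math.exp(s - mx) for s in scores]
    total = sum(ws)
    return [w / total for w in ws]

models = [(lambda d: 0.0, 0),                  # simple: mean fixed at 0
          (lambda d: sum(d) / len(d), 1)]      # complex: mean fitted

confusion = []
for true_mu in (0.0, 2.0):          # data generated by each model in turn
    data = [random.gauss(true_mu, 1.0) for _ in range(50)]
    confusion.append([round(w, 2) for w in bma_weights(data, models)])
print(confusion)   # row i: weight each model receives when model i generated the data
```

A strongly diagonal matrix means the data volume suffices to tell the models apart, which is exactly the justifiability question posed above.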
Multi-component testing using HZ-PAN and AgZ-PAN Sorbents for OSPREY Model validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garn, Troy G.; Greenhalgh, Mitchell; Lyon, Kevin L.
2015-04-01
In efforts to further develop the capability of the Off-gas SeParation and RecoverY (OSPREY) model, multi-component tests were completed using both HZ-PAN and AgZ-PAN sorbents. The primary purpose of this effort was to obtain multi-component xenon and krypton capacities for comparison to future OSPREY-predicted multi-component capacities using previously acquired Langmuir equilibrium parameters determined from single-component isotherms. Experimental capacities were determined for each sorbent using two feed gas compositions of 1000 ppmv xenon and 150 ppmv krypton in either a helium or air balance. Test temperatures were consistently held at 220 K and the gas flowrate was 50 sccm. Capacities were calculated from breakthrough curves using TableCurve® 2D software by Jandel Scientific. The HZ-PAN sorbent was tested in the custom-designed cryostat while the AgZ-PAN was tested in a newly installed cooling apparatus. Previous modeling validation efforts indicated the OSPREY model can be used to effectively predict single-component xenon and krypton capacities for both engineered-form sorbents. Results indicated good agreement between the experimental and predicted capacity values for both krypton and xenon on the sorbents. Overall, the model predicted slightly elevated capacities for both gases, which can be partially attributed to the estimation of the parameters and the uncertainty associated with the experimental measurements. Currently, OSPREY is configured such that one species adsorbs and one does not (i.e. krypton in helium). Modification of the OSPREY code is being performed to incorporate multiple adsorbing species and non-ideal interactions of gas-phase species with the sorbent and adsorbed phases. Once these modifications are complete, the sorbent capacities determined in the present work will be used to validate OSPREY multi-component adsorption predictions.
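The capacity calculation from a breakthrough curve can be sketched as a trapezoidal integration of (1 - C/C0) over time, scaled by the molar feed rate and sorbent mass; the curve, feed rate, and mass below are invented, not the HZ-PAN/AgZ-PAN data:

```python
# Sketch: equilibrium capacity from a breakthrough curve. Before
# breakthrough the outlet concentration ratio C/C0 is 0 (everything is
# adsorbed); the area under (1 - C/C0) times the feed rate gives the
# total amount captured per gram of sorbent. All numbers are invented.

def capacity_from_breakthrough(times_min, c_over_c0, feed_mmol_per_min, mass_g):
    """Trapezoidal integral of (1 - C/C0) dt, times feed rate / mass."""
    area = 0.0
    for (t0, r0), (t1, r1) in zip(zip(times_min, c_over_c0),
                                  zip(times_min[1:], c_over_c0[1:])):
        area += 0.5 * ((1 - r0) + (1 - r1)) * (t1 - t0)   # minutes
    return feed_mmol_per_min * area / mass_g              # mmol/g

t = [0, 10, 20, 30, 40, 50]                # minutes
r = [0.0, 0.0, 0.1, 0.5, 0.9, 1.0]         # C/C0 at the column outlet
print(round(capacity_from_breakthrough(t, r, feed_mmol_per_min=0.002,
                                       mass_g=1.0), 4))   # -> 0.06
```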
Characterization of the harvesting capabilities of an ionic polymer metal composite device
NASA Astrophysics Data System (ADS)
Brufau-Penella, J.; Puig-Vidal, M.; Giannone, P.; Graziani, S.; Strazzeri, S.
2008-02-01
Harvesting systems capable of transforming unused environmental energy into electrical energy have aroused considerable interest in the last two decades. Several research works have focused on the transformation of mechanical environmental vibrations into electrical energy. Most of the research activity refers to classic piezoelectric ceramic materials, but more recently piezoelectric polymer materials have been considered. In this paper, a novel point of view regarding harvesting systems is proposed: using ionic polymer metal composites (IPMCs) as generating materials. The goal of this paper is the development of a model able to predict the energy harvesting capabilities of an IPMC material working in air. The model is developed using the vibration transmission theory of an Euler-Bernoulli cantilever IPMC beam. The IPMC is considered to work in its linear elastic region with a viscous damping contribution from 0.1 to 100 Hz. An identification process based on experimental measurements performed on a Nafion® 117 membrane is used to estimate the material parameters. The model validation shows good agreement between simulated and experimental results. The model is used to predict the optimal working region and the optimal geometrical parameters for the maximum power generation capacity of a specific membrane. The model takes into account two restrictions. The first is due to the beam theory, which imposes a maximum ratio of 0.5 between the cantilever width and length. The second is to force the cantilever to oscillate at a specific strain; in this paper a 0.3% strain is considered. With these two assumptions as constraints on the model, it is seen that IPMC materials could be used as low-power generators in a low-frequency region. The optimal dimensions for the Nafion® 117 membrane are length = 12 cm and width = 6.2 cm, and the electric power generation is 3 nW at a vibrating frequency of 7.09 rad/s.
IPMC materials can sustain large strains, so increasing the strain allowed on the material raises the power dramatically, with expected values of up to a few microwatts.
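The low-frequency optimum follows from cantilever beam dynamics; a sketch of the first Euler-Bernoulli bending resonance with rough stand-in properties for a thin, soft polymer strip (the modulus, density, and dimensions are invented, not the measured Nafion® 117 parameters):

```python
import math

# Sketch: first bending resonance of a uniform rectangular Euler-Bernoulli
# cantilever, the quantity that fixes the optimal working region of such a
# harvester. All material numbers are illustrative stand-ins.

def cantilever_f1(E, rho, L, h):
    """First natural frequency (Hz). E: Young's modulus (Pa),
    rho: density (kg/m^3), L: length (m), h: thickness (m).
    The beam width cancels out of the formula."""
    lam1 = 1.8751                      # first root of cos(x)*cosh(x) = -1
    I_per_b = h ** 3 / 12.0            # area moment of inertia per unit width
    m_per_b = rho * h                  # mass per unit length per unit width
    omega = lam1 ** 2 * math.sqrt(E * I_per_b / (m_per_b * L ** 4))
    return omega / (2 * math.pi)

# a ~12 cm long, 0.2 mm thick soft polymer strip resonates below a few Hz,
# consistent with the low-frequency working region reported above
print(round(cantilever_f1(E=2.5e8, rho=2000.0, L=0.12, h=2e-4), 2))
```

Doubling the length cuts the resonance by a factor of four (the L^-2 scaling), which is why long, thin strips land in the sub-10 Hz ambient-vibration band.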
Why significant variables aren't automatically good predictors.
Lo, Adeline; Chernoff, Herman; Zheng, Tian; Lo, Shaw-Hwa
2015-11-10
Thus far, genome-wide association studies (GWAS) have been disappointing in the inability of investigators to use the results of identified, statistically significant variants in complex diseases to make predictions useful for personalized medicine. Why are significant variables not leading to good prediction of outcomes? We point out that this problem is prevalent in simple as well as complex data, in the sciences as well as the social sciences. We offer a brief explanation and some statistical insights on why higher significance cannot automatically imply stronger predictivity, and illustrate this through simulations and a real breast cancer example. We also demonstrate that highly predictive variables do not necessarily appear as highly significant, thus evading the researcher using significance-based methods. We point out that what makes variables good for prediction versus significance depends on different properties of the underlying distributions. If prediction is the goal, we must lay aside significance as the only selection standard. We suggest that progress in prediction requires efforts toward a new research agenda of searching for a novel criterion to retrieve highly predictive variables rather than highly significant variables. We offer an alternative approach that was not designed for significance, the partition retention method, which was very effective at prediction on a long-studied breast cancer data set, reducing the classification error rate from 30% to 8%.
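The significance-versus-predictivity gap is easy to reproduce synthetically; a sketch (not the paper's simulations) in which a tiny mean shift becomes highly significant at large n while classification accuracy stays near chance:

```python
import math, random

# Sketch: a variable with a tiny effect size. With n large the two-sample
# z statistic is far into "significant" territory, yet a classifier built
# on the variable barely beats coin-flipping. Data are synthetic.

random.seed(7)
n = 20000
effect = 0.05                                # tiny mean shift between classes
cases    = [random.gauss(effect, 1.0) for _ in range(n)]
controls = [random.gauss(0.0,    1.0) for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

# two-sample z statistic (known unit variance): scales with sqrt(n)
z = (mean(cases) - mean(controls)) / math.sqrt(2.0 / n)

# classify by the midpoint threshold; accuracy is pinned near 50%
thr = (mean(cases) + mean(controls)) / 2
acc = (sum(x > thr for x in cases) + sum(x <= thr for x in controls)) / (2 * n)
print(f"z = {z:.1f}, accuracy = {acc:.3f}")
```

Significance grows with sample size while individual-level accuracy is capped by the overlap of the two distributions, which is the paper's distributional point.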
PGT: A Statistical Approach to Prediction and Mechanism Design
NASA Astrophysics Data System (ADS)
Wolpert, David H.; Bono, James W.
One of the biggest challenges facing behavioral economics is the lack of a single theoretical framework that is capable of directly utilizing all types of behavioral data. One of the biggest challenges of game theory is the lack of a framework for making predictions and designing markets in a manner that is consistent with the axioms of decision theory. An approach in which solution concepts are distribution-valued rather than set-valued (i.e. equilibrium theory) has both capabilities. We call this approach Predictive Game Theory (or PGT). This paper outlines a general Bayesian approach to PGT. It also presents one simple example to illustrate the way in which this approach differs from equilibrium approaches in both prediction and mechanism design settings.
Davis, Eric; Devlin, Sean; Cooper, Candice; Nhaissi, Melissa; Paulson, Jennifer; Wells, Deborah; Scaradavou, Andromachi; Giralt, Sergio; Papadopoulos, Esperanza; Kernan, Nancy A; Byam, Courtney; Barker, Juliet N
2018-05-01
A strategy to rapidly determine if a matched unrelated donor (URD) can be secured for allograft recipients is needed. We sought to validate the accuracy of (1) HapLogic match predictions and (2) a resultant novel Search Prognosis (SP) patient categorization that could predict 8/8 HLA-matched URD(s) likelihood at search initiation. Patient prognosis categories at search initiation were correlated with URD confirmatory typing results. HapLogic-based SP categorizations accurately predicted the likelihood of an 8/8 HLA-match in 830 patients (1530 donors tested). Sixty percent of patients had 8/8 URD(s) identified. Patient SP categories (217 very good, 104 good, 178 fair, 33 poor, 153 very poor, 145 futile) were associated with a marked progressive decrease in 8/8 URD identification and transplantation. Very good to good categories were highly predictive of identifying and receiving an 8/8 URD regardless of ancestry. Europeans in fair/poor categories were more likely to identify and receive an 8/8 URD compared with non-Europeans. In all ancestries very poor and futile categories predicted no 8/8 URDs. HapLogic permits URD search results to be predicted once patient HLA typing and ancestry are obtained, dramatically improving search efficiency. Poor, very poor, and futile searches can be immediately recognized, thereby facilitating prompt pursuit of alternative donors. Copyright © 2017 The American Society for Blood and Marrow Transplantation. Published by Elsevier Inc. All rights reserved.
Helicopter Rotor Noise Prediction: Background, Current Status, and Future Direction
NASA Technical Reports Server (NTRS)
Brentner, Kenneth S.
1997-01-01
Helicopter noise prediction is increasingly important. The purpose of this viewgraph presentation is to: 1) Put into perspective the recent progress; 2) Outline current prediction capabilities; 3) Forecast direction of future prediction research; 4) Identify rotorcraft noise prediction needs. The presentation includes an historical perspective, a description of governing equations, and the current status of source noise prediction.
NASA Technical Reports Server (NTRS)
West, Jeff; Strutzenberg, Louise L.; Putnam, Gabriel C.; Liever, Peter A.; Williams, Brandon R.
2012-01-01
This paper presents development efforts to establish modeling capabilities for launch vehicle liftoff acoustics and ignition transient environment predictions. Peak acoustic loads experienced by the launch vehicle occur during liftoff with strong interaction between the vehicle and the launch facility. Acoustic prediction engineering tools based on empirical models are of limited value in efforts to proactively design and optimize launch vehicles and launch facility configurations for liftoff acoustics. Modeling approaches are needed that capture the important details of the plume flow environment including the ignition transient, identify the noise generation sources, and allow assessment of the effects of launch pad geometric details and acoustic mitigation measures such as water injection. This paper presents a status of the CFD tools developed by the MSFC Fluid Dynamics Branch featuring advanced multi-physics modeling capabilities developed towards this goal. Validation and application examples are presented along with an overview of application in the prediction of liftoff environments and the design of targeted mitigation measures such as launch pad configuration and sound suppression water placement.
NASA Technical Reports Server (NTRS)
Evans, Diane
2012-01-01
Objective 2.1.1: Improve understanding of and improve the predictive capability for changes in the ozone layer, climate forcing, and air quality associated with changes in atmospheric composition. Objective 2.1.2: Enable improved predictive capability for weather and extreme weather events. Objective 2.1.3: Quantify, understand, and predict changes in Earth's ecosystems and biogeochemical cycles, including the global carbon cycle, land cover, and biodiversity. Objective 2.1.4: Quantify the key reservoirs and fluxes in the global water cycle and assess water cycle change and water quality. Objective 2.1.5: Improve understanding of the roles of the ocean, atmosphere, land and ice in the climate system and improve predictive capability for its future evolution. Objective 2.1.6: Characterize the dynamics of Earth's surface and interior and form the scientific basis for the assessment and mitigation of natural hazards and response to rare and extreme events. Objective 2.1.7: Enable the broad use of Earth system science observations and results in decision-making activities for societal benefits.
NASA Technical Reports Server (NTRS)
Goldhirsh, J.
1977-01-01
Disdrometer measurements and radar reflectivity measurements were used as inputs to a computer program to estimate the path attenuation of the signal. Predicted attenuations, when compared with the directly measured ones, showed generally good correlation on a case-by-case basis and very good agreement statistically. The utility of using radar in conjunction with disdrometer measurements for predicting fade events and long-term fade distributions associated with earth-satellite telecommunications is demonstrated.
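As a hedged sketch of the general approach (the power-law coefficients below are illustrative placeholders, not the ones used in this study, since they depend on frequency and drop-size distribution), path attenuation can be estimated by converting each radar gate's reflectivity to a specific attenuation k = a·Z^b and integrating along the path:

```python
import numpy as np

def one_way_path_attenuation_db(dbz_profile, gate_len_km, a=1.1e-4, b=0.81):
    """One-way path attenuation (dB) from a profile of radar
    reflectivities (dBZ), one value per range gate.

    k = a * Z**b is the specific attenuation in dB/km; the coefficients
    a and b here are purely illustrative assumptions."""
    z_lin = 10.0 ** (np.asarray(dbz_profile, float) / 10.0)  # mm^6 m^-3
    k = a * z_lin ** b                                       # dB/km per gate
    return float(np.sum(k) * gate_len_km)

heavy = one_way_path_attenuation_db([40.0] * 10, gate_len_km=0.5)
light = one_way_path_attenuation_db([20.0] * 10, gate_len_km=0.5)
```

The drop-size distributions measured by the disdrometer are what pin down the a and b coefficients for an actual path.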
Biomes computed from simulated climatologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Claussen, M.; Esch, M.
1994-01-01
The biome model of Prentice et al. is used to predict global patterns of potential natural plant formations, or biomes, from climatologies simulated by ECHAM, a model used for climate simulations at the Max-Planck-Institut fuer Meteorologie. This study was undertaken in order to show the advantage of this biome model in diagnosing the performance of a climate model and assessing effects of past and future climate changes predicted by a climate model. Good overall agreement is found between global patterns of biomes computed from observed and simulated data of present climate. But there are also major discrepancies, indicated by a difference in biomes in Australia, in the Kalahari Desert, and in the Middle West of North America. These discrepancies can be traced back to deficiencies in simulated rainfall as well as summer or winter temperatures. Global patterns of biomes computed from an ice age simulation reveal that North America, Europe, and Siberia should have been covered largely by tundra and taiga, whereas only small differences are found for the tropical rain forests. A potential northeast shift of biomes is predicted by a simulation with enhanced CO2 concentration according to the IPCC Scenario A. Little change is seen in the tropical rain forest and the Sahara. Since the biome model used is not capable of predicting changes in vegetation patterns due to a rapid climate change, the latter simulation has to be taken as a prediction of changes in conditions favourable for the existence of certain biomes, not as a prediction of a future distribution of biomes. 15 refs., 8 figs., 2 tabs.
NASA Astrophysics Data System (ADS)
Rahmati, Omid; Tahmasebipour, Nasser; Haghizadeh, Ali; Pourghasemi, Hamid Reza; Feizizadeh, Bakhtiar
2017-12-01
Gully erosion constitutes a serious problem for land degradation in a wide range of environments. The main objective of this research was to compare the performance of seven state-of-the-art machine learning models (SVM with four kernel types, BP-ANN, RF, and BRT) to model the occurrence of gully erosion in the Kashkan-Poldokhtar Watershed, Iran. In the first step, a gully inventory map consisting of 65 gully polygons was prepared through field surveys. Three different sample data sets (S1, S2, and S3), including both positive and negative cells (70% for training and 30% for validation), were randomly prepared to evaluate the robustness of the models. To model the gully erosion susceptibility, 12 geo-environmental factors were selected as predictors. Finally, the goodness-of-fit and prediction skill of the models were evaluated by different criteria, including efficiency percent, kappa coefficient, and the area under the ROC curves (AUC). In terms of accuracy, the RF, RBF-SVM, BRT, and P-SVM models performed excellently both in the degree of fitting and in predictive performance (AUC values well above 0.9), which resulted in accurate predictions. Therefore, these models can be used in other gully erosion studies, as they are capable of rapidly producing accurate and robust gully erosion susceptibility maps (GESMs) for decision-making and soil and water management practices. Furthermore, it was found that performance of RF and RBF-SVM for modelling gully erosion occurrence is quite stable when the learning and validation samples are changed.
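The AUC values used above to rank the models can be computed without any ROC plotting, via the rank-statistic (Mann-Whitney) identity: AUC equals the probability that a randomly chosen positive cell receives a higher susceptibility score than a randomly chosen negative cell. A minimal sketch (a generic implementation, not the authors' evaluation code; ties in scores are ignored for simplicity):

```python
import numpy as np

def auc_rank(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    P(score of a random positive > score of a random negative).
    Assumes untied scores; labels are 0 (negative) or 1 (positive)."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    order = scores.argsort()
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

An AUC near 1.0 corresponds to the "excellent" discrimination (well above 0.9) reported for the RF, RBF-SVM, BRT, and P-SVM models, while 0.5 is chance level.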
2016-01-01
People with hearing impairment are thought to rely heavily on context to compensate for reduced audibility. Here, we explore the resulting cost of this compensatory behavior, in terms of effort and the efficiency of ongoing predictive language processing. The listening task featured predictable or unpredictable sentences, and participants included people with cochlear implants as well as people with normal hearing who heard full-spectrum/unprocessed or vocoded speech. The crucial metric was the growth of the pupillary response and the reduction of this response for predictable versus unpredictable sentences, which would suggest reduced cognitive load resulting from predictive processing. Semantic context led to rapid reduction of listening effort for people with normal hearing; the reductions were observed well before the offset of the stimuli. Effort reduction was slightly delayed for people with cochlear implants and considerably more delayed for normal-hearing listeners exposed to spectrally degraded noise-vocoded signals; this pattern of results was maintained even when intelligibility was perfect. Results suggest that speed of sentence processing can still be disrupted, and exertion of effort can be elevated, even when intelligibility remains high. We discuss implications for experimental and clinical assessment of speech recognition, in which good performance can arise because of cognitive processes that occur after a stimulus, during a period of silence. Because silent gaps are not common in continuous flowing speech, the cognitive/linguistic restorative processes observed after sentences in such studies might not be available to listeners in everyday conversations, meaning that speech recognition in conventional tests might overestimate sentence-processing capability. PMID:27698260
NASA Astrophysics Data System (ADS)
Infante Corona, J. A.; Lakhankar, T.; Khanbilvardi, R.; Pradhanang, S. M.
2013-12-01
Stream flow estimation and flood prediction influenced by snowmelt processes have been studied for the past couple of decades because of their destructive potential and the associated economic losses and casualties. Snow cover that was once quite stationary within a season is now observed to vary on shorter time scales (daily and hourly), and rapid snowmelt can contribute to or cause floods. Therefore, good estimates of snowpack properties on the ground are necessary in order to have an accurate prediction of these destructive events. The snow thermal model (SNTHERM) is a one-dimensional model that analyzes the snowpack properties given the climatological conditions of a particular area. Gridded data from both in-situ meteorological observations and remote sensing will be produced using interpolation methods; thus, snow water equivalent (SWE) and snowmelt estimations can be obtained. The Soil and Water Assessment Tool (SWAT) is a hydrological model capable of predicting runoff quantity and quality for a watershed given its main physical and hydrological properties. The results from SNTHERM will be used as an input to SWAT in order to simulate runoff under snowmelt conditions. This project attempts to improve river discharge estimation by considering both excess rainfall runoff and the snowmelt process, and a better estimation of the snowpack properties and their evolution is expected. A coupled use of SNTHERM and SWAT based on in-situ and remotely sensed meteorological data will improve the temporal and spatial resolution of the snowpack characterization and river discharge estimates, and thus flood prediction.
Updraft Fixed Bed Gasification Aspen Plus Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
2007-09-27
The updraft fixed bed gasification model provides predictive modeling capabilities for updraft fixed bed gasifiers, when devolatilization data is available. The fixed bed model is constructed using Aspen Plus, process modeling software, coupled with a FORTRAN user kinetic subroutine. Current updraft gasification models created in Aspen Plus have limited predictive capabilities and must be "tuned" to reflect a generalized gas composition as specified in literature or by the gasifier manufacturer. This limits the applicability of the process model.
Contact and Impact Dynamic Modeling Capabilities of LS-DYNA for Fluid-Structure Interaction Problems
2010-12-02
References cited include a study of a rigid sphere in vertical water entry (Applied Ocean Research, 13(1), pp. 43-48) and Monaghan, J.J., 1994, "Simulating free surface flows with SPH." The kinematic free surface condition was used to determine the intersection between the free surface and the body in the outer flow domain, and the results were compared with analytical and numerical predictions. The predictive capability of the ALE and SPH features of LS-DYNA for simulation
Real-time scene and signature generation for ladar and imaging sensors
NASA Astrophysics Data System (ADS)
Swierkowski, Leszek; Christie, Chad L.; Antanovskii, Leonid; Gouthas, Efthimios
2014-05-01
This paper describes development of two key functionalities within the VIRSuite scene simulation program, broadening its scene generation capabilities and increasing accuracy of thermal signatures. Firstly, a new LADAR scene generation module has been designed. It is capable of simulating range imagery for Geiger mode LADAR, in addition to the already existing functionality for linear mode systems. Furthermore, a new 3D heat diffusion solver has been developed within the VIRSuite signature prediction module. It is capable of calculating the temperature distribution in complex three-dimensional objects for enhanced dynamic prediction of thermal signatures. With these enhancements, VIRSuite is now a robust tool for conducting dynamic simulation for missiles with multi-mode seekers.
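The internals of the new 3D heat diffusion solver are not given in the abstract; as a hedged one-dimensional stand-in, the explicit finite-difference (FTCS) update for the heat equation u_t = α·u_xx illustrates the kind of time stepping such a thermal-signature solver performs:

```python
import numpy as np

def heat_step(u, alpha, dx, dt):
    """One explicit (FTCS) step of the 1-D heat equation
    u_t = alpha * u_xx, with fixed (Dirichlet) end temperatures.
    Stable when r = alpha * dt / dx**2 <= 0.5."""
    r = alpha * dt / dx**2
    nxt = u.copy()
    nxt[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return nxt
```

A hot spot spreads into its neighbours each step, which is the mechanism by which solar loading and internal heat sources redistribute over a target and shape its dynamic thermal signature; production solvers use implicit or 3D schemes for efficiency and stability.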
Fekete, Tibor; Rásó, Erzsébet; Pete, Imre; Tegze, Bálint; Liko, István; Munkácsy, Gyöngyi; Sipos, Norbert; Rigó, János; Györffy, Balázs
2012-07-01
Transcriptomic analysis of global gene expression in ovarian carcinoma can identify dysregulated genes capable of serving as molecular markers for histology subtypes and survival. The aim of our study was to validate previous candidate signatures in an independent setting and to identify single genes capable of serving as biomarkers for ovarian cancer progression. As several datasets are available in the GEO today, we were able to perform a true meta-analysis. First, 829 samples (11 datasets) were downloaded, and the predictive power of 16 previously published gene sets was assessed. Of these, eight were capable of discriminating histology subtypes, and none was capable of predicting survival. To overcome the differences in previous studies, we used the 829 samples to identify new predictors. Then, we collected 64 ovarian cancer samples (median relapse-free survival 24.5 months) and performed TaqMan Real-Time Polymerase Chain Reaction (RT-PCR) analysis for the best 40 genes associated with histology subtypes and survival. Over 90% of subtype-associated genes were confirmed. Overall survival was effectively predicted by hormone receptors (PGR and ESR2) and by TSPAN8. Relapse-free survival was predicted by MAPT and SNCG. In summary, we successfully validated several gene sets in a meta-analysis in large datasets of ovarian samples. Additionally, several individual genes identified were validated in a clinical cohort. Copyright © 2011 UICC.
NASA Technical Reports Server (NTRS)
Gardner, Kevin D.; Liu, Jong-Shang; Murthy, Durbha V.; Kruse, Marlin J.; James, Darrell
1999-01-01
AlliedSignal Engines, in cooperation with NASA GRC (National Aeronautics and Space Administration Glenn Research Center), completed an evaluation of recently-developed aeroelastic computer codes using test cases from the AlliedSignal Engines fan blisk and turbine databases. Test data included strain gage, performance, and steady-state pressure information obtained for conditions where synchronous or flutter vibratory conditions were found to occur. Aeroelastic codes evaluated included quasi 3-D UNSFLO (MIT Developed/AE Modified, Quasi 3-D Aeroelastic Computer Code), 2-D FREPS (NASA-Developed Forced Response Prediction System Aeroelastic Computer Code), and 3-D TURBO-AE (NASA/Mississippi State University Developed 3-D Aeroelastic Computer Code). Unsteady pressure predictions for the turbine test case were used to evaluate the forced response prediction capabilities of each of the three aeroelastic codes. Additionally, one of the fan flutter cases was evaluated using TURBO-AE. The UNSFLO and FREPS evaluation predictions showed good agreement with the experimental test data trends, but quantitative improvements are needed. UNSFLO over-predicted turbine blade response reductions, while FREPS under-predicted them. The inviscid TURBO-AE turbine analysis predicted no discernible blade response reduction, indicating the necessity of including viscous effects for this test case. For the TURBO-AE fan blisk test case, significant effort was expended getting the viscous version of the code to give converged steady flow solutions for the transonic flow conditions. Once converged, the steady solutions provided an excellent match with test data and the calibrated DAWES (AlliedSignal 3-D Viscous Steady Flow CFD Solver). However, efforts expended establishing quality steady-state solutions prevented exercising the unsteady portion of the TURBO-AE code during the present program. 
AlliedSignal recommends that unsteady pressure measurement data be obtained for both test cases examined for use in aeroelastic code validation.
Hostettler, Isabel Charlotte; Muroi, Carl; Richter, Johannes Konstantin; Schmid, Josef; Neidert, Marian Christoph; Seule, Martin; Boss, Oliver; Pangalu, Athina; Germans, Menno Robbert; Keller, Emanuela
2018-01-19
OBJECTIVE The aim of this study was to create prediction models for outcome parameters by decision tree analysis based on clinical and laboratory data in patients with aneurysmal subarachnoid hemorrhage (aSAH). METHODS The database consisted of clinical and laboratory parameters of 548 patients with aSAH who were admitted to the Neurocritical Care Unit, University Hospital Zurich. To examine the model performance, the cohort was randomly divided into a derivation cohort (60% [n = 329]; training data set) and a validation cohort (40% [n = 219]; test data set). The classification and regression tree prediction algorithm was applied to predict death, functional outcome, and ventriculoperitoneal (VP) shunt dependency. Chi-square automatic interaction detection was applied to predict delayed cerebral infarction on days 1, 3, and 7. RESULTS The overall mortality was 18.4%. The accuracy of the decision tree models was good for survival on day 1 and favorable functional outcome at all time points, with a difference between the training and test data sets of < 5%. Prediction accuracy for survival on day 1 was 75.2%. The most important differentiating factor was the interleukin-6 (IL-6) level on day 1. Favorable functional outcome, defined as Glasgow Outcome Scale scores of 4 and 5, was observed in 68.6% of patients. Favorable functional outcome at all time points had a prediction accuracy of 71.1% in the training data set, with procalcitonin on day 1 being the most important differentiating factor at all time points. A total of 148 patients (27%) developed VP shunt dependency. The most important differentiating factor was hyperglycemia on admission. CONCLUSIONS The multiple variable analysis capability of decision trees enables exploration of dependent variables in the context of multiple changing influences over the course of an illness. 
The decision tree currently generated increases awareness of the early systemic stress response, which is seemingly pertinent for prognostication.
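The "most important differentiating factor" at the root of a classification tree is simply the single split that most reduces node impurity. A minimal sketch of that root-split search using Gini impurity (a generic CART-style illustration with toy data, not the authors' model or clinical variables):

```python
import numpy as np

def best_split(X, y):
    """Exhaustive CART-style root split for binary labels y:
    return the (feature, threshold, impurity) minimizing the
    weighted Gini impurity of the two child nodes."""
    def gini(labels):
        if len(labels) == 0:
            return 0.0
        p = labels.mean()          # fraction of positives in the node
        return 2.0 * p * (1.0 - p)

    best = (None, None, float("inf"))
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            left, right = y[X[:, j] <= thr], y[X[:, j] > thr]
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if score < best[2]:
                best = (j, thr, score)
    return best
```

Recursing this search on each child node yields the full tree; the feature chosen at the root plays the role IL-6 on day 1 played for survival in the study.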
A data-driven prediction method for fast-slow systems
NASA Astrophysics Data System (ADS)
Groth, Andreas; Chekroun, Mickael; Kondrashov, Dmitri; Ghil, Michael
2016-04-01
In this work, we present a prediction method for processes that exhibit a mixture of variability on slow and fast time scales. The method relies on combining empirical model reduction (EMR) with singular spectrum analysis (SSA). EMR is a data-driven methodology for constructing stochastic low-dimensional models that account for nonlinearity and serial correlation in the estimated noise, while SSA provides a decomposition of the complex dynamics into low-order components that capture spatio-temporal behavior on different time scales. Our study focuses on the data-driven modeling of partial observations from dynamical systems that exhibit power spectra with broad peaks. The main result in this talk is that the combination of SSA pre-filtering with EMR modeling improves, under certain circumstances, the modeling and prediction skill of such a system, as compared to a standard EMR prediction based on raw data. Specifically, it is the separation into "fast" and "slow" temporal scales by the SSA pre-filtering that achieves the improvement. We show, in particular, that the resulting EMR-SSA emulators help predict intermittent behavior such as rapid transitions between specific regions of the system's phase space. This capability of the EMR-SSA prediction will be demonstrated on two low-dimensional models: the Rössler system and a Lotka-Volterra model for interspecies competition. In either case, the chaotic dynamics is produced through a Shilnikov-type mechanism and we argue that the latter seems to be an important ingredient for the good prediction skills of EMR-SSA emulators. Shilnikov-type behavior has been shown to arise in various complex geophysical fluid models, such as baroclinic quasi-geostrophic flows in the mid-latitude atmosphere and wind-driven double-gyre ocean circulation models. This pervasiveness of the Shilnikov mechanism of fast-slow transition opens interesting perspectives for the extension of the proposed EMR-SSA approach to more realistic situations.
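The SSA pre-filtering step admits a compact sketch: embed the series in a trajectory (Hankel) matrix, take its SVD, and diagonal-average each rank-one piece back into a time series. This is a generic SSA implementation, not the authors' EMR-SSA code:

```python
import numpy as np

def ssa_components(x, window):
    """Singular spectrum analysis of a 1-D series x: returns one
    reconstructed component per SVD mode. Summing all components
    recovers x exactly."""
    n = len(x)
    k = n - window + 1
    # Trajectory matrix: column j is the window x[j : j + window],
    # so entry (i, j) corresponds to time i + j.
    traj = np.column_stack([x[j:j + window] for j in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    comps = []
    for m in range(len(s)):
        elem = s[m] * np.outer(u[:, m], vt[m])      # rank-1 piece
        # Hankel (anti-diagonal) averaging back to a 1-D series:
        # time q lives on the anti-diagonal i + j = q.
        rec = np.array([np.mean(elem[::-1].diagonal(q - window + 1))
                        for q in range(n)])
        comps.append(rec)
    return comps
```

Grouping the leading components gives the "slow" part and the remainder the "fast" part; the separated series are then what EMR models in the proposed approach.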
[Run the risk: social disadvantage or capability?]
Muñoz-Duque, Luz Adriana
2018-05-10
This article discusses the notions of risk and risk acceptability from a social justice perspective, especially in light of the capability approach proposed by Amartya Sen. The article argues that risk can be the expression of restrictions on subjects' capabilities, deriving from social disadvantages that can be taken for granted in their daily realities. On the other hand, risk can be viewed as an expression of capability in cases where subjects have accepted or admitted the risk through the exercise of freedom, as long as the subjects that relate to the risk do so in keeping with their idea of a good life, the building of which implies the full development of capability for agency. The article concludes with some thoughts on the issues of risk and risk acceptability in the sphere of public health.
Self-care behavior of type 2 diabetes mellitus patients in Bandar Abbas in 2015.
Karimi, Fatemeh; Abedini, Sedigheh; Mohseni, Shokrollah
2017-11-01
Diabetes self-care helps to control blood sugar which, in turn, results in a better state of health. However, more than 50% of diabetic patients do not have self-care capabilities. The aim was to determine type 2 diabetes self-care capabilities among patients visiting a Bandar Abbas diabetes clinic in 2016. The present descriptive-analytical research was of a cross-sectional type. The sample was comprised of 120 patients afflicted with type 2 diabetes, who had been selected through the simple randomized sampling method. The data collection instrument was a questionnaire comprised of two sections: demographic information, and a summary of patients' diabetes self-care activities. A 7-point Likert scale was used for the rating. The final score was interpreted at one of three levels: good (acceptable) (75-100), moderate (50-74), and poor (below 50). The data were entered into SPSS version 18.0 for the required statistical analyses. The mean age of the sample was 51.88±10.12 years. Of the 120 subjects, 86 were female (71.7%) and 34 were male (28.3%). The findings revealed that the self-care capability of 83 subjects (69.2%) was poor, that of 28 subjects (23.3%) was moderate, and 9 subjects (7.5%) scored good/acceptable. The results of the present research indicate that a large number of diabetic patients have poor self-care capability. Due to the key role of such activities in a diabetic patient's life, it is suggested to include educational programs to increase the level of self-care capabilities among these patients.
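The three-level interpretation of the 0-100 self-care score described above is a simple banding rule; a minimal sketch (the function name is my own, the bands are the study's):

```python
def self_care_level(score):
    """Map a 0-100 self-care score to the study's three bands:
    good/acceptable (75-100), moderate (50-74), poor (below 50)."""
    if score >= 75:
        return "good"
    if score >= 50:
        return "moderate"
    return "poor"
```

Applying this rule to the 120 questionnaire totals yields the reported 83/28/9 split across the poor/moderate/good bands.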
NASA Technical Reports Server (NTRS)
Prichard, Devon S.
1996-01-01
This document provides a brief overview of use of the ROTONET rotorcraft system noise prediction capability within the Aircraft Noise Program (ANOPP). Reviews are given on rotorcraft noise, the state-of-the-art of system noise prediction, and methods for using the various ROTONET prediction modules.
Comparison of Fire Model Predictions with Experiments Conducted in a Hangar With a 15 Meter Ceiling
NASA Technical Reports Server (NTRS)
Davis, W. D.; Notarianni, K. A.; McGrattan, K. B.
1996-01-01
The purpose of this study is to examine the predictive capabilities of fire models using the results of a series of fire experiments conducted in an aircraft hangar with a ceiling height of about 15 m. This study is designed to investigate model applicability at a ceiling height where only a limited amount of experimental data is available. This analysis deals primarily with temperature comparisons as a function of distance from the fire center and depth beneath the ceiling. Only limited velocity measurements in the ceiling jet were available but these are also compared with those models with a velocity predictive capability.
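For context on the ceiling-jet temperatures such models predict, Alpert's classical correlation gives the steady maximum excess gas temperature beneath a ceiling of height H at radius r from a fire of heat release rate Q. This is only the textbook hand-calculation, not one of the compared models:

```python
def alpert_excess_temp(q_kw, h_m, r_m):
    """Alpert's ceiling-jet correlation for maximum excess gas
    temperature (deg C above ambient). Inside the plume turning
    region (r/H <= 0.18) the value is radius-independent; outside
    it decays with radius."""
    if r_m / h_m <= 0.18:
        return 16.9 * q_kw ** (2.0 / 3.0) / h_m ** (5.0 / 3.0)
    return 5.38 * (q_kw / r_m) ** (2.0 / 3.0) / h_m
```

For a 15 m ceiling the H^(5/3) and 1/H factors make predicted excess temperatures quite modest even for megawatt-scale fires, which is exactly why model validation at large ceiling heights, where data are scarce, matters.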
Observational breakthroughs lead the way to improved hydrological predictions
NASA Astrophysics Data System (ADS)
Lettenmaier, Dennis P.
2017-04-01
New data sources are revolutionizing the hydrological sciences. The capabilities of hydrological models have advanced greatly over the last several decades, but until recently model capabilities have outstripped the spatial resolution and accuracy of model forcings (atmospheric variables at the land surface) and the hydrologic state variables (e.g., soil moisture; snow water equivalent) that the models predict. This has begun to change, as shown in two examples here: soil moisture and drought evolution over Africa as predicted by a hydrology model forced with satellite-derived precipitation, and observations of snow water equivalent at very high resolution over a river basin in California's Sierra Nevada.
Prediction of Agglomeration, Fouling, and Corrosion Tendency of Fuels in CFB Co-Combustion
NASA Astrophysics Data System (ADS)
Barišć, Vesna; Zabetta, Edgardo Coda; Sarkki, Juha
Prediction of agglomeration, fouling, and corrosion tendency of fuels is essential to the design of any CFB boiler. Over the years, tools have been successfully developed at Foster Wheeler to help with such predictions for the most common commercial fuels. However, changes in the fuel market and the ever-growing demand for co-combustion capabilities pose a continuous need for development. This paper presents results from recently upgraded models used at Foster Wheeler to predict the agglomeration, fouling, and corrosion tendency of a variety of fuels and mixtures. The models, the subject of this paper, are semi-empirical computer tools that combine the theoretical basics of agglomeration/fouling/corrosion phenomena with empirical correlations. Correlations are derived from Foster Wheeler's experience in fluidized beds, including nearly 10,000 fuel samples and over 1,000 tests in about 150 CFB units. In these models, fuels are evaluated based on their classification and their chemical and physical properties from standard analyses (proximate, ultimate, fuel ash composition, etc.) alongside Foster Wheeler's own characterization methods. Mixtures are then evaluated taking into account the component fuels. This paper presents the predictive capabilities of the agglomeration/fouling/corrosion probability models for selected fuels and mixtures fired at full scale. The selected fuels include coals and different types of biomass. The models are capable of predicting the behavior of most fuels and mixtures, but also offer possibilities for further improvements.
High-fidelity modeling and impact footprint prediction for vehicle breakup analysis
NASA Astrophysics Data System (ADS)
Ling, Lisa
For decades, vehicle breakup analysis has been performed for space missions that used nuclear heater or power units in order to assess aerospace nuclear safety for potential launch failures leading to inadvertent atmospheric reentry. Such pre-launch risk analysis is imperative to assess possible environmental impacts, to obtain launch approval, and to support launch contingency planning. In order to accurately perform a vehicle breakup analysis, the analysis tool should include a trajectory propagation algorithm coupled with thermal and structural analyses and influences. Since such a software tool was not available commercially or in the public domain, a basic analysis tool was developed by Dr. Angus McRonald prior to this study. This legacy software consisted of low-fidelity modeling and had the capability to predict vehicle breakup, but did not predict the surface impact point of the nuclear component. Thus the main thrust of this study was to develop and verify additional dynamics modeling and capabilities for the analysis tool, with the objectives of (1) adding the capability to predict the impact point and footprint, (2) increasing the fidelity of the vehicle breakup prediction, and (3) reducing the effort and time required to complete an analysis. The new functions developed for predicting the impact point and footprint included 3-degrees-of-freedom trajectory propagation, the generation of non-arbitrary entry conditions, sensitivity analysis, and the calculation of the impact footprint. The functions to increase the fidelity of the vehicle breakup prediction included a panel code to calculate the hypersonic aerodynamic coefficients for an arbitrary-shaped body and the modeling of local winds. The function to reduce the effort and time required to complete an analysis included the calculation of node failure criteria.
The derivation and development of these new functions are presented in this dissertation, and examples are given to demonstrate the new capabilities and the improvements made, with comparisons between the results obtained from the upgraded analysis tool and the legacy software wherever applicable.
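The 3-degrees-of-freedom trajectory propagation described in this record can be illustrated with a minimal planar point-mass reentry integrator. This is only a sketch under assumed simplifications (exponential atmosphere, constant ballistic coefficient, simple Euler stepping), not the dissertation's actual tool; all constants below are illustrative.

```python
import math

def propagate_entry(v0, gamma0, h0, beta, dt=0.1):
    """Planar 3-DOF point-mass reentry propagation, stopped at surface impact.

    v0     entry speed [m/s]
    gamma0 entry flight-path angle [rad], negative = descending
    h0     entry altitude [m]
    beta   ballistic coefficient m/(Cd*A) [kg/m^2]
    """
    RE, MU = 6.371e6, 3.986e14           # Earth radius, gravitational parameter
    RHO0, HSCALE = 1.225, 7200.0         # exponential atmosphere model
    v, gamma, h, x = v0, gamma0, h0, 0.0
    while h > 0.0:
        r = RE + h
        g = MU / r ** 2
        rho = RHO0 * math.exp(-h / HSCALE)
        drag = 0.5 * rho * v ** 2 / beta     # drag deceleration [m/s^2]
        v += (-drag - g * math.sin(gamma)) * dt
        gamma += (v / r - g / v) * math.cos(gamma) * dt
        h += v * math.sin(gamma) * dt
        x += v * math.cos(gamma) * (RE / r) * dt  # downrange at surface level
    return {"downrange_km": x / 1000.0, "impact_speed": v}
```

Sweeping the entry conditions of such a propagator over their uncertainty ranges is one simple way to build up an impact footprint of the kind the study computes.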
Extending the time window for endovascular procedures according to collateral pial circulation.
Ribo, Marc; Flores, Alan; Rubiera, Marta; Pagola, Jorge; Sargento-Freitas, Joao; Rodriguez-Luna, David; Coscojuela, Pilar; Maisterra, Olga; Piñeiro, Socorro; Romero, Francisco J; Alvarez-Sabin, Jose; Molina, Carlos A
2011-12-01
Good collateral pial circulation (CPC) predicts a favorable outcome in patients undergoing intra-arterial procedures. We aimed to determine whether CPC status can be used to decide whether to pursue recanalization efforts. The pial collateral score (0-5) was determined on the initial angiogram. We considered CPC good when the pial collateral score was <3, defined total time of ischemia (TTI) as onset-to-recanalization time, and defined clinical improvement as a >4-point decline in National Institutes of Health Stroke Scale (NIHSS) score from admission to discharge. We studied CPC in 61 patients (31 middle cerebral artery, 30 internal carotid artery). Patients with good CPC (n=21 [34%]) had lower discharge NIHSS scores (7 versus 21; P=0.02) and smaller infarcts (56 mL versus 238 mL; P<0.001). In patients with poor CPC, a receiver operating characteristic curve defined a TTI cutoff point of <300 minutes (sensitivity 67%, specificity 75%) that best predicted clinical improvement (TTI<300: 66.7% versus TTI>300: 25%; P=0.05). For patients with good CPC, no temporal cutoff point could be defined. Although clinical improvement was similar for patients recanalizing within 300 minutes (poor CPC: 60% versus good CPC: 85.7%; P=0.35), the likelihood of clinical improvement was 3-fold higher after 300 minutes only in good CPC patients (23.1% versus 90.1%; P=0.01). Similarly, infarct volume was reduced 7-fold in good as compared with poor CPC patients only when TTI>300 minutes (TTI<300: poor CPC: 145 mL versus good CPC: 93 mL; P=0.56; TTI>300: poor CPC: 217 mL versus good CPC: 33 mL; P<0.01). After adjusting for age and baseline NIHSS score, TTI<300 emerged as an independent predictor of clinical improvement in poor CPC patients (OR, 6.6; 95% CI, 1.01-44.3; P=0.05) but not in good CPC patients.
In a logistic regression, good CPC independently predicted clinical improvement after adjusting for TTI, admission NIHSS score, and age (OR, 12.5; 95% CI, 1.6-74.8; P=0.016). Good CPC predicts a better clinical response to intra-arterial treatment beyond 5 hours from onset. In patients with stroke receiving endovascular treatment, identification of good CPC may help physicians decide whether to pursue recanalization efforts in late time windows.
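The receiver-operating-characteristic cutoff selection used in this study is commonly implemented by maximizing Youden's J statistic (sensitivity + specificity - 1) over candidate thresholds. The following sketch runs on made-up data, not the study's patient records; only the thresholding idea is taken from the abstract.

```python
def youden_cutoff(times, improved):
    """Pick the ischemia-time cutoff maximizing Youden's J = sens + spec - 1.

    times:    total time of ischemia (minutes) for each patient
    improved: True if the patient met the clinical-improvement criterion
    """
    best = (None, -1.0)
    for cut in sorted(set(times)):
        # "test positive" = recanalized within the cutoff (TTI < cut)
        tp = sum(1 for t, y in zip(times, improved) if t < cut and y)
        fn = sum(1 for t, y in zip(times, improved) if t >= cut and y)
        fp = sum(1 for t, y in zip(times, improved) if t < cut and not y)
        tn = sum(1 for t, y in zip(times, improved) if t >= cut and not y)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best[1]:
            best = (cut, j)
    return best
```

With data in which improvers cluster below 300 minutes, the search recovers a cutoff near that boundary, mirroring how the TTI<300 threshold in the study was obtained.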
Micromechanics and Piezo Enhancements of HyperSizer
NASA Technical Reports Server (NTRS)
Arnold, Steven M.; Bednarcyk, Brett A.; Yarrington, Phillip; Collier, Craig S.
2006-01-01
The commercial HyperSizer aerospace-composite-material-structure-sizing software has been enhanced by incorporating capabilities for representing coupled thermal, piezoelectric, and piezomagnetic effects on the levels of plies, laminates, and stiffened panels. This enhancement is based on a formulation similar to that of the pre-existing HyperSizer capability for representing thermal effects. As a result of this enhancement, the electric and/or magnetic response of a material or structure to a mechanical or thermal load, or its mechanical response to an applied electric or magnetic field can be predicted. In another major enhancement, a capability for representing micromechanical effects has been added by establishment of a linkage between HyperSizer and Glenn Research Center's Micromechanics Analysis Code With Generalized Method of Cells (MAC/GMC) computer program, which was described in several prior NASA Tech Briefs articles. The linkage enables HyperSizer to localize to the fiber and matrix level rather than only to the ply level, making it possible to predict local failures and to predict properties of plies from those of the component fiber and matrix materials. Advanced graphical user interfaces and database structures have been developed to support the new HyperSizer micromechanics capabilities.
NASA Astrophysics Data System (ADS)
Wang, Yujie; Zhang, Xu; Liu, Chang; Pan, Rui; Chen, Zonghai
2018-06-01
The power capability and the maximum charge and discharge energy are key indicators for energy management systems, which can keep energy storage devices operating in a suitable region and prevent them from over-charging and over-discharging. In this work, a model-based power and energy assessment approach is proposed for a lithium-ion battery and supercapacitor hybrid system. The model framework of the hybrid system is developed from equivalent circuit models, and the model parameters are identified by a regression method. Explicit analyses of power capability and maximum charge and discharge energy prediction under multiple constraints are elaborated. Subsequently, an extended Kalman filter is employed for on-board power capability and maximum charge and discharge energy prediction, to overcome estimation errors caused by system disturbance and sensor noise. The charge and discharge power capability and the maximum charge and discharge energy are quantitatively assessed under both the dynamic stress test and the urban dynamometer driving schedule. The maximum charge and discharge energy predictions of the hybrid system over different time scales are also explored and discussed.
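To illustrate the kind of computation involved, here is a minimal sketch of a Kalman-filter state-of-charge estimator for a single cell, together with a constrained peak-power calculation. The open-circuit-voltage curve, noise levels, and limits below are assumptions for illustration only, not the equivalent circuit model identified in the paper.

```python
def ocv(soc):
    """Illustrative open-circuit-voltage curve (assumed, not from the paper)."""
    return 3.2 + 1.0 * soc - 0.2 * soc ** 2

def docv(soc):
    """Derivative of the assumed OCV curve, used as the measurement Jacobian."""
    return 1.0 - 0.4 * soc

class SocEkf:
    """Minimal EKF: state = state of charge, measurement = terminal voltage."""
    def __init__(self, soc0, cap_ah, r0):
        self.soc, self.p = soc0, 0.1
        self.cap = cap_ah * 3600.0    # capacity in ampere-seconds
        self.r0 = r0                  # ohmic resistance [ohm]
        self.q, self.r = 1e-7, 1e-3   # process / measurement noise (assumed)

    def step(self, current, v_meas, dt):
        # predict: coulomb counting (discharge current positive)
        self.soc -= current * dt / self.cap
        self.p += self.q
        # update against measured terminal voltage v = ocv(soc) - i*r0
        h = docv(self.soc)
        k = self.p * h / (h * self.p * h + self.r)
        self.soc += k * (v_meas - (ocv(self.soc) - current * self.r0))
        self.p *= (1.0 - k * h)
        return self.soc

def peak_discharge_power(soc, r0, v_min=2.8, i_max=50.0):
    """Power capability under voltage-floor and current-limit constraints."""
    i_volt = (ocv(soc) - v_min) / r0   # current that just hits the voltage floor
    i = min(i_volt, i_max)
    return (ocv(soc) - i * r0) * i
```

The binding constraint (voltage floor, current limit, or an energy limit) changes with state of charge, which is why the paper evaluates power and energy capability jointly under multiple constraints.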
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gajjar, Anant (Liverpool U.)
Measurements of the di-photon cross section have been made in the central region and are found to be in good agreement with NLO QCD predictions. The cross section of events containing a photon and additional heavy flavor jet have also been measured, as well as the ratio of photon + b to photon + c. The statistically limited sample shows good agreement with Leading Order predictions.
Analysis of Artificial Neural Network in Erosion Modeling: A Case Study of Serang Watershed
NASA Astrophysics Data System (ADS)
Arif, N.; Danoedoro, P.; Hartono
2017-12-01
Erosion modeling is an important measuring tool for both land users and decision makers to evaluate land cultivation, and thus a model is needed that represents the actual conditions. Erosion models are complex because they combine uncertain data from different sources and processing procedures. Artificial neural networks can be relied on for complex and non-linear data processing such as erosion data. The main difficulty in artificial neural network training is the determination of the network input parameters, i.e. the number of hidden layers, the learning rate, the momentum, and the RMS error threshold. This study tested the capability of an artificial neural network in the prediction of erosion risk, running multiple simulations over these input parameters to obtain good classification results. The model was implemented in the Serang Watershed, Kulonprogo, Yogyakarta, one of the potentially critical watersheds in Indonesia. The simulation results showed that the number of iterations had a significant effect on accuracy compared to the other parameters. A small number of iterations can produce good accuracy if the combination of the other parameters is right. In this case, one hidden layer was sufficient to produce good accuracy. The highest training accuracy achieved in this study was 99.32%, which occurred in the ANN 14 simulation with a combination of network input parameters of 1 HL; LR 0.01; M 0.5; RMS 0.0001, and 15,000 iterations. The ANN training accuracy was not influenced by the number of channels, i.e. the input dataset (erosion factors) or the data dimensions; rather, it was determined by changes in the network parameters.
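The training knobs the study tunes (hidden-layer size, learning rate, momentum, iteration count) can be illustrated with a minimal one-hidden-layer network trained by momentum SGD. This is a generic sketch, not the study's configuration; the toy XOR data and all parameter values are illustrative.

```python
import math
import random

def train_mlp(data, hidden=3, lr=0.2, momentum=0.5, iters=5000, seed=0):
    """One-hidden-layer MLP (sigmoid units) trained with momentum SGD.

    Returns (initial_mse, final_mse, predict_fn). The arguments mirror the
    network input parameters discussed above: hidden-layer size, learning
    rate, momentum, and iteration count.
    """
    rng = random.Random(seed)
    n_in = len(data[0][0])
    w1 = [[rng.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(hidden)]
    w2 = [rng.uniform(-1, 1) for _ in range(hidden + 1)]
    v1 = [[0.0] * (n_in + 1) for _ in range(hidden)]   # momentum buffers
    v2 = [0.0] * (hidden + 1)
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))

    def forward(x):
        h = [sig(sum(w * xi for w, xi in zip(row, x + [1.0]))) for row in w1]
        return h, sig(sum(w * hi for w, hi in zip(w2, h + [1.0])))

    def mse():
        return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

    first = mse()
    for _ in range(iters):
        x, y = data[rng.randrange(len(data))]
        h, out = forward(x)
        d_out = (out - y) * out * (1 - out)
        for j in range(hidden + 1):
            hj = h[j] if j < hidden else 1.0
            v2[j] = momentum * v2[j] - lr * d_out * hj
        d_h = [d_out * w2[j] * h[j] * (1 - h[j]) for j in range(hidden)]
        for j in range(hidden):
            for i in range(n_in + 1):
                xi = x[i] if i < n_in else 1.0
                v1[j][i] = momentum * v1[j][i] - lr * d_h[j] * xi
        # apply the velocity updates only after all gradients are computed
        for j in range(hidden + 1):
            w2[j] += v2[j]
        for j in range(hidden):
            for i in range(n_in + 1):
                w1[j][i] += v1[j][i]
    return first, mse(), lambda x: forward(x)[1]
```

Re-running this kind of loop over grids of (hidden, lr, momentum, iters) and comparing final accuracies is the simulation pattern the study follows.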
Want to Improve Children's Writing? Don't Neglect Their Handwriting
ERIC Educational Resources Information Center
Graham, Steve
2010-01-01
The famed playwright Harold Pinter, having just been introduced as a very good writer, was once asked by a six-year-old boy if he could do a "w." The author suspects that "w" was a difficult letter for this young man, and he judged the writing capability of others accordingly. This student's assumption--that being a "good writer" means having good…
JPRS Report, Near East and South Asia, India
1992-05-13
withdrawal of the RBI import policy for capital goods, raw materials, components and industrial consumables. Also, the licensing of host of industrial ...fully capable of establishing consumer production industries and providing raw materials for it. This would also benefit India, because a huge market...which two-thirds of population depends for their livelihood, demand base for the industrial sector, especially essential consumer goods is likely
The Influence of Large-Scale Computing on Aircraft Structural Design.
1986-04-01
the customer in the most cost-effective manner. Computer facility organizations became computer resource power brokers. A good data processing...capabilities generated on other processors can be easily used. This approach is easily implementable and provides a good strategy for using existing...assistance to member nations for the purpose of increasing their scientific and technical potential; - Recommending effective ways for the member nations to
Electroencephalography Predicts Poor and Good Outcomes After Cardiac Arrest: A Two-Center Study.
Rossetti, Andrea O; Tovar Quiroga, Diego F; Juan, Elsa; Novy, Jan; White, Roger D; Ben-Hamouda, Nawfel; Britton, Jeffrey W; Oddo, Mauro; Rabinstein, Alejandro A
2017-07-01
The prognostic role of electroencephalography during and after targeted temperature management in postcardiac arrest patients, relative to other predictors, is incompletely known. We assessed the performance of electroencephalography during and after targeted temperature management in predicting good and poor outcomes, along with other recognized predictors. Cohort study (April 2009 to March 2016). Two academic hospitals (Centre Hospitalier Universitaire Vaudois, Lausanne, Switzerland; Mayo Clinic, Rochester, MN). Consecutive comatose adults admitted after cardiac arrest, identified through prospective registries. All patients were managed with targeted temperature management, receiving prespecified standardized clinical, neurophysiologic (particularly, electroencephalography during and after targeted temperature management), and biochemical evaluations. We assessed electroencephalography variables (reactivity, continuity, epileptiform features, and prespecified "benign" or "highly malignant" patterns based on the American Clinical Neurophysiology Society nomenclature) and other clinical, neurophysiologic (somatosensory-evoked potential), and biochemical prognosticators. Predictions of good outcome (Cerebral Performance Categories 1 and 2) and mortality at 3 months were calculated. Among 357 patients, early electroencephalography reactivity and continuity and flexor or better motor reaction each had greater than 70% positive predictive value for good outcome; reactivity (80.4%; 95% CI, 75.9-84.4%) and motor response (80.1%; 95% CI, 75.6-84.1%) had the highest accuracy. Early benign electroencephalography heralded good outcome in 86.2% (95% CI, 79.8-91.1%).
False positive rates for mortality were less than 5% for epileptiform or nonreactive early electroencephalography, nonreactive late electroencephalography, absent somatosensory-evoked potential, absent pupillary or corneal reflexes, presence of myoclonus, and neuron-specific enolase greater than 75 µg/L; accuracy was highest for early electroencephalography reactivity (86.6%; 95% CI, 82.6-90.0%). Early highly malignant electroencephalography had a false positive rate of 1.5% with an accuracy of 85.7% (95% CI, 81.7-89.2%). This study provides class III evidence that electroencephalography reactivity predicts both poor and good outcomes, and that motor reaction predicts good outcome, after cardiac arrest. Electroencephalography reactivity seems to be the best discriminator between good and poor outcomes. Standardized electroencephalography interpretation seems to predict both conditions during and after targeted temperature management.
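Predictive values, false positive rates, and accuracies with 95% confidence intervals of the kind reported above are derived from 2x2 contingency tables. The sketch below uses the Wilson score interval as one common choice (the study's exact CI method is not stated here, and the example counts are made up, not the study's data).

```python
import math

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a proportion k/n."""
    if n == 0:
        return (0.0, 1.0)
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return (centre - half, centre + half)

def predictor_stats(tp, fp, fn, tn):
    """PPV, false positive rate, and accuracy with 95% CIs from a 2x2 table."""
    return {
        "ppv": (tp / (tp + fp), wilson_ci(tp, tp + fp)),
        "fpr": (fp / (fp + tn), wilson_ci(fp, fp + tn)),
        "accuracy": ((tp + tn) / (tp + fp + fn + tn),
                     wilson_ci(tp + tn, tp + fp + fn + tn)),
    }
```

For a prognostic test in this setting, a near-zero false positive rate matters most for poor-outcome prediction, while positive predictive value and accuracy drive good-outcome prediction, which is why the abstract reports all three.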
THE FUTURE OF TOXICOLOGY-PREDICTIVE TOXICOLOGY: AN EXPANDED VIEW OF CHEMICAL TOXICITY
A chemistry approach to predictive toxicology relies on structure-activity relationship (SAR) modeling to predict biological activity from chemical structure. Such approaches have proven capabilities when applied to well-defined toxicity end points or regions of chemical space. T...
Thermal niche estimators and the capability of poor dispersal species to cope with climate change
Sánchez-Fernández, David; Rizzo, Valeria; Cieslak, Alexandra; Faille, Arnaud; Fresneda, Javier; Ribera, Ignacio
2016-01-01
For management strategies in the context of global warming, accurate predictions of species response are mandatory. However, to date most predictions are based on niche (bioclimatic) models that usually overlook biotic interactions, behavioral adjustments, and adaptive evolution, and assume that species can disperse freely without constraints. The deep subterranean environment minimises these uncertainties, as it is simple, homogeneous, and has constant environmental conditions. It is thus an ideal model system in which to study the effect of global change on species with poor dispersal capabilities. We assess the potential fate of a lineage of troglobitic beetles under global change predictions using different approaches to estimate their thermal niche: bioclimatic models, rates of thermal niche change estimated from a molecular phylogeny, and data from physiological studies. Using bioclimatic models, at most 60% of the species were predicted to have suitable conditions in 2080. Considering the rates of thermal niche change did not improve this prediction. However, physiological data suggest that subterranean species have a broad thermal tolerance, allowing them to withstand temperatures never experienced through their evolutionary history. These results stress the need for experimental approaches to assess the capability of poor dispersal species to cope with temperatures outside those they currently experience. PMID:26983802
Methods and devices for fabricating and assembling printable semiconductor elements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nuzzo, Ralph G.; Rogers, John A.; Menard, Etienne
The invention provides methods and devices for fabricating printable semiconductor elements and assembling printable semiconductor elements onto substrate surfaces. Methods, devices and device components of the present invention are capable of generating a wide range of flexible electronic and optoelectronic devices and arrays of devices on substrates comprising polymeric materials. The present invention also provides stretchable semiconductor structures and stretchable electronic devices capable of good performance in stretched configurations.
NASA Technical Reports Server (NTRS)
Labberton, D.
1974-01-01
A preliminary evaluation of environmental capabilities was undertaken on toggle switches and on Apollo-type toggle switches. The purpose of this evaluation was to take a first look at their tested capabilities and to determine whether the candidate hardware appears to have a good chance of successfully completing a detailed environmental qualification test program.
Methods and devices for fabricating and assembling printable semiconductor elements
Nuzzo, Ralph G; Rogers, John A; Menard, Etienne; Lee, Keon Jae; Khang, Dahl-Young; Sun, Yugang; Meitl, Matthew; Zhu, Zhengtao
2014-03-04
The invention provides methods and devices for fabricating printable semiconductor elements and assembling printable semiconductor elements onto substrate surfaces. Methods, devices and device components of the present invention are capable of generating a wide range of flexible electronic and optoelectronic devices and arrays of devices on substrates comprising polymeric materials. The present invention also provides stretchable semiconductor structures and stretchable electronic devices capable of good performance in stretched configurations.