Sample records for limited predictive capability

  1. Updraft Fixed Bed Gasification Aspen Plus Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2007-09-27

    The updraft fixed bed gasification model provides predictive modeling capabilities for updraft fixed bed gasifiers, when devolatilization data is available. The fixed bed model is constructed using Aspen Plus, process modeling software, coupled with a FORTRAN user kinetic subroutine. Current updraft gasification models created in Aspen Plus have limited predictive capabilities and must be "tuned" to reflect a generalized gas composition as specified in literature or by the gasifier manufacturer. This limits the applicability of the process model.
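
    As a purely illustrative sketch (not code from the model described above), the kind of kinetics such a user subroutine supplies is often a simple first-order Arrhenius devolatilization rate; the pre-exponential factor and activation energy below are hypothetical placeholders.

```python
import math

def devolatilization_rate(T_K, m_volatile_kg, A=1.0e5, E_a=8.0e4, R=8.314):
    """First-order Arrhenius devolatilization rate, dm/dt = -k(T) * m.

    A (1/s) and E_a (J/mol) are hypothetical placeholders, not values
    from the Aspen Plus model described in the abstract.
    """
    k = A * math.exp(-E_a / (R * T_K))
    return -k * m_volatile_kg

# Example: instantaneous volatile release rate for 1 kg of volatiles at 800 K
print(devolatilization_rate(800.0, 1.0))
```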

  2. Comparison of Fire Model Predictions with Experiments Conducted in a Hangar With a 15 Meter Ceiling

    NASA Technical Reports Server (NTRS)

    Davis, W. D.; Notarianni, K. A.; McGrattan, K. B.

    1996-01-01

    The purpose of this study is to examine the predictive capabilities of fire models using the results of a series of fire experiments conducted in an aircraft hangar with a ceiling height of about 15 m. This study is designed to investigate model applicability at a ceiling height where only a limited amount of experimental data is available. This analysis deals primarily with temperature comparisons as a function of distance from the fire center and depth beneath the ceiling. Only limited velocity measurements in the ceiling jet were available but these are also compared with those models with a velocity predictive capability.

  3. Fire spread probabilities for experimental beds composed of mixedwood boreal forest fuels

    Treesearch

    M.B. Dickinson; E.A. Johnson; R. Artiaga

    2013-01-01

    Although fuel characteristics are assumed to have an important impact on fire regimes through their effects on extinction dynamics, limited capabilities exist for predicting whether a fire will spread in mixedwood boreal forest surface fuels. To improve predictive capabilities, we conducted 347 no-wind, laboratory test burns in surface fuels collected from the mixed-...

  4. A Unit on Deterministic Chaos for Student Teachers

    ERIC Educational Resources Information Center

    Stavrou, D.; Assimopoulos, S.; Skordoulis, C.

    2013-01-01

    A unit aiming to introduce pre-service teachers of primary education to the limited predictability of deterministic chaotic systems is presented. The unit is based on a commercial chaotic pendulum system connected with a data acquisition interface. The capabilities and difficulties in understanding the notion of limited predictability of 18…

  5. Provocative work experiences predict the acquired capability for suicide in physicians.

    PubMed

    Fink-Miller, Erin L

    2015-09-30

    The interpersonal psychological theory of suicidal behavior (IPTS) offers a potential means to explain suicide in physicians. The IPTS posits three necessary and sufficient precursors to death by suicide: thwarted belongingness, perceived burdensomeness, and acquired capability. The present study sought to examine whether provocative work experiences unique to physicians (e.g., placing sutures, withdrawing life support) would predict levels of acquired capability, while controlling for gender and painful and provocative experiences outside the work environment. Data were obtained from 376 of 7723 recruited physicians. Study measures included the Acquired Capability for Suicide Scale, the Interpersonal Needs Questionnaire, the Painful and Provocative Events Scale, and the Life Events Scale-Medical Doctors Version. Painful and provocative events outside of work predicted acquired capability (β=0.23, t=3.82, p<0.001, f²=0.09) as did provocative work experiences (β=0.12, t=2.05, p<0.05, f²=0.07). This represents the first study assessing the potential impact of unique work experiences on suicidality in physicians. Limitations include over-representation of Caucasian participants, limited representation from various specialties of medicine, and lack of information regarding individual differences. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
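
    For reference on the effect-size notation above (a general definition, not a recomputation of the study's values), Cohen's f² for an individual predictor is commonly obtained from the change in R² when that predictor is added to the model:

    \[ f^{2} = \frac{R^{2}_{\text{full}} - R^{2}_{\text{reduced}}}{1 - R^{2}_{\text{full}}} \]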

  6. Assessment of CFD capability for prediction of hypersonic shock interactions

    NASA Astrophysics Data System (ADS)

    Knight, Doyle; Longo, José; Drikakis, Dimitris; Gaitonde, Datta; Lani, Andrea; Nompelis, Ioannis; Reimann, Bodo; Walpot, Louis

    2012-01-01

    The aerothermodynamic loadings associated with shock wave boundary layer interactions (shock interactions) must be carefully considered in the design of hypersonic air vehicles. The capability of Computational Fluid Dynamics (CFD) software to accurately predict hypersonic shock wave laminar boundary layer interactions is examined. A series of independent computations performed by researchers in the US and Europe are presented for two generic configurations (double cone and cylinder) and compared with experimental data. The results illustrate the current capabilities and limitations of modern CFD methods for these flows.

  7. Evaluation of the synoptic and mesoscale predictive capabilities of a mesoscale atmospheric simulation system

    NASA Technical Reports Server (NTRS)

    Koch, S. E.; Skillman, W. C.; Kocin, P. J.; Wetzel, P. J.; Brill, K.; Keyser, D. A.; Mccumber, M. C.

    1983-01-01

    The overall performance characteristics of a limited-area, hydrostatic, fine-mesh (52 km), primitive-equation numerical weather prediction model are determined in anticipation of satellite data assimilations with the model. The synoptic and mesoscale predictive capabilities of version 2.0 of this model, the Mesoscale Atmospheric Simulation System (MASS 2.0), were evaluated. The two-part study is based on a sample of approximately thirty 12-h and 24-h forecasts of atmospheric flow patterns during spring and early summer. The synoptic-scale evaluation results benchmark the performance of MASS 2.0 against that of an operational, synoptic-scale weather prediction model, the Limited-area Fine Mesh (LFM) model. The large sample allows for the calculation of statistically significant measures of forecast accuracy and the determination of systematic model errors. The synoptic-scale benchmark is required before unsmoothed mesoscale forecast fields can be seriously considered.

  8. Managing Computer Systems Development: Understanding the Human and Technological Imperatives.

    DTIC Science & Technology

    1985-06-01

    for their organization's use? How can they predict the impact of future systems on their management control capabilities? Of equal importance is the...commercial organizations discovered that there was only a limited capability of interaction between various types of computers. These organizations were...Viewed together, these three interrelated subsystems, EDP, MIS, and DSS, establish the framework of an overall systems capability known as a Computer

  9. Mariner Jupiter/Saturn LCSSE thruster/valve assembly and injection propulsion unit rocket engine assemblies: 0.2-lbf T/VA development and margin limit test report

    NASA Technical Reports Server (NTRS)

    Clark, E. C.

    1975-01-01

    Thruster valve assemblies (T/VA's) were subjected to the development test program for the combined JPL Low-Cost Standardized Spacecraft Equipment (LCSSE) and Mariner Jupiter/Saturn '77 spacecraft (MJS) programs. The development test program was designed to achieve the following program goals: (1) demonstrate T/VA design compliance with JPL specifications, (2) conduct a complete performance map of the T/VA over the full operating range of environments, (3) demonstrate T/VA life capability and characteristics of life margin for steady-state limit-cycle and momentum wheel desaturation duty cycles, (4) verify structural design capability, and (5) generate a computerized performance model capable of predicting T/VA operation over pressures ranging from 420 to 70 psia, propellant temperatures ranging from 140 F to 40 F, and pulse widths from 0.008 to steady-state operation with unlimited duty cycle capability, and finally of predicting the transient performance associated with reactor heatup for any given duty cycle, start temperature, feed pressure, and propellant temperature conditions.

  10. How feasible is the rapid development of artificial superintelligence?

    NASA Astrophysics Data System (ADS)

    Sotala, Kaj

    2017-11-01

    What kinds of fundamental limits are there in how capable artificial intelligence (AI) systems might become? Two questions in particular are of interest: (1) How much more capable could AI become relative to humans, and (2) how easily could superhuman capability be acquired? To answer these questions, we will consider the literature on human expertise and intelligence, discuss its relevance for AI, and consider how AI could improve on humans in two major aspects of thought and expertise, namely simulation and pattern recognition. We find that although there are very real limits to prediction, it seems like AI could still substantially improve on human intelligence.

  11. Evaluation of a computational model to predict elbow range of motion

    PubMed Central

    Nishiwaki, Masao; Johnson, James A.; King, Graham J. W.; Athwal, George S.

    2014-01-01

    Computer models capable of predicting elbow flexion and extension range of motion (ROM) limits would be useful for assisting surgeons in improving the outcomes of surgical treatment of patients with elbow contractures. A simple and robust computer-based model was developed that predicts elbow joint ROM using bone geometries calculated from computed tomography image data. The model assumes a hinge-like flexion-extension axis, and that elbow passive ROM limits can be based on terminal bony impingement. The model was validated against experimental results with a cadaveric specimen, and was able to predict the flexion and extension limits of the intact joint to within 0° and 3°, respectively. The model was also able to predict the flexion and extension limits to within 1° and 2°, respectively, when simulated osteophytes were inserted into the joint. Future studies based on this approach will be used for the prediction of elbow flexion-extension ROM in patients with primary osteoarthritis to help identify motion-limiting hypertrophic osteophytes, and will eventually permit real-time computer-assisted navigated excisions. PMID:24841799
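
    A minimal sketch of the kind of terminal-impingement search the abstract describes, assuming a fixed hinge axis and a brute-force point-cloud proximity test; the function names, clearance tolerance, and angle sweep are illustrative assumptions, not taken from the published model.

```python
import numpy as np

def rotation_about_axis(axis, angle_rad):
    """Rodrigues rotation matrix for a unit rotation axis."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle_rad) * K + (1 - np.cos(angle_rad)) * (K @ K)

def rom_limits(ulna_points, humerus_points, hinge_axis, clearance=0.5):
    """Sweep flexion angles and return the first angles (deg) at which the
    rotated ulna points come within `clearance` (mm) of the humerus points,
    i.e. a crude bony-impingement criterion. Purely illustrative."""
    ulna_points = np.asarray(ulna_points, dtype=float)
    humerus_points = np.asarray(humerus_points, dtype=float)

    def impinges(angle_deg):
        R = rotation_about_axis(hinge_axis, np.radians(angle_deg))
        rotated = ulna_points @ R.T
        # brute-force closest-point distance between the two point clouds
        d = np.min(np.linalg.norm(rotated[:, None, :] - humerus_points[None, :, :], axis=2))
        return d < clearance

    flexion = next((a for a in range(0, 181) if impinges(a)), None)
    extension = next((a for a in range(0, -181, -1) if impinges(a)), None)
    return flexion, extension
```

    In practice the point clouds would come from segmented CT bone surfaces and the hinge axis from a fitted geometric axis, as the abstract implies; those preprocessing steps are outside this sketch.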

  12. Capability for suicide interacts with states of heightened arousal to predict death by suicide beyond the effects of depression and hopelessness

    PubMed Central

    Ribeiro, Jessica D.; Yen, Shirley; Joiner, Thomas; Siegler, Ilene C.

    2016-01-01

    Background States of heightened arousal (e.g., agitation, sleep disturbance) have been repeatedly linked to suicidal thoughts and behaviors, including attempts and death. Studies have further indicated that these states may be particularly pernicious among individuals who evidence high suicidal capability. The objective of this study was to examine the interactive effects of heightened arousal and the capability for suicide in the prospective prediction of death by suicide. We examine this relation beyond the effects of robust predictors of suicide, namely depression and hopelessness. Methods Participants were drawn from a larger study of undergraduates who completed baseline assessments during their freshman year and were then followed to time of death. The sample in this study only included individuals who had died by suicide (n=96) or other causes (n=542). Proxy measures to assess predictor variables were constructed using items from the MMPI, which was administered at baseline. An independent sample of clinical outpatients was used to evaluate the construct validity of the proxy measures. Results Results were in line with expectation: heightened arousal interacted with capability for suicide to prospectively predict death by suicide, such that, as severity of heightened arousal symptoms increased, the likelihood of death by suicide increased among individuals high but not low on capability for suicide. Limitations Limitations include the use of proxy measures, the extended length of follow-up, and the homogeneity of the sample (i.e., primarily White males). Conclusion These findings add to an emerging literature that supports the moderating influence of capability for suicide on the relationship between states of heightened arousal and the likelihood of death by suicide. PMID:26342889

  13. Landscape capability predicts upland game bird abundance and occurrence

    USGS Publications Warehouse

    Loman, Zachary G.; Blomberg, Erik J.; DeLuca, William; Harrison, Daniel J.; Loftin, Cyndy; Wood, Petra B.

    2017-01-01

    Landscape capability (LC) models are a spatial tool with potential applications in conservation planning. We used survey data to validate LC models as predictors of occurrence and abundance at broad and fine scales for American woodcock (Scolopax minor) and ruffed grouse (Bonasa umbellus). Landscape capability models were reliable predictors of occurrence but were less indicative of relative abundance at route (11.5–14.6 km) and point scales (0.5–1 km). As predictors of occurrence, LC models had high sensitivity (0.71–0.93) and were accurate (0.71–0.88) and precise (0.88 and 0.92 for woodcock and grouse, respectively). Models did not predict point-scale abundance independent of the ability to predict occurrence of either species. The LC models are useful predictors of patterns of occurrences in the northeastern United States, but they have limited utility as predictors of fine-scale or route-specific abundances. 
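
    The reported sensitivity, precision, and accuracy are standard confusion-matrix quantities; the sketch below shows how such occurrence-validation metrics are computed, using synthetic presence/absence data rather than the survey data analyzed in the study.

```python
import numpy as np

def occurrence_metrics(observed, predicted):
    """Sensitivity, precision, and accuracy for binary occurrence predictions."""
    observed = np.asarray(observed, dtype=bool)
    predicted = np.asarray(predicted, dtype=bool)
    tp = np.sum(predicted & observed)
    fp = np.sum(predicted & ~observed)
    fn = np.sum(~predicted & observed)
    tn = np.sum(~predicted & ~observed)
    return {
        "sensitivity": tp / (tp + fn),
        "precision": tp / (tp + fp),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Synthetic example: observed vs. predicted occurrence at 10 survey routes
obs  = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
pred = [1, 1, 0, 0, 0, 1, 1, 1, 0, 1]
print(occurrence_metrics(obs, pred))
```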

  14. The Development of PIPA: An Integrated and Automated Pipeline for Genome-Wide Protein Function Annotation

    DTIC Science & Technology

    2008-01-25

    Limitations and plans for improvement: Perhaps one of PIPA's main limitations is that all of its currently integrated resources to predict protein function...are planning on expanding PIPA's function prediction capabilities by incorporating comparative analysis approaches, e.g., phylogenetic tree analysis...

  15. Control model design to limit DC-link voltage during grid fault in a dfig variable speed wind turbine

    NASA Astrophysics Data System (ADS)

    Nwosu, Cajethan M.; Ogbuka, Cosmas U.; Oti, Stephen E.

    2017-08-01

    This paper presents a control model design capable of inhibiting the phenomenal rise in the DC-link voltage during grid-fault conditions in a variable speed wind turbine. In contrast to power circuit protection strategies, which have inherent limitations in fault ride-through capability, a control circuit algorithm is proposed that limits the rise of the DC-link voltage, whose dynamics directly influence the characteristics of the rotor voltage during grid faults. The model results so obtained compare favorably with the simulation results obtained in a MATLAB/SIMULINK environment. The generated model may therefore be used to predict, with near accuracy, the nature of DC-link voltage variations during a fault, given factors that include the speed and speed mode of operation and the value of the damping resistor relative to half the product of the inner-loop current control bandwidth and the filter inductance.

  16. Mobility performance of the lunar roving vehicle: Terrestrial studies: Apollo 15 results

    NASA Technical Reports Server (NTRS)

    Costes, N. C.; Farmer, J. E.; George, E. B.

    1972-01-01

    The constraints of the Apollo 15 mission dictated that the average and limiting performance capabilities of the first manned lunar roving vehicle be known or estimated within narrow margins. Extensive studies were conducted and are compared with the actual performance of the lunar roving vehicle during the Apollo 15 mission. From this comparison, conclusions are drawn relating to the capabilities and limitations of current terrestrial methodology in predicting the mobility performance of lunar roving vehicles under in-situ environmental conditions, and recommendations are offered concerning the performance of surface vehicles on future missions related to lunar or planetary exploration.

  17. Development of Modeling Capabilities for Launch Pad Acoustics and Ignition Transient Environment Prediction

    NASA Technical Reports Server (NTRS)

    West, Jeff; Strutzenberg, Louise L.; Putnam, Gabriel C.; Liever, Peter A.; Williams, Brandon R.

    2012-01-01

    This paper presents development efforts to establish modeling capabilities for launch vehicle liftoff acoustics and ignition transient environment predictions. Peak acoustic loads experienced by the launch vehicle occur during liftoff with strong interaction between the vehicle and the launch facility. Acoustic prediction engineering tools based on empirical models are of limited value in efforts to proactively design and optimize launch vehicles and launch facility configurations for liftoff acoustics. Modeling approaches are needed that capture the important details of the plume flow environment including the ignition transient, identify the noise generation sources, and allow assessment of the effects of launch pad geometric details and acoustic mitigation measures such as water injection. This paper presents a status of the CFD tools developed by the MSFC Fluid Dynamics Branch featuring advanced multi-physics modeling capabilities developed towards this goal. Validation and application examples are presented along with an overview of application in the prediction of liftoff environments and the design of targeted mitigation measures such as launch pad configuration and sound suppression water placement.

  18. Elastic And Plastic Deformations In Butt Welds

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1992-01-01

    Report presents study of mathematical modeling of stresses and strains, reaching beyond limits of elasticity, in bars and plates. Study oriented toward development of capability to predict stresses and resulting elastic and plastic strains in butt welds.

  19. Improve SSME power balance model

    NASA Technical Reports Server (NTRS)

    Karr, Gerald R.

    1992-01-01

    Effort was dedicated to development and testing of a formal strategy for reconciling uncertain test data with physically limited computational prediction. Specific weaknesses in the logical structure of the current Power Balance Model (PBM) version are described with emphasis given to the main routing subroutines BAL and DATRED. Selected results from a variational analysis of PBM predictions are compared to Technology Test Bed (TTB) variational study results to assess PBM predictive capability. The motivation for systematic integration of uncertain test data with computational predictions based on limited physical models is provided. The theoretical foundation for the reconciliation strategy developed in this effort is presented, and results of a reconciliation analysis of the Space Shuttle Main Engine (SSME) high pressure fuel side turbopump subsystem are examined.

  20. Forming limit prediction by an evolving non-quadratic yield criterion considering the anisotropic hardening and r-value evolution

    NASA Astrophysics Data System (ADS)

    Lian, Junhe; Shen, Fuhui; Liu, Wenqi; Münstermann, Sebastian

    2018-05-01

    Constitutive model development has been driven toward a very accurate, fine-resolution description of material behaviour in response to changes in various environmental variables. The evolving features of anisotropic behaviour during deformation have therefore drawn particular attention because of their possible impact on the sheet metal forming industry. An evolving non-associated Hill48 (enHill48) model was recently proposed and applied to forming limit prediction by coupling with the modified maximum force criterion. On the one hand, that study showed the importance of including the anisotropic evolution for accurate forming limit prediction. On the other hand, it also showed that the enHill48 model introduced an instability region that suddenly decreases the formability. Therefore, in this study, an alternative model that is based on the associated flow rule and provides similar anisotropic predictive capability is extended to capture the evolving effects and further applied to the forming limit prediction. The final results are compared with experimental data as well as with the results of the enHill48 model.

  1. Simulation Tool for Dielectric Barrier Discharge Plasma Actuators at Atmospheric and Sub-Atmospheric Pressures: SBIR Phase I Final Report

    NASA Technical Reports Server (NTRS)

    Likhanskii, Alexandre

    2012-01-01

    This report is the final report of an SBIR Phase I project. It is identical to the final report submitted, after some proprietary information of an administrative nature has been removed. The development of a numerical simulation tool for dielectric barrier discharge (DBD) plasma actuators is reported. The objectives of the project were to analyze and predict DBD operation over a wide range of ambient gas pressures. The tool overcomes the limitations of traditional DBD codes, which are limited to low-speed applications and have weak prediction capabilities, and allows DBD actuator analysis and prediction for subsonic to hypersonic flow regimes. The simulation tool is based on the VORPAL code developed by Tech-X Corporation. VORPAL's capabilities for modeling a DBD plasma actuator at low pressures (0.1 to 10 torr) using a kinetic plasma modeling approach, and at moderate to atmospheric pressures (1 to 10 atm) using a hydrodynamic plasma modeling approach, were demonstrated. In addition, results of experiments with a pulsed+bias DBD configuration, performed for validation purposes, are reported.

  2. Recent developments in analysis of crack propagation and fracture of practical materials. [stress analysis in aircraft structures

    NASA Technical Reports Server (NTRS)

    Hardrath, H. F.; Newman, J. C., Jr.; Elber, W.; Poe, C. C., Jr.

    1978-01-01

    The limitations of linear elastic fracture mechanics in aircraft design and in the study of fatigue crack propagation in aircraft structures are discussed. NASA-Langley research to extend the capabilities of fracture mechanics to predict the maximum load that can be carried by a cracked part and to deal with aircraft design problems is reported. Achievements include: (1) improved stress intensity solutions for laboratory specimens; (2) a fracture criterion for practical materials; (3) crack propagation predictions that account for mean stress and high maximum stress effects; (4) crack propagation predictions for variable amplitude loading; and (5) the prediction of crack growth and residual stress in built-up structural assemblies. These capabilities are incorporated into a first-generation computerized analysis that allows for damage tolerance and tradeoffs with other disciplines to produce efficient designs that meet current airworthiness requirements.

  3. Modeling of propulsive jet plumes--extension of modeling capabilities by utilizing wall curvature effects

    NASA Astrophysics Data System (ADS)

    Doerr, S. E.

    1984-06-01

    Modeling of aerodynamic interference effects of propulsive jet plumes, by using inert gases as substitute propellants, introduces design limits. To extend the range of modeling capabilities, nozzle wall curvature effects may be utilized. Numerical calculations, using the Method of Characteristics, were made and experimental data were taken to evaluate the merits of the theoretical predictions. A bibliography, listing articles that led to the present report, is included.

  4. Modeling of ESD events from polymeric surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pfeifer, Kent Bryant

    2014-03-01

    Transient electrostatic discharge (ESD) events are studied to assemble a predictive model of discharge from polymer surfaces. An analog circuit simulation is produced and its response is compared to various literature sources to explore its capabilities and limitations. Results suggest that polymer ESD events can be predicted to within an order of magnitude. These results compare well to empirical findings from other sources having similar reproducibility.
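
    A minimal sketch of the kind of lumped-element transient such an analog circuit simulation represents: a simple RC discharge of stored surface charge through a discharge-path resistance. The component values are hypothetical, order-of-magnitude placeholders, not the circuit used in the report.

```python
import numpy as np

def esd_discharge_current(C=100e-12, R=1500.0, V0=8000.0, t_end=1e-6, n=2000):
    """Current waveform i(t) = (V0/R) * exp(-t/(R*C)) for a simple RC discharge.

    C: effective surface capacitance (F), R: discharge-path resistance (ohm),
    V0: initial surface potential (V). All values are illustrative placeholders.
    """
    t = np.linspace(0.0, t_end, n)
    i = (V0 / R) * np.exp(-t / (R * C))
    return t, i

# Example: peak current and decay of the illustrative discharge
t, i = esd_discharge_current()
print(f"peak current ~ {i[0]:.2f} A, decayed to {i[-1]:.3e} A after 1 microsecond")
```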

  5. Recent advances in hypersonic technology

    NASA Technical Reports Server (NTRS)

    Dwoyer, Douglas L.

    1990-01-01

    This paper will focus on recent advances in hypersonic aerodynamic prediction techniques. Current capabilities of existing numerical methods for predicting high Mach number flows will be discussed and shortcomings will be identified. Physical models available for inclusion into modern codes for predicting the effects of transition and turbulence will also be outlined and their limitations identified. Chemical reaction models appropriate to high-speed flows will be addressed, and the impact of their inclusion in computational fluid dynamics codes will be discussed. Finally, the problem of validating predictive techniques for high Mach number flows will be addressed.

  6. Initial Integration of Noise Prediction Tools for Acoustic Scattering Effects

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Burley, Casey L.; Tinetti, Ana; Rawls, John W.

    2008-01-01

    This effort provides an initial glimpse at NASA capabilities available for predicting the scattering of fan noise from a non-conventional aircraft configuration. The Aircraft NOise Prediction Program, the Fast Scattering Code, and the Rotorcraft Noise Model were coupled to provide increased-fidelity models of scattering effects on engine fan noise sources. The integration of these codes led to the identification of several key issues entailed in applying such multi-fidelity approaches. In particular, for prediction at noise certification points, the inclusion of distributed sources leads to complications with the source semi-sphere approach. Computational resource requirements limit the use of the higher fidelity scattering code to predict radiated sound pressure levels for full scale configurations at relevant frequencies. In addition, the ability to more accurately represent complex shielding surfaces in current lower fidelity models is necessary for general application to scattering predictions. This initial step in determining the potential benefits/costs of these new methods over the existing capabilities illustrates a number of the issues that must be addressed in the development of next generation aircraft system noise prediction tools.

  7. Route Prediction on Tracking Data to Location-Based Services

    NASA Astrophysics Data System (ADS)

    Petróczi, Attila István; Gáspár-Papanek, Csaba

    Wireless networks have become so widespread that it is beneficial to determine the ability of cellular networks to support localization. This property enables the development of location-based services that provide useful information. These services can be improved by route prediction, on the condition that simple algorithms are used, because of the limited capabilities of mobile stations. This study gives alternative solutions to this route prediction problem based on a specific graph model. Our models provide the opportunity to reach our destinations with less effort.
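
    A minimal sketch, under stated assumptions, of the kind of lightweight graph-based route prediction argued for above: next-cell prediction from transition counts accumulated over observed cell sequences. The class and data structures are illustrative and are not the specific graph model proposed in the paper.

```python
from collections import defaultdict

class NextCellPredictor:
    """First-order Markov predictor over observed cell-to-cell transitions."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, routes):
        for route in routes:                  # route = sequence of cell IDs
            for a, b in zip(route, route[1:]):
                self.counts[a][b] += 1

    def predict(self, current_cell):
        successors = self.counts.get(current_cell)
        if not successors:
            return None
        return max(successors, key=successors.get)   # most frequent successor cell

# Example with hypothetical cell IDs
routes = [["A", "B", "C", "D"], ["A", "B", "D"], ["E", "B", "C"]]
p = NextCellPredictor()
p.train(routes)
print(p.predict("B"))   # -> "C"
```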

  8. Effects of High-Density Impacts on Shielding Capability

    NASA Technical Reports Server (NTRS)

    Christiansen, Eric L.; Lear, Dana M.

    2014-01-01

    Spacecraft are shielded from micrometeoroids and orbital debris (MMOD) impacts to meet requirements for crew safety and/or mission success. In the past, orbital debris particles have been considered to be composed entirely of aluminum (medium-density material) for the purposes of MMOD shielding design and verification. Meteoroids have been considered to be low-density porous materials, with an average density of 1 g/cu cm. Recently, NASA released a new orbital debris environment model, referred to as ORDEM 3.0, that indicates orbital debris contains a substantial fraction of high-density material for which steel is used in MMOD risk assessments [Ref.1]. Similarly, an update to the meteoroid environment model is also under consideration to include a high-density component of that environment. This paper provides results of hypervelocity impact tests and hydrocode simulations on typical spacecraft MMOD shields using steel projectiles. It was found that previous ballistic limit equations (BLEs) that define the protection capability of the MMOD shields did not predict the results from the steel impact tests and hydrocode simulations (typically, the predictions from these equations were too optimistic). The ballistic limit equations required updates to more accurately represent shield protection capability from the range of densities in the orbital debris environment. Ballistic limit equations were derived from the results of the work and are provided in the paper.

  9. Using machine learning to emulate human hearing for predictive maintenance of equipment

    NASA Astrophysics Data System (ADS)

    Verma, Dinesh; Bent, Graham

    2017-05-01

    At the current time, interfaces between humans and machines use only a limited subset of senses that humans are capable of. The interaction among humans and computers can become much more intuitive and effective if we are able to use more senses, and create other modes of communicating between them. New machine learning technologies can make this type of interaction become a reality. In this paper, we present a framework for a holistic communication between humans and machines that uses all of the senses, and discuss how a subset of this capability can allow machines to talk to humans to indicate their health for various tasks such as predictive maintenance.

  10. Piloted Simulator Tests of a Guidance System Which Can Continuously Predict Landing Point of a Low L/D Vehicle During Atmosphere Re-Entry

    NASA Technical Reports Server (NTRS)

    Wingrove, Rodney C.; Coate, Robert E.

    1961-01-01

    The guidance system studied for maneuvering vehicles within a planetary atmosphere uses the concept of fast, continuous prediction of the maximum maneuver capability from existing conditions rather than a stored-trajectory technique. With the method of display and control used, desired touchdown points are compared with the maximum range capability and with heating or acceleration limits, so that a proper decision and choice of control inputs can be made by the pilot. A piloted fixed simulator was used to demonstrate the feasibility of the concept and to study its application to the control of lunar mission reentries and recoveries from aborts.

  11. Pulsed CO2 characterization for lidar use

    NASA Technical Reports Server (NTRS)

    Jaenisch, Holger M.

    1992-01-01

    An account is given of a scaled functional testbed laser for space-qualified coherent-detection lidar applications which employs a CO2 laser. This laser has undergone modification and characterization for inherent performance capabilities as a model of coherent detection. While characterization results show good overall performance that is in agreement with theoretical predictions, frequency-stability and pulse-length limitations severely limit the laser's use in coherent detection.

  12. NASA's Evolutionary Xenon Thruster (NEXT) Project Qualification Propellant Throughput Milestone: Performance, Erosion, and Thruster Service Life Prediction After 450 kg

    NASA Technical Reports Server (NTRS)

    Herman, Daniel A.

    2010-01-01

    NASA's Evolutionary Xenon Thruster (NEXT) program is tasked with significantly improving and extending the capabilities of the current state-of-the-art NSTAR thruster. The service life capability of the NEXT ion thruster is being assessed by thruster wear testing and life-modeling of critical thruster components, such as the ion optics and cathodes. The NEXT Long-Duration Test (LDT) was initiated to validate and qualify the NEXT thruster propellant throughput capability. The NEXT thruster completed the primary goal of the LDT; namely, to demonstrate the project qualification throughput of 450 kg by the end of calendar year 2009. The NEXT LDT has demonstrated 28,500 hr of operation and processed 466 kg of xenon throughput, more than double the throughput demonstrated by the NSTAR flight-spare. Thruster performance changes have been consistent with a priori predictions. Thruster erosion has been minimal and consistent with the thruster service life assessment, which predicts the first failure mode at greater than 750 kg throughput. The life-limiting failure mode for NEXT is predicted to be loss of structural integrity of the accelerator grid due to erosion by charge-exchange ions.

  13. The efficacy of calibrating hydrologic model using remotely sensed evapotranspiration and soil moisture for streamflow prediction

    NASA Astrophysics Data System (ADS)

    Kunnath-Poovakka, A.; Ryu, D.; Renzullo, L. J.; George, B.

    2016-04-01

    Calibration of spatially distributed hydrologic models is frequently limited by the availability of ground observations. Remotely sensed (RS) hydrologic information provides an alternative source of observations to inform models and extend modelling capability beyond the limits of ground observations. This study examines the capability of RS evapotranspiration (ET) and soil moisture (SM) in calibrating a hydrologic model and its efficacy in improving streamflow predictions. SM retrievals from the Advanced Microwave Scanning Radiometer-EOS (AMSR-E) and daily ET estimates from the CSIRO MODIS ReScaled potential ET (CMRSET) are used to calibrate a simplified Australian Water Resource Assessment - Landscape model (AWRA-L) for a selection of parameters. The Shuffled Complex Evolution (SCE-UA) algorithm is employed for parameter estimation at eleven catchments in eastern Australia. A subset of parameters for calibration is selected based on the variance-based Sobol' sensitivity analysis. The efficacy of 15 objective functions for calibration is assessed based on streamflow predictions relative to control cases, and the relative merits of each are discussed. Synthetic experiments were conducted to examine the effect of bias in RS ET observations on calibration. The objective function containing the root mean square deviation (RMSD) of ET results in the best streamflow predictions, and its efficacy is superior for catchments with medium to high average runoff. The synthetic experiments revealed that an accurate ET product can improve streamflow predictions in catchments with low average runoff.
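
    A sketch of the kind of RMSD-of-ET objective described above, written as a function an optimizer such as SCE-UA could minimize; run_model is a hypothetical wrapper standing in for the hydrologic model run and is not part of the actual AWRA-L interface.

```python
import numpy as np

def rmsd(sim, obs):
    """Root mean square deviation, ignoring missing (NaN) observations."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    mask = ~np.isnan(obs)                      # RS products often have gaps
    return np.sqrt(np.mean((sim[mask] - obs[mask]) ** 2))

def et_objective(params, forcing, et_obs, run_model):
    """Objective = RMSD between simulated and remotely sensed ET.

    run_model(params, forcing) is a hypothetical wrapper around the
    hydrologic model, returning a daily ET series for the catchment.
    """
    et_sim = run_model(params, forcing)
    return rmsd(et_sim, et_obs)

# Toy usage with a dummy "model" so the objective can be exercised:
dummy_model = lambda params, forcing: params[0] * np.asarray(forcing)
print(et_objective([0.8], [2.0, 3.0, 4.0], [1.8, 2.5, np.nan], dummy_model))
```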

  14. High fidelity studies of exploding foil initiator bridges, Part 3: ALEGRA MHD simulations

    NASA Astrophysics Data System (ADS)

    Neal, William; Garasi, Christopher

    2017-01-01

    Simulations of high voltage detonators, such as Exploding Bridgewire (EBW) and Exploding Foil Initiators (EFI), have historically been simple, often empirical, one-dimensional models capable of predicting parameters such as current, voltage, and in the case of EFIs, flyer velocity. Experimental methods have correspondingly generally been limited to the same parameters. With the advent of complex, first principles magnetohydrodynamic codes such as ALEGRA and ALE-MHD, it is now possible to simulate these components in three dimensions, and predict a much greater range of parameters than before. A significant improvement in experimental capability was therefore required to ensure these simulations could be adequately verified. In this third paper of a three part study, the experimental results presented in part 2 are compared against 3-dimensional MHD simulations. This improved experimental capability, along with advanced simulations, offer an opportunity to gain a greater understanding of the processes behind the functioning of EBW and EFI detonators.

  15. Dispersionless Manipulation of Reflected Acoustic Wavefront by Subwavelength Corrugated Surface

    PubMed Central

    Zhu, Yi-Fan; Zou, Xin-Ye; Li, Rui-Qi; Jiang, Xue; Tu, Juan; Liang, Bin; Cheng, Jian-Chun

    2015-01-01

    Free controls of optic/acoustic waves for bending, focusing or steering the energy of wavefronts are highly desirable in many practical scenarios. However, the dispersive nature of the existing metamaterials/metasurfaces for wavefront manipulation necessarily results in limited bandwidth. Here, we propose the concept of dispersionless wavefront manipulation and report theoretical, numerical and experimental work on the design of a reflective surface capable of controlling the acoustic wavefront arbitrarily without bandwidth limitation. Analytical analysis predicts the possibility to completely eliminate the frequency dependence with a specific gradient surface which can be implemented by designing a subwavelength corrugated surface. Experimental and numerical results, well consistent with the theoretical predictions, have validated the proposed scheme by demonstrating a distinct phenomenon of extraordinary acoustic reflection within an ultra-broad band. For acquiring a deeper insight into the underlying physics, a simple physical model is developed which helps to interpret this extraordinary phenomenon and predict the upper cutoff frequency precisely. Generations of planar focusing and non-diffractive beam have also been exemplified. With the dispersionless wave-steering capability and deep discrete resolution, our designed structure may open a new avenue to fully steer classical waves and offer design possibilities for broadband optical/acoustical devices. PMID:26077772
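
    One way to see the frequency cancellation claimed above (a standard argument restated here, not quoted from the paper): if the corrugation depth profile h(x) imposes a purely geometric round-trip phase delay on the reflected wave, the generalized law of reflection loses its dependence on the wavenumber k:

    \[ \sin\theta_r - \sin\theta_i = \frac{1}{k}\,\frac{d\phi}{dx}, \qquad \phi(x) = 2k\,h(x) \;\Rightarrow\; \sin\theta_r - \sin\theta_i = 2\,\frac{dh}{dx}, \]

    which is independent of k and hence of frequency.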

  16. Defense Weather Satellites: Analysis of Alternatives is Useful for Certain Capabilities, but Ineffective Coordination Limited Assessment of Two Critical Capabilities

    DTIC Science & Technology

    2016-03-10

    2015, DOD also inquired with NOAA about the possibility of using one of NOAA’s geostationary weather satellites to preserve coverage over the Indian...life of DMSP-20.” It further states, “This means there will always be geostationary coverage” of several regions, including the Indian Ocean, “by U.S...weather prediction. • Geostationary satellites maintain a fixed position relative to the earth, collecting data on a specific geographic region and

  17. Engineering the earth system

    NASA Astrophysics Data System (ADS)

    Keith, D. W.

    2005-12-01

    The post-war growth of the earth sciences has been fueled, in part, by a drive to quantify environmental insults in order to support arguments for their reduction, yet paradoxically the knowledge gained grants us ever greater capability to deliberately engineer environmental processes on a planetary scale. Increased capability can arise through seemingly unconnected scientific advances. Improvements in numerical weather prediction, such as the use of adjoint models in analysis/forecast systems, mean that weather modification can be accomplished with smaller control inputs. Purely technological constraints on our ability to engineer earth systems arise from our limited ability to measure and predict system responses and from limits on our ability to manage large engineering projects. Trends in all three constraints suggest a rapid growth in our ability to engineer the planet. What are the implications of our growing ability to geoengineer? Will we see a reemergence of proposals to engineer our way out of the climate problem? How can we avoid the moral hazard posed by the knowledge that geoengineering might provide a backstop to climate damages? I will speculate about these issues, and suggest some institutional factors that may provide a stronger constraint on the use of geoengineering than is provided by any purely technological limit.

  18. A survey of the broadband shock associated noise prediction methods

    NASA Technical Reports Server (NTRS)

    Kim, Chan M.; Krejsa, Eugene A.; Khavaran, Abbas

    1992-01-01

    Several different prediction methods to estimate the broadband shock associated noise of a supersonic jet are introduced and compared with experimental data at various test conditions. The nozzle geometries considered for comparison include a convergent and a convergent-divergent nozzle, both axisymmetric. Capabilities and limitations of prediction methods in incorporating the two nozzle geometries, flight effect, and temperature effect are discussed. Predicted noise field shows the best agreement for a convergent nozzle geometry under static conditions. Predicted results for nozzles in flight show larger discrepancies from data and more dependable flight data are required for further comparison. Qualitative effects of jet temperature, as observed in experiment, are reproduced in predicted results.

  19. Inferring interventional predictions from observational learning data.

    PubMed

    Meder, Bjorn; Hagmayer, York; Waldmann, Michael R

    2008-02-01

    Previous research has shown that people are capable of deriving correct predictions for previously unseen actions from passive observations of causal systems (Waldmann & Hagmayer, 2005). However, these studies were limited, since learning data were presented only as tabulated data, which may have turned the task into a reasoning task rather than a learning task. In two experiments, we therefore presented learners with trial-by-trial observational learning input referring to a complex causal model consisting of four events. To test the robustness of the capacity to derive correct observational and interventional inferences, we pitted causal order against the temporal order of learning events. The results show that people are, in principle, capable of deriving correct predictions after purely observational trial-by-trial learning, even with relatively complex causal models. However, conflicting temporal information can impair performance, particularly when the inferences require taking alternative causal pathways into account.

  20. Method to predict external store carriage characteristics at transonic speeds

    NASA Technical Reports Server (NTRS)

    Rosen, Bruce S.

    1988-01-01

    Development of a computational method for prediction of external store carriage characteristics at transonic speeds is described. The geometric flexibility required for treatment of pylon-mounted stores is achieved by computing finite difference solutions on a five-level embedded grid arrangement. A completely automated grid generation procedure facilitates applications. Store modeling capability consists of bodies of revolution with multiple fore and aft fins. A body-conforming grid improves the accuracy of the computed store body flow field. A nonlinear relaxation scheme developed specifically for modified transonic small disturbance flow equations enhances the method's numerical stability and accuracy. As a result, treatment of lower aspect ratio, more highly swept and tapered wings is possible. A limited supersonic freestream capability is also provided. Pressure, load distribution, and force/moment correlations show good agreement with experimental data for several test cases. A detailed computer program description for the Transonic Store Carriage Loads Prediction (TSCLP) Code is included.

  1. Identifying and Assessing Gaps in Subseasonal to Seasonal Prediction Skill using the North American Multi-model Ensemble

    NASA Astrophysics Data System (ADS)

    Pegion, K.; DelSole, T. M.; Becker, E.; Cicerone, T.

    2016-12-01

    Predictability represents the upper limit of prediction skill if we had an infinite member ensemble and a perfect model. It is an intrinsic limit of the climate system associated with the chaotic nature of the atmosphere. Producing a forecast system that can make predictions very near to this limit is the ultimate goal of forecast system development. Estimates of predictability together with calculations of current prediction skill are often used to define the gaps in our prediction capabilities on subseasonal to seasonal timescales and to inform the scientific issues that must be addressed to build the next forecast system. Quantification of the predictability is also important for providing a scientific basis for relaying to stakeholders what kind of climate information can be provided to inform decision-making and what kind of information is not possible given the intrinsic predictability of the climate system. One challenge with predictability estimates is that different prediction systems can give different estimates of the upper limit of skill. How do we know which estimate of predictability is most representative of the true predictability of the climate system? Previous studies have used the spread-error relationship and the autocorrelation to evaluate the fidelity of the signal and noise estimates. Using a multi-model ensemble prediction system, we can quantify whether these metrics accurately indicate an individual model's ability to properly estimate the signal, noise, and predictability. We use this information to identify the best estimates of predictability for 2-meter temperature, precipitation, and sea surface temperature from the North American Multi-model Ensemble and compare with current skill to indicate the regions with potential for improving skill.
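
    For concreteness (a standard signal-to-noise formulation, not a result specific to the NMME analysis described above): if an ensemble system estimates a signal variance \sigma_s^2 and a noise variance \sigma_n^2, the correlation skill attainable by a hypothetically perfect forecast of that system is bounded by

    \[ \rho_{\infty} = \sqrt{\frac{\sigma_s^{2}}{\sigma_s^{2} + \sigma_n^{2}}}, \]

    so differing model estimates of the signal and noise translate directly into differing estimates of the predictability limit.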

  2. Development of a Simulation Capability for the Space Station Active Rack Isolation System

    NASA Technical Reports Server (NTRS)

    Johnson, Terry L.; Tolson, Robert H.

    1998-01-01

    To realize quality microgravity science on the International Space Station, many microgravity facilities will utilize the Active Rack Isolation System (ARIS). Simulation capabilities for ARIS will be needed to predict the microgravity environment. This paper discusses the development of a simulation model for use in predicting the performance of the ARIS in attenuating disturbances with frequency content between 0.01 Hz and 10 Hz. The derivation of the model utilizes an energy-based approach. The complete simulation includes the dynamic model of the ISPR integrated with the model for the ARIS controller so that the entire closed-loop system is simulated. Preliminary performance predictions are made for the ARIS in attenuating both off-board disturbances as well as disturbances from hardware mounted onboard the microgravity facility. These predictions suggest that the ARIS does eliminate resonant behavior detrimental to microgravity experimentation. A limited comparison is made between the simulation predictions of ARIS attenuation of off-board disturbances and results from the ARIS flight test. These comparisons show promise, but further tuning of the simulation is needed.

  3. Efficient prediction of human protein-protein interactions at a global scale.

    PubMed

    Schoenrock, Andrew; Samanfar, Bahram; Pitre, Sylvain; Hooshyar, Mohsen; Jin, Ke; Phillips, Charles A; Wang, Hui; Phanse, Sadhna; Omidi, Katayoun; Gui, Yuan; Alamgir, Md; Wong, Alex; Barrenäs, Fredrik; Babu, Mohan; Benson, Mikael; Langston, Michael A; Green, James R; Dehne, Frank; Golshani, Ashkan

    2014-12-10

    Our knowledge of global protein-protein interaction (PPI) networks in complex organisms such as humans is hindered by technical limitations of current methods. On the basis of short co-occurring polypeptide regions, we developed a tool called MP-PIPE capable of predicting a global human PPI network within 3 months. With a recall of 23% at a precision of 82.1%, we predicted 172,132 putative PPIs. We demonstrate the usefulness of these predictions through a range of experiments. The speed and accuracy associated with MP-PIPE can make this a potential tool to study individual human PPI networks (from genomic sequences alone) for personalized medicine.

  4. Advances and Computational Tools towards Predictable Design in Biological Engineering

    PubMed Central

    2014-01-01

    The design process of complex systems in all fields of engineering requires a set of quantitatively characterized components and a method to predict the output of systems composed of such elements. This strategy relies on the modularity of the components used or on the prediction of their context-dependent behaviour when a part's functioning depends on the specific context. Mathematical models usually support the whole process by guiding the selection of parts and by predicting the output of interconnected systems. Such a bottom-up design process cannot be trivially adopted for biological systems engineering, since part function is hard to predict when components are reused in different contexts. This issue and the intrinsic complexity of living systems limit the capability of synthetic biologists to predict the quantitative behaviour of biological systems. The high potential of synthetic biology strongly depends on the capability of mastering this issue. This review discusses the predictability issues of basic biological parts (promoters, ribosome binding sites, coding sequences, transcriptional terminators, and plasmids) when used to engineer simple and complex gene expression systems in Escherichia coli. A comparison between bottom-up and trial-and-error approaches is performed for all the discussed elements, and mathematical models supporting the prediction of parts behaviour are illustrated. PMID:25161694

  5. High fidelity studies of exploding foil initiator bridges, Part 1: Experimental method

    NASA Astrophysics Data System (ADS)

    Bowden, Mike; Neal, William

    2017-01-01

    Simulations of high voltage detonators, such as Exploding Bridgewire (EBW) and Exploding Foil Initiators (EFI), have historically been simple, often empirical, one-dimensional models capable of predicting parameters such as current, voltage and in the case of EFIs, flyer velocity. Correspondingly, experimental methods have in general been limited to the same parameters. With the advent of complex, first principles magnetohydrodynamic codes such as ALEGRA and ALE-MHD, it is now possible to simulate these components in three dimensions, predicting a much greater range of parameters than before. A significant improvement in experimental capability was therefore required to ensure these simulations could be adequately validated. In this first paper of a three part study, the experimental method for determining the current, voltage, flyer velocity and multi-dimensional profile of detonator components is presented. This improved capability, along with high fidelity simulations, offer an opportunity to gain a greater understanding of the processes behind the functioning of EBW and EFI detonators.

  6. Maximal Predictability Approach for Identifying the Right Descriptors for Electrocatalytic Reactions.

    PubMed

    Krishnamurthy, Dilip; Sumaria, Vaidish; Viswanathan, Venkatasubramanian

    2018-02-01

    Density functional theory (DFT) calculations are being routinely used to identify new material candidates that approach activity near fundamental limits imposed by thermodynamics or scaling relations. DFT calculations are associated with inherent uncertainty, which limits the ability to delineate materials (distinguishability) that possess high activity. Development of error-estimation capabilities in DFT has enabled uncertainty propagation through activity-prediction models. In this work, we demonstrate an approach to propagating uncertainty through thermodynamic activity models leading to a probability distribution of the computed activity and thereby its expectation value. A new metric, prediction efficiency, is defined, which provides a quantitative measure of the ability to distinguish activity of materials and can be used to identify the optimal descriptor(s) ΔG_opt. We demonstrate the framework for four important electrochemical reactions: hydrogen evolution, chlorine evolution, oxygen reduction and oxygen evolution. Future studies could utilize expected activity and prediction efficiency to significantly improve the prediction accuracy of highly active material candidates.
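
    A minimal Monte Carlo sketch of the propagation step described above, assuming a Gaussian DFT error estimate on the descriptor ΔG and a generic volcano-shaped activity model; the functional form, spread, and values are illustrative, not those used in the paper.

```python
import numpy as np

def activity(dG, dG_opt=0.0, slope=1.0):
    """Generic volcano: activity falls off linearly on either side of dG_opt."""
    return -slope * np.abs(dG - dG_opt)

def expected_activity(dG_mean, dft_sigma, n_samples=100_000, seed=0):
    """Propagate a Gaussian DFT uncertainty through the activity model and
    return the expectation value and spread of the predicted activity."""
    rng = np.random.default_rng(seed)
    samples = activity(rng.normal(dG_mean, dft_sigma, n_samples))
    return samples.mean(), samples.std()

# Two candidate materials whose descriptor uncertainty bands overlap
print(expected_activity(dG_mean=0.05, dft_sigma=0.2))
print(expected_activity(dG_mean=0.15, dft_sigma=0.2))
```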

  7. NASA's Evolutionary Xenon Thruster (NEXT) Long-Duration Test as of 736 kg of Propellant Throughput

    NASA Technical Reports Server (NTRS)

    Shastry, Rohit; Herman, Daniel A.; Soulas, George C.; Patterson, Michael J.

    2012-01-01

    NASA's Evolutionary Xenon Thruster (NEXT) program is developing the next-generation solar-electric ion propulsion system with significant enhancements beyond the state-of-the-art NASA Solar Electric Propulsion Technology Application Readiness (NSTAR) ion propulsion system to provide future NASA science missions with enhanced mission capabilities. A Long-Duration Test (LDT) was initiated in June 2005 to validate the thruster service life modeling and to qualify the thruster propellant throughput capability. The thruster has set electric propulsion records for the longest operating duration, highest propellant throughput, and most total impulse demonstrated. At the time of this publication, the NEXT LDT has surpassed 42,100 h of operation, processed more than 736 kg of xenon propellant, and demonstrated greater than 28.1 MN s total impulse. Thruster performance has been steady with negligible degradation. The NEXT thruster design has mitigated several lifetime-limiting mechanisms encountered in the NSTAR design, including the NSTAR first failure mode, thereby drastically improving thruster capabilities. Component erosion rates and the progression of the predicted life-limiting erosion mechanism for the thruster compare favorably to pretest predictions based upon semi-empirical ion thruster models used in the thruster service life assessment. Service life model validation has been accomplished by the NEXT LDT. Assuming full-power operation until test article failure, the models and extrapolated erosion data predict penetration of the accelerator grid grooves after more than 45,000 hours of operation while processing over 800 kg of xenon propellant. Thruster failure due to degradation of the accelerator grid structural integrity is expected after groove penetration.

  8. Status of the NASA's Evolutionary Xenon Thruster (NEXT) Long-Duration Test After 30,352 Hours of Operation

    NASA Technical Reports Server (NTRS)

    Herman, Daniel A.

    2010-01-01

    NASA's Evolutionary Xenon Thruster (NEXT) program is tasked with significantly improving and extending the capabilities of the current state-of-the-art NSTAR thruster. The service life capability of the NEXT ion thruster is being assessed by thruster wear testing and life-modeling of critical thruster components, such as the ion optics and cathodes. The NEXT Long-Duration Test (LDT) was initiated to validate and qualify the NEXT thruster propellant throughput capability. The NEXT thruster completed the primary goal of the LDT; namely, to demonstrate the project qualification throughput of 450 kg by the end of calendar year 2009. The NEXT LDT has demonstrated 30,352 hr of operation and processed 490 kg of xenon throughput, surpassing the hours demonstrated in the NSTAR Extended Life Test and more than doubling the throughput demonstrated by the NSTAR flight-spare. Thruster performance changes have been consistent with a priori predictions. Thruster erosion has been minimal and consistent with the thruster service life assessment, which predicts the first failure mode at greater than 750 kg throughput. The life-limiting failure mode for NEXT is predicted to be loss of structural integrity of the accelerator grid due to erosion by charge-exchange ions.

  10. Modeling and Analysis of Global and Regional Climate Change in Relation to Atmospheric Hydrologic Processes

    NASA Technical Reports Server (NTRS)

    Johnson, Donald R.

    2001-01-01

    This research was directed to the development and application of global isentropic modeling and analysis capabilities to describe hydrologic processes and energy exchange in the climate system, and discern regional climate change. An additional objective was to investigate the accuracy and theoretical limits of global climate predictability which are imposed by the inherent limitations of simulating trace constituent transport and the hydrologic processes of condensation, precipitation and cloud life cycles.

  11. Uncertainty quantification's role in modeling and simulation planning, and credibility assessment through the predictive capability maturity model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rider, William J.; Witkowski, Walter R.; Mousseau, Vincent Andrew

    2016-04-13

    The importance of credible, trustworthy numerical simulations is obvious, especially when using the results for making high-consequence decisions. Determining the credibility of such numerical predictions is much more difficult and requires a systematic approach to assessing predictive capability, associated uncertainties and overall confidence in the computational simulation process for the intended use of the model. This process begins with an evaluation of the computational modeling of the identified, important physics of the simulation for its intended use. This is commonly done through a Phenomena Identification Ranking Table (PIRT). Then an assessment of the evidence basis supporting the ability to computationally simulate these physics can be performed using various frameworks such as the Predictive Capability Maturity Model (PCMM). There are several critical activities that follow in the areas of code and solution verification, validation and uncertainty quantification, which will be described in detail in the following sections. Here, we introduce the subject matter for general applications but specifics are given for the failure prediction project. In addition, the first task that must be completed in the verification & validation procedure is to perform a credibility assessment to fully understand the requirements and limitations of the current computational simulation capability for the specific application intended use. The PIRT and PCMM are tools used at Sandia National Laboratories (SNL) to provide a consistent manner to perform such an assessment. Ideally, all stakeholders should be represented and contribute to perform an accurate credibility assessment. PIRTs and PCMMs are both described in brief detail below and the resulting assessments for an example project are given.
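
    A minimal sketch of how a PCMM-style assessment might be tabulated is given below; the element names follow common descriptions of the framework, while the 0-3 maturity scale and the required level are illustrative assumptions, not SNL's actual scoring for the failure prediction project.

```python
# Illustrative sketch of a PCMM-style credibility rollup. Element names follow
# common descriptions of the framework; the 0-3 maturity scale and the required
# level below are hypothetical, not SNL's scoring for any specific project.
PCMM_ELEMENTS = [
    "representation and geometric fidelity",
    "physics and material model fidelity",
    "code verification",
    "solution verification",
    "model validation",
    "uncertainty quantification and sensitivity analysis",
]

def elements_below_required(scores: dict[str, int], required_level: int = 2) -> list[str]:
    """Return the PCMM elements whose assessed maturity (0-3) falls below the
    level required for the intended use of the model."""
    return [e for e in PCMM_ELEMENTS if scores.get(e, 0) < required_level]

# Hypothetical assessment for a failure-prediction application.
scores = {e: 2 for e in PCMM_ELEMENTS}
scores["uncertainty quantification and sensitivity analysis"] = 1
print(elements_below_required(scores))  # flags the UQ element for further work
```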

  12. Modeling the viscosity of polydisperse suspensions: Improvements in prediction of limiting behavior

    NASA Astrophysics Data System (ADS)

    Mwasame, Paul M.; Wagner, Norman J.; Beris, Antony N.

    2016-06-01

    The present study develops a fully consistent extension of the approach pioneered by Farris ["Prediction of the viscosity of multimodal suspensions from unimodal viscosity data," Trans. Soc. Rheol. 12, 281-301 (1968)] to describe the viscosity of polydisperse suspensions significantly improving upon our previous model [P. M. Mwasame, N. J. Wagner, and A. N. Beris, "Modeling the effects of polydispersity on the viscosity of noncolloidal hard sphere suspensions," J. Rheol. 60, 225-240 (2016)]. The new model captures the Farris limit of large size differences between consecutive particle size classes in a suspension. Moreover, the new model includes a further generalization that enables its application to real, complex suspensions that deviate from ideal non-colloidal suspension behavior. The capability of the new model to predict the viscosity of complex suspensions is illustrated by comparison against experimental data.
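
    For orientation (a standard statement of the limit, not quoted from the cited papers), the Farris limit for size classes ordered from finest (1) to coarsest (n), with large size ratios between consecutive classes, factorizes the relative viscosity as

```latex
\[
\eta_{r}(\phi_{1},\dots,\phi_{n}) \;=\; \prod_{i=1}^{n}\eta_{r,\mathrm{mono}}\!\left(\phi_{i}^{\mathrm{eff}}\right),
\qquad
\phi_{i}^{\mathrm{eff}} = \frac{\phi_{i}}{1-\sum_{j>i}\phi_{j}},
\]
```

    i.e., each size class sees the liquid plus all finer classes as an effective suspending medium.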

  13. Neural network based automatic limit prediction and avoidance system and method

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J. (Inventor); Prasad, Jonnalagadda V. R. (Inventor); Horn, Joseph F. (Inventor)

    2001-01-01

    A method for performance envelope boundary cueing for a vehicle control system comprises the steps of formulating a prediction system for a neural network and training the neural network to predict values of limited parameters as a function of current control positions and current vehicle operating conditions. The method further comprises the steps of applying the neural network to the control system of the vehicle, where the vehicle has capability for measuring current control positions and current vehicle operating conditions. The neural network generates a map of current control positions and vehicle operating conditions versus the limited parameters in a pre-determined vehicle operating condition. The method estimates critical control deflections from the current control positions required to drive the vehicle to a performance envelope boundary. Finally, the method comprises the steps of communicating the critical control deflection to the vehicle control system; and driving the vehicle control system to provide a tactile cue to an operator of the vehicle as the control positions approach the critical control deflections.
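
    The sketch below illustrates the limit-prediction and cueing loop in schematic form; the surrogate network, limited parameter, and numbers are hypothetical and are not taken from the patent.

```python
import numpy as np

# Placeholder standing in for the trained neural network described above; in
# practice it maps [control positions, operating conditions] -> limited parameter.
def net(x: np.ndarray) -> float:
    return float(50.0 + 40.0 * np.tanh(x.sum()))   # hypothetical "load" surrogate

def predict_limit_param(controls, conditions) -> float:
    return net(np.concatenate([controls, conditions]))

def critical_deflection(controls, conditions, limit_value,
                        axis=0, step=0.01, max_travel=1.0) -> float:
    """Estimate the additional deflection on one control axis that drives the
    predicted parameter to the envelope boundary (simple forward search)."""
    c = np.array(controls, dtype=float)
    delta = 0.0
    while delta < max_travel:
        c[axis] += step
        delta += step
        if predict_limit_param(c, conditions) >= limit_value:
            break
    return delta   # passed to the control loader to generate a tactile (soft-stop) cue

print(critical_deflection(controls=[0.1, -0.2], conditions=[0.4, 0.0, 0.3],
                          limit_value=80.0))
```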

  14. Using APEX to Model Anticipated Human Error: Analysis of a GPS Navigational Aid

    NASA Technical Reports Server (NTRS)

    VanSelst, Mark; Freed, Michael; Shefto, Michael (Technical Monitor)

    1997-01-01

    The interface development process can be dramatically improved by predicting design-facilitated human error at an early stage in the design process. The approach we advocate is to SIMULATE the behavior of a human agent carrying out tasks with a well-specified user interface, ANALYZE the simulation for instances of human error, and then REFINE the interface or protocol to minimize predicted error. This approach, incorporated into the APEX modeling architecture, differs from past approaches to human simulation in its emphasis on error rather than, e.g., learning rate or speed of response. The APEX model consists of two major components: (1) a powerful action selection component capable of simulating behavior in complex, multiple-task environments; and (2) a resource architecture which constrains cognitive, perceptual, and motor capabilities to within empirically demonstrated limits. The model mimics human errors arising from interactions between limited human resources and elements of the computer interface whose design fails to anticipate those limits. We analyze the design of a hand-held Global Positioning System (GPS) device used for tactical and navigational decisions in small-yacht racing. The analysis demonstrates how human system modeling can be an effective design aid, helping to accelerate the process of refining a product (or procedure).

  15. Nondestructive strain depth profiling with high energy X-ray diffraction: System capabilities and limitations

    NASA Astrophysics Data System (ADS)

    Zhang, Zhan; Wendt, Scott; Cosentino, Nicholas; Bond, Leonard J.

    2018-04-01

    Limited by photon energy and penetration capability, traditional X-ray diffraction (XRD) strain measurements can only probe depths of a few microns because they rely on copper (Cu Kα1) or molybdenum (Mo Kα1) characteristic radiation. For deeper strain depth profiling, destructive methods are commonly necessary to access the layers of interest by removing material. To investigate deeper depth profiles nondestructively, a laboratory bench-top high-energy X-ray diffraction (HEXRD) system was previously developed. The HEXRD method uses an industrial 320 kVp X-ray tube and the Kα1 characteristic peak of tungsten to produce a higher-intensity X-ray beam that enables depth-profiling measurement of lattice strain. An aluminum sample was investigated, with deformation/load applied using a bending rig. The HEXRD method was shown to be capable of strain depth profiling to 2.5 mm. The method was validated on an aluminum sample for which data from both the HEXRD method and traditional X-ray diffraction were compared with results obtained by destructive layer removal (etching) performed by a commercial provider; the results demonstrate comparable accuracy up to 0.8 mm depth. Nevertheless, the higher attenuation of heavier metals limits application to other materials. Simulations predict that HEXRD should work in steel and nickel up to depths of about 200 µm, but experimental results indicate that HEXRD strain profiling is not practical for these materials, as the measured diffraction signals are undetectable against the noise.
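
    The attenuation constraint mentioned above is the usual Beer-Lambert behaviour (noted here for orientation, not a result of the study): with linear attenuation coefficient μ of the material at the W Kα1 energy, the transmitted intensity and characteristic sampling depth are

```latex
\[
I(x) = I_{0}\,e^{-\mu x}, \qquad x_{1/e} = \frac{1}{\mu},
\]
```

    and since the diffracted beam must also exit the sample, the usable depth scales roughly with 1/(2μ), which is far smaller in steel and nickel than in aluminum.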

  16. A demonstration of motion base design alternatives for the National Advanced Driving Simulator

    NASA Technical Reports Server (NTRS)

    Mccauley, Michael E.; Sharkey, Thomas J.; Sinacori, John B.; Laforce, Soren; Miller, James C.; Cook, Anthony

    1992-01-01

    A demonstration of the capability of NASA's Vertical Motion Simulator to simulate two alternative motion base designs for the National Advanced Driving Simulator (NADS) is reported. The VMS is located at ARC. The motion base conditions used in this demonstration were as follows: (1) a large translational motion base; and (2) a motion base design with limited translational capability. The latter had translational capability representative of a typical synergistic motion platform. These alternatives were selected to test the prediction that large amplitude translational motion would result in a lower incidence or severity of simulator induced sickness (SIS) than would a limited translational motion base. A total of 10 drivers performed two tasks, slaloms and quick-stops, using each of the motion bases. Physiological, objective, and subjective measures were collected. No reliable differences in SIS between the motion base conditions were found in this demonstration. However, in light of the cost considerations and engineering challenges associated with implementing a large translational motion base, performance of a formal study is recommended.

  17. Quantitative Comparison of Photothermal Heat Generation between Gold Nanospheres and Nanorods.

    PubMed

    Qin, Zhenpeng; Wang, Yiru; Randrianalisoa, Jaona; Raeesi, Vahid; Chan, Warren C W; Lipiński, Wojciech; Bischof, John C

    2016-07-21

    Gold nanoparticles (GNPs) are widely used for biomedical applications due to unique optical properties, established synthesis methods, and biological compatibility. Despite important applications of plasmonic heating in thermal therapy, imaging, and diagnostics, the lack of quantification in heat generation leads to difficulties in comparing the heating capability for new plasmonic nanostructures and predicting the therapeutic and diagnostic outcome. This study quantifies GNP heat generation by experimental measurements and theoretical predictions for gold nanospheres (GNS) and nanorods (GNR). Interestingly, the results show a GNP-type dependent agreement between experiment and theory. The measured heat generation of GNS matches well with theory, while the measured heat generation of GNR is only 30% of that predicted theoretically at peak absorption. This then leads to a surprising finding that the polydispersity, the deviation of nanoparticle size and shape from nominal value, significantly influences GNR heat generation (>70% reduction), while having a limited effect for GNS (<10% change). This work demonstrates that polydispersity is an important metric in quantitatively predicting plasmonic heat generation and provides a validated framework to quantitatively compare the heating capabilities between gold and other plasmonic nanostructures.
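
    For reference (a standard relation, not quoted from the paper), the per-particle heat generation rate that such comparisons are built on is

```latex
\[
q_{\mathrm{nano}} = C_{\mathrm{abs}}(\lambda)\, I,
\]
```

    where C_abs is the absorption cross section at the laser wavelength and I the local irradiance; polydispersity enters by averaging C_abs over the measured size and shape distribution rather than using the nominal particle geometry.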

  18. Quantitative Comparison of Photothermal Heat Generation between Gold Nanospheres and Nanorods

    NASA Astrophysics Data System (ADS)

    Qin, Zhenpeng; Wang, Yiru; Randrianalisoa, Jaona; Raeesi, Vahid; Chan, Warren C. W.; Lipiński, Wojciech; Bischof, John C.

    2016-07-01

    Gold nanoparticles (GNPs) are widely used for biomedical applications due to unique optical properties, established synthesis methods, and biological compatibility. Despite important applications of plasmonic heating in thermal therapy, imaging, and diagnostics, the lack of quantification in heat generation leads to difficulties in comparing the heating capability for new plasmonic nanostructures and predicting the therapeutic and diagnostic outcome. This study quantifies GNP heat generation by experimental measurements and theoretical predictions for gold nanospheres (GNS) and nanorods (GNR). Interestingly, the results show a GNP-type dependent agreement between experiment and theory. The measured heat generation of GNS matches well with theory, while the measured heat generation of GNR is only 30% of that predicted theoretically at peak absorption. This then leads to a surprising finding that the polydispersity, the deviation of nanoparticle size and shape from nominal value, significantly influences GNR heat generation (>70% reduction), while having a limited effect for GNS (<10% change). This work demonstrates that polydispersity is an important metric in quantitatively predicting plasmonic heat generation and provides a validated framework to quantitatively compare the heating capabilities between gold and other plasmonic nanostructures.

  19. Optimization of a novel biophysical model using large scale in vivo antisense hybridization data displays improved prediction capabilities of structurally accessible RNA regions

    PubMed Central

    Vazquez-Anderson, Jorge; Mihailovic, Mia K.; Baldridge, Kevin C.; Reyes, Kristofer G.; Haning, Katie; Cho, Seung Hee; Amador, Paul; Powell, Warren B.

    2017-01-01

    Current approaches to design efficient antisense RNAs (asRNAs) rely primarily on a thermodynamic understanding of RNA–RNA interactions. However, these approaches depend on structure predictions and have limited accuracy, arguably due to overlooking important cellular environment factors. In this work, we develop a biophysical model to describe asRNA–RNA hybridization that incorporates in vivo factors using large-scale experimental hybridization data for three model RNAs: a group I intron, CsrB and a tRNA. A unique element of our model is the estimation of the availability of the target region to interact with a given asRNA using a differential entropic consideration of suboptimal structures. We showcase the utility of this model by evaluating its prediction capabilities in four additional RNAs: a group II intron, Spinach II, 2-MS2 binding domain and glgC 5′ UTR. Additionally, we demonstrate the applicability of this approach to other bacterial species by predicting sRNA–mRNA binding regions in two newly discovered, though uncharacterized, regulatory RNAs. PMID:28334800

  20. Hydrological Predictability for the Peruvian Amazon

    NASA Astrophysics Data System (ADS)

    Towner, Jamie; Stephens, Elizabeth; Cloke, Hannah; Bazo, Juan; Coughlan, Erin; Zsoter, Ervin

    2017-04-01

    Population growth in the Peruvian Amazon has prompted the expansion of livelihoods further into the floodplain and thus increasing vulnerability to the annual rise and fall of the river. This growth has coincided with a period of increasing hydrological extremes with more frequent severe flood events. The anticipation and forecasting of these events are crucial for mitigating vulnerability. Forecast-based Financing (FbF), an initiative of the German Red Cross, implements risk-reducing actions based on threshold exceedance within hydrometeorological forecasts using the Global Flood Awareness System (GloFAS). However, the lead times required to complete certain actions can be long (e.g. several weeks to months ahead to purchase materials and reinforce houses) and are beyond the current capabilities of GloFAS. Therefore, further calibration of the model is required in addition to understanding the climatic drivers and associated hydrological response for specific flood events, such as those observed in 2009, 2012 and 2015. This review sets out to determine the current capabilities of the GloFAS model while exploring the limits of predictability for the Amazon basin. More specifically, it examines how the temporal patterns of flow within the main coinciding tributaries correspond to the overall Amazonian flood wave under various climatic and meteorological influences. Linking the source areas of flow to predictability within the seasonal forecasting system will develop the ability to expand the limit of predictability of the flood wave. This presentation will focus on the Iquitos region of Peru, while providing an overview of the new techniques and current challenges faced within seasonal flood prediction.

  1. Limitations in predicting the space radiation health risk for exploration astronauts.

    PubMed

    Chancellor, Jeffery C; Blue, Rebecca S; Cengel, Keith A; Auñón-Chancellor, Serena M; Rubins, Kathleen H; Katzgraber, Helmut G; Kennedy, Ann R

    2018-01-01

    Despite years of research, understanding of the space radiation environment and the risk it poses to long-duration astronauts remains limited. There is a disparity between research results and observed empirical effects seen in human astronaut crews, likely due to the numerous factors that limit terrestrial simulation of the complex space environment and extrapolation of human clinical consequences from varied animal models. Given the intended future of human spaceflight, with efforts now to rapidly expand capabilities for human missions to the moon and Mars, there is a pressing need to improve upon the understanding of the space radiation risk, predict likely clinical outcomes of interplanetary radiation exposure, and develop appropriate and effective mitigation strategies for future missions. To achieve this goal, the space radiation and aerospace community must recognize the historical limitations of radiation research and how such limitations could be addressed in future research endeavors. We have sought to highlight the numerous factors that limit understanding of the risk of space radiation for human crews and to identify ways in which these limitations could be addressed for improved understanding and appropriate risk posture regarding future human spaceflight.

  2. Observer properties for understanding dynamical displays: Capacities, limitations, and defaults

    NASA Technical Reports Server (NTRS)

    Proffitt, Dennis R.; Kaiser, Mary K.

    1991-01-01

    People's ability to extract relevant information while viewing ongoing events is discussed in terms of human capabilities, limitations, and defaults. A taxonomy of event complexity is developed which predicts which dynamical events people can and cannot construe. This taxonomy is related to the distinction drawn in classical mechanics between particle and extended body motions. People's commonsense understandings of simple mechanical systems are impacted little by formal training, but rather reflect heuristical simplifications that focus on a single dimension of perceived dynamical relevance.

  3. Directed Nanopatterning with Nonlinear Laser Lithography

    NASA Astrophysics Data System (ADS)

    Tokel, Onur; Yavuz, Ozgun; Ergecen, Emre; Pavlov, Ihor; Makey, Ghaith; Ilday, Fatih Omer

    In spite of the successes of maskless optical nanopatterning methods, it remains extremely challenging to create any isotropic, periodic nanopattern. Further, available optical techniques lack the long-range coverage and high periodicity demanded by photonics and photovoltaics applications. Here, we provide a novel solution with the Nonlinear Laser Lithography (NLL) approach. Notably, we demonstrate that self-organized nanopatterns can be produced in all possible Bravais lattice types. Further, we show that carefully chosen defects or structured noise can direct NLL symmetries. Exploitation of directed self-organization to select or guide to predetermined symmetries is a new capability. Predictive capabilities for such far-from-equilibrium, dissipative systems are very limited due to a lack of experimental systems with predictive models. Here we also present a completely predictive model, and experimentally confirm that the emergence of motifs can be regulated by engineering defects, while the polarization of the ultrafast laser prescribes lattice symmetry, which in turn reinforces translational invariance. Thus, NLL enables a novel, maskless nanofabrication approach, where laser-induced nanopatterns can be rapidly created in any lattice symmetry.

  4. Evaluation of icing drag coefficient correlations applied to iced propeller performance prediction

    NASA Technical Reports Server (NTRS)

    Miller, Thomas L.; Shaw, R. J.; Korkan, K. D.

    1987-01-01

    Evaluation of three empirical icing drag coefficient correlations is accomplished through application to a set of propeller icing data. The various correlations represent the best means currently available for relating drag rise to various flight and atmospheric conditions for both fixed-wing and rotating airfoils, and the work presented here illustrates and evaluates one such application of the latter case. The origins of each of the correlations are discussed, and their apparent capabilities and limitations are summarized. These correlations have been made to be an integral part of a computer code, ICEPERF, which has been designed to calculate iced propeller performance. Comparison with experimental propeller icing data shows generally good agreement, with the quality of the predicted results seen to be directly related to the radial icing extent of each case. The code's capability to properly predict thrust coefficient, power coefficient, and propeller efficiency is shown to be strongly dependent on the choice of correlation selected, as well as upon proper specification of radial icing extent.

  5. New smoke predictions for Alaska in NOAA’s National Air Quality Forecast Capability

    NASA Astrophysics Data System (ADS)

    Davidson, P. M.; Ruminski, M.; Draxler, R.; Kondragunta, S.; Zeng, J.; Rolph, G.; Stajner, I.; Manikin, G.

    2009-12-01

    Smoke from wildfire is an important component of fine particle pollution, which is responsible for tens of thousands of premature deaths each year in the US. In Alaska, wildfire smoke is the leading cause of poor air quality in summer. Smoke forecast guidance helps air quality forecasters and the public take steps to limit exposure to airborne particulate matter. A new smoke forecast guidance tool, built by a cross-NOAA team, leverages efforts of NOAA’s partners at the USFS on wildfire emissions information, and with EPA, in coordinating with state/local air quality forecasters. Required operational deployment criteria, in categories of objective verification, subjective feedback, and production readiness, have been demonstrated in experimental testing during 2008-2009, for addition to the operational products in NOAA's National Air Quality Forecast Capability. The Alaska smoke forecast tool is an adaptation of NOAA’s smoke predictions implemented operationally for the lower 48 states (CONUS) in 2007. The tool integrates satellite information on location of wildfires with weather (North American mesoscale model) and smoke dispersion (HYSPLIT) models to produce daily predictions of smoke transport for Alaska, in binary and graphical formats. Hour-by-hour predictions at 12km grid resolution of smoke at the surface and in the column are provided each day by 13 UTC, extending through midnight next day. Forecast accuracy and reliability are monitored against benchmark criteria. While wildfire activity in the CONUS is year-round, the intense wildfire activity in AK is limited to the summer. Initial experimental testing during summer 2008 was hindered by unusually limited wildfire activity and very cloudy conditions. In contrast, heavier than average wildfire activity during summer 2009 provided a representative basis (more than 60 days of wildfire smoke) for demonstrating required prediction accuracy. A new satellite observation product was developed for routine near-real time verification of these predictions. The footprint of the predicted smoke from identified fires is verified with satellite observations of the spatial extent of smoke aerosols (5km resolution). Based on geostationary aerosol optical depth measurements that provide good time resolution of the horizontal spatial extent of the plumes, these observations do not yield quantitative concentrations of smoke particles at the surface. Predicted surface smoke concentrations are consistent with the limited number of in situ observations of total fine particle mass from all sources; however, they are much higher than those predicted for most CONUS fires. To assess uncertainty associated with fire emissions estimates, sensitivity analyses are in progress.

  6. HANFORD DST THERMAL & SEISMIC PROJECT ANSYS BENCHMARK ANALYSIS OF SEISMIC INDUCED FLUID STRUCTURE INTERACTION IN A HANFORD DOUBLE SHELL PRIMARY TANK

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MACKEY, T.C.

    M&D Professional Services, Inc. (M&D) is under subcontract to Pacific Northwest National Laboratories (PNNL) to perform seismic analysis of the Hanford Site Double-Shell Tanks (DSTs) in support of a project entitled ''Double-Shell Tank (DST) Integrity Project - DST Thermal and Seismic Analyses''. The overall scope of the project is to complete an up-to-date comprehensive analysis of record of the DST System at Hanford in support of Tri-Party Agreement Milestone M-48-14. The work described herein was performed in support of the seismic analysis of the DSTs. The thermal and operating loads analysis of the DSTs is documented in Rinker et al. (2004). The overall seismic analysis of the DSTs is being performed with the general-purpose finite element code ANSYS. The overall model used for the seismic analysis of the DSTs includes the DST structure, the contained waste, and the surrounding soil. The seismic analysis of the DSTs must address the fluid-structure interaction behavior and sloshing response of the primary tank and contained liquid. ANSYS has demonstrated capabilities for structural analysis, but the capabilities and limitations of ANSYS to perform fluid-structure interaction are less well understood. The purpose of this study is to demonstrate the capabilities and investigate the limitations of ANSYS for performing a fluid-structure interaction analysis of the primary tank and contained waste. To this end, the ANSYS solutions are benchmarked against theoretical solutions appearing in BNL 1995, when such theoretical solutions exist. When theoretical solutions were not available, comparisons were made to theoretical solutions of similar problems and to the results from Dytran simulations. The capabilities and limitations of the finite element code Dytran for performing a fluid-structure interaction analysis of the primary tank and contained waste were explored in a parallel investigation (Abatt 2006). In conjunction with the results of the global ANSYS analysis reported in Carpenter et al. (2006), the results of the two investigations will be compared to help determine if a more refined sub-model of the primary tank is necessary to capture the important fluid-structure interaction effects in the tank and if so, how to best utilize a refined sub-model of the primary tank. Both rigid tank and flexible tank configurations were analyzed with ANSYS. The response parameters of interest are total hydrodynamic reaction forces, impulsive and convective mode frequencies, waste pressures, and slosh heights. To a limited extent, tank stresses are also reported. The results of this study demonstrate that the ANSYS model has the capability to adequately predict global responses such as frequencies and overall reaction forces. Thus, the model is suitable for predicting the global response of the tank and contained waste. On the other hand, while the ANSYS model is capable of adequately predicting waste pressures and primary tank stresses in a large portion of the waste tank, the model does not accurately capture the convective behavior of the waste near the free surface, nor did the model give accurate predictions of slosh heights. Based on the ability of the ANSYS benchmark model to accurately predict frequencies and global reaction forces and on the results presented in Abatt et al. (2006), the global ANSYS model described in Carpenter et al. (2006) is sufficient for the seismic evaluation of all tank components except for local areas of the primary tank. Due to the limitations of the ANSYS model in predicting the convective response of the waste, the evaluation of primary tank stresses near the waste free surface should be supplemented by results from an ANSYS sub-model of the primary tank that incorporates pressures from theoretical solutions or from Dytran solutions. However, the primary tank is expected to have low demand-to-capacity ratios in the upper wall. Moreover, due to the less than desired mesh resolution in the primary tank knuckle of the global ANSYS model, the evaluation of the primary tank stresses in the lower knuckle should be supplemented by results from a more refined ANSYS sub-model of the primary tank that incorporates pressures from theoretical solutions or from Dytran solutions.

  7. Current status of one- and two-dimensional numerical models: Successes and limitations

    NASA Technical Reports Server (NTRS)

    Schwartz, R. J.; Gray, J. L.; Lundstrom, M. S.

    1985-01-01

    The capabilities of one and two-dimensional numerical solar cell modeling programs (SCAP1D and SCAP2D) are described. The occasions when a two-dimensional model is required are discussed. The application of the models to design, analysis, and prediction are presented along with a discussion of problem areas for solar cell modeling.

  8. A review of methods for predicting air pollution dispersion

    NASA Technical Reports Server (NTRS)

    Mathis, J. J., Jr.; Grose, W. L.

    1973-01-01

    Air pollution modeling, and problem areas in air pollution dispersion modeling were surveyed. Emission source inventory, meteorological data, and turbulent diffusion are discussed in terms of developing a dispersion model. Existing mathematical models of urban air pollution, and highway and airport models are discussed along with their limitations. Recommendations for improving modeling capabilities are included.

  9. A methodology for reduced order modeling and calibration of the upper atmosphere

    NASA Astrophysics Data System (ADS)

    Mehta, Piyush M.; Linares, Richard

    2017-10-01

    Atmospheric drag is the largest source of uncertainty in accurately predicting the orbit of satellites in low Earth orbit (LEO). Accurately predicting drag for objects that traverse LEO is critical to space situational awareness. Atmospheric models used for orbital drag calculations can be characterized either as empirical or physics-based (first principles based). Empirical models are fast to evaluate but offer limited real-time predictive/forecasting ability, while physics-based models offer greater predictive/forecasting ability but require dedicated parallel computational resources. Also, calibration with accurate data is required for either type of model. This paper presents a new methodology based on proper orthogonal decomposition toward development of a quasi-physical, predictive, reduced order model that combines the speed of empirical models with the predictive/forecasting capabilities of physics-based models. The methodology is developed to reduce the high dimensionality of physics-based models while maintaining their capabilities. We develop the methodology using the Naval Research Lab's Mass Spectrometer Incoherent Scatter model and show that the diurnal and seasonal variations can be captured using a small number of modes and parameters. We also present calibration of the reduced order model using the CHAMP and GRACE accelerometer-derived densities. Results show that the method performs well for modeling and calibration of the upper atmosphere.
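
    A minimal sketch of the POD step described above, with random data standing in for gridded model densities (the array sizes and mode count are placeholders, not the authors' configuration):

```python
import numpy as np

# Snapshot matrix: each column is the (log-)density field on the model grid at
# one epoch. Random data stands in here for atmospheric model output.
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((5000, 300))        # (grid points, epochs)

mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)

r = 10                                 # retain a small number of POD modes
modes = U[:, :r]                       # spatial modes
coeffs = np.diag(s[:r]) @ Vt[:r, :]    # temporal coefficients to be modeled/calibrated

# Fraction of the variance captured by the truncation
energy = (s[:r] ** 2).sum() / (s ** 2).sum()
print(f"retained {r} modes capturing {energy:.1%} of the variance")
```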

  10. Phonon-tunnelling dissipation in mechanical resonators

    PubMed Central

    Cole, Garrett D.; Wilson-Rae, Ignacio; Werbach, Katharina; Vanner, Michael R.; Aspelmeyer, Markus

    2011-01-01

    Microscale and nanoscale mechanical resonators have recently emerged as ubiquitous devices for use in advanced technological applications, for example, in mobile communications and inertial sensors, and as novel tools for fundamental scientific endeavours. Their performance is in many cases limited by the deleterious effects of mechanical damping. In this study, we report a significant advancement towards understanding and controlling support-induced losses in generic mechanical resonators. We begin by introducing an efficient numerical solver, based on the 'phonon-tunnelling' approach, capable of predicting the design-limited damping of high-quality mechanical resonators. Further, through careful device engineering, we isolate support-induced losses and perform a rigorous experimental test of the strong geometric dependence of this loss mechanism. Our results are in excellent agreement with the theory, demonstrating the predictive power of our approach. In combination with recent progress on complementary dissipation mechanisms, our phonon-tunnelling solver represents a major step towards accurate prediction of the mechanical quality factor. PMID:21407197

  11. The role of thermal and lubricant boundary layers in the transient thermal analysis of spur gears

    NASA Technical Reports Server (NTRS)

    El-Bayoumy, L. E.; Akin, L. S.; Townsend, D. P.; Choy, F. C.

    1989-01-01

    An improved convection heat-transfer model has been developed for the prediction of the transient tooth surface temperature of spur gears. The dissipative quality of the lubricating fluid is shown to be limited to the capacity extent of the thermal boundary layer. This phenomenon can be of significance in the determination of the thermal limit of gears accelerating to the point where gear scoring occurs. Steady-state temperature prediction is improved considerably through the use of a variable integration time step that substantially reduces computer time. Computer-generated plots of temperature contours enable the user to animate the propagation of the thermal wave as the gears come into and out of contact, thus contributing to better understanding of this complex problem. This model has a much better capability at predicting gear-tooth temperatures than previous models.

  12. Controlling chaos faster.

    PubMed

    Bick, Christian; Kolodziejski, Christoph; Timme, Marc

    2014-09-01

    Predictive feedback control is an easy-to-implement method to stabilize unknown unstable periodic orbits in chaotic dynamical systems. Predictive feedback control is severely limited because asymptotic convergence speed decreases with stronger instabilities which in turn are typical for larger target periods, rendering it harder to effectively stabilize periodic orbits of large period. Here, we study stalled chaos control, where the application of control is stalled to make use of the chaotic, uncontrolled dynamics, and introduce an adaptation paradigm to overcome this limitation and speed up convergence. This modified control scheme is not only capable of stabilizing more periodic orbits than the original predictive feedback control but also speeds up convergence for typical chaotic maps, as illustrated in both theory and application. The proposed adaptation scheme provides a way to tune parameters online, yielding a broadly applicable, fast chaos control that converges reliably, even for periodic orbits of large period.
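
    For context, plain predictive feedback control (the baseline that the stalled/adaptive scheme improves on) can be illustrated in a few lines; the map, gain, and initial condition below are arbitrary choices, not the paper's examples.

```python
r = 3.9
f = lambda x: r * x * (1.0 - x)   # chaotic logistic map
K = -0.5                          # feedback gain, chosen so |(1 + K) f'(x*) - K| < 1

x = 0.3
for n in range(60):
    # predictive feedback control: add K * (f(x) - x) to the free dynamics
    x = f(x) + K * (f(x) - x)

x_star = 1.0 - 1.0 / r            # unstable fixed point of the uncontrolled map
print(x, x_star)                  # the controlled orbit converges onto x* ~ 0.7436
```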

  13. Can future land use change be usefully predicted?

    NASA Astrophysics Data System (ADS)

    Ramankutty, N.; Coomes, O.

    2011-12-01

    There has been increasing recognition over the last decade that land use and land cover change is an important driver of global environmental change. Consequently, there have been growing efforts to understand processes of land change from local to global scales, and to develop models to predict future changes in the land. However, we believe that such efforts are hampered by limited attention being paid to the critical points of land change. Here, we present a framework for understanding land use change by distinguishing within-regime land-use dynamics from land-use regime shifts. Illustrative historical examples reveal the significance of land-use regime shifts. We further argue that the land-use literature predominantly demonstrates a good understanding (with predictive power) of within-regime dynamics, while understanding of land-use regime shifts is limited to ex post facto explanations with limited predictive capability. The focus of land use change science needs to be redirected toward studying land-use regime shifts if we are to have any hope of making useful future projections. We present a preliminary framework for understanding land-use regime shifts, using two case studies in Latin America as examples. We finally discuss the implications of our proposal for land change science.

  14. LAMPS software

    NASA Technical Reports Server (NTRS)

    Perkey, D. J.; Kreitzberg, C. W.

    1984-01-01

    The dynamic prediction model, along with its macro-processor capability and data flow system, from the Drexel Limited-Area and Mesoscale Prediction System (LAMPS) was converted and recoded for the Perkin-Elmer 3220. The previous version of this model was written for the Control Data Corporation 7600 and CRAY-1a computer environment which existed until recently at the National Center for Atmospheric Research. The purpose of this conversion was to prepare LAMPS for porting to computer environments other than that encountered at NCAR. The emphasis was shifted from programming tasks to model simulation and evaluation tests.

  15. Cold formability prediction by the modified maximum force criterion with a non-associated Hill48 model accounting for anisotropic hardening

    NASA Astrophysics Data System (ADS)

    Lian, J.; Ahn, D. C.; Chae, D. C.; Münstermann, S.; Bleck, W.

    2016-08-01

    Experimental and numerical investigations on the characterisation and prediction of cold formability of a ferritic steel sheet are performed in this study. Tensile tests and Nakajima tests were performed for the plasticity characterisation and the forming limit diagram determination. In the numerical prediction, the modified maximum force criterion is selected as the localisation criterion. For the plasticity model, a non-associated formulation of the Hill48 model is employed. With the non-associated flow rule, the model achieves a predictive capability for stress and r-value directionality similar to that of the advanced non-quadratic associated models. To accurately characterise the anisotropy evolution during hardening, the anisotropic hardening is also calibrated and implemented into the model for the prediction of the formability.
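
    For reference, the plane-stress Hill48 criterion underlying such models takes the quadratic form

```latex
\[
\bar{\sigma}^{2} = G\,\sigma_{11}^{2} + F\,\sigma_{22}^{2} + H\,(\sigma_{11}-\sigma_{22})^{2} + 2N\,\sigma_{12}^{2},
\]
```

    and in a non-associated formulation one set of coefficients (F, G, H, N) defines the yield function while an independent set defines the plastic potential that supplies the flow direction and r-values; the specific calibration and anisotropic-hardening treatment used in the paper are not reproduced here.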

  16. Conceptual models governing leaching behavior and their long-term predictive capability

    USGS Publications Warehouse

    Claassen, Hans C.

    1981-01-01

    Six models that may be used to describe the interaction of radioactive waste solids with aqueous solutions are as follows: (1) simple linear mass transfer; (2) simple parabolic mass transfer; (3) parabolic mass transfer with the formation of a diffusion-limiting surface layer at an arbitrary time; (4) initial parabolic mass transfer followed by linear mass transfer at an arbitrary time; (5) parabolic (or linear) mass transfer and concomitant surface sorption; and (6) parabolic (or linear) mass transfer and concomitant chemical precipitation. Some of these models lead to either illogical or unrealistic predictions when published data are extrapolated to long times. These predictions result because most data result from short-term experimentation. Probably for longer times, processes will occur that have not been observed in the shorter experiments. This hypothesis has been verified by mass-transfer data from laboratory experiments using natural volcanic glass to predict the composition of groundwater. That such rate-limiting mechanisms do occur is reassuring, although now it is not possible to deduce a single mass-transfer limiting mechanism that could control the solution concentration of all components of all waste forms being investigated. Probably the most reasonable mechanisms are surface sorption and chemical precipitation of the species of interest. Another is limiting of mass transfer by chemical precipitation on the waste form surface of a substance not containing the species of interest, that is, presence of a diffusion-limiting layer. The presence of sorption and chemical precipitation as factors limiting mass transfer has been verified in natural groundwater systems, whereas the diffusion-limiting mechanism has not been verified yet.
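
    In the notation typically used for such leach models (illustrative, not quoted from the report), the first two cases correspond to

```latex
\[
Q(t) = k_{\ell}\,t \quad\text{(linear, surface-reaction controlled)},
\qquad
Q(t) = k_{p}\,t^{1/2} \quad\text{(parabolic, diffusion controlled)},
\]
```

    where Q is the cumulative mass of a species released per unit surface area; the remaining cases switch between, or superimpose, these laws once a surface layer, sorption, or precipitation begins to limit transfer.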

  17. Environmental Capability of Liquid Lubricants

    NASA Technical Reports Server (NTRS)

    Beerbower, A.

    1973-01-01

    The methods available for predicting the properties of liquid lubricants from their structural formulas are discussed. The methods make it possible to design lubricants by forecasting the results of changing the structure and to determine the limits to which liquid lubricants can cope with environmental extremes. The methods are arranged in order from thermodynamic properties, through empirical physical properties, to chemical properties.

  18. Decompression models: review, relevance and validation capabilities.

    PubMed

    Hugon, J

    2014-01-01

    For more than a century, several types of mathematical models have been proposed to describe tissue desaturation mechanisms in order to limit decompression sickness. These models are statistically assessed by DCS cases, and, over time, have gradually included bubble formation biophysics. This paper proposes to review this evolution and discuss its limitations. This review is organized around the comparison of decompression model biophysical criteria and theoretical foundations. Then, the DCS-predictive capability was analyzed to assess whether it could be improved by combining different approaches. Most of the operational decompression models have a neo-Haldanian form. Nevertheless, bubble modeling has been gaining popularity, and the circulating bubble amount has become a major output. By merging both views, it seems possible to build a relevant global decompression model that intends to simulate bubble production while predicting DCS risks for all types of exposures and decompression profiles. A statistical approach combining both DCS and bubble detection databases has to be developed to calibrate a global decompression model. Doppler ultrasound and DCS data are essential: i. to make correlation and validation phases reliable; ii. to adjust biophysical criteria to fit at best the observed bubble kinetics; and iii. to build a relevant risk function.

  19. Can we predict age at natural menopause using ovarian reserve tests or mother's age at menopause? A systematic literature review.

    PubMed

    Depmann, Martine; Broer, Simone L; van der Schouw, Yvonne T; Tehrani, Fahimeh R; Eijkemans, Marinus J; Mol, Ben W; Broekmans, Frank J

    2016-02-01

    This review aimed to appraise data on prediction of age at natural menopause (ANM) based on antimüllerian hormone (AMH), antral follicle count (AFC), and mother's ANM to evaluate clinical usefulness and to identify directions for further research. We conducted three systematic reviews of the literature to identify studies of menopause prediction based on AMH, AFC, or mother's ANM, corrected for baseline age. Six studies selected in the search for AMH all consistently demonstrated AMH as being capable of predicting ANM (hazard ratio, 5.6-9.2). The sole study reporting on mother's ANM indicated that this marker was capable of predicting ANM (hazard ratio, 9.1-9.3). Two studies provided analyses of AFC and yielded conflicting results, making this marker less strong. AMH is currently the most promising marker for ANM prediction. The predictive capacity of mother's ANM demonstrated in a single study makes this marker a promising contributor to AMH for menopause prediction. Models, however, do not predict the extremes of menopause age very well and have wide prediction intervals. These markers clearly need improvement before they can be used for individual prediction of menopause in the clinical setting. Moreover, potential limitations for such use include variations in AMH assays used and a lack of correction for factors or diseases affecting AMH levels or ANM. Future studies should include women of a broad age range (irrespective of cycle regularity) and should base predictions on repeated AMH measurements. Furthermore, currently unknown candidate predictors need to be identified.

  20. A Simplified Approach for the Rapid Generation of Transient Heat-Shield Environments

    NASA Technical Reports Server (NTRS)

    Wurster, Kathryn E.; Zoby, E. Vincent; Mills, Janelle C.; Kamhawi, Hilmi

    2007-01-01

    A simplified approach has been developed whereby transient entry heating environments are reliably predicted based upon a limited set of benchmark radiative and convective solutions. Heating, pressure, and shear-stress levels, non-dimensionalized by an appropriate parameter at each benchmark condition, are applied throughout the entry profile. This approach was shown to be valid based on the observation that the fully catalytic, laminar distributions examined were relatively insensitive to altitude as well as velocity throughout the regime of significant heating. In order to establish a best prediction by which to judge the results that can be obtained using a very limited benchmark set, predictions based on a series of benchmark cases along a trajectory are used. Solutions which rely only on the limited benchmark set, ideally in the neighborhood of peak heating, are compared against the resultant transient heating rates and total heat loads from the best prediction. Predictions based on two or fewer benchmark cases at or near the trajectory peak heating condition yielded results within 5-10 percent of the best predictions. Thus, the method provides transient heating environments over the heat-shield face with sufficient resolution and accuracy for thermal protection system design and also offers a significant capability to perform rapid trade studies such as the effect of different trajectories, atmospheres, or trim angle of attack, on convective and radiative heating rates and loads, pressure, and shear-stress levels.
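
    As an illustration of the kind of scaling involved (a commonly used stagnation-point form; the actual non-dimensionalizing parameter of the paper is not reproduced here), convective heating at a body point can be carried along the trajectory as

```latex
\[
\dot{q}_{\mathrm{conv}}(t) \;\approx\; \dot{q}_{\mathrm{conv}}^{\,\mathrm{bench}}\,
\sqrt{\frac{\rho(t)}{\rho_{\mathrm{bench}}}}\,
\left(\frac{V(t)}{V_{\mathrm{bench}}}\right)^{3},
\]
```

    so a single benchmark solution near peak heating fixes the spatial distribution while the trajectory supplies the transient amplitude.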

  1. The Potential for Predicting Precipitation on Seasonal-to-Interannual Timescales

    NASA Technical Reports Server (NTRS)

    Koster, R. D.

    1999-01-01

    The ability to predict precipitation several months in advance would have a significant impact on water resource management. This talk provides an overview of a project aimed at developing this prediction capability. NASA's Seasonal-to-Interannual Prediction Project (NSIPP) will generate seasonal-to-interannual sea surface temperature predictions through detailed ocean circulation modeling and will then translate these SST forecasts into forecasts of continental precipitation through the application of an atmospheric general circulation model and a "SVAT"-type land surface model. As part of the process, ocean variables (e.g., height) and land variables (e.g., soil moisture) will be updated regularly via data assimilation. The overview will include a discussion of the variability inherent in such a modeling system and will provide some quantitative estimates of the absolute upper limits of seasonal-to-interannual precipitation predictability.

  2. Physiologically Based Pharmacokinetic Model for Terbinafine in Rats and Humans

    PubMed Central

    Hosseini-Yeganeh, Mahboubeh; McLachlan, Andrew J.

    2002-01-01

    The aim of this study was to develop a physiologically based pharmacokinetic (PB-PK) model capable of describing and predicting terbinafine concentrations in plasma and tissues in rats and humans. A PB-PK model consisting of 12 tissue and 2 blood compartments was developed using concentration-time data for tissues from rats (n = 33) after intravenous bolus administration of terbinafine (6 mg/kg of body weight). It was assumed that all tissues except skin and testis tissues were well-stirred compartments with perfusion rate limitations. The uptake of terbinafine into skin and testis tissues was described by a PB-PK model which incorporates a membrane permeability rate limitation. The concentration-time data for terbinafine in human plasma and tissues were predicted by use of a scaled-up PB-PK model, which took oral absorption into consideration. The predictions obtained from the global PB-PK model for the concentration-time profile of terbinafine in human plasma and tissues were in close agreement with the observed concentration data for rats. The scaled-up PB-PK model provided an excellent prediction of published terbinafine concentration-time data obtained after the administration of single and multiple oral doses in humans. The estimated volume of distribution at steady state (Vss) obtained from the PB-PK model agreed with the reported value of 11 liters/kg. The apparent volume of distribution of terbinafine in skin and adipose tissues accounted for 41 and 52%, respectively, of the Vss for humans, indicating that uptake into and redistribution from these tissues dominate the pharmacokinetic profile of terbinafine. The PB-PK model developed in this study was capable of accurately predicting the plasma and tissue terbinafine concentrations in both rats and humans and provides insight into the physiological factors that determine terbinafine disposition. PMID:12069977
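
    A perfusion-rate-limited (well-stirred) tissue compartment of the kind described above obeys a mass balance of the standard form (generic symbols, not the paper's notation):

```latex
\[
V_{T}\,\frac{dC_{T}}{dt} = Q_{T}\left(C_{a} - \frac{C_{T}}{K_{p,T}}\right),
\]
```

    where V_T is the tissue volume, Q_T the tissue blood flow, C_a the arterial concentration, and K_p,T the tissue-to-plasma partition coefficient; for the membrane-limited skin and testis compartments an additional permeability-surface-area term replaces the assumption of instantaneous equilibration across the tissue membrane.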

  3. High fidelity studies of exploding foil initiator bridges, Part 2: Experimental results

    NASA Astrophysics Data System (ADS)

    Neal, William; Bowden, Mike

    2017-01-01

    Simulations of high voltage detonators, such as Exploding Bridgewire (EBW) and Exploding Foil Initiators (EFI), have historically been simple, often empirical, one-dimensional models capable of predicting parameters such as current, voltage, and in the case of EFIs, flyer velocity. Experimental methods have correspondingly been limited to the same parameters. With the advent of complex, first-principles magnetohydrodynamic codes such as ALEGRA MHD, it is now possible to simulate these components in three dimensions and predict a greater range of parameters than before. A significant improvement in experimental capability was therefore required to ensure these simulations could be adequately verified. In this second paper of a three-part study, data is presented from a flexible foil EFI header experiment. This study has shown that there is significant bridge expansion before the time of peak voltage and that heating within the bridge material is spatially affected by the microstructure of the metal foil.

  4. Status of Computational Aerodynamic Modeling Tools for Aircraft Loss-of-Control

    NASA Technical Reports Server (NTRS)

    Frink, Neal T.; Murphy, Patrick C.; Atkins, Harold L.; Viken, Sally A.; Petrilli, Justin L.; Gopalarathnam, Ashok; Paul, Ryan C.

    2016-01-01

    A concerted effort has been underway over the past several years to evolve computational capabilities for modeling aircraft loss-of-control under the NASA Aviation Safety Program. A principal goal has been to develop reliable computational tools for predicting and analyzing the non-linear stability & control characteristics of aircraft near stall boundaries affecting safe flight, and for utilizing those predictions for creating augmented flight simulation models that improve pilot training. Pursuing such an ambitious task with limited resources required the forging of close collaborative relationships with a diverse body of computational aerodynamicists and flight simulation experts to leverage their respective research efforts into the creation of NASA tools to meet this goal. Considerable progress has been made and work remains to be done. This paper summarizes the status of the NASA effort to establish computational capabilities for modeling aircraft loss-of-control and offers recommendations for future work.

  5. A Robust Compositional Architecture for Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume; Deney, Ewen; Farrell, Kimberley; Giannakopoulos, Dimitra; Jonsson, Ari; Frank, Jeremy; Bobby, Mark; Carpenter, Todd; Estlin, Tara

    2006-01-01

    Space exploration applications can benefit greatly from autonomous systems. Great distances, limited communications and high costs make direct operations impossible while mandating operations reliability and efficiency beyond what traditional commanding can provide. Autonomous systems can improve reliability and enhance spacecraft capability significantly. However, there is reluctance to utilize autonomous systems. In part this is due to general hesitation about new technologies, but a more tangible concern is that of the reliability and predictability of autonomous software. In this paper, we describe ongoing work aimed at increasing robustness and predictability of autonomous software, with the ultimate goal of building trust in such systems. The work combines state-of-the-art technologies and capabilities in autonomous systems with advanced validation and synthesis techniques. The focus of this paper is on the autonomous system architecture that has been defined, and on how it enables the application of validation techniques for resulting autonomous systems.

  6. Advanced Booster Liquid Engine Combustion Stability

    NASA Technical Reports Server (NTRS)

    Tucker, Kevin; Gentz, Steve; Nettles, Mindy

    2015-01-01

    Combustion instability is a phenomenon in liquid rocket engines caused by complex coupling between the time-varying combustion processes and the fluid dynamics in the combustor. Consequences of the large pressure oscillations associated with combustion instability often cause significant hardware damage and can be catastrophic. The current combustion stability assessment tools are limited by the level of empiricism in many inputs and embedded models. This limited predictive capability creates significant uncertainty in stability assessments. This large uncertainty then increases hardware development costs due to heavy reliance on expensive and time-consuming testing.

  7. Development of a 3D numerical methodology for fast prediction of gun blast induced loading

    NASA Astrophysics Data System (ADS)

    Costa, E.; Lagasco, F.

    2014-05-01

    In this paper, the development of a methodology based on semi-empirical models from the literature to carry out 3D prediction of pressure loading on surfaces adjacent to a weapon system during firing is presented. This loading is a consequence of the impact of the blast wave generated by the projectile exiting the muzzle bore. When a pressure threshold level is exceeded, the loading is potentially capable of inducing unwanted damage to nearby hard structures as well as to frangible panels or electronic equipment. The implemented model shows the ability to quickly predict the distribution of the blast wave parameters over three-dimensional complex geometry surfaces when the weapon design and emplacement data as well as propellant and projectile characteristics are available. Considering these capabilities, the use of the proposed methodology is envisaged as desirable in the preliminary design phase of the combat system to predict adverse effects and thereby identify the most appropriate countermeasures. By providing a preliminary but sensitive estimate of the operative environmental loading, this numerical tool represents a good alternative to more powerful but time-consuming advanced computational fluid dynamics tools, whose use can thus be limited to the final phase of the design.
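
    Semi-empirical free-field blast models of this kind are typically built on Hopkinson-Cranz scaling (noted here for orientation; the specific correlations adopted by the authors are not reproduced):

```latex
\[
Z = \frac{R}{W^{1/3}},
\]
```

    where R is the distance from the effective muzzle source and W an equivalent charge energy or mass; peak overpressure, impulse, and arrival time at each surface point are then read from empirical fits in Z and in the direction relative to the bore axis.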

  8. Optimization of a novel biophysical model using large scale in vivo antisense hybridization data displays improved prediction capabilities of structurally accessible RNA regions.

    PubMed

    Vazquez-Anderson, Jorge; Mihailovic, Mia K; Baldridge, Kevin C; Reyes, Kristofer G; Haning, Katie; Cho, Seung Hee; Amador, Paul; Powell, Warren B; Contreras, Lydia M

    2017-05-19

    Current approaches to design efficient antisense RNAs (asRNAs) rely primarily on a thermodynamic understanding of RNA-RNA interactions. However, these approaches depend on structure predictions and have limited accuracy, arguably due to overlooking important cellular environment factors. In this work, we develop a biophysical model to describe asRNA-RNA hybridization that incorporates in vivo factors using large-scale experimental hybridization data for three model RNAs: a group I intron, CsrB and a tRNA. A unique element of our model is the estimation of the availability of the target region to interact with a given asRNA using a differential entropic consideration of suboptimal structures. We showcase the utility of this model by evaluating its prediction capabilities in four additional RNAs: a group II intron, Spinach II, 2-MS2 binding domain and glgC 5′ UTR. Additionally, we demonstrate the applicability of this approach to other bacterial species by predicting sRNA-mRNA binding regions in two newly discovered, though uncharacterized, regulatory RNAs. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  9. Experimental evaluation of models for predicting Cherenkov light intensities from short-cooled nuclear fuel assemblies

    NASA Astrophysics Data System (ADS)

    Branger, E.; Grape, S.; Jansson, P.; Jacobsson Svärd, S.

    2018-02-01

    The Digital Cherenkov Viewing Device (DCVD) is a tool used by nuclear safeguards inspectors to verify irradiated nuclear fuel assemblies in wet storage based on the recording of Cherenkov light produced by the assemblies. One type of verification involves comparing the measured light intensity from an assembly with a predicted intensity based on assembly declarations. Crucial for such analyses is the performance of the prediction model used, and recently new modelling methods have been introduced to allow for enhanced prediction capabilities by taking the irradiation history into account and by including the cross-talk radiation from neighbouring assemblies in the predictions. In this work, the performance of three models for Cherenkov-light intensity prediction is evaluated by applying them to a set of short-cooled PWR 17x17 assemblies for which experimental DCVD measurements and operator-declared irradiation data were available: (1) a two-parameter model, based on total burnup and cooling time, previously used by the safeguards inspectors, (2) a newly introduced gamma-spectrum-based model, which incorporates cycle-wise burnup histories, and (3) the latter gamma-spectrum-based model extended to account for contributions from neighbouring assemblies. The results show that the two gamma-spectrum-based models provide significantly higher precision for the measured inventory than the two-parameter model, lowering the standard deviation between relative measured and predicted intensities from 15.2 % to 8.1 % and 7.8 %, respectively. The results show some systematic differences between assemblies of different designs (produced by different manufacturers) in spite of their similar PWR 17x17 geometries, and possible ways to address such differences are discussed, which may allow for even higher prediction capabilities. Still, it is concluded that the gamma-spectrum-based models enable confident verification of the fuel assembly inventory at the currently used detection limit for partial defects, being a 30 % discrepancy between measured and predicted intensities, while some false detections occur with the two-parameter model. The results also indicate that the gamma-spectrum-based prediction methods are accurate enough that the 30 % discrepancy limit could potentially be lowered.
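    As a rough illustration of how such a verification criterion might be applied in practice (this is our sketch, not the authors' software; all array values below are hypothetical), the check reduces to flagging assemblies whose normalized measured intensity departs from the prediction by more than the partial-defect limit:

    ```python
    import numpy as np

    # Minimal sketch (not the authors' code): flag assemblies whose measured Cherenkov
    # intensity deviates from the prediction by more than the partial-defect limit.
    def flag_outliers(measured, predicted, limit=0.30):
        """measured, predicted: arrays of relative (normalized) intensities."""
        measured = np.asarray(measured, dtype=float)
        predicted = np.asarray(predicted, dtype=float)
        rel_diff = (measured - predicted) / predicted      # relative discrepancy
        return np.abs(rel_diff) > limit                    # True = flagged for follow-up

    # Hypothetical usage: three assemblies, one with a 35 % deficit in measured light.
    print(flag_outliers([1.02, 0.65, 0.98], [1.00, 1.00, 1.00]))   # [False  True False]
    ```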

  10. Temperature evolution during compaction of pharmaceutical powders.

    PubMed

    Zavaliangos, Antonios; Galen, Steve; Cunningham, John; Winstead, Denita

    2008-08-01

    A numerical approach to the prediction of temperature evolution in tablet compaction is presented here. It is based on a coupled thermomechanical finite element analysis and a calibrated Drucker-Prager Cap model. This approach is capable of predicting transient temperatures during compaction, which cannot be assessed by experimental techniques due to inherent test limitations. Model predictions are validated with infrared (IR) temperature measurements of the top tablet surface after ejection and match well with experiments. The dependence of temperature fields on speed and degree of compaction is naturally captured. The estimated transient temperatures are maximum at the end of compaction at the center of the tablet and close to the die wall next to the powder/die interface.

  11. Landscape- and regional-scale shifts in forest composition under climate change in the Central Hardwood Region of the United States

    Treesearch

    Wen J. Wang; Hong S. He; Frank R. Thompson; Jacob S. Fraser; William D. Dijak

    2016-01-01

    Tree species distribution and abundance are affected by forces operating at multiple scales. Niche and biophysical process models have been commonly used to predict climate change effects at regional scales; however, these models have limited capability to include site-scale population dynamics and landscape-scale disturbance and dispersal. We applied a landscape...

  12. Prediction task guided representation learning of medical codes in EHR.

    PubMed

    Cui, Liwen; Xie, Xiaolei; Shen, Zuojun

    2018-06-18

    There have been rapidly growing applications using machine learning models for predictive analytics in Electronic Health Records (EHR) to improve the quality of hospital services and the efficiency of healthcare resource utilization. A fundamental and crucial step in developing such models is to convert medical codes in EHR to feature vectors. These medical codes are used to represent diagnoses or procedures. Their vector representations have a tremendous impact on the performance of machine learning models. Recently, some researchers have utilized representation learning methods from Natural Language Processing (NLP) to learn vector representations of medical codes. However, most previous approaches are unsupervised, i.e. the generation of medical code vectors is independent of prediction tasks. Thus, the obtained feature vectors may be inappropriate for a specific prediction task. Moreover, unsupervised methods often require many samples to obtain reliable results, but most practical problems have very limited patient samples. In this paper, we develop a new method called Prediction Task Guided Health Record Aggregation (PTGHRA), which aggregates health records guided by prediction tasks, to construct training corpora for various representation learning models. Compared with unsupervised approaches, representation learning models integrated with PTGHRA yield a significant improvement in the predictive capability of the generated medical code vectors, especially for limited training samples. Copyright © 2018. Published by Elsevier Inc.

  13. Stress Prediction System

    NASA Technical Reports Server (NTRS)

    1995-01-01

    NASA wanted to know how astronauts' bodies would react under various gravitational pulls and space suit weights. Under contract to NASA, the University of Michigan's Center for Ergonomics developed a model capable of predicting what type of stress and what degree of load a body could stand. The algorithm generated was commercialized with the ISTU (Isometric Strength Testing Unit) Functional Capacity Evaluation System, which simulates tasks such as lifting a heavy box or pushing a cart and evaluates the exertion expended. It also identifies the muscle group that limits the subject's performance. It is an effective tool for personnel evaluation, selection, and job redesign.

  14. Output-Adaptive Tetrahedral Cut-Cell Validation for Sonic Boom Prediction

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

    A cut-cell approach to Computational Fluid Dynamics (CFD) that utilizes the median dual of a tetrahedral background grid is described. The discrete adjoint is also calculated, which permits adaptation based on improving the calculation of a specified output (off-body pressure signature) in supersonic inviscid flow. These predicted signatures are compared to wind tunnel measurements on and off the configuration centerline 10 body lengths below the model to validate the method for sonic boom prediction. Accurate mid-field sonic boom pressure signatures are calculated with the Euler equations without the use of hybrid grid or signature propagation methods. Highly-refined, shock-aligned anisotropic grids were produced by this method from coarse isotropic grids created without prior knowledge of shock locations. A heuristic reconstruction limiter provided stable flow and adjoint solution schemes while producing similar signatures to Barth-Jespersen and Venkatakrishnan limiters. The use of cut-cells with an output-based adaptive scheme completely automated this accurate prediction capability after a triangular mesh is generated for the cut surface. This automation drastically reduces the manual intervention required by existing methods.

  15. Limited sampling strategy models for estimating the AUC of gliclazide in Chinese healthy volunteers.

    PubMed

    Huang, Ji-Han; Wang, Kun; Huang, Xiao-Hui; He, Ying-Chun; Li, Lu-Jin; Sheng, Yu-Cheng; Yang, Juan; Zheng, Qing-Shan

    2013-06-01

    The aim of this work is to reduce the cost of the sampling required for estimating the area under the gliclazide plasma concentration versus time curve within 60 h (AUC0-60t). Limited sampling strategy (LSS) models were established and validated by multiple regression using 4 or fewer gliclazide concentration values. Absolute prediction error (APE), root mean square error (RMSE) and visual prediction check were used as criteria. The results of Jack-Knife validation showed that 10 (25.0 %) of the 40 LSS models based on the regression analysis were not within an APE of 15 % when using one concentration-time point. 90.2, 91.5 and 92.4 % of the 40 LSS models were capable of prediction using 2, 3 and 4 points, respectively. Limited sampling strategies were thus developed and validated for estimating the AUC0-60t of gliclazide. This study indicates that the implementation of an 80 mg dosage regimen enabled accurate predictions of AUC0-60t by the LSS model, and that 12, 6, 4 and 2 h after administration are the key sampling times. The combination of (12, 2 h), (12, 8, 2 h) or (12, 8, 4, 2 h) can be chosen as sampling hours for predicting AUC0-60t in practical applications according to requirements.
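    A minimal sketch of what an LSS regression of this kind can look like, assuming a small matrix of concentrations at two of the sampling times named above and reference AUC0-60t values (all numbers below are hypothetical, not the study's data):

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Sketch of a limited sampling strategy (LSS) model: predict AUC0-60t from a few
    # concentration-time points (here the 12 h, 2 h pair). Data are hypothetical.
    C = np.array([[2.1, 4.8],     # each row: concentrations at 12 h and 2 h for one subject
                  [1.7, 3.9],
                  [2.6, 5.5],
                  [1.9, 4.2],
                  [2.3, 5.0]])
    auc_ref = np.array([95.0, 78.0, 112.0, 84.0, 101.0])   # reference AUC0-60t (mg*h/L)

    lss = LinearRegression().fit(C, auc_ref)               # AUC ~ b0 + b1*C12h + b2*C2h
    auc_pred = lss.predict(C)
    ape = 100 * np.abs(auc_pred - auc_ref) / auc_ref       # absolute prediction error (%)
    rmse = np.sqrt(np.mean((auc_pred - auc_ref) ** 2))     # root mean square error
    print(ape.round(1), rmse.round(2))
    ```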

  16. Generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB.

    PubMed

    Lee, Leng-Feng; Umberger, Brian R

    2016-01-01

    Computer modeling, simulation and optimization are powerful tools that have seen increased use in biomechanics research. Dynamic optimizations can be categorized as either data-tracking or predictive problems. The data-tracking approach has been used extensively to address human movement problems of clinical relevance. The predictive approach also holds great promise, but has seen limited use in clinical applications. Enhanced software tools would facilitate the application of predictive musculoskeletal simulations to clinically-relevant research. The open-source software OpenSim provides tools for generating tracking simulations but not predictive simulations. However, OpenSim includes an extensive application programming interface that permits extending its capabilities with scripting languages such as MATLAB. In the work presented here, we combine the computational tools provided by MATLAB with the musculoskeletal modeling capabilities of OpenSim to create a framework for generating predictive simulations of musculoskeletal movement based on direct collocation optimal control techniques. In many cases, the direct collocation approach can be used to solve optimal control problems considerably faster than traditional shooting methods. Cyclical and discrete movement problems were solved using a simple 1 degree of freedom musculoskeletal model and a model of the human lower limb, respectively. The problems could be solved in reasonable amounts of time (several seconds to 1-2 hours) using the open-source IPOPT solver. The problems could also be solved using the fmincon solver that is included with MATLAB, but the computation times were excessively long for all but the smallest of problems. The performance advantage for IPOPT was derived primarily by exploiting sparsity in the constraints Jacobian. The framework presented here provides a powerful and flexible approach for generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB. This should allow researchers to more readily use predictive simulation as a tool to address clinical conditions that limit human mobility.
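    The framework above is built on OpenSim and MATLAB; as a language-neutral illustration of the underlying direct collocation idea only, the sketch below sets up trapezoidal collocation for a torque-driven pendulum (a stand-in for the 1-degree-of-freedom model) using SciPy. The model, boundary conditions and cost are assumptions chosen for brevity, not the authors' formulation.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Trapezoidal direct collocation for a torque-driven pendulum swing-up
    # (a stand-in for a 1-degree-of-freedom musculoskeletal model).
    N = 30                      # number of collocation nodes
    T = 2.0                     # movement duration [s]
    h = T / (N - 1)             # node spacing

    def unpack(z):
        return z[0:N], z[N:2*N], z[2*N:3*N]          # angle, angular rate, torque

    def state_rates(theta, omega, u):
        return omega, u - 9.81 * np.sin(theta)       # unit mass and length assumed

    def objective(z):
        _, _, u = unpack(z)
        return h * np.sum(u ** 2)                    # minimize control effort

    def collocation_defects(z):
        theta, omega, u = unpack(z)
        dtheta, domega = state_rates(theta, omega, u)
        d1 = theta[1:] - theta[:-1] - 0.5 * h * (dtheta[1:] + dtheta[:-1])
        d2 = omega[1:] - omega[:-1] - 0.5 * h * (domega[1:] + domega[:-1])
        return np.concatenate([d1, d2])              # must equal zero at the optimum

    def boundary_conditions(z):
        theta, omega, _ = unpack(z)
        return np.array([theta[0], omega[0], theta[-1] - np.pi, omega[-1]])

    z0 = np.zeros(3 * N)
    z0[0:N] = np.linspace(0.0, np.pi, N)             # straight-line initial guess
    result = minimize(objective, z0, method="SLSQP",
                      constraints=[{"type": "eq", "fun": collocation_defects},
                                   {"type": "eq", "fun": boundary_conditions}],
                      options={"maxiter": 500})
    theta_opt, omega_opt, u_opt = unpack(result.x)
    print(result.success, objective(result.x))
    ```

    The defect constraints enforce the dynamics between nodes, which is what lets a general-purpose nonlinear programming solver (IPOPT in the paper, SLSQP here) exploit sparsity and avoid repeated forward shooting.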

  17. Generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB

    PubMed Central

    Lee, Leng-Feng

    2016-01-01

    Computer modeling, simulation and optimization are powerful tools that have seen increased use in biomechanics research. Dynamic optimizations can be categorized as either data-tracking or predictive problems. The data-tracking approach has been used extensively to address human movement problems of clinical relevance. The predictive approach also holds great promise, but has seen limited use in clinical applications. Enhanced software tools would facilitate the application of predictive musculoskeletal simulations to clinically-relevant research. The open-source software OpenSim provides tools for generating tracking simulations but not predictive simulations. However, OpenSim includes an extensive application programming interface that permits extending its capabilities with scripting languages such as MATLAB. In the work presented here, we combine the computational tools provided by MATLAB with the musculoskeletal modeling capabilities of OpenSim to create a framework for generating predictive simulations of musculoskeletal movement based on direct collocation optimal control techniques. In many cases, the direct collocation approach can be used to solve optimal control problems considerably faster than traditional shooting methods. Cyclical and discrete movement problems were solved using a simple 1 degree of freedom musculoskeletal model and a model of the human lower limb, respectively. The problems could be solved in reasonable amounts of time (several seconds to 1–2 hours) using the open-source IPOPT solver. The problems could also be solved using the fmincon solver that is included with MATLAB, but the computation times were excessively long for all but the smallest of problems. The performance advantage for IPOPT was derived primarily by exploiting sparsity in the constraints Jacobian. The framework presented here provides a powerful and flexible approach for generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB. This should allow researchers to more readily use predictive simulation as a tool to address clinical conditions that limit human mobility. PMID:26835184

  18. Cancer survival analysis using semi-supervised learning method based on Cox and AFT models with L1/2 regularization.

    PubMed

    Liang, Yong; Chai, Hua; Liu, Xiao-Ying; Xu, Zong-Ben; Zhang, Hai; Leung, Kwong-Sak

    2016-03-01

    One of the most important objectives of clinical cancer research is to diagnose cancer more accurately based on the patients' gene expression profiles. Both the Cox proportional hazards model (Cox) and the accelerated failure time model (AFT) have been widely adopted for high-risk and low-risk classification or survival time prediction for patients' clinical treatment. Nevertheless, two main dilemmas limit the accuracy of these prediction methods. One is that the small sample size and censored data remain a bottleneck for training a robust and accurate Cox classification model. The other is that tumours with similar phenotypes and prognoses can actually be completely different diseases at the genotype and molecular level. Thus, the utility of the AFT model for survival time prediction is limited when such biological differences of the diseases have not been previously identified. To overcome these two main dilemmas, we proposed a novel semi-supervised learning method based on the Cox and AFT models to accurately predict the treatment risk and the survival time of patients. Moreover, we adopted the efficient L1/2 regularization approach in the semi-supervised learning method to select the relevant genes, which are significantly associated with the disease. The results of the simulation experiments show that the semi-supervised learning model can significantly improve the predictive performance of the Cox and AFT models in survival analysis. The proposed procedures have been successfully applied to four real microarray gene expression and artificial evaluation datasets. The advantages of our proposed semi-supervised learning method include: 1) a significant increase in the available training samples from censored data; 2) high capability for identifying the survival risk classes of patients in the Cox model; 3) high predictive accuracy for patients' survival time in the AFT model; and 4) strong capability for relevant biomarker selection. Consequently, our proposed semi-supervised learning model is an appropriate tool for survival analysis in clinical cancer research.

  19. Predicting future threats to the long-term survival of Gila Trout using a high-resolution simulation of climate change

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kennedy, Thomas L.; Gutzler, David S.; Leung, Lai R.

    2008-11-20

    Regional climates are a major factor in determining the distribution of many species. Anthropogenic inputs of greenhouse gases into the atmosphere have been predicted to cause rapid climatic changes in the next 50-100 years. Species such as the Gila Trout (Oncorhynchus gilae) that have small ranges, limited dispersal capabilities, and narrow physiological tolerances will become increasingly susceptible to extinction as their climate envelope changes. This study uses a regional climate change simulation (Leung et al. 2004) to determine changes in the climate envelope for Gila Trout, which is sensitive to maximum temperature, associated with a plausible scenario for greenhouse gas increases. The model predicts approximately a 2 °C increase in temperature, a doubling by the mid 21st century in the annual number of days during which temperature exceeds 37 °C, and a 25% increase in the number of days above 32 °C, across the current geographical range of Gila Trout. At the same time, summer rainfall decreases by more than 20%. These climate changes would reduce the available habitat by shrinking the climate envelope. Warmer temperatures coupled with a decrease in summer precipitation would also tend to increase the intensity and frequency of the forest fires that are a major threat to the species' survival. The climate envelope approach utilized here could be used to assess climate change threats to other rare species with limited ranges and dispersal capabilities.

  20. Reacting Multi-Species Gas Capability for USM3D Flow Solver

    NASA Technical Reports Server (NTRS)

    Frink, Neal T.; Schuster, David M.

    2012-01-01

    The USM3D Navier-Stokes flow solver contributed heavily to the NASA Constellation Project (CxP) as a highly productive computational tool for generating the aerodynamic databases for the Ares I and V launch vehicles and Orion launch abort vehicle (LAV). USM3D is currently limited to ideal-gas flows, which are not adequate for modeling the chemistry or temperature effects of hot-gas jet flows. This task was initiated to create an efficient implementation of multi-species gas and equilibrium chemistry into the USM3D code to improve its predictive capabilities for hot jet impingement effects. The goal of this NASA Engineering and Safety Center (NESC) assessment was to implement and validate a simulation capability to handle real-gas effects in the USM3D code. This document contains the outcome of the NESC assessment.

  1. Adaptive envelope protection methods for aircraft

    NASA Astrophysics Data System (ADS)

    Unnikrishnan, Suraj

    Carefree handling refers to the ability of a pilot to operate an aircraft without the need to continuously monitor aircraft operating limits. At the heart of all carefree handling or maneuvering systems, also referred to as envelope protection systems, are algorithms and methods for predicting future limit violations. Recently, the envelope protection methods that have gained more acceptance translate limit proximity information into its equivalent in the control channel. Existing envelope protection algorithms either use a very small prediction horizon or are static methods with no capability to adapt to changes in system configuration. Adaptive approaches that maximize the prediction horizon, such as dynamic trim, are only applicable to steady-state-response-critical limit parameters. In this thesis, a new adaptive envelope protection method is developed that is applicable to both steady-state- and transient-response-critical limit parameters. The approach is based upon devising the most aggressive optimal control profile to the limit boundary and using it to compute control limits. Pilot-in-the-loop evaluations of the proposed approach are conducted at the Georgia Tech Carefree Maneuver lab for transient longitudinal hub moment limit protection. Carefree maneuvering is the dual of carefree handling in the realm of autonomous Uninhabited Aerial Vehicles (UAVs). Designing a flight control system to fully and effectively utilize the operational flight envelope is very difficult. With the increasing role of, and demands for, extreme maneuverability, there is a need to develop envelope protection methods for autonomous UAVs. In this thesis, a full-authority automatic envelope protection method is proposed for limit protection in UAVs. The approach uses an adaptive estimate of the limit parameter dynamics and finite-time horizon predictions to detect impending limit boundary violations. Limit violations are prevented by treating the limit boundary as an obstacle and by correcting nominal control/command inputs to track a limit-parameter safe-response profile near the limit boundary. The method is evaluated using software-in-the-loop and flight evaluations on the Georgia Tech unmanned rotorcraft platform, GTMax. The thesis also develops and evaluates an extension for calculating control margins based on restricting limit parameter response aggressiveness near the limit boundary.
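    A generic sketch of the finite-horizon limit-prediction idea described above (not the thesis's algorithm): a first-order model of the limit parameter is adapted online with recursive least squares, propagated over a short horizon, and the command is reduced when the predicted response would cross the limit boundary. Class and parameter names are hypothetical, and the sketch assumes a positive command drives the limit parameter upward.

    ```python
    import numpy as np

    class EnvelopeProtector:
        """Toy finite-horizon limit protection with an adaptive first-order model."""
        def __init__(self, y_limit, horizon_steps=20, dt=0.02):
            self.y_limit = y_limit
            self.horizon = horizon_steps
            self.dt = dt
            self.theta = np.zeros(2)           # parameters of y_dot ~ a*y + b*u
            self.P = np.eye(2) * 100.0         # recursive least squares covariance

        def update_model(self, y, u, y_dot):
            phi = np.array([y, u])
            k = self.P @ phi / (1.0 + phi @ self.P @ phi)
            self.theta += k * (y_dot - phi @ self.theta)
            self.P -= np.outer(k, phi @ self.P)

        def protect(self, y, u_cmd):
            a, b = self.theta
            y_pred = y
            for _ in range(self.horizon):      # propagate the adapted model forward
                y_pred += self.dt * (a * y_pred + b * u_cmd)
                if y_pred > self.y_limit:
                    # command that holds the predicted steady response at the limit
                    u_safe = (-a * self.y_limit) / b if abs(b) > 1e-6 else 0.0
                    return min(u_cmd, u_safe)
            return u_cmd
    ```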

  2. Predicting the payload capability of cable logging systems including the effect of partial suspension

    Treesearch

    Gary D. Falk

    1981-01-01

    A systematic procedure for predicting the payload capability of running, live, and standing skylines is presented. Three hand-held calculator programs are used to predict payload capability that includes the effect of partial suspension. The programs allow for predictions for downhill yarding and for yarding away from the yarder. The equations and basic principles...

  3. Small Propeller and Rotor Testing Capabilities of the NASA Langley Low Speed Aeroacoustic Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Zawodny, Nikolas S.; Haskin, Henry H.

    2017-01-01

    The Low Speed Aeroacoustic Wind Tunnel (LSAWT) at NASA Langley Research Center has recently undergone a configuration change. This change incorporates an inlet nozzle extension meant to serve the dual purposes of achieving lower free-stream velocities as well as a larger core flow region. The LSAWT, part of the NASA Langley Jet Noise Laboratory, had historically been utilized to simulate realistic forward flight conditions of commercial and military aircraft engines in an anechoic environment. The facility was modified starting in 2016 in order to expand its capabilities for the aerodynamic and acoustic testing of small propeller and unmanned aircraft system (UAS) rotor configurations. This paper describes the modifications made to the facility, its current aerodynamic and acoustic capabilities, the propeller and UAS rotor-vehicle configurations to be tested, and some preliminary predictions and experimental data for isolated propeller and UAS rotor configurations, respectively. Isolated propeller simulations have been performed spanning a range of advance ratios to identify the theoretical propeller operational limits of the LSAWT. Performance and acoustic measurements of an isolated UAS rotor in hover conditions are found to compare favorably with previously measured data in an anechoic chamber and blade element-based acoustic predictions.

  4. Imaging characteristics of photogrammetric camera systems

    USGS Publications Warehouse

    Welch, R.; Halliday, J.

    1973-01-01

    In view of the current interest in high-altitude and space photographic systems for photogrammetric mapping, the United States Geological Survey (U.S.G.S.) undertook a comprehensive research project designed to explore the practical aspects of applying the latest image quality evaluation techniques to the analysis of such systems. The project had two direct objectives: (1) to evaluate the imaging characteristics of current U.S.G.S. photogrammetric camera systems; and (2) to develop methodologies for predicting the imaging capabilities of photogrammetric camera systems, comparing conventional systems with new or different types of systems, and analyzing the image quality of photographs. Image quality was judged in terms of a number of evaluation factors including response functions, resolving power, and the detectability and measurability of small detail. The limiting capabilities of the U.S.G.S. 6-inch and 12-inch focal length camera systems were established by analyzing laboratory and aerial photographs in terms of these evaluation factors. In the process, the contributing effects of relevant parameters such as lens aberrations, lens aperture, shutter function, image motion, film type, and target contrast were examined, yielding procedures for analyzing image quality and for predicting and comparing performance capabilities. © 1973.

  5. Large-scale optimization-based classification models in medicine and biology.

    PubMed

    Lee, Eva K

    2007-06-01

    We present novel optimization-based classification models that are general purpose and suitable for developing predictive rules for large heterogeneous biological and medical data sets. Our predictive model simultaneously incorporates (1) the ability to classify any number of distinct groups; (2) the ability to incorporate heterogeneous types of attributes as input; (3) a high-dimensional data transformation that eliminates noise and errors in biological data; (4) the ability to incorporate constraints to limit the rate of misclassification, and a reserved-judgment region that provides a safeguard against over-training (which tends to lead to high misclassification rates from the resulting predictive rule); and (5) successive multi-stage classification capability to handle data points placed in the reserved-judgment region. To illustrate the power and flexibility of the classification model and solution engine, and its multi-group prediction capability, application of the predictive model to a broad class of biological and medical problems is described. Applications include: the differential diagnosis of the type of erythemato-squamous diseases; predicting presence/absence of heart disease; genomic analysis and prediction of aberrant CpG island methylation in human cancer; discriminant analysis of motility and morphology data in human lung carcinoma; prediction of ultrasonic cell disruption for drug delivery; identification of tumor shape and volume in treatment of sarcoma; discriminant analysis of biomarkers for prediction of early atherosclerosis; fingerprinting of native and angiogenic microvascular networks for early diagnosis of diabetes, aging, macular degeneracy and tumor metastasis; prediction of protein localization sites; and pattern recognition of satellite images in classification of soil types. In all these applications, the predictive model yields correct classification rates ranging from 80 to 100%. This provides motivation for pursuing its use as a medical diagnostic, monitoring and decision-making tool.

  6. On the Floating Point Performance of the i860 Microprocessor

    NASA Technical Reports Server (NTRS)

    Lee, King; Kutler, Paul (Technical Monitor)

    1997-01-01

    The i860 microprocessor is a pipelined processor that can deliver two double precision floating point results every clock. It is being used in the Touchstone project to develop a teraflop computer by the year 2000. With such high computational capabilities, it was expected that memory bandwidth would limit performance on many kernels. Measured performance of three kernels was less than what memory bandwidth limitations alone would predict. This paper develops a model that explains the discrepancy in terms of memory latencies and points to some problems involved in moving data from memory to the arithmetic pipelines.
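    A back-of-envelope version of the kind of model the paper discusses, for a streaming kernel such as a dot product; the clock, bandwidth and latency numbers below are illustrative assumptions, not the paper's measurements:

    ```python
    # Compare a pure-bandwidth performance bound with a latency-aware bound for a
    # streaming kernel. All hardware numbers are assumed for illustration only.
    clock_hz = 40e6            # assumed i860-class clock
    peak_flops = 2 * clock_hz  # two double-precision results per clock (pipelined peak)
    bandwidth = 160e6          # assumed memory bandwidth, bytes/s
    latency_s = 500e-9         # assumed stall cost per cache line fetched
    line_bytes = 32

    n = 1_000_000              # vector length
    flops = 2 * n              # one multiply + one add per element
    bytes_moved = 2 * 8 * n    # two double-precision input streams

    t_compute = flops / peak_flops
    t_bandwidth = bytes_moved / bandwidth
    t_latency = (bytes_moved / line_bytes) * latency_s

    mflops_bw_only = flops / max(t_compute, t_bandwidth) / 1e6
    mflops_with_latency = flops / max(t_compute, t_bandwidth + t_latency) / 1e6
    print(f"bandwidth-limited prediction: {mflops_bw_only:.1f} MFLOPS")
    print(f"latency-aware prediction:     {mflops_with_latency:.1f} MFLOPS")
    ```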

  7. Team Problem Solving: Effects of Communication and Function Overlap

    DTIC Science & Technology

    1987-03-01

    group. At the very least, this concept helps to focus to some degree on the formidable complexity of the relationship between the available resources... relationship with task load gains clarity, for a task load may be overwhelming to a group of limited capability, yet challenging or even gratifying to a...considered if other important resources were also missing. This behavior is consistent with predictions of social motivational theories such as

  8. Evaluation program for secondary spacecraft cells

    NASA Technical Reports Server (NTRS)

    Christy, D. E.; Harkness, J. D.

    1973-01-01

    A life cycle test of secondary electric batteries for spacecraft applications was conducted. A sample of nickel-cadmium batteries was subjected to general performance tests to determine the limits of their actual capabilities. Weaknesses discovered in cell design are reported and aid in research and development efforts toward improving the reliability of spacecraft batteries. A statistical analysis of the life cycle prediction and cause of failure versus test conditions is provided.

  9. Validation Process for LEWICE Coupled by Use of a Navier-stokes Solver

    NASA Technical Reports Server (NTRS)

    Wright, William B.

    2016-01-01

    A research project is underway at NASA Glenn to produce computer software that can accurately predict ice growth for many meteorological conditions for any aircraft surface. This report will present results from the latest LEWICE release, version 3.5. This program differs from previous releases in its ability to model mixed phase and ice crystal conditions such as those encountered inside an engine. It also has expanded capability to use structured grids and a new capability to use results from unstructured grid flow solvers. An extensive comparison of the results in a quantifiable manner against the database of ice shapes that have been generated in the NASA Glenn Icing Research Tunnel (IRT) has also been performed. This paper will show the differences in ice shape between LEWICE 3.5 and experimental data. In addition, comparisons will be made between the lift and drag calculated on the ice shapes from experiment and those produced by LEWICE. This report will also provide a description of both programs. Quantitative geometric comparisons are shown for horn height, horn angle, icing limit, area and leading edge thickness. Quantitative comparisons of calculated lift and drag will also be shown. The results show that the predicted results are within the accuracy limits of the experimental data for the majority of cases.

  10. Prediction and validation of blowout limits of co-flowing jet diffusion flames -- effect of dilution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karbasi, M.; Wierzba, I.

    1996-10-01

    The blowout limits of a co-flowing turbulent methane jet diffusion flame with addition of diluent in either the jet fuel or the surrounding air stream are studied both analytically and experimentally. Helium, nitrogen and carbon dioxide were employed as the diluents. Experiments indicated that the addition of diluents to the jet fuel or the surrounding air stream decreased the stability limit of the jet diffusion flames. The strongest effect was observed with carbon dioxide as the diluent, followed by nitrogen and then by helium. A model of extinction based on the recognized criterion of the ratio of the mixing time scale to the characteristic combustion time scale, using experimentally derived correlations, is proposed. It is capable of predicting the large reduction of the jet blowout velocity due to a relatively small increase in the co-flow stream velocity, along with an increase in the concentration of diluent in either the jet fuel or the surrounding air stream. Experiments were carried out to validate the model. The predicted blowout velocities of turbulent jet diffusion flames obtained using this model are in good agreement with the corresponding experimental data.
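    One common way to write a time-scale-ratio extinction criterion of the kind referenced above (a generic Damköhler-type form, not necessarily the authors' exact correlation) is

    ```latex
    \[
      \mathrm{Da} \;=\; \frac{\tau_{\mathrm{mix}}}{\tau_{\mathrm{chem}}},
      \qquad \text{blowout when } \mathrm{Da} < \mathrm{Da}_{\mathrm{crit}},
    \]
    ```

    where dilution of the fuel or the co-flow stream lengthens the chemical time \(\tau_{\mathrm{chem}}\) (slower chemistry), lowering the Damköhler number and hence the jet velocity at which blowout occurs.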

  11. Prediction of North Pacific Height Anomalies During Strong Madden-Julian Oscillation Events

    NASA Astrophysics Data System (ADS)

    Kai-Chih, T.; Barnes, E. A.; Maloney, E. D.

    2017-12-01

    The Madden-Julian Oscillation (MJO) creates strong variations in extratropical atmospheric circulations that have important implications for subseasonal-to-seasonal prediction. In particular, certain MJO phases are characterized by a consistent modulation of geopotential height in the North Pacific and adjacent regions across different MJO events. Until recently, only limited research has examined the relationship between these robust MJO tropical-extratropical teleconnections and model prediction skill. In this study, reanalysis data (MERRA and ERA-Interim) and ECMWF ensemble hindcasts are used to demonstrate that robust teleconnections in specific MJO phases and time lags are also characterized by excellent agreement in the prediction of geopotential height anomalies across model ensemble members at forecast leads of up to 3 weeks. These periods of enhanced prediction capabilities extend the possibility for skillful extratropical weather prediction beyond traditional 10-13 day limits. Furthermore, we also examine the phase dependency of teleconnection robustness by using a Linear Baroclinic Model (LBM), and the result is consistent with the ensemble hindcasts: the anomalous heating of MJO phase 2 (phase 6) can consistently generate positive (negative) geopotential height anomalies around the extratropical Pacific with a lead of 15-20 days, while other phases are more sensitive to the variation of the mean state.

  12. A discrete element method-based approach to predict the breakage of coal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, Varun; Sun, Xin; Xu, Wei

    Pulverization is an essential pre-combustion technique employed for solid fuels, such as coal, to reduce particle sizes. Smaller particles ensure rapid and complete combustion, leading to low carbon emissions. Traditionally, the resulting particle size distributions from pulverizers have been informed by empirical or semi-empirical approaches that rely on extensive data gathered over several decades during operations or experiments. However, the predictive capabilities for new coals and processes are limited. This work presents a Discrete Element Method based computational framework to predict particle size distribution resulting from the breakage of coal particles characterized by the coal’s physical properties. The effect of certain operating parameters on the breakage behavior of coal particles also is examined.

  13. Bringing modeling to the masses: A web based system to predict potential species distributions

    USGS Publications Warehouse

    Graham, Jim; Newman, Greg; Kumar, Sunil; Jarnevich, Catherine S.; Young, Nick; Crall, Alycia W.; Stohlgren, Thomas J.; Evangelista, Paul

    2010-01-01

    Predicting current and potential species distributions and abundance is critical for managing invasive species, preserving threatened and endangered species, and conserving native species and habitats. Accurate predictive models are needed at local, regional, and national scales to guide field surveys, improve monitoring, and set priorities for conservation and restoration. Modeling capabilities, however, are often limited by access to software and environmental data required for predictions. To address these needs, we built a comprehensive web-based system that: (1) maintains a large database of field data; (2) provides access to field data and a wealth of environmental data; (3) accesses values in rasters representing environmental characteristics; (4) runs statistical spatial models; and (5) creates maps that predict the potential species distribution. The system is available online at www.niiss.org, and provides web-based tools for stakeholders to create potential species distribution models and maps under current and future climate scenarios.

  14. Species Diversity and Functional Prediction of Surface Bacterial Communities on Aging Flue-Cured Tobaccos.

    PubMed

    Wang, Fan; Zhao, Hongwei; Xiang, Haiying; Wu, Lijun; Men, Xiao; Qi, Chang; Chen, Guoqiang; Zhang, Haibo; Wang, Yi; Xian, Mo

    2018-06-05

    Microbes on aging flue-cured tobaccos (AFTs) improve the aroma and other qualities desirable in products. Understanding the relevant organisms would provide a picture of microbial community diversity and metabolic potential, and of their applications. However, limited effort has been made to characterize the microbial quality and functional profile. Herein, we present our investigation of the bacterial diversity and the predicted potential genetic capability of the bacteria from two AFTs using 16S rRNA gene sequences and the phylogenetic investigation of communities by reconstruction of unobserved states (PICRUSt) software. The results show that the dominant bacteria from AFT surfaces were classified into 48 genera, 36 families, and 7 phyla. In addition, Bacillus spp. were found to be prevalent on both AFTs. Furthermore, PICRUSt predictions of bacterial community functions revealed many attractive metabolic capacities in the AFT microbiota, including several involved in the biosynthesis of flavors and fragrances and in the degradation of harmful compounds such as nicotine and nitrite. These results provide insights into the importance of AFT bacteria in determining product qualities and indicate specific microbial species with predicted enzymatic capabilities for the production of high-efficiency flavors, the degradation of undesirable compounds, and the provision of nicotine and nitrite tolerance, which suggests fruitful areas of investigation into the manipulation of the AFT microbiota for AFT and other product improvements.

  15. Energy absorption capability and crashworthiness of composite material structures: A review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carruthers, J.J.; Kettle, A.P.; Robinson, A.M.

    1998-10-01

    The controlled brittle failure of thermosetting fiber-reinforced polymer composites can provide a very efficient energy absorption mechanism. Consequently, the use of these materials in crashworthy vehicle designs has been the subject of considerable interest. In this respect, their more widespread application has been limited by the complexity of their collapse behavior. This article reviews the current level of understanding in this field, including the correlations between failure mode and energy absorption; the principal material, geometric, and physical parameters relevant to crashworthy design; and methods of predicting the energy absorption capability of polymer composites. Areas which require further investigation are identified. This review article contains 70 references.

  16. Simulating boundary layer transition with low-Reynolds-number k-epsilon turbulence models. I - An evaluation of prediction characteristics. II - An approach to improving the predictions

    NASA Technical Reports Server (NTRS)

    Schmidt, R. C.; Patankar, S. V.

    1991-01-01

    The capability of two k-epsilon low-Reynolds number (LRN) turbulence models, those of Jones and Launder (1972) and Lam and Bremhorst (1981), to predict transition in external boundary-layer flows subject to free-stream turbulence is analyzed. Both models correctly predict the basic qualitative aspects of boundary-layer transition with free stream turbulence, but for calculations started at low values of certain defined Reynolds numbers, the transition is generally predicted at unrealistically early locations. Also, the methods predict transition lengths significantly shorter than those found experimentally. An approach to overcoming these deficiencies without abandoning the basic LRN k-epsilon framework is developed. This approach limits the production term in the turbulent kinetic energy equation and is based on a simple stability criterion. It is correlated to the free-stream turbulence value. The modification is shown to improve the qualitative and quantitative characteristics of the transition predictions.

  17. A review of the ionospheric model for the long wave prediction capability

    NASA Astrophysics Data System (ADS)

    Ferguson, J. A.

    1992-11-01

    The Naval Command, Control, and Ocean Surveillance Center's Long Wave Prediction Capability (LWPC) has a built-in ionospheric model. The latter was defined after a review of the literature comparing measurements with calculations. Subsequent to this original specification of the ionospheric model in the LWPC, a new collection of data was obtained and analyzed. The new data were collected aboard a merchant ship named the Callaghan during a series of trans-Atlantic trips over a period of a year. This report presents a detailed analysis of the ionospheric model currently in use by the LWPC and the new model suggested by the shipboard measurements. We conclude that, although the fits to measurements are almost the same between the two models examined, the current LWPC model should be used because it is better than the new model for nighttime conditions at long ranges. This conclusion supports the primary use of the LWPC model for coverage assessment that requires a valid model at the limits of a transmitter's reception.

  18. Shuttle TPS thermal performance and analysis methodology

    NASA Technical Reports Server (NTRS)

    Neuenschwander, W. E.; Mcbride, D. U.; Armour, G. A.

    1983-01-01

    Thermal performance of the thermal protection system was approximately as predicted. The only extensive anomalies were filler bar scorching and over-predictions in the high Delta p gap heating regions of the orbiter. A technique to predict filler bar scorching has been developed that can aid in defining a solution. Improvement in high Delta p gap heating methodology is still under study. Minor anomalies were also examined for improvements in modeling techniques and prediction capabilities. These include improved definition of low Delta p gap heating, an analytical model for inner mode line convection heat transfer, better modeling of structure, and inclusion of sneak heating. The limited number of problems related to penetration items that presented themselves during orbital flight tests was resolved expeditiously, and designs were changed and proved successful within the time frame of that program.

  19. BALLIST: A computer program to empirically predict the bumper thickness required to prevent perforation of the Space Station by orbital debris

    NASA Technical Reports Server (NTRS)

    Rule, William Keith

    1991-01-01

    A computer program called BALLIST that is intended to be a design tool for engineers is described. BALLIST empirically predicts the bumper thickness required to prevent perforation of the Space Station pressure wall by a projectile (such as orbital debris) as a function of the projectile's velocity. 'Ballistic' limit curves (bumper thickness vs. projectile velocity) are calculated and are displayed on the screen as well as being stored in an ASCII file. A Whipple style of spacecraft wall configuration is assumed. The predictions are based on a database of impact test results. NASA/Marshall Space Flight Center currently has the capability to generate such test results. Numerical simulation results of impact conditions that cannot be tested (high velocities or large particles) can also be used for predictions.
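    A minimal sketch of the kind of empirical lookup such a tool performs (the test data and function names below are hypothetical; BALLIST's actual database and correlations are not reproduced here):

    ```python
    import numpy as np

    # Hypothetical database of impact tests: projectile velocity vs. minimum bumper
    # thickness that prevented pressure-wall perforation for a Whipple-style wall.
    test_velocity_km_s = np.array([3.0, 4.0, 5.0, 6.0, 7.0])
    test_min_thickness_mm = np.array([1.2, 1.6, 2.1, 2.0, 1.8])

    def required_bumper_thickness(velocity_km_s):
        """Linear interpolation over the empirical database (flat extrapolation at the ends)."""
        return np.interp(velocity_km_s, test_velocity_km_s, test_min_thickness_mm)

    # Tabulate a 'ballistic limit' curve the way such a program might write it to an ASCII file.
    for v in np.linspace(3.0, 7.0, 9):
        print(f"{v:4.1f} km/s  ->  {required_bumper_thickness(v):4.2f} mm")
    ```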

  20. A method of predicting the energy-absorption capability of composite subfloor beams

    NASA Technical Reports Server (NTRS)

    Farley, Gary L.

    1987-01-01

    A simple method of predicting the energy-absorption capability of composite subfloor beam structure was developed. The method is based upon the weighted sum of the energy-absorption capability of the constituent elements of a subfloor beam. An empirical database of energy-absorption results from circular and square cross-section tube specimens was used in the prediction. The procedure is applicable to a wide range of subfloor beam structures. The procedure was demonstrated on three subfloor beam concepts. Agreement between test and prediction was within seven percent for all three cases.
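    One plausible reading of the weighted-sum prediction (our notation, not necessarily the report's) is

    ```latex
    \[
      EA_{\mathrm{beam}} \;\approx\; \sum_{i} w_i \, ea_i ,
    \]
    ```

    where \(ea_i\) is the specific energy absorption of constituent element \(i\), taken from the circular- and square-tube test database, and \(w_i\) is its weighting (for example, the element's mass or cross-sectional fraction in the beam).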

  1. MPFit: Computational Tool for Predicting Moonlighting Proteins.

    PubMed

    Khan, Ishita; McGraw, Joshua; Kihara, Daisuke

    2017-01-01

    An increasing number of proteins have been found which are capable of performing two or more distinct functions. These proteins, known as moonlighting proteins, have drawn much attention recently as they may play critical roles in disease pathways and development. However, because moonlighting proteins are often found serendipitously, our understanding of moonlighting proteins is still quite limited. In order to lay the foundation for systematic moonlighting proteins studies, we developed MPFit, a software package for predicting moonlighting proteins from their omics features including protein-protein and gene interaction networks. Here, we describe and demonstrate the algorithm of MPFit, the idea behind it, and provide instruction for using the software.

  2. Muscle Synergies Facilitate Computational Prediction of Subject-Specific Walking Motions

    PubMed Central

    Meyer, Andrew J.; Eskinazi, Ilan; Jackson, Jennifer N.; Rao, Anil V.; Patten, Carolynn; Fregly, Benjamin J.

    2016-01-01

    Researchers have explored a variety of neurorehabilitation approaches to restore normal walking function following a stroke. However, there is currently no objective means for prescribing and implementing treatments that are likely to maximize recovery of walking function for any particular patient. As a first step toward optimizing neurorehabilitation effectiveness, this study develops and evaluates a patient-specific synergy-controlled neuromusculoskeletal simulation framework that can predict walking motions for an individual post-stroke. The main question we addressed was whether driving a subject-specific neuromusculoskeletal model with muscle synergy controls (5 per leg) facilitates generation of accurate walking predictions compared to a model driven by muscle activation controls (35 per leg) or joint torque controls (5 per leg). To explore this question, we developed a subject-specific neuromusculoskeletal model of a single high-functioning hemiparetic subject using instrumented treadmill walking data collected at the subject’s self-selected speed of 0.5 m/s. The model included subject-specific representations of lower-body kinematic structure, foot–ground contact behavior, electromyography-driven muscle force generation, and neural control limitations and remaining capabilities. Using direct collocation optimal control and the subject-specific model, we evaluated the ability of the three control approaches to predict the subject’s walking kinematics and kinetics at two speeds (0.5 and 0.8 m/s) for which experimental data were available from the subject. We also evaluated whether synergy controls could predict a physically realistic gait period at one speed (1.1 m/s) for which no experimental data were available. All three control approaches predicted the subject’s walking kinematics and kinetics (including ground reaction forces) well for the model calibration speed of 0.5 m/s. However, only activation and synergy controls could predict the subject’s walking kinematics and kinetics well for the faster non-calibration speed of 0.8 m/s, with synergy controls predicting the new gait period the most accurately. When used to predict how the subject would walk at 1.1 m/s, synergy controls predicted a gait period close to that estimated from the linear relationship between gait speed and stride length. These findings suggest that our neuromusculoskeletal simulation framework may be able to bridge the gap between patient-specific muscle synergy information and resulting functional capabilities and limitations. PMID:27790612
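    As an illustration of the synergy concept only (not the authors' calibration pipeline), muscle synergies are commonly extracted from activation data with non-negative matrix factorization; the sketch below reduces a hypothetical 35-muscle activation matrix to 5 synergy commands using scikit-learn:

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    # Factor a non-negative muscle-activation matrix A (time x 35 muscles) into 5
    # time-varying synergy commands C and fixed synergy weights W, so that A ~ C @ W.
    rng = np.random.default_rng(0)
    A = rng.random((500, 35))                  # placeholder EMG envelopes, one gait cycle

    model = NMF(n_components=5, init="nndsvda", max_iter=1000, random_state=0)
    C = model.fit_transform(A)                 # (500, 5)  synergy activation commands
    W = model.components_                      # (5, 35)   muscle weightings per synergy

    reconstruction = C @ W
    vaf = 1.0 - np.sum((A - reconstruction) ** 2) / np.sum(A ** 2)
    print(f"variance accounted for by 5 synergies: {vaf:.3f}")
    ```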

  3. Robust multiscale prediction of Po River discharge using a twofold AR-NN approach

    NASA Astrophysics Data System (ADS)

    Alessio, Silvia; Taricco, Carla; Rubinetti, Sara; Zanchettin, Davide; Rubino, Angelo; Mancuso, Salvatore

    2017-04-01

    The Mediterranean area is among the regions most exposed to hydroclimatic changes, with a likely increase in the frequency and duration of droughts in the last decades and potentially substantial future drying according to climate projections. However, significant decadal variability is often superposed on, or even dominates, these long-term hydrological trends, as observed, for instance, in North Italian precipitation and river discharge records. The capability to accurately predict such decadal changes is, therefore, of utmost environmental and social importance. In order to forecast short and noisy hydroclimatic time series, we apply a twofold statistical approach that we improved with respect to previous works [1]. Our prediction strategy consists in the application of two independent methods that use autoregressive models and feed-forward neural networks. Since all prediction methods work better on clean signals, the predictions are not performed directly on the series, but rather on each significant variability component extracted with Singular Spectrum Analysis (SSA). In this contribution, we will illustrate the multiscale prediction approach and its application to the case of decadal prediction of annual-average Po River discharges (Italy). The discharge record is available for the last 209 years and allows us to work with both interannual and decadal time-scale components. Fifteen-year forecasts obtained with both methods robustly indicate a prominent dry period in the second half of the 2020s. We will discuss advantages and limitations of the proposed statistical approach in the light of the current capabilities of decadal climate prediction systems based on numerical climate models, toward an integrated dynamical and statistical approach for the interannual-to-decadal prediction of hydroclimate variability in medium-size river basins. [1] Alessio et al., Natural variability and anthropogenic effects in a Central Mediterranean core, Clim. of the Past, 8, 831-839, 2012.
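    A compact sketch of the "decompose, forecast each component, then sum" strategy, using only the autoregressive half of the twofold approach (the neural-network half is omitted); the data file name, window length and AR order below are assumptions, not the study's settings:

    ```python
    import numpy as np
    from statsmodels.tsa.ar_model import AutoReg

    def ssa_components(y, window, n_components):
        """Minimal Singular Spectrum Analysis: leading reconstructed components of y."""
        N, K = len(y), len(y) - window + 1
        X = np.column_stack([y[i:i + window] for i in range(K)])   # trajectory matrix
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        comps = []
        for k in range(n_components):
            Xk = s[k] * np.outer(U[:, k], Vt[k])
            comp, counts = np.zeros(N), np.zeros(N)
            for i in range(window):                                # diagonal averaging
                for j in range(K):
                    comp[i + j] += Xk[i, j]
                    counts[i + j] += 1
            comps.append(comp / counts)
        return comps

    # Hypothetical file of annual-mean Po discharges; parameters are illustrative.
    y = np.loadtxt("po_discharge_annual.txt")
    horizon = 15
    forecast = np.zeros(horizon)
    for comp in ssa_components(y, window=40, n_components=4):
        fit = AutoReg(comp, lags=5).fit()                          # AR model per component
        forecast += fit.predict(start=len(comp), end=len(comp) + horizon - 1)
    print(forecast)
    ```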

  4. Role of Extracellular miR-122 in Breast Cancer Metastasis

    DTIC Science & Technology

    2015-02-01

    endothelial, lung fibroblasts, and brain astrocyte niche cells through secreting exosomal miR-122, a miRNA whose level in the circulation predicts metastasis... Our results demonstrate that cancer cells are capable of influencing how niche cells metabolize glucose through exosome secretion of miR-122 and the... Keywords: exosome, miRNA, glucose metabolism, PKM2, GLUT1, niche adaption

  5. The U.S. Navy’s Arctic Roadmap: Adapting to Climate Change in the High North

    DTIC Science & Technology

    2011-05-01

    relative to baseline period 1951-1980, from: The Copenhagen Diagnosis, 2009... Why the Navy Cares: near-term increasing Arctic maritime...limiting factor... • Shipping, oil, & gas extraction to grow after 2030 • Tourism & maritime research will increase the most • Fishing to grow but only... Interagency collaboration; Earth System Prediction Capability; ONR initiatives; Navy engagement; USPACOM

  6. KC-135 Winglet Program Review

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The results of a joint NASA/USAF program to develop flight test winglets on a KC-135 aircraft are reviewed. The winglet development from concept through wind tunnel and flight tests is discussed. Predicted, wind tunnel, and flight test results are compared for the performance, loads and flutter characteristics of the winglets. The flight test winglets had a variable winglet cant and incidence angle capability which enabled a limited evaluation of the effects of these geometry changes.

  7. Initial Results from a Search for Lunar Radio Emission from Interactions of >= 10(exp 19) eV Neutrinos and Cosmic Rays

    NASA Technical Reports Server (NTRS)

    Gorham, P. W.; Liewer, K. M.; Naudet, C. J.

    2000-01-01

    Using the NASA Goldstone 70m antenna DSS 14 both singly and in coincidence with the 34 m antenna DSS 13 (21.7 km to the southeast), we have acquired approximately 12 hrs of livetime in a search for predicted pulsed radio emission from extremely-high energy cascades induced by neutrinos or cosmic rays in the lunar regolith. In about 4 hrs of single antenna observations, we reduced our sensitivity to impulsive terrestrial interference to a negligible level by use of a veto afforded by the unique capability of DSS 14. In the 8 hrs of dual-antenna observations, terrestrial interference is eliminated as a background. In both observing modes the thermal noise floor limits the sensitivity. We detected no events above statistical background. We report here initial limits based on these data which begin to constrain several predictions of the flux of EHE neutrinos.

  8. Modeling of microjoule and millijoule energy LIDARs with PMT/SiPM/APD detectors: a sensitivity analysis.

    PubMed

    Agishev, Ravil

    2018-05-10

    This paper demonstrates a renewed concept and applications of the generalized methodology for atmospheric light detection and ranging (LIDAR) capability prediction as a continuation of a series of our previous works, where the dimensionless parameterization appeared as a tool for comparing systems of a different scale, design, and applications. The modernized concept applied to microscale and milliscale LIDARs with relatively new silicon photomultiplier detectors and traditional photomultiplier tube and avalanche photodiode detectors allowed prediction of the remote sensing instruments' performance and limitations. Such a generalized, uniform, and objective concept is applied for evaluation of the increasingly popular class of limited-energy LIDARs using the best optical detectors, operating on different targets (back-scatter or topographic, static or dynamic) and under intense sky background conditions. It can be used in the LIDAR community to compare different instruments and select the most suitable and effective ones for specific applications.

  9. Electrical test prediction using hybrid metrology and machine learning

    NASA Astrophysics Data System (ADS)

    Breton, Mary; Chao, Robin; Muthinti, Gangadhara Raja; de la Peña, Abraham A.; Simon, Jacques; Cepler, Aron J.; Sendelbach, Matthew; Gaudiello, John; Emans, Susan; Shifrin, Michael; Etzioni, Yoav; Urenski, Ronen; Lee, Wei Ti

    2017-03-01

    Electrical test measurement in the back-end of line (BEOL) is crucial for wafer and die sorting as well as for comparing intended process splits. Any in-line, nondestructive technique in the process flow to accurately predict these measurements can significantly improve mean-time-to-detect (MTTD) of defects and improve cycle times for yield and process learning. Measuring after BEOL metallization is commonly done for process control and learning, particularly with scatterometry (also called OCD (Optical Critical Dimension)), which can solve for multiple profile parameters such as metal line height or sidewall angle and does so within patterned regions. This gives scatterometry an advantage over inline microscopy-based techniques, which provide top-down information, since such techniques can be insensitive to sidewall variations hidden under the metal fill of the trench. But when faced with correlation to electrical test measurements that are specific to the BEOL processing, both techniques face the additional challenge of sampling. Microscopy-based techniques are sampling-limited by their small probe size, while scatterometry is traditionally limited (for microprocessors) to scribe targets that mimic device ground rules but are not necessarily designed to be electrically testable. A solution to this sampling challenge lies in a fast reference-based machine learning capability that allows for direct OCD measurement of the electrically-testable structures, even when they are not OCD-compatible. By incorporating such direct OCD measurements, correlation to, and therefore prediction of, resistance of BEOL electrical test structures is significantly improved. Improvements in prediction capability for multiple types of in-die electrically-testable device structures are demonstrated. To further improve the quality of the prediction of the electrical resistance measurements, hybrid metrology using the OCD measurements as well as X-ray fluorescence (XRF) metrology is used. Hybrid metrology is the practice of combining information from multiple sources in order to enable or improve the measurement of one or more critical parameters. Here, the XRF measurements are used to detect subtle changes in barrier layer composition and thickness that can have second-order effects on the electrical resistance of the test structures. By accounting for such effects with the aid of the X-ray-based measurements, further improvement in the OCD correlation to electrical test measurements is achieved. Using both types of solution, fast reference-based machine learning on non-OCD-compatible test structures and hybrid metrology combining OCD with XRF technology, improvement in BEOL cycle-time learning could be accomplished through improved prediction capability.
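    A minimal sketch of the hybrid idea, combining OCD-like and XRF-like features in one regression against electrical-test resistance (synthetic data and a generic scikit-learn regressor stand in for the vendor-specific machine-learning tooling described above):

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import cross_val_score

    # Synthetic illustration: OCD profile parameters and XRF barrier-layer measurements
    # are concatenated into one feature vector per site and regressed against resistance.
    rng = np.random.default_rng(1)
    n_sites = 300
    ocd = rng.normal(size=(n_sites, 3))        # e.g., line height, CD, sidewall angle
    xrf = rng.normal(size=(n_sites, 2))        # e.g., barrier thickness, composition
    X = np.hstack([ocd, xrf])
    # synthetic resistance: dominated by geometry, perturbed by the barrier terms
    resistance = 50 - 4*ocd[:, 0] - 3*ocd[:, 1] + 1.5*xrf[:, 0] + rng.normal(0, 0.5, n_sites)

    model = GradientBoostingRegressor(random_state=0)
    r2_ocd_only = cross_val_score(model, ocd, resistance, cv=5, scoring="r2").mean()
    r2_hybrid = cross_val_score(model, X, resistance, cv=5, scoring="r2").mean()
    print(f"R^2 with OCD only: {r2_ocd_only:.3f}, with OCD+XRF: {r2_hybrid:.3f}")
    ```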

  10. NEXT Ion Thruster Performance Dispersion Analyses

    NASA Technical Reports Server (NTRS)

    Soulas, George C.; Patterson, Michael J.

    2008-01-01

    The NEXT ion thruster is a low specific mass, high performance thruster with a nominal throttling range of 0.5 to 7 kW. Numerous engineering model thrusters and one prototype model thruster have been manufactured and tested. Of significant importance to propulsion system performance are thruster-to-thruster performance dispersions. This type of information can provide a bandwidth of expected performance variations at both the thruster and the component level. Knowledge of these dispersions can be used to more conservatively predict thruster service life capability and thruster performance for mission planning, facilitate future thruster performance comparisons, and verify that power processor capabilities are compatible with the thruster design. This study compiles the test results of five engineering model thrusters and one flight-like thruster to determine unit-to-unit dispersions in thruster performance. Component-level performance dispersion analyses will include discharge chamber voltages, currents, and losses; accelerator currents, electron backstreaming limits, and perveance limits; and neutralizer keeper and coupling voltages and the spot-to-plume mode transition flow rates. Thruster-level performance dispersion analyses will include thrust efficiency.

  11. Integrated System Health Management (ISHM) for Test Stand and J-2X Engine: Core Implementation

    NASA Technical Reports Server (NTRS)

    Figueroa, Jorge F.; Schmalzel, John L.; Aguilar, Robert; Shwabacher, Mark; Morris, Jon

    2008-01-01

    ISHM capability enables a system to detect anomalies, determine causes and effects, predict future anomalies, and provide an integrated awareness of the health of the system to users (operators, customers, management, etc.). NASA Stennis Space Center, NASA Ames Research Center, and Pratt & Whitney Rocketdyne have implemented a core ISHM capability that encompasses the A1 Test Stand and the J-2X Engine. The implementation incorporates all aspects of ISHM, from anomaly detection (e.g., leaks) to root-cause analysis based on failure mode and effects analysis (FMEA), to a user interface for an integrated visualization of the health of the system (test stand and engine). The implementation provides a low functional capability level (FCL) in that it is populated with few algorithms and approaches for anomaly detection and with root-cause trees from a limited FMEA effort. However, it is a demonstration of a credible ISHM capability, and it is inherently designed for continuous and systematic augmentation of that capability. The ISHM capability is grounded in an integrating software environment used to create an ISHM model of the system. The ISHM model follows an object-oriented approach: it includes all elements of the system (from schematics) and provides for compartmentalized storage of information associated with each element. For instance, a sensor object contains a transducer electronic data sheet (TEDS) with information that might be used by algorithms and approaches for anomaly detection, diagnostics, etc. Similarly, a component, such as a tank, contains a Component Electronic Data Sheet (CEDS). Each element also includes a Health Electronic Data Sheet (HEDS) that contains health-related information such as anomalies and health state. Some practical aspects of the implementation include: (1) near real-time data flow from the test stand data acquisition system through the ISHM model, for near real-time detection of anomalies and diagnostics; (2) insertion of the J-2X predictive model, providing predicted sensor values for comparison with measured values and for use in anomaly detection and diagnostics; and (3) insertion of third-party anomaly detection algorithms into the integrated ISHM model.

  12. High-Throughput Models for Exposure-Based Chemical ...

    EPA Pesticide Factsheets

    The United States Environmental Protection Agency (U.S. EPA) must characterize potential risks to human health and the environment associated with the manufacture and use of thousands of chemicals. High-throughput screening (HTS) for biological activity allows the ToxCast research program to prioritize chemical inventories for potential hazard. Similar capabilities for estimating exposure potential would support rapid risk-based prioritization for chemicals with limited information; here, we propose a framework for high-throughput exposure assessment. To demonstrate application, an analysis was conducted that predicts human exposure potential for chemicals and estimates uncertainty in these predictions by comparison to biomonitoring data. We evaluated 1936 chemicals using far-field mass balance human exposure models (USEtox and RAIDAR) and an indicator for indoor and/or consumer use. These predictions were compared to exposures inferred by Bayesian analysis from urine concentrations for 82 chemicals reported in the National Health and Nutrition Examination Survey (NHANES). Joint regression on all factors provided a calibrated consensus prediction, the variance of which serves as an empirical determination of uncertainty for prioritization on absolute exposure potential. Information on use was found to be most predictive; generally, chemicals above the limit of detection in NHANES had consumer/indoor use. Coupled with hazard HTS, exposure HTS can place risk earlier in the chemical prioritization process.
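
    The calibration step described above can be illustrated with a minimal regression sketch: model predictions and a use indicator are regressed against exposures inferred from biomonitoring, and the residual variance provides an empirical uncertainty for new chemicals. The predictors, coefficients, and data below are synthetic assumptions, not the EPA analysis.

```python
# Hedged sketch of the calibration idea (not the EPA implementation): regress
# exposures inferred from biomonitoring on several high-throughput predictors,
# then use the residual variance as an empirical uncertainty for new chemicals.
# Predictor names and the synthetic data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 82  # chemicals with biomonitoring-inferred exposures

log_model_a = rng.normal(-6.0, 1.5, n)             # e.g., far-field model A prediction
log_model_b = log_model_a + rng.normal(0, 1, n)    # e.g., far-field model B prediction
indoor_use = rng.integers(0, 2, n).astype(float)   # indoor/consumer-use indicator

# Synthetic "inferred" exposures: the use indicator dominates, models contribute weakly
log_inferred = 0.2 * log_model_a + 0.1 * log_model_b + 2.0 * indoor_use \
               + rng.normal(0.0, 1.0, n) - 5.0

X = np.column_stack([np.ones(n), log_model_a, log_model_b, indoor_use])
coef, *_ = np.linalg.lstsq(X, log_inferred, rcond=None)
residuals = log_inferred - X @ coef
sigma = residuals.std(ddof=X.shape[1])  # empirical prediction uncertainty

print("calibration coefficients:", np.round(coef, 2))
print(f"residual sd (log10 units): {sigma:.2f}")

# Consensus prediction for a new chemical (hypothetical inputs)
x_new = np.array([1.0, -5.5, -5.0, 1.0])
print(f"consensus log exposure: {x_new @ coef:.2f} +/- {1.96 * sigma:.2f}")
```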

  13. Evaluation of the heat transfer module (FAHT) of Failure Analysis Nonlinear Thermal And Structural Integrated Code (FANTASTIC)

    NASA Technical Reports Server (NTRS)

    Keyhani, Majid

    1989-01-01

    The heat transfer module of the FANTASTIC code (FAHT) is studied and evaluated to the extent possible during the ten-week duration of this project. A brief background of the previous studies is given, and the governing equations as modeled in FAHT are discussed. FAHT's capabilities and limitations, based on these equations and its coding methodology, are explained in detail. It is established that, with an improper choice of element size and time step, FAHT's temperature field prediction at some nodes will fall below the initial condition. The source of this unrealistic temperature prediction is identified and a procedure is proposed for avoiding this phenomenon. It is further shown that the proposed procedure will converge to an accurate prediction upon mesh refinement. Unfortunately, due to lack of time, FAHT's ability to accurately account for pyrolysis and surface ablation has not been verified. Therefore, at the present time it can be stated with confidence that FAHT can accurately predict the temperature field for a transient, multi-dimensional, orthotropic material with directionally dependent, variable properties and nonlinear boundary conditions. Such a prediction will provide an upper limit for the temperature field in an ablating, decomposing nozzle liner. The pore pressure field, however, will not be known.

  14. The genome editing toolbox: a spectrum of approaches for targeted modification.

    PubMed

    Cheng, Joseph K; Alper, Hal S

    2014-12-01

    The increase in quality, quantity, and complexity of recombinant products heavily drives the need to predictably engineer model and complex (mammalian) cell systems. However, until recently, limited tools offered the ability to precisely manipulate their genomes, thus impeding the full potential of rational cell line development processes. Targeted genome editing can combine the advances in synthetic and systems biology with current cellular hosts to further push productivity and expand the product repertoire. This review highlights recent advances in targeted genome editing techniques, discussing some of their capabilities and limitations and their potential to aid advances in pharmaceutical biotechnology. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Acoustic Predictions of Manned and Unmanned Rotorcraft Using the Comprehensive Analytical Rotorcraft Model for Acoustics (CARMA) Code System

    NASA Technical Reports Server (NTRS)

    Boyd, D. Douglas, Jr.; Burley, Casey L.; Conner, David A.

    2005-01-01

    The Comprehensive Analytical Rotorcraft Model for Acoustics (CARMA) is being developed under the Quiet Aircraft Technology Project within the NASA Vehicle Systems Program. The purpose of CARMA is to provide analysis tools for the design and evaluation of efficient low-noise rotorcraft, as well as to support the development of safe, low-noise flight operations. The baseline prediction system of CARMA is presented and current capabilities are illustrated for a model rotor in a wind tunnel, a rotorcraft in flight and for a notional coaxial rotor configuration; however, a complete validation of the CARMA system capabilities with respect to a variety of measured databases is beyond the scope of this work. For the model rotor illustration, predicted rotor airloads and acoustics for a BO-105 model rotor are compared to test data from HART-II. For the flight illustration, acoustic data from an MD-520N helicopter flight test, which was conducted at Eglin Air Force Base in September 2003, are compared with CARMA full vehicle flight predictions. Predicted acoustic metrics at three microphone locations are compared for limited level flight and descent conditions. Initial acoustic predictions using CARMA for a notional coaxial rotor system are made. The effect of increasing the vertical separation between the rotors on the predicted airloads and acoustic results is shown for both aerodynamically non-interacting and aerodynamically interacting rotors. The sensitivity of including the aerodynamic interaction effects of each rotor on the other, especially when the rotors are in close proximity to one another, is initially examined. The predicted coaxial rotor noise is compared to that of a conventional single rotor system of equal thrust, where both are of reasonable size for an unmanned aerial vehicle (UAV).

  16. Optimal design and critical analysis of a high-resolution video plenoptic demonstrator

    NASA Astrophysics Data System (ADS)

    Drazic, Valter; Sacré, Jean-Jacques; Schubert, Arno; Bertrand, Jérôme; Blondé, Etienne

    2012-01-01

    A plenoptic camera is a natural multiview acquisition device, also capable of measuring distances by correlating a set of images acquired under different parallaxes. Its single-lens, single-sensor architecture has two downsides: limited resolution and limited depth sensitivity. As a first step, and in order to circumvent those shortcomings, we investigated how the basic design parameters of a plenoptic camera optimize both the resolution of each view and its depth-measuring capability. In a second step, we built a prototype based on a very high resolution Red One® movie camera with an external plenoptic adapter and a relay lens. The prototype delivered five video views of 820 × 410 pixels each. The main limitation in our prototype is view crosstalk due to optical aberrations that reduce the depth accuracy performance. We simulated some limiting optical aberrations and predicted their impact on the performance of the camera. In addition, we developed adjustment protocols, based on a simple pattern, and analysis programs that investigated the view mapping and the amount of parallax crosstalk on the sensor on a pixel basis. The results of these developments enabled us to adjust the lenslet array with submicrometer precision and to mark the pixels of the sensor where the views do not register properly.

  17. The National Transonic Facility: A Research Retrospective

    NASA Technical Reports Server (NTRS)

    Wahls, R. A.

    2001-01-01

    An overview of the National Transonic Facility (NTF) from a research utilization perspective is provided. The facility was born in the 1970s from an internationally recognized need for a high Reynolds number test capability based on previous experiences with preflight predictions of aerodynamic characteristics and an anticipated need in support of research and development for future aerospace vehicle systems. Selection of the cryogenic concept to meet the need, unique capabilities of the facility, and the eventual research utilization of the facility are discussed. The primary purpose of the paper is to expose the range of investigations that have used the NTF since being declared operational in late 1984; limited research results are included, though many more can be found in the references.

  18. State variable theories based on Hart's formulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korhonen, M.A.; Hannula, S.P.; Li, C.Y.

    In this paper a review of the development of a state variable theory for nonelastic deformation is given. The physical and phenomenological basis of the theory and the constitutive equations describing macroplastic, microplastic, anelastic and grain boundary sliding enhanced deformation are presented. The experimental and analytical evaluation of different parameters in the constitutive equations are described in detail followed by a review of the extensive experimental work on different materials. The technological aspects of the state variable approach are highlighted by examples of the simulative and predictive capabilities of the theory. Finally, a discussion of general capabilities, limitations and future developments of the theory and particularly the possible extensions to cover an even wider range of deformation or deformation-related phenomena is presented.

  19. On the limits of statistical learning: Intertrial contextual cueing is confined to temporally close contingencies.

    PubMed

    Thomas, Cyril; Didierjean, André; Maquestiaux, François; Goujon, Annabelle

    2018-04-12

    Since the seminal study by Chun and Jiang (Cognitive Psychology, 36, 28-71, 1998), a large body of research based on the contextual-cueing paradigm has shown that the cognitive system is capable of extracting statistical contingencies from visual environments. Most of these studies have focused on how individuals learn regularities found within an intratrial temporal window: A context predicts the target position within a given trial. However, Ono, Jiang, and Kawahara (Journal of Experimental Psychology, 31, 703-712, 2005) provided evidence of an intertrial implicit-learning effect when a distractor configuration in preceding trials N - 1 predicted the target location in trials N. The aim of the present study was to gain further insight into this effect by examining whether it occurs when predictive relationships are impeded by interfering task-relevant noise (Experiments 2 and 3) or by a long delay (Experiments 1, 4, and 5). Our results replicated the intertrial contextual-cueing effect, which occurred in the condition of temporally close contingencies. However, there was no evidence of integration across long-range spatiotemporal contingencies, suggesting a temporal limitation of statistical learning.

  20. Spatial predictive mapping using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Noack, S.; Knobloch, A.; Etzold, S. H.; Barth, A.; Kallmeier, E.

    2014-11-01

    The modelling or prediction of complex geospatial phenomena (like the formation of geo-hazards) is one of the most important tasks for geoscientists. In practice, however, it faces various difficulties, caused mainly by the complexity of the relationships between the phenomena themselves and the controlling parameters, as well as by limitations of our knowledge about the nature of the physical/mathematical relationships and by restrictions regarding the accuracy and availability of data. In this situation, methods of artificial intelligence, such as artificial neural networks (ANN), offer a meaningful alternative to exact mathematical modelling. In the past, the application of ANN technologies in geosciences was limited primarily by the difficulty of integrating them into geo-data processing workflows. Against this background, the software advangeo® was developed to provide a typical GIS user with a powerful tool for using ANNs for prediction mapping and data preparation within the standard ESRI ArcGIS environment. In many case studies, such as land use planning, geo-hazards analysis and prevention, mineral potential mapping, and agriculture and forestry, advangeo® has shown its capabilities and strengths. The approach is able to add considerable value to existing data.
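
    The general workflow, stacking evidence rasters, training a small network on sampled cells, and predicting a value for every grid cell, can be sketched as follows; this is a generic illustration assuming synthetic layers and scikit-learn's MLPRegressor, not the advangeo® software or its API.

```python
# Generic sketch of ANN-based spatial prediction (not the advangeo(R) software):
# stack evidence rasters into a feature matrix, train a small neural network on
# sampled cells, and predict a continuous "potential" value for every grid cell.
# The raster layers, target definition, and network size are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
ny, nx = 50, 50

# Hypothetical evidence layers: slope, distance to fault, geochemical anomaly
slope = rng.random((ny, nx))
dist_fault = rng.random((ny, nx))
geochem = rng.random((ny, nx))
layers = np.stack([slope, dist_fault, geochem], axis=-1)

# Synthetic "potential" value derived from the layers (stand-in for training labels)
features = layers.reshape(-1, 3)
target = 0.6 * features[:, 2] - 0.3 * features[:, 1] + 0.1 * features[:, 0]
train_idx = rng.choice(features.shape[0], size=300, replace=False)  # sampled training cells

model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(features[train_idx], target[train_idx])

# Predict for every cell and reshape back to the map grid
prediction_map = model.predict(features).reshape(ny, nx)
print("predicted potential, min/max:", prediction_map.min(), prediction_map.max())
```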

  1. Modeling and predicting intertidal variations of the salinity field in the Bay/Delta

    USGS Publications Warehouse

    Knowles, Noah; Uncles, Reginald J.

    1995-01-01

    One approach to simulating daily to monthly variability in the bay is the development of an intertidal model using tidally-averaged equations and a time step on the order of a day. An intertidal numerical model of the bay's physics, capable of portraying seasonal and inter-annual variability, would have several uses. Observations are limited in time and space, so simulation could help fill the gaps. Also, the ability to simulate multi-year episodes (e.g., an extended drought) could provide insight into the response of the ecosystem to such events. Finally, such a model could be used in a forecast mode wherein predicted delta flow is used as model input and the predicted salinity distribution is output, with estimates available days to months in advance. This note briefly introduces such a tidally-averaged model (Uncles and Peterson, in press) and a corresponding predictive scheme for baywide forecasting.

  2. Study on model current predictive control method of PV grid- connected inverters systems with voltage sag

    NASA Astrophysics Data System (ADS)

    Jin, N.; Yang, F.; Shang, S. Y.; Tao, T.; Liu, J. S.

    2016-08-01

    Given the limitations of the low voltage ride through (LVRT) technology of traditional photovoltaic inverters, this paper proposes an LVRT control method based on model current predictive control (MCPC). This method can effectively improve the photovoltaic inverter's output characteristics and response speed. In the MCPC method designed for the photovoltaic grid-connected inverter, the sum of the absolute values of the errors between the predicted currents and the given reference currents is adopted as the cost function of the model predictive control. According to this cost function, the optimal space voltage vector is selected. The photovoltaic inverter automatically switches between two control modes, giving priority to active or reactive power control according to the operating state, which effectively improves the inverter's LVRT capability. The simulation and experimental results prove that the proposed method is correct and effective.
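
    A minimal sketch of the MCPC selection loop is given below: for each admissible switching state of a two-level inverter, the next-step current is predicted from a simple L-R grid-filter model and the state minimizing the absolute current error is chosen. The circuit parameters, frame transformation, and reference values are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of model current predictive control (MCPC) for a two-level
# grid-connected inverter, assuming a simple L-R grid-filter model. Parameter
# values and the cost-function details are illustrative assumptions, not the
# paper's exact implementation.
import numpy as np

Vdc, L, R, Ts = 700.0, 5e-3, 0.1, 1e-4   # DC link (V), filter L (H), R (ohm), step (s)

# The 8 output switching states of a two-level inverter
switch_states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

def alpha_beta(sa, sb, sc):
    """Phase voltages of a switching state mapped to the alpha-beta frame."""
    va, vb, vc = (Vdc * s for s in (sa, sb, sc))
    return np.array([(2 * va - vb - vc) / 3.0, (vb - vc) / np.sqrt(3.0)])

def predict_current(i_now, v_inv, e_grid):
    """One-step forward-Euler prediction of the filter current."""
    return i_now + (Ts / L) * (v_inv - e_grid - R * i_now)

def select_vector(i_now, i_ref, e_grid):
    """Pick the switching state minimizing the absolute current error."""
    best_state, best_cost = None, np.inf
    for state in switch_states:
        i_pred = predict_current(i_now, alpha_beta(*state), e_grid)
        cost = np.sum(np.abs(i_ref - i_pred))   # |error| cost, as in MCPC
        if cost < best_cost:
            best_state, best_cost = state, cost
    return best_state, best_cost

i_now = np.array([5.0, 0.0])          # measured current (alpha, beta)
i_ref = np.array([8.0, 2.0])          # reference current (e.g., reactive priority)
e_grid = np.array([150.0, 80.0])      # grid voltage during a sag (alpha, beta)
print(select_vector(i_now, i_ref, e_grid))
```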

  3. Improving the Accuracy of Predicting Maximal Oxygen Consumption (VO2pk)

    NASA Technical Reports Server (NTRS)

    Downs, Meghan E.; Lee, Stuart M. C.; Ploutz-Snyder, Lori; Feiveson, Alan

    2016-01-01

    Maximal oxygen consumption (VO2pk) is the maximum amount of oxygen that the body can use during intense exercise and is used for benchmarking endurance exercise capacity. The most accurate method to determine VO2pk requires continuous measurements of ventilation and gas exchange during an exercise test to maximal effort, which necessitates expensive equipment, a trained staff, and time to set up the equipment. For astronauts, accurate VO2pk measures are important to assess mission-critical task performance capabilities and to prescribe exercise intensities to optimize performance. Currently, astronauts perform submaximal exercise tests during flight to predict VO2pk; however, while submaximal VO2pk prediction equations provide reliable estimates of mean VO2pk for populations, they can be unacceptably inaccurate for a given individual. The error in current predictions and the logistical limitations of measuring VO2pk, particularly during spaceflight, highlight the need for improved estimation methods.

  4. Enhanced capabilities and modified users manual for axial-flow compressor conceptual design code CSPAN

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.; Lavelle, Thomas M.

    1995-01-01

    Modifications made to the axial-flow compressor conceptual design code CSPAN are documented in this report. Endwall blockage and stall margin predictions were added. The loss-coefficient model was upgraded. Default correlations for rotor and stator solidity and aspect-ratio inputs and for stator-exit tangential velocity inputs were included in the code along with defaults for aerodynamic design limits. A complete description of input and output along with sample cases are included.

  5. Advanced planning activity. [for interplanetary flight and space exploration

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Selected mission concepts for interplanetary exploration through 1985 were examined, including: (1) Jupiter orbiter performance characteristics; (2) solar electric propulsion missions to Mercury, Venus, Neptune, and Uranus; (3) space shuttle planetary missions; (4) Pioneer entry probes to Saturn and Uranus; (5) rendezvous with Comet Kohoutek and Comet Encke; (6) space tug capabilities; and (7) a Pioneer mission to Mars in 1979. Mission options, limitations, and performance predictions are assessed, along with probable configurational, boost, and propulsion requirements.

  6. Biomedical systems analysis program

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Biomedical monitoring programs which were developed to provide a system analysis context for a unified hypothesis for adaptation to space flight are presented and discussed. A real-time system of data analysis and decision making to assure the greatest possible crew safety and mission success is described. Information about man's abilities, limitations, and characteristic reactions to weightless space flight was analyzed and simulation models were developed. The predictive capabilities of simulation models for fluid-electrolyte regulation, erythropoiesis regulation, and calcium regulation are discussed.

  7. Large-Scale Chemical Similarity Networks for Target Profiling of Compounds Identified in Cell-Based Chemical Screens

    PubMed Central

    Lo, Yu-Chen; Senese, Silvia; Li, Chien-Ming; Hu, Qiyang; Huang, Yong; Damoiseaux, Robert; Torres, Jorge Z.

    2015-01-01

    Target identification is one of the most critical steps following cell-based phenotypic chemical screens aimed at identifying compounds with potential uses in cell biology and for developing novel disease therapies. Current in silico target identification methods, including chemical similarity database searches, are limited to single or sequential ligand analysis and have limited capabilities for accurate deconvolution of a large number of compounds with diverse chemical structures. Here, we present CSNAP (Chemical Similarity Network Analysis Pulldown), a new computational target identification method that utilizes chemical similarity networks for large-scale chemotype (consensus chemical pattern) recognition and drug target profiling. Our benchmark study showed that CSNAP can achieve an overall higher accuracy (>80%) of target prediction with respect to representative chemotypes in large (>200) compound sets, in comparison to the SEA approach (60–70%). Additionally, CSNAP is capable of integrating with biological knowledge-based databases (Uniprot, GO) and high-throughput biology platforms (proteomic, genetic, etc.) for system-wide drug target validation. To demonstrate the utility of the CSNAP approach, we combined CSNAP's target prediction with experimental ligand evaluation to identify the major mitotic targets of hit compounds from a cell-based chemical screen, and we highlight novel compounds targeting microtubules, an important cancer therapeutic target. The CSNAP method is freely available and can be accessed from the CSNAP web server (http://services.mbi.ucla.edu/CSNAP/). PMID:25826798
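
    The core network idea, linking compounds whose fingerprint similarity exceeds a cutoff and annotating a query by its annotated neighbors, can be sketched in a few lines; the toy fingerprints, targets, and cutoff below are assumptions for illustration and do not reproduce the CSNAP server.

```python
# Simplified sketch of the chemical-similarity-network idea behind CSNAP (not
# the CSNAP server itself): connect compounds whose fingerprint Tanimoto
# similarity exceeds a cutoff, then annotate query compounds by the most common
# target among their annotated neighbors. Fingerprints and targets are toy data.
from collections import Counter

fingerprints = {            # toy binary fingerprints as sets of "on" bits
    "query":  {1, 2, 3, 5, 8},
    "drug_a": {1, 2, 3, 5, 9},    # known tubulin binder (assumed)
    "drug_b": {1, 2, 4, 5, 8},    # known tubulin binder (assumed)
    "drug_c": {10, 11, 12, 13},   # known kinase inhibitor (assumed)
}
known_targets = {"drug_a": "tubulin", "drug_b": "tubulin", "drug_c": "kinase"}

def tanimoto(a, b):
    """Tanimoto (Jaccard) similarity of two bit sets."""
    return len(a & b) / len(a | b)

def predict_target(query, cutoff=0.5):
    """Majority vote over targets of annotated neighbors above the cutoff."""
    votes = Counter()
    for name, target in known_targets.items():
        if tanimoto(fingerprints[query], fingerprints[name]) >= cutoff:
            votes[target] += 1
    return votes.most_common(1)[0][0] if votes else None

print(predict_target("query"))   # -> 'tubulin'
```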

  8. Classification and disease prediction via mathematical programming

    NASA Astrophysics Data System (ADS)

    Lee, Eva K.; Wu, Tsung-Lin

    2007-11-01

    In this chapter, we present classification models based on mathematical programming approaches. We first provide an overview of various mathematical programming approaches, including linear programming, mixed integer programming, nonlinear programming and support vector machines. Next, we present our development of novel optimization-based classification models that are general purpose and suitable for developing predictive rules for large heterogeneous biological and medical data sets. Our predictive model simultaneously incorporates (1) the ability to classify any number of distinct groups; (2) the ability to incorporate heterogeneous types of attributes as input; (3) a high-dimensional data transformation that eliminates noise and errors in biological data; (4) the ability to incorporate constraints to limit the rate of misclassification, and a reserved-judgment region that provides a safeguard against over-training (which tends to lead to high misclassification rates from the resulting predictive rule) and (5) successive multi-stage classification capability to handle data points placed in the reserved-judgment region. To illustrate the power and flexibility of the classification model and solution engine, and its multigroup prediction capability, application of the predictive model to a broad class of biological and medical problems is described. Applications include: the differential diagnosis of the type of erythemato-squamous diseases; predicting presence/absence of heart disease; genomic analysis and prediction of aberrant CpG island methylation in human cancer; discriminant analysis of motility and morphology data in human lung carcinoma; prediction of ultrasonic cell disruption for drug delivery; identification of tumor shape and volume in treatment of sarcoma; multistage discriminant analysis of biomarkers for prediction of early atherosclerosis; fingerprinting of native and angiogenic microvascular networks for early diagnosis of diabetes, aging, macular degeneration and tumor metastasis; prediction of protein localization sites; and pattern recognition of satellite images in classification of soil types. In all these applications, the predictive model yields correct classification rates ranging from 80% to 100%. This provides motivation for pursuing its use as a medical diagnostic, monitoring and decision-making tool.
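
    As a small, hedged illustration of the mathematical-programming flavor of such classifiers (not the authors' multigroup reserved-judgment model), the sketch below trains a two-group linear classifier by solving a linear program that minimizes total misclassification slack; the data, bounds, and solver choice are assumptions.

```python
# Hedged sketch of a linear-programming classifier in the spirit of the
# mathematical-programming models discussed above (not the authors' model):
# minimize total misclassification slack for two groups subject to linear
# separation constraints. Data and variable bounds are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1.0, (20, 2)), rng.normal(3.0, 1.0, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
n, d = X.shape

# Decision variables: w (d), b (1), slack s (n); objective = sum of slacks
c = np.concatenate([np.zeros(d + 1), np.ones(n)])

# Constraint y_i (w.x_i + b) >= 1 - s_i  ->  -y_i x_i.w - y_i b - s_i <= -1
A_ub = np.hstack([-(y[:, None] * X), -y[:, None], -np.eye(n)])
b_ub = -np.ones(n)

bounds = [(-10, 10)] * (d + 1) + [(0, None)] * n   # box w, b; slacks >= 0
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")

w, b = res.x[:d], res.x[d]
accuracy = np.mean(np.sign(X @ w + b) == y)
print(f"training accuracy: {accuracy:.2f}, total slack: {res.fun:.3f}")
```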

  9. Estimating power capability of aged lithium-ion batteries in presence of communication delays

    NASA Astrophysics Data System (ADS)

    Fridholm, Björn; Wik, Torsten; Kuusisto, Hannes; Klintberg, Anton

    2018-04-01

    Efficient control of electrified powertrains requires accurate estimation of the power capability of the battery for the next few seconds into the future. When implemented in a vehicle, the power estimation is part of a control loop that may contain several networked controllers, which introduces time delays that may jeopardize stability. In this article, we present and evaluate an adaptive power estimation method that can robustly handle uncertain health status and time delays. A theoretical analysis shows that stability of the closed-loop system can be lost if the resistance of the model is under-estimated. Stability can, however, be restored by filtering the estimated power at the expense of a slightly reduced bandwidth of the signal. The adaptive algorithm is experimentally validated in lab tests using an aged lithium-ion cell subject to a high power load profile at temperatures from -20 to +25 °C. The upper voltage limit was set to 4.15 V and the lower voltage limit to 2.6 V, where significant non-linearities occur and the validity of the model is limited. After an initial transient, when the model parameters are adapted, the prediction accuracy is within ± 2 % of the actually available power.
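
    The voltage-limited power estimate at the heart of such methods can be sketched with a purely ohmic cell model, as below; the parameter values, filter constant, and horizon handling are illustrative assumptions rather than the adaptive algorithm evaluated in the article.

```python
# Minimal sketch of voltage-limited power capability estimation using a simple
# ohmic equivalent-circuit model; the parameter values and the filter constant
# are illustrative assumptions, not those of the adaptive algorithm above.
V_MAX, V_MIN = 4.15, 2.6     # voltage limits from the test setup (V)

def power_capability(ocv, resistance, v_limit=V_MIN):
    """Largest constant discharge power that keeps the terminal voltage
    at or above v_limit for an ohmic cell model: v = ocv - R * i."""
    i_max = (ocv - v_limit) / resistance
    return v_limit * i_max

def filtered_power(p_raw, p_prev, alpha=0.2):
    """First-order low-pass filter on the estimated power, trading a little
    bandwidth for robustness to delays and resistance under-estimation."""
    return alpha * p_raw + (1.0 - alpha) * p_prev

ocv, r_aged = 3.7, 0.05      # open-circuit voltage (V), aged-cell resistance (ohm)
p = power_capability(ocv, r_aged)
print(f"raw discharge power capability: {p:.0f} W")
print(f"filtered estimate: {filtered_power(p, p_prev=50.0):.0f} W")
```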

  10. On the Conditioning of Machine-Learning-Assisted Turbulence Modeling

    NASA Astrophysics Data System (ADS)

    Wu, Jinlong; Sun, Rui; Wang, Qiqi; Xiao, Heng

    2017-11-01

    Recently, several researchers have demonstrated that machine learning techniques can be used to improve the RANS-modeled Reynolds stress by training on available databases of high-fidelity simulations. However, obtaining an improved mean velocity field remains an unsolved challenge, restricting the predictive capability of current machine-learning-assisted turbulence modeling approaches. In this work we define a condition number to evaluate the model conditioning of data-driven turbulence modeling approaches, and propose a stability-oriented machine learning framework for modeling the Reynolds stress. Two canonical flows, the flow in a square duct and the flow over periodic hills, are investigated to demonstrate the predictive capability of the proposed framework. The satisfactory prediction of the mean velocity field for both flows demonstrates the predictive capability of the proposed framework for machine-learning-assisted turbulence modeling. By showing that the prediction of the mean flow field can be improved, the proposed stability-oriented machine learning framework bridges the gap between existing machine-learning-assisted turbulence modeling approaches and the demand for predictive capability of turbulence models in real applications.
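
    The notion of a model condition number can be illustrated as the ratio of the relative change in the propagated output field to the relative perturbation of the modeled input, as in the sketch below; the linear toy propagation operator stands in for a RANS solver and is an assumption, not the authors' formulation.

```python
# Hedged sketch of the idea of a model condition number: the ratio of the
# relative change in the propagated mean-velocity field to the relative
# perturbation in the modeled Reynolds stress. The linear toy "propagation
# operator" below is an illustrative stand-in, not a RANS solver.
import numpy as np

rng = np.random.default_rng(4)
n = 50
A = rng.normal(size=(n, n)) + 5.0 * np.eye(n)    # toy linear propagation operator

def propagate(tau):
    """Map a toy Reynolds-stress vector to a toy mean-velocity vector."""
    return np.linalg.solve(A, tau)

def empirical_condition_number(tau, rel_perturbation=1e-3):
    """Ratio of relative output change to relative input perturbation."""
    d_tau = rng.normal(size=tau.shape)
    d_tau *= rel_perturbation * np.linalg.norm(tau) / np.linalg.norm(d_tau)
    u, u_pert = propagate(tau), propagate(tau + d_tau)
    rel_out = np.linalg.norm(u_pert - u) / np.linalg.norm(u)
    rel_in = np.linalg.norm(d_tau) / np.linalg.norm(tau)
    return rel_out / rel_in

tau = rng.normal(size=n)
print(f"empirical condition number: {empirical_condition_number(tau):.2f}")
print(f"classical matrix condition number (upper bound): {np.linalg.cond(A):.2f}")
```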

  11. DEM modeling of ball mills with experimental validation: influence of contact parameters on charge motion and power draw

    NASA Astrophysics Data System (ADS)

    Boemer, Dominik; Ponthot, Jean-Philippe

    2017-01-01

    Discrete element method simulations of a 1:5-scale laboratory ball mill are presented in this paper to study the influence of the contact parameters on the charge motion and the power draw. The position density limit is introduced as an efficient mathematical tool to describe and to compare the macroscopic charge motion in different scenarios, i.a. with different values of the contact parameters. While the charge motion and the power draw are relatively insensitive to the stiffness and the damping coefficient of the linear spring-slider-damper contact law, the coefficient of friction has a strong influence since it controls the sliding propensity of the charge. Based on the experimental calibration and validation by charge motion photographs and power draw measurements, the descriptive and predictive capabilities of the position density limit and the discrete element method are demonstrated, i.e. the real position of the charge is precisely delimited by the respective position density limit and the power draw can be predicted with an accuracy of about 5 %.

  12. Reverse Engineering Crosswind Limits - A New Flight Test Technique?

    NASA Technical Reports Server (NTRS)

    Asher, Troy A.; Williams, Timothy L.; Strovers, Brian K.

    2013-01-01

    During modification of a Gulfstream III test bed aircraft for an experimental flap project, all roll spoiler hardware had to be removed to accommodate the test article. In addition to evaluating the effects on performance and flying qualities resulting from the modification, the test team had to determine crosswind limits for an airplane previously certified with roll spoilers. Predictions for the modified aircraft indicated the maximum amount of steady state sideslip available during the approach and landing phase would be limited by aileron authority rather than by rudder. Operating out of a location that tends to be very windy, an arbitrary and conservative wind limit would have either been overly restrictive or potentially unsafe if chosen poorly. When determining a crosswind limit, how much reserve roll authority was necessary? Would the aircraft, as configured, have suitable handling qualities for long-term use as a flying test bed? To answer these questions, the test team combined two typical flight test techniques into a new maneuver called the sideslip-to-bank maneuver, and was able to gather flying qualities data, evaluate aircraft response and measure trends for various crosswind scenarios. This paper will describe the research conducted, the maneuver, flight conditions, predictions, and results from this in-flight evaluation of crosswind capability.

  13. A discrete element method-based approach to predict the breakage of coal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, Varun; Sun, Xin; Xu, Wei

    Pulverization is an essential pre-combustion technique employed for solid fuels, such as coal, to reduce particle sizes. Smaller particles ensure rapid and complete combustion, leading to low carbon emissions. Traditionally, the resulting particle size distributions from pulverizers have been determined by empirical or semi-empirical approaches that rely on extensive data gathered over several decades during operations or experiments, with limited predictive capabilities for new coals and processes. Our work presents a Discrete Element Method (DEM)-based computational approach to model coal particle breakage with experimentally characterized coal physical properties. We also examined the effect of select operating parameters on the breakage behavior of coal particles.

  14. A discrete element method-based approach to predict the breakage of coal

    DOE PAGES

    Gupta, Varun; Sun, Xin; Xu, Wei; ...

    2017-08-05

    Pulverization is an essential pre-combustion technique employed for solid fuels, such as coal, to reduce particle sizes. Smaller particles ensure rapid and complete combustion, leading to low carbon emissions. Traditionally, the resulting particle size distributions from pulverizers have been determined by empirical or semi-empirical approaches that rely on extensive data gathered over several decades during operations or experiments, with limited predictive capabilities for new coals and processes. Our work presents a Discrete Element Method (DEM)-based computational approach to model coal particle breakage with experimentally characterized coal physical properties. We also examined the effect of select operating parameters on the breakage behavior of coal particles.

  15. An evaluation of HEMT potential for millimeter-wave signal sources using interpolation and harmonic balance techniques

    NASA Technical Reports Server (NTRS)

    Kwon, Youngwoo; Pavlidis, Dimitris; Tutt, Marcel N.

    1991-01-01

    A large-signal analysis method based on a harmonic balance technique and a 2-D cubic spline interpolation function has been developed and applied to the prediction of InP-based HEMT oscillator performance for frequencies extending up to the submillimeter-wave range. The large-signal analysis method uses a limited number of DC and small-signal S-parameter data and allows the accurate characterization of HEMT large-signal behavior. The method has been validated experimentally using load-pull measurements. Oscillation frequency, power performance, and load requirements are discussed, with an operation capability of 300 GHz predicted using state-of-the-art devices (fmax is approximately equal to 450 GHz).

  16. Numerical Predictions of Wind Turbine Power and Aerodynamic Loads for the NREL Phase II and IV Combined Experiment Rotor

    NASA Technical Reports Server (NTRS)

    Duque, Earl P. N.; Johnson, Wayne; vanDam, C. P.; Chao, David D.; Cortes, Regina; Yee, Karen

    1999-01-01

    Accurate, reliable and robust numerical predictions of wind turbine rotor power remain a challenge to the wind energy industry. The literature reports various methods that compare predictions to experiments. The methods vary from Blade Element Momentum Theory (BEM) and Vortex Lattice (VL) methods to variants of Reynolds-averaged Navier-Stokes (RaNS). The BEM and VL methods consistently show discrepancies in predicting rotor power at higher wind speeds, mainly due to inadequacies with inboard stall and stall delay models. The RaNS methodologies show promise in predicting blade stall. However, inaccurate rotor vortex wake convection, boundary layer turbulence modeling and grid resolution have limited their accuracy. In addition, the inherently unsteady stalled flow conditions become computationally expensive for even the best endowed research labs. Although numerical power predictions have been compared to experiment, the availability of good wind turbine data sufficient for code validation remains limited. This paper uses experimental data extracted from the IEA Annex XIV download site for the NREL Combined Experiment phase II and phase IV rotor. In addition, the comparisons will show data that have been further reduced into steady wind and zero yaw conditions suitable for comparison to "steady wind" rotor power predictions. In summary, the paper will present and discuss the capabilities and limitations of the three numerical methods and make available a database of experimental data suitable to help other numerical methods practitioners validate their own work.

  17. A study of the limitations of linear theory methods as applied to sonic boom calculations

    NASA Technical Reports Server (NTRS)

    Darden, Christine M.

    1990-01-01

    Current sonic boom minimization theories have been reviewed to emphasize the capabilities and flexibilities of the methods. Flexibility is important because it is necessary for the designer to meet optimized area constraints while reducing the impact on vehicle aerodynamic performance. Preliminary comparisons of sonic booms predicted for two Mach 3 concepts illustrate the benefits of shaping. Finally, for very simple bodies of revolution, sonic boom predictions were made using two methods - a modified linear theory method and a nonlinear method - for signature shapes which were both farfield N-waves and midfield waves. Preliminary analysis on these simple bodies verified that current modified linear theory prediction methods become inadequate for predicting midfield signatures for Mach numbers above 3. The importance of impulse in the sonic boom disturbance, and the importance of three-dimensional effects which could not be simulated with the bodies of revolution, will determine the validity of current modified linear theory methods in predicting midfield signatures at lower Mach numbers.

  18. Real-time prediction of unsteady flow based on POD reduced-order model and particle filter

    NASA Astrophysics Data System (ADS)

    Kikuchi, Ryota; Misaka, Takashi; Obayashi, Shigeru

    2016-04-01

    An integrated method consisting of a proper orthogonal decomposition (POD)-based reduced-order model (ROM) and a particle filter (PF) is proposed for real-time prediction of an unsteady flow field. The proposed method is validated using identical twin experiments of an unsteady flow field around a circular cylinder for Reynolds numbers of 100 and 1000. In this study, a PF is employed (ROM-PF) to modify the temporal coefficients of the ROM based on observation data, because the prediction capability of the ROM alone is limited due to stability issues. The proposed method reproduces the unsteady flow field several orders of magnitude faster than a reference numerical simulation based on the Navier-Stokes equations. Furthermore, the effects of parameters related to observation and simulation on the prediction accuracy are studied. Most of the energy modes of the unsteady flow field are captured, and it is possible to stably predict the long-term evolution with ROM-PF.
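
    The ROM-PF combination can be sketched compactly: POD modes are extracted from snapshots by SVD, a simple linear operator stands in for the ROM dynamics, and a particle filter corrects the temporal coefficients using noisy point observations. The travelling-wave "flow", the linear surrogate ROM, and all noise levels below are illustrative assumptions, not the paper's cylinder-wake configuration.

```python
# Minimal sketch of the ROM-PF idea: build POD modes from snapshots via SVD,
# then use a particle filter to correct the ROM temporal coefficients with
# noisy point observations. The toy travelling-wave "flow", the linear
# surrogate ROM, and the noise levels are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(0.0, 2.0 * np.pi, 64)
times = np.linspace(0.0, 10.0, 200)
snapshots = np.array([np.sin(x - t) + 0.3 * np.sin(2 * (x - t)) for t in times]).T

# POD: left singular vectors are spatial modes; diag(S) @ Vt gives temporal coefficients
U, S, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 4
modes = U[:, :r]                                  # (n_space, r)
coeffs = np.diag(S[:r]) @ Vt[:r, :]               # (r, n_times)

# Stand-in "ROM": a one-step linear operator fitted to the coefficient history
A_rom = coeffs[:, 1:] @ np.linalg.pinv(coeffs[:, :-1])

# Particle filter on the ROM coefficients
n_particles = 500
particles = rng.normal(coeffs[:, 0][:, None], 0.5, size=(r, n_particles))
obs_idx = [10, 30, 50]                            # observed spatial points
obs_noise = 0.05

for k in range(1, len(times)):
    # forecast step: propagate particles with the surrogate ROM plus process noise
    particles = A_rom @ particles + rng.normal(0.0, 0.05, size=particles.shape)
    # update step: weight particles by the likelihood of noisy point observations
    y = snapshots[obs_idx, k] + rng.normal(0.0, obs_noise, len(obs_idx))
    pred_obs = modes[obs_idx, :] @ particles      # (n_obs, n_particles)
    w = np.exp(-0.5 * np.sum((y[:, None] - pred_obs) ** 2, axis=0) / obs_noise**2)
    w = w + 1e-300                                # guard against weight collapse
    w /= w.sum()
    # resample particles according to the weights
    particles = particles[:, rng.choice(n_particles, n_particles, p=w)]

estimate = modes @ particles.mean(axis=1)
rel_err = np.linalg.norm(estimate - snapshots[:, -1]) / np.linalg.norm(snapshots[:, -1])
print(f"relative reconstruction error at final time: {rel_err:.3f}")
```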

  19. Predicting skin sensitization potential and inter-laboratory reproducibility of a human Cell Line Activation Test (h-CLAT) in the European Cosmetics Association (COLIPA) ring trials.

    PubMed

    Sakaguchi, Hitoshi; Ryan, Cindy; Ovigne, Jean-Marc; Schroeder, Klaus R; Ashikaga, Takao

    2010-09-01

    Regulatory policies in Europe prohibited the testing of cosmetic ingredients in animals for a number of toxicological endpoints. Currently no validated non-animal test methods exist for skin sensitization. Evaluation of changes in cell surface marker expression in dendritic cell (DC)-surrogate cell lines represents one non-animal approach. The human Cell Line Activation Test (h-CLAT) examines the level of CD86 and CD54 expression on the surface of THP-1 cells, a human monocytic leukemia cell line, following 24 h of chemical exposure. To examine protocol transferability, between-lab reproducibility, and predictive capacity, the h-CLAT has been evaluated by five independent laboratories in several ring trials (RTs) coordinated by the European Cosmetics Association (COLIPA). The results of the first and second RTs demonstrated that the protocol was transferable and basically had good between-lab reproducibility and predictivity, but there were some false negative data. To improve performance, the protocol and prediction model were modified. Using the modified prediction model on the first and second RT data, accuracy was improved. However, about 15% of the outcomes were not correctly identified, which exposes some of the limitations of the assay. For the chemicals evaluated, the limitation may be due to a chemical being a weak allergen or having low solubility (e.g., alpha-hexylcinnamaldehyde). The third RT evaluated the modified prediction model and satisfactory results were obtained. From the RT data, the feasibility of utilizing cell lines as surrogate DC in the development of in vitro skin sensitization methods shows promise. The data also support initiating formal pre-validation of the h-CLAT in order to fully understand the capabilities and limitations of the assay. Copyright 2010 Elsevier Ltd. All rights reserved.

  20. Assessment of acquired capability for suicide in clinical practice.

    PubMed

    Rimkeviciene, Jurgita; Hawgood, Jacinta; O'Gorman, John; De Leo, Diego

    2016-12-01

    The Interpersonal Psychological Theory of suicide proposes that the interaction between Thwarted Belongingness, Perceived Burdensomeness, and Acquired Capability for Suicide (ACS) predicts proximal risk of death by suicide. Instruments to assess all three constructs are available. However, research on the validity of one of them, the acquired capability for suicide scale (ACSS), has been limited, especially in terms of its clinical relevance. This study aimed to explore the utility of the different versions of the ACSS in clinical assessment. Three versions of the scale were investigated, the full 20-item version, a 7-item version and a single item version representing self-perceived capability for suicide. In a sample of patients recruited from a clinic specialising in the treatment of suicidality and in a community sample, all versions of the ACSS were found to show reasonable levels of reliability and to correlate as expected with reports of suicidal ideation, self-harm, and attempted suicide. The item assessing self-perceived acquired capacity for suicide showed highest correlations with all levels of suicidal behaviour. However, no version of the ACSS on its own showed a capacity to indicate suicide attempts in the combined sample. It is concluded that the versions of the scale have construct validity, but their clinical utility is limited. An assessment using a single item on self-perceived ACS outperforms the full and shortened versions of ACSS in clinical settings and can be recommended with caution for clinicians interested in assessing this characteristic.

  1. Coupled rotor/airframe vibration analysis

    NASA Technical Reports Server (NTRS)

    Sopher, R.; Studwell, R. E.; Cassarino, S.; Kottapalli, S. B. R.

    1982-01-01

    A coupled rotor/airframe vibration analysis developed as a design tool for predicting helicopter vibrations and a research tool to quantify the effects of structural properties, aerodynamic interactions, and vibration reduction devices on vehicle vibration levels is described. The analysis consists of a base program utilizing an impedance matching technique to represent the coupled rotor/airframe dynamics of the system supported by inputs from several external programs supplying sophisticated rotor and airframe aerodynamic and structural dynamic representation. The theoretical background, computer program capabilities and limited correlation results are presented in this report. Correlation results using scale model wind tunnel results show that the analysis can adequately predict trends of vibration variations with airspeed and higher harmonic control effects. Predictions of absolute values of vibration levels were found to be very sensitive to modal characteristics and results were not representative of measured values.

  2. Short stack modeling of degradation in solid oxide fuel cells. Part II. Sensitivity and interaction analysis

    NASA Astrophysics Data System (ADS)

    Gazzarri, J. I.; Kesler, O.

    In the first part of this two-paper series, we presented a numerical model of the impedance behaviour of a solid oxide fuel cell (SOFC) aimed at simulating the change in the impedance spectrum induced by contact degradation at the interconnect-electrode and at the electrode-electrolyte interfaces. The purpose of that investigation was to develop a non-invasive diagnostic technique to identify degradation modes in situ. In the present paper, we appraise the predictive capabilities of the proposed method in terms of its robustness to uncertainties in the input parameters, many of which are very difficult to measure independently. We applied this technique to the degradation modes simulated in Part I, in addition to anode sulfur poisoning. Electrode delamination showed the highest robustness to input parameter variations, followed by interconnect oxidation and interconnect detachment. The most sensitive degradation mode was sulfur poisoning, due to strong parameter interactions. In addition, we simulate several simultaneous two-degradation-mode scenarios, assessing the method's capabilities and limitations for the prediction of electrochemical behaviour of SOFCs undergoing multiple simultaneous degradation modes.

  3. Review of the ionospheric model for the long wave prediction capability. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferguson, J.A.

    1992-11-01

    The Naval Command, Control and Ocean Surveillance Center's Long Wave Prediction Capability (LWPC) has a built-in ionospheric model. The latter was defined after a review of the literature comparing measurements with calculations. Subsequent to this original specification of the ionospheric model in the LWPC, a new collection of data were obtained and analyzed. The new data were collected aboard a merchant ship named the Callaghan during a series of trans-Atlantic trips over a period of a year. This report presents a detailed analysis of the ionospheric model currently in use by the LWPC and the new model suggested by the shipboard measurements. We conclude that, although the fits to measurements are almost the same between the two models examined, the current LWPC model should be used because it is better than the new model for nighttime conditions at long ranges. This conclusion supports the primary use of the LWPC model for coverage assessment that requires a valid model at the limits of a transmitter's reception. Keywords: Communications, Very low frequency and low frequency, High voltage, Antennas, Measurement.

  4. Environmental Conditions Associated with Elevated Vibrio parahaemolyticus Concentrations in Great Bay Estuary, New Hampshire.

    PubMed

    Urquhart, Erin A; Jones, Stephen H; Yu, Jong W; Schuster, Brian M; Marcinkiewicz, Ashley L; Whistler, Cheryl A; Cooper, Vaughn S

    2016-01-01

    Reports from state health departments and the Centers for Disease Control and Prevention indicate that the annual number of reported human vibriosis cases in New England has increased in the past decade. Concurrently, there has been a shift in both the spatial distribution and seasonal detection of Vibrio spp. throughout the region based on limited monitoring data. To determine environmental factors that may underlie these emerging conditions, this study focuses on a long-term database of Vibrio parahaemolyticus concentrations in oyster samples, generated from data collected in the Great Bay Estuary, New Hampshire over a period of seven consecutive years. Oyster samples from two distinct sites were analyzed for V. parahaemolyticus abundance, noting significant relationships with various biotic and abiotic factors measured during the same period of study. We developed a predictive modeling tool capable of estimating the likelihood of V. parahaemolyticus presence in coastal New Hampshire oysters. Results show that the inclusion of chlorophyll a concentration in an empirical model otherwise employing only temperature and salinity variables offers improved predictive capability for modeling the likelihood of V. parahaemolyticus presence in the Great Bay Estuary.
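
    A hedged sketch of this kind of empirical likelihood model, logistic regression on temperature and salinity with and without chlorophyll a, is given below; the data and coefficients are synthetic illustrations, not the Great Bay model.

```python
# Hedged sketch of the kind of empirical likelihood model described above
# (temperature + salinity, with and without chlorophyll a). Coefficients and
# data are synthetic illustrations, not the Great Bay Estuary model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 300
temperature = rng.uniform(5.0, 28.0, n)     # deg C
salinity = rng.uniform(10.0, 32.0, n)       # psu
chlorophyll = rng.lognormal(1.0, 0.6, n)    # ug/L

# Synthetic presence/absence: warmer, moderately saline, productive water
logit = 0.35 * (temperature - 20.0) - 0.08 * (salinity - 25.0) \
        + 0.15 * (chlorophyll - 3.0)
presence = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([temperature, salinity, chlorophyll])

# Temperature + salinity only vs. the model that also includes chlorophyll a
for cols, label in [([0, 1], "T + S"), ([0, 1, 2], "T + S + chl-a")]:
    model = LogisticRegression(max_iter=1000).fit(X[:, cols], presence)
    print(f"{label}: training accuracy = {model.score(X[:, cols], presence):.2f}")

# Likelihood of detection for a hypothetical summer sample
full = LogisticRegression(max_iter=1000).fit(X, presence)
summer = np.array([[24.0, 26.0, 8.0]])
print(f"P(V. parahaemolyticus present) = {full.predict_proba(summer)[0, 1]:.2f}")
```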

  5. Omics Approaches To Probe Microbiota and Drug Metabolism Interactions.

    PubMed

    Nichols, Robert G; Hume, Nicole E; Smith, Philip B; Peters, Jeffrey M; Patterson, Andrew D

    2016-12-19

    The drug metabolism field has long recognized the beneficial and sometimes deleterious influence of microbiota in the absorption, distribution, metabolism, and excretion of drugs. Early pioneering work with the sulfanilamide precursor prontosil pointed toward the necessity not only to better understand the metabolic capabilities of the microbiota but also, importantly, to identify the specific microbiota involved in the generation and metabolism of drugs. However, technological limitations important for cataloging the microbiota community as well as for understanding and/or predicting their metabolic capabilities hindered progress. Current advances including mass spectrometry-based metabolite profiling as well as culture-independent sequence-based identification and functional analysis of microbiota have begun to shed light on microbial metabolism. In this review, case studies will be presented to highlight key aspects (e.g., microbiota identification, metabolic function and prediction, metabolite identification, and profiling) that have helped to clarify how the microbiota might impact or be impacted by drug metabolism. Lastly, a perspective of the future of this field is presented that takes into account what important knowledge is lacking and how to tackle these problems.

  6. Improved non-local electron thermal transport model for two-dimensional radiation hydrodynamics simulations

    NASA Astrophysics Data System (ADS)

    Cao, Duc; Moses, Gregory; Delettrez, Jacques

    2015-08-01

    An implicit, non-local thermal conduction algorithm based on the algorithm developed by Schurtz, Nicolai, and Busquet (SNB) [Schurtz et al., Phys. Plasmas 7, 4238 (2000)] for non-local electron transport is presented and has been implemented in the radiation-hydrodynamics code DRACO. To study the model's effect on DRACO's predictive capability, simulations of shot 60303 from OMEGA are completed using the iSNB model, and the computed shock speed vs. time is compared to experiment. Temperature outputs from the iSNB model are compared with the non-local transport model of Goncharov et al. [Phys. Plasmas 13, 012702 (2006)]. Effects on the adiabat are also examined in a polar drive surrogate simulation. Results show that the iSNB model is capable not only of flux limitation but also of preheat prediction, while remaining numerically robust and sacrificing little computational speed. Additionally, the results provide strong incentive to further modify key parameters within the SNB theory, namely the newly introduced non-local mean free path. This research was supported by the Laboratory for Laser Energetics of the University of Rochester.

  7. Nonlinear rocket motor stability prediction: Limit amplitude, triggering, and mean pressure shift

    NASA Astrophysics Data System (ADS)

    Flandro, Gary A.; Fischbach, Sean R.; Majdalani, Joseph

    2007-09-01

    High-amplitude pressure oscillations in solid propellant rocket motor combustion chambers display nonlinear effects including: (1) limit cycle behavior in which the fluctuations may dwell for a considerable period of time near their peak amplitude, (2) elevated mean chamber pressure (DC shift), and (3) a triggering amplitude above which pulsing will cause an apparently stable system to transition to violent oscillations. Along with the obvious undesirable vibrations, these features constitute the most damaging impact of combustion instability on system reliability and structural integrity. The physical mechanisms behind these phenomena and their relationship to motor geometry and physical parameters must, therefore, be fully understood if instability is to be avoided in the design process, or if effective corrective measures must be devised during system development. Predictive algorithms now in use have limited ability to characterize the actual time evolution of the oscillations, and they do not supply the motor designer with information regarding peak amplitudes or the associated critical triggering amplitudes. A pivotal missing element is the ability to predict the mean pressure shift; clearly, the designer requires information regarding the maximum chamber pressure that might be experienced during motor operation. In this paper, a comprehensive nonlinear combustion instability model is described that supplies vital information. The central role played by steep-fronted waves is emphasized. The resulting algorithm provides both detailed physical models of nonlinear instability phenomena and the critically needed predictive capability. In particular, the origin of the DC shift is revealed.

  8. On Nonlinear Combustion Instability in Liquid Propellant Rocket Motors

    NASA Technical Reports Server (NTRS)

    Sims, J. D. (Technical Monitor); Flandro, Gary A.; Majdalani, Joseph; Sims, Joseph D.

    2004-01-01

    All liquid propellant rocket instability calculations in current use have limited value in the predictive sense and serve mainly as a correlating framework for the available data sets. The well-known n-τ model first introduced by Crocco and Cheng in 1956 is still used as the primary analytical tool of this type. A multitude of attempts to establish practical analytical methods have achieved only limited success. These methods usually produce only stability boundary maps that are of little use in making critical design decisions in new motor development programs. Recent progress in understanding the mechanisms of combustion instability in solid propellant rockets provides a firm foundation for a new approach to prediction, diagnosis, and correction of the closely related problems in liquid motor instability. For predictive tools to be useful in the motor design process, they must have the capability to accurately determine: 1) time evolution of the pressure oscillations and limit amplitude, 2) critical triggering pulse amplitude, and 3) unsteady heat transfer rates at injector surfaces and chamber walls. The method described in this paper relates these critical motor characteristics directly to system design parameters. Inclusion of mechanisms such as wave steepening, vorticity production and transport, and unsteady detonation wave phenomena greatly enhances the representation of key features of motor chamber oscillatory behavior. The basic theoretical model is described and preliminary computations are compared to experimental data. A plan to develop the new predictive method into a comprehensive analysis tool is also described.

  9. Wind Tunnel Test Results of 25 Foot Tilt Rotor During Autorotation

    NASA Technical Reports Server (NTRS)

    Marr, R. L.

    1976-01-01

    A 25 foot diameter tilt rotor was tested in the 40 by 80 foot large scale wind tunnel. The test confirmed the predicted autorotation capability of the XV-15 tilt rotor aircraft. Autorotations were made at 60, 80, and 100 knots. A limited evaluation of lateral cyclic was made. Test data indicate a minimum rate of descent of 2,200 feet per minute at 60 knots at the XV-15 design gross weight of 13,000 pounds.

  10. Particle bed reactor modeling

    NASA Technical Reports Server (NTRS)

    Sapyta, Joe; Reid, Hank; Walton, Lew

    1993-01-01

    The topics are presented in viewgraph form and include the following: particle bed reactor (PBR) core cross section; PBR bleed cycle; fuel and moderator flow paths; PBR modeling requirements; characteristics of PBR and nuclear thermal propulsion (NTP) modeling; challenges for PBR and NTP modeling; thermal hydraulic computer codes; capabilities for PBR/reactor application; thermal/hydraulic codes; limitations; physical correlations; comparison of predicted friction factor and experimental data; frit pressure drop testing; cold frit mask factor; decay heat flow rate; startup transient simulation; and philosophy of systems modeling.

  11. Multi-Scale Three-Dimensional Variational Data Assimilation System for Coastal Ocean Prediction

    NASA Technical Reports Server (NTRS)

    Li, Zhijin; Chao, Yi; Li, P. Peggy

    2012-01-01

    A multi-scale three-dimensional variational data assimilation system (MS-3DVAR) has been formulated and the associated software system has been developed for improving high-resolution coastal ocean prediction. This system helps improve coastal ocean prediction skill, and has been used in support of operational coastal ocean forecasting systems and field experiments. The system has been developed to improve the capability of data assimilation for assimilating, simultaneously and effectively, sparse vertical profiles and high-resolution remote sensing surface measurements into coastal ocean models, as well as constraining model biases. In this system, the cost function is decomposed into two separate units for the large- and small-scale components, respectively. As such, data assimilation is implemented sequentially from large to small scales, the background error covariance is constructed to be scale-dependent, and a scale-dependent dynamic balance is incorporated. This scheme then allows the large scales and model bias to be constrained effectively through assimilation of sparse vertical profiles, and the small scales through assimilation of high-resolution surface measurements. This MS-3DVAR enhances the capability of the traditional 3DVAR for assimilating highly heterogeneously distributed observations, such as along-track satellite altimetry data, and in particular maximizes the extraction of information from limited numbers of vertical profile observations.
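
    As a rough illustration of the large-to-small-scale sequence described above (a toy one-dimensional sketch, not the MS-3DVAR code), the large-scale step below uses a long background error correlation length and the small-scale step analyses the residual with a short one; the grid, observation values, and error variances are all assumed.

```python
import numpy as np

def gaussian_cov(x, length, variance):
    """Background error covariance with a Gaussian correlation function."""
    d = x[:, None] - x[None, :]
    return variance * np.exp(-0.5 * (d / length) ** 2)

def analyse(xb, y, h_idx, B, r_var):
    """Single 3DVAR/OI analysis step for a linear observation operator:
    xa = xb + K (y - H xb), with K the Kalman gain."""
    H = np.zeros((len(h_idx), len(xb)))
    H[np.arange(len(h_idx)), h_idx] = 1.0
    R = r_var * np.eye(len(h_idx))
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (y - H @ xb)

# Toy grid, background, and observations (all values assumed for illustration)
x = np.linspace(0.0, 100.0, 101)
xb = np.zeros_like(x)
obs_idx = np.array([10, 30, 50, 70, 90])
y = np.array([1.0, 1.2, 0.8, 1.1, 0.9])

# Step 1: large-scale analysis (long correlation length) constrains the bias.
xa_large = analyse(xb, y, obs_idx, gaussian_cov(x, length=40.0, variance=1.0), r_var=0.1)
# Step 2: small-scale analysis of the residual (short correlation length).
xa = xa_large + analyse(np.zeros_like(x), y - xa_large[obs_idx], obs_idx,
                        gaussian_cov(x, length=5.0, variance=0.3), r_var=0.1)
print(xa[obs_idx])
```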

  12. Predictive representations can link model-based reinforcement learning to model-free mechanisms.

    PubMed

    Russek, Evan M; Momennejad, Ida; Botvinick, Matthew M; Gershman, Samuel J; Daw, Nathaniel D

    2017-09-01

    Humans and animals are capable of evaluating actions by considering their long-run future rewards through a process described using model-based reinforcement learning (RL) algorithms. The mechanisms by which neural circuits perform the computations prescribed by model-based RL remain largely unknown; however, multiple lines of evidence suggest that neural circuits supporting model-based behavior are structurally homologous to and overlapping with those thought to carry out model-free temporal difference (TD) learning. Here, we lay out a family of approaches by which model-based computation may be built upon a core of TD learning. The foundation of this framework is the successor representation, a predictive state representation that, when combined with TD learning of value predictions, can produce a subset of the behaviors associated with model-based learning, while requiring less decision-time computation than dynamic programming. Using simulations, we delineate the precise behavioral capabilities enabled by evaluating actions using this approach, and compare them to those demonstrated by biological organisms. We then introduce two new algorithms that build upon the successor representation while progressively mitigating its limitations. Because this framework can account for the full range of observed putatively model-based behaviors while still utilizing a core TD framework, we suggest that it represents a neurally plausible family of mechanisms for model-based evaluation.
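
    A minimal sketch of the core idea, under assumed parameters (chain length, learning rate, discount, random-walk policy): the successor representation is learned by TD(0) and combined with a separately learned reward vector to yield values. This illustrates the general technique, not the paper's simulations.

```python
import numpy as np

n_states, gamma, alpha = 5, 0.95, 0.1
M = np.eye(n_states)        # successor representation M(s, s')
w = np.zeros(n_states)      # expected one-step reward in each state

def td_update_sr(s, s_next, r):
    """One TD(0) step: update the SR row for s and the reward estimate."""
    onehot = np.eye(n_states)[s]
    # SR prediction error: 1(s) + gamma * M(s', :) - M(s, :)
    M[s] += alpha * (onehot + gamma * M[s_next] - M[s])
    w[s_next] += alpha * (r - w[s_next])

def value(s):
    # V(s) = sum_s' M(s, s') * w(s'): model-based-looking values from TD-learned parts
    return M[s] @ w

# Random-walk experience on a 5-state chain with reward on reaching the right end
rng = np.random.default_rng(0)
s = 0
for _ in range(5000):
    s_next = int(np.clip(s + rng.choice([-1, 1]), 0, n_states - 1))
    r = 1.0 if s_next == n_states - 1 else 0.0
    td_update_sr(s, s_next, r)
    s = 0 if s_next == n_states - 1 else s_next

print([round(float(value(i)), 2) for i in range(n_states)])
```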

  13. Predictive representations can link model-based reinforcement learning to model-free mechanisms

    PubMed Central

    Botvinick, Matthew M.

    2017-01-01

    Humans and animals are capable of evaluating actions by considering their long-run future rewards through a process described using model-based reinforcement learning (RL) algorithms. The mechanisms by which neural circuits perform the computations prescribed by model-based RL remain largely unknown; however, multiple lines of evidence suggest that neural circuits supporting model-based behavior are structurally homologous to and overlapping with those thought to carry out model-free temporal difference (TD) learning. Here, we lay out a family of approaches by which model-based computation may be built upon a core of TD learning. The foundation of this framework is the successor representation, a predictive state representation that, when combined with TD learning of value predictions, can produce a subset of the behaviors associated with model-based learning, while requiring less decision-time computation than dynamic programming. Using simulations, we delineate the precise behavioral capabilities enabled by evaluating actions using this approach, and compare them to those demonstrated by biological organisms. We then introduce two new algorithms that build upon the successor representation while progressively mitigating its limitations. Because this framework can account for the full range of observed putatively model-based behaviors while still utilizing a core TD framework, we suggest that it represents a neurally plausible family of mechanisms for model-based evaluation. PMID:28945743

  14. Fractional viscoelasticity of soft elastomers and auxetic foams

    NASA Astrophysics Data System (ADS)

    Solheim, Hannah; Stanisauskis, Eugenia; Miles, Paul; Oates, William

    2018-03-01

    Dielectric elastomers are commonly implemented in adaptive structures due to their unique capabilities for real time control of a structure's shape, stiffness, and damping. These active polymers are often used in applications where actuator control or dynamic tunability are important, making an accurate understanding of the viscoelastic behavior critical. This challenge is complicated as these elastomers often operate over a broad range of deformation rates. Whereas research has demonstrated success in applying a nonlinear viscoelastic constitutive model to characterize the behavior of Very High Bond (VHB) 4910, robust predictions of the viscoelastic response over the entire range of time scales is still a significant challenge. An alternative formulation for viscoelastic modeling using fractional order calculus has shown significant improvement in predictive capabilities. While fractional calculus has been explored theoretically in the field of linear viscoelasticity, limited experimental validation and statistical evaluation of the underlying phenomena have been considered. In the present study, predictions across several orders of magnitude in deformation rates are validated against data using a single set of model parameters. Moreover, we illustrate the fractional order is material dependent by running complementary experiments and parameter estimation on the elastomer VHB 4949 as well as an auxetic foam. All results are statistically validated using Bayesian uncertainty methods to obtain posterior densities for the fractional order as well as the hyperelastic parameters.
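
    A minimal numerical sketch of a fractional-order viscoelastic element (an elastic spring in parallel with a "springpot"), using the Grunwald-Letnikov approximation of the fractional derivative. The modulus, fractional viscosity, and order below are assumed for illustration, not fitted VHB or foam parameters.

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_j for the fractional derivative of order alpha."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def fractional_kelvin_voigt(strain, dt, E=1.0, eta=0.5, alpha=0.7):
    """sigma(t) = E*eps(t) + eta * D^alpha eps(t), with D^alpha approximated
    by the Grunwald-Letnikov sum over the strain history."""
    n = len(strain)
    w = gl_weights(alpha, n)
    stress = np.empty(n)
    for k in range(n):
        frac = dt ** (-alpha) * np.dot(w[: k + 1], strain[k::-1])
        stress[k] = E * strain[k] + eta * frac
    return stress

# Constant strain-rate loading at two rates: the fractional term captures
# the rate dependence with a single (E, eta, alpha) parameter set.
dt = 1e-2
t = np.arange(0.0, 5.0, dt)
for rate in (0.1, 1.0):
    sigma = fractional_kelvin_voigt(rate * t, dt)
    print(f"rate {rate}: final stress {sigma[-1]:.3f}")
```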

  15. Survival, growth and reproduction of non-native Nile tilapia II: fundamental niche projections and invasion potential in the northern Gulf of Mexico.

    PubMed

    Lowe, Michael R; Wu, Wei; Peterson, Mark S; Brown-Peterson, Nancy J; Slack, William T; Schofield, Pamela J

    2012-01-01

    Understanding the fundamental niche of invasive species facilitates our ability to predict both dispersal patterns and invasion success and therefore provides the basis for better-informed conservation and management policies. Here we focus on Nile tilapia (Oreochromis niloticus Linnaeus, 1758), one of the most widely cultured fish worldwide and a species that has escaped local aquaculture facilities to become established in a coastal-draining river in Mississippi (northern Gulf of Mexico). Using empirical physiological data, logistic regression models were developed to predict the probabilities of Nile tilapia survival, growth, and reproduction at different combinations of temperature (14 and 30°C) and salinity (0-60, by increments of 10). These predictive models were combined with kriged seasonal salinity data derived from multiple long-term data sets to project the species' fundamental niche in Mississippi coastal waters during normal salinity years (averaged across all years) and salinity patterns in extremely wet and dry years (which might emerge more frequently under scenarios of climate change). The derived fundamental niche projections showed that during the summer, Nile tilapia is capable of surviving throughout Mississippi's coastal waters but growth and reproduction were limited to river mouths (or upriver). Overwinter survival was also limited to river mouths. The areas where Nile tilapia could survive, grow, and reproduce increased during extremely wet years (2-368%) and decreased during extremely dry years (86-92%) in the summer with a similar pattern holding for overwinter survival. These results indicate that Nile tilapia is capable of 1) using saline waters to gain access to other watersheds throughout the region and 2) establishing populations in nearshore, low-salinity waters, particularly in the western portion of coastal Mississippi.
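
    A schematic of the modelling step described above (logistic regression of a binary physiological outcome on temperature and salinity). The data in the sketch are synthetic placeholders, not the study's empirical measurements, so the fitted coefficients are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic placeholder data: trials at (temperature C, salinity) with a
# binary outcome (True = survived). These values are NOT the study's data.
rng = np.random.default_rng(1)
temp = rng.choice([14.0, 30.0], size=200)
sal = rng.choice(np.arange(0.0, 70.0, 10.0), size=200)
logit = 3.0 + 0.15 * (temp - 14.0) - 0.12 * sal        # assumed "true" relationship
survived = rng.random(200) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([temp, sal])
model = LogisticRegression().fit(X, survived)

# Predicted survival probability over the temperature/salinity grid, which is
# the kind of surface that gets combined with kriged salinity fields.
grid = np.array([[t, s] for t in (14.0, 30.0) for s in range(0, 70, 10)])
probs = model.predict_proba(grid)[:, 1]
for (t, s), p in zip(grid, probs):
    print(f"T={t:>4} C, S={s:>4}: P(survive)={p:.2f}")
```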

  16. Survival, Growth and Reproduction of Non-Native Nile Tilapia II: Fundamental Niche Projections and Invasion Potential in the Northern Gulf of Mexico

    PubMed Central

    Lowe, Michael R.; Wu, Wei; Peterson, Mark S.; Brown-Peterson, Nancy J.; Slack, William T.; Schofield, Pamela J.

    2012-01-01

    Understanding the fundamental niche of invasive species facilitates our ability to predict both dispersal patterns and invasion success and therefore provides the basis for better-informed conservation and management policies. Here we focus on Nile tilapia (Oreochromis niloticus Linnaeus, 1758), one of the most widely cultured fish worldwide and a species that has escaped local aquaculture facilities to become established in a coastal-draining river in Mississippi (northern Gulf of Mexico). Using empirical physiological data, logistic regression models were developed to predict the probabilities of Nile tilapia survival, growth, and reproduction at different combinations of temperature (14 and 30°C) and salinity (0–60, by increments of 10). These predictive models were combined with kriged seasonal salinity data derived from multiple long-term data sets to project the species' fundamental niche in Mississippi coastal waters during normal salinity years (averaged across all years) and salinity patterns in extremely wet and dry years (which might emerge more frequently under scenarios of climate change). The derived fundamental niche projections showed that during the summer, Nile tilapia is capable of surviving throughout Mississippi's coastal waters but growth and reproduction were limited to river mouths (or upriver). Overwinter survival was also limited to river mouths. The areas where Nile tilapia could survive, grow, and reproduce increased during extremely wet years (2–368%) and decreased during extremely dry years (86–92%) in the summer with a similar pattern holding for overwinter survival. These results indicate that Nile tilapia is capable of 1) using saline waters to gain access to other watersheds throughout the region and 2) establishing populations in nearshore, low-salinity waters, particularly in the western portion of coastal Mississippi. PMID:22848533

  17. Survival, growth and reproduction of non-native Nile tilapia II: fundamental niche projections and invasion potential in the northern Gulf of Mexico

    USGS Publications Warehouse

    Lowe, Michael R.; Wu, Wei; Peterson, Mark S.; Brown-Peterson, Nancy J.; Slack, William T.; Schofield, Pamela J.

    2012-01-01

    Understanding the fundamental niche of invasive species facilitates our ability to predict both dispersal patterns and invasion success and therefore provides the basis for better-informed conservation and management policies. Here we focus on Nile tilapia (Oreochromis niloticus Linnaeus, 1758), one of the most widely cultured fish worldwide and a species that has escaped local aquaculture facilities to become established in a coastal-draining river in Mississippi (northern Gulf of Mexico). Using empirical physiological data, logistic regression models were developed to predict the probabilities of Nile tilapia survival, growth, and reproduction at different combinations of temperature (14 and 30°C) and salinity (0–60, by increments of 10). These predictive models were combined with kriged seasonal salinity data derived from multiple long-term data sets to project the species' fundamental niche in Mississippi coastal waters during normal salinity years (averaged across all years) and salinity patterns in extremely wet and dry years (which might emerge more frequently under scenarios of climate change). The derived fundamental niche projections showed that during the summer, Nile tilapia is capable of surviving throughout Mississippi's coastal waters but growth and reproduction were limited to river mouths (or upriver). Overwinter survival was also limited to river mouths. The areas where Nile tilapia could survive, grow, and reproduce increased during extremely wet years (2–368%) and decreased during extremely dry years (86–92%) in the summer with a similar pattern holding for overwinter survival. These results indicate that Nile tilapia is capable of 1) using saline waters to gain access to other watersheds throughout the region and 2) establishing populations in nearshore, low-salinity waters, particularly in the western portion of coastal Mississippi.

  18. Evaluating the applicability of using daily forecasts from seasonal prediction systems (SPSs) for agriculture: a case study of Nepal's Terai with the NCEP CFSv2

    NASA Astrophysics Data System (ADS)

    Jha, Prakash K.; Athanasiadis, Panos; Gualdi, Silvio; Trabucco, Antonio; Mereu, Valentina; Shelia, Vakhtang; Hoogenboom, Gerrit

    2018-03-01

    Ensemble forecasts from dynamic seasonal prediction systems (SPSs) have the potential to improve decision-making for crop management to help cope with interannual weather variability. Because the reliability of crop yield predictions based on seasonal weather forecasts depends on the quality of the forecasts, it is essential to evaluate forecasts prior to agricultural applications. This study analyses the potential of Climate Forecast System version 2 (CFSv2) in predicting the Indian summer monsoon (ISM) for producing meteorological variables relevant to crop modeling. The focus area was Nepal's Terai region, and the local hindcasts were compared with weather station and reanalysis data. The results showed that the CFSv2 model accurately predicts monthly anomalies of daily maximum and minimum air temperature (Tmax and Tmin) as well as incoming total surface solar radiation (Srad). However, the daily climatologies of the respective CFSv2 hindcasts exhibit significant systematic biases compared to weather station data. The CFSv2 is less capable of predicting monthly precipitation anomalies and simulating the respective intra-seasonal variability over the growing season. Nevertheless, the observed daily climatologies of precipitation fall within the ensemble spread of the respective daily climatologies of CFSv2 hindcasts. These limitations in the CFSv2 seasonal forecasts, primarily in precipitation, restrict the potential application for predicting the interannual variability of crop yield associated with weather variability. Despite these limitations, ensemble averaging of the simulated yield using all CFSv2 members after applying bias correction may lead to satisfactory yield predictions.
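
    A minimal sketch of the kind of bias correction mentioned above: removing the systematic difference between the hindcast daily climatology and station observations before a forecast drives a crop model. The series, bias magnitude, and the choice of additive versus multiplicative correction are assumptions for illustration.

```python
import numpy as np

def daily_climatology(daily_values_by_year):
    """Mean over years for each day of year; input shape (n_years, 365)."""
    return daily_values_by_year.mean(axis=0)

def bias_correct(forecast, model_clim, obs_clim, multiplicative=False):
    """Remove the systematic daily-climatology bias from a forecast series.
    Additive correction suits temperature; a multiplicative factor is the
    usual simple choice for precipitation (both are simplifications)."""
    if multiplicative:
        factor = np.where(model_clim > 0.0, obs_clim / model_clim, 1.0)
        return forecast * factor
    return forecast + (obs_clim - model_clim)

# Illustrative synthetic series (n_years x 365): NOT real CFSv2 or station data
rng = np.random.default_rng(0)
doy = np.arange(365)
obs = 25.0 + 5.0 * np.sin(2 * np.pi * doy / 365) + rng.normal(0, 1, (20, 365))
hindcast = obs + 2.5 + rng.normal(0, 1, (20, 365))       # assumed warm bias of ~2.5 C

corrected = bias_correct(hindcast[0], daily_climatology(hindcast),
                         daily_climatology(obs))
print(f"raw bias: {np.mean(hindcast[0] - obs[0]):.2f} C, "
      f"corrected bias: {np.mean(corrected - obs[0]):.2f} C")
```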

  19. Power capability evaluation for lithium iron phosphate batteries based on multi-parameter constraints estimation

    NASA Astrophysics Data System (ADS)

    Wang, Yujie; Pan, Rui; Liu, Chang; Chen, Zonghai; Ling, Qiang

    2018-01-01

    Battery power capability is intimately correlated with the climbing, braking and accelerating performance of electric vehicles. Accurate power capability prediction can not only guarantee safety but also regulate driving behavior and optimize battery energy usage. However, the nonlinearity of the battery model is very complex, especially for lithium iron phosphate batteries. In addition, the hysteresis loop in the open-circuit voltage curve can easily cause large errors in model prediction. In this work, a multi-parameter constraints dynamic estimation method is proposed to predict the battery continuous-period power capability. A high-fidelity battery model which considers battery polarization and hysteresis phenomena is presented to approximate the high nonlinearity of the lithium iron phosphate battery. Explicit analyses of power capability with multiple constraints are elaborated; specifically, the state of energy is considered in the power capability assessment. Furthermore, to solve the problem of nonlinear system state estimation and suppress noise interference, a UKF-based state observer is employed for power capability prediction. The performance of the proposed methodology is demonstrated by experiments under different dynamic characterization schedules. The charge and discharge power capabilities of the lithium iron phosphate batteries are quantitatively assessed under different time scales and temperatures.
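
    A simplified sketch of the multi-constraint idea: the allowable discharge current is the smallest of the currents implied by the voltage, current, and state-of-charge limits over the prediction horizon, and the corresponding power follows from the model. The first-order model and parameter values below are assumptions, not the paper's polarization-plus-hysteresis model or its UKF observer.

```python
# Simplified multi-constraint peak-power sketch (first-order Rint model only).
V_MIN, I_MAX, SOC_MIN = 2.5, 100.0, 0.1          # assumed design limits
R0, CAPACITY_AH = 0.01, 20.0                     # assumed cell parameters

def ocv(soc):
    # Placeholder open-circuit-voltage curve (monotone, no hysteresis)
    return 3.2 + 0.4 * soc

def discharge_power_capability(soc, horizon_s=10.0):
    """Continuous discharge power over `horizon_s`, limited by each constraint."""
    # Voltage constraint: terminal voltage ocv(soc) - R0*I >= V_MIN
    i_voltage = (ocv(soc) - V_MIN) / R0
    # Current constraint from the assumed datasheet limit
    i_current = I_MAX
    # SOC constraint: soc - I*horizon/(3600*capacity) >= SOC_MIN
    i_soc = (soc - SOC_MIN) * 3600.0 * CAPACITY_AH / horizon_s
    i_allowed = min(i_voltage, i_current, i_soc)
    return i_allowed * (ocv(soc) - R0 * i_allowed)   # power at the terminals

for soc in (0.9, 0.5, 0.15):
    print(f"SOC {soc:.2f}: {discharge_power_capability(soc):.1f} W")
```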

  20. Early warning of limit-exceeding concentrations of cyanobacteria and cyanotoxins in drinking water reservoirs by inferential modelling.

    PubMed

    Recknagel, Friedrich; Orr, Philip T; Bartkow, Michael; Swanepoel, Annelie; Cao, Hongqing

    2017-11-01

    An early warning scheme is proposed that runs ensembles of inferential models for predicting the cyanobacterial population dynamics and cyanotoxin concentrations in drinking water reservoirs on a diel basis driven by in situ sonde water quality data. When the 10- to 30-day-ahead predicted concentrations of cyanobacteria cells or cyanotoxins exceed pre-defined limit values, an early warning automatically activates an action plan considering in-lake control, e.g. intermittent mixing and ad hoc water treatment in water works, respectively. Case studies of the sub-tropical Lake Wivenhoe (Australia) and the Mediterranean Vaal Reservoir (South Africa) demonstrate that ensembles of inferential models developed by the hybrid evolutionary algorithm HEA are capable of forecasting cyanobacteria and cyanotoxins up to 30 days ahead using data collected in situ. The resulting models for Dolichospermum circinale displayed validity for up to 10 days ahead, whilst concentrations of Cylindrospermopsis raciborskii and microcystins were successfully predicted up to 30 days ahead. Implementing the proposed scheme for drinking water reservoirs enhances current water quality monitoring practices by solely utilising in situ monitoring data, in addition to cyanobacteria and cyanotoxin measurements. Access to routinely measured cyanotoxin data allows for the development of models that explicitly predict cyanotoxin concentrations, avoiding the inadvertent modelling and prediction of non-toxic cyanobacterial strains. Copyright © 2017 Elsevier B.V. All rights reserved.
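
    A schematic of the decision layer only: comparing ensemble forecasts of a cyanotoxin concentration against a pre-defined limit and raising an alert when the exceedance probability is high. The ensemble values, the 1 µg/L limit, and the probability threshold are placeholders, not values from the study.

```python
import numpy as np

ALERT_LIMIT_UG_L = 1.0          # placeholder limit value, not a regulatory figure

def early_warning(ensemble_forecasts, limit=ALERT_LIMIT_UG_L, prob_threshold=0.5):
    """Raise an alert when the fraction of ensemble members exceeding the
    limit at any lead time passes `prob_threshold`."""
    ensemble_forecasts = np.asarray(ensemble_forecasts)   # shape (members, lead_days)
    exceed_prob = (ensemble_forecasts > limit).mean(axis=0)
    alert_days = np.where(exceed_prob > prob_threshold)[0]
    return alert_days, exceed_prob

# Placeholder 30-day-ahead microcystin forecasts from 5 inferential models
rng = np.random.default_rng(3)
forecasts = 0.4 + 0.03 * np.arange(30) + rng.normal(0, 0.2, (5, 30))
alert_days, prob = early_warning(forecasts)
if alert_days.size:
    print(f"early warning: limit likely exceeded from day {alert_days[0] + 1}")
```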

  1. Molecular dynamics modeling framework for overcoming nanoshape retention limits of imprint lithography

    NASA Astrophysics Data System (ADS)

    Cherala, Anshuman; Sreenivasan, S. V.

    2018-12-01

    Complex nanoshaped structures (nanoshape structures here are defined as shapes enabled by sharp corners with radius of curvature <5 nm) have been shown to enable emerging nanoscale applications in energy, electronics, optics, and medicine. This nanoshaped fabrication at high throughput is well beyond the capabilities of advanced optical lithography. While the highest-resolution e-beam processes (Gaussian beam tools with non-chemically amplified resists) can achieve <5 nm resolution, this is only available at very low throughputs. Large-area e-beam processes, needed for photomasks and imprint templates, are limited to 18 nm half-pitch lines and spaces and 20 nm half-pitch hole patterns. Using nanoimprint lithography, we have previously demonstrated the ability to fabricate precise diamond-like nanoshapes with 3 nm radius corners over large areas. An exemplary shaped silicon nanowire ultracapacitor device was fabricated with these nanoshaped structures, wherein the half-pitch was 100 nm. The device significantly exceeded standard nanowire capacitor performance (by 90%) due to relative increase in surface area per unit projected area, enabled by the nanoshape. Going beyond the previous work, in this paper we explore the scaling of these nanoshaped structures to 10 nm half-pitch and below. At these scales a new "shape retention" resolution limit is observed due to polymer relaxation in imprint resists, which cannot be predicted with a linear elastic continuum model. An all-atom molecular dynamics model of the nanoshape structure was developed here to study this shape retention phenomenon and accurately predict the polymer relaxation. The atomistic framework is an essential modeling and design tool to extend the capability of imprint lithography to sub-10 nm nanoshapes. This framework has been used here to propose process refinements that maximize shape retention, and design template assist features (design for nanoshape retention) to achieve targeted nanoshapes.

  2. Synthesis of User Needs for Arctic Sea Ice Predictions

    NASA Astrophysics Data System (ADS)

    Wiggins, H. V.; Turner-Bogren, E. J.; Sheffield Guy, L.

    2017-12-01

    Forecasting Arctic sea ice on sub-seasonal to seasonal scales in a changing Arctic is of interest to a diverse range of stakeholders. However, sea ice forecasting is still challenging due to high variability in weather and ocean conditions and limits to prediction capabilities; the science needs for observations and modeling are extensive. At a time of challenged science funding, one way to prioritize sea ice prediction efforts is to examine the information needs of various stakeholder groups. This poster will present a summary and synthesis of existing surveys, reports, and other literature that examines user needs for sea ice predictions. The synthesis will include lessons learned from the Sea Ice Prediction Network (a collaborative, multi-agency-funded project focused on seasonal Arctic sea ice predictions), the Sea Ice for Walrus Outlook (a resource for Alaska Native subsistence hunters and coastal communities, that provides reports on weather and sea ice conditions), and other efforts. The poster will specifically compare the scales and variables of sea ice forecasts currently available, as compared to what information is requested by various user groups.

  3. Summer drought predictability over Europe: empirical versus dynamical forecasts

    NASA Astrophysics Data System (ADS)

    Turco, Marco; Ceglar, Andrej; Prodhomme, Chloé; Soret, Albert; Toreti, Andrea; Doblas-Reyes Francisco, J.

    2017-08-01

    Seasonal climate forecasts could be an important planning tool for farmers, government and insurance companies that can lead to better and timely management of seasonal climate risks. However, seasonal climate forecasts are often under-used, because potential users are not well aware of the capabilities and limitations of these products. This study aims at assessing the merits and caveats of a statistical empirical method, the ensemble streamflow prediction system (ESP, an ensemble based on reordering historical data), and an operational dynamical forecast system, the European Centre for Medium-Range Weather Forecasts System 4 (S4), in predicting summer drought in Europe. Droughts are defined using the Standardized Precipitation Evapotranspiration Index for the month of August integrated over 6 months. Both systems show useful and mostly comparable deterministic skill. We argue that this source of predictability is mostly attributable to the observed initial conditions. S4 shows higher skill only in its ability to probabilistically identify drought occurrence. Thus, currently, both approaches provide useful information and ESP represents a computationally fast alternative to dynamical prediction applications for drought prediction.
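
    A sketch of the ESP idea under stated simplifications: the forecast ensemble is built by appending each historical year's remaining-season weather to the observed start of the current season, and drought probability is read off the resulting standardized totals (a stand-in for the 6-month SPEI). All numbers are placeholders.

```python
import numpy as np

def esp_drought_probability(obs_to_date, historical_remainder, clim_totals,
                            threshold=-0.8):
    """ESP-style ensemble: observed season so far + each historical remainder.
    Seasonal totals are standardized against the historical climatology
    (`clim_totals`), and members below `threshold` count as drought (a
    simplified stand-in for the 6-month SPEI used in the study)."""
    totals = obs_to_date.sum() + historical_remainder.sum(axis=1)
    standardized = (totals - clim_totals.mean()) / clim_totals.std()
    return (standardized < threshold).mean()

# Placeholder data: 30 historical years of 6 monthly precipitation values
rng = np.random.default_rng(7)
hist_full = rng.gamma(shape=2.0, scale=25.0, size=(30, 6))
obs_so_far = np.array([20.0, 15.0, 10.0])          # a dry start to this season
p_drought = esp_drought_probability(obs_so_far, hist_full[:, 3:], hist_full.sum(axis=1))
print(f"P(drought) = {p_drought:.2f}")
```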

  4. Off-Gas Adsorption Model Capabilities and Recommendations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyon, Kevin L.; Welty, Amy K.; Law, Jack

    2016-03-01

    Off-gas treatment is required to reduce emissions from aqueous fuel reprocessing. Evaluating the products of innovative gas adsorption research requires increased computational simulation capability to more effectively transition from fundamental research to operational design. Early modeling efforts produced the Off-Gas SeParation and REcoverY (OSPREY) model that, while efficient in terms of computation time, was of limited value for complex systems. However, the computational and programming lessons learned in development of the initial model were used to develop Discontinuous Galerkin OSPREY (DGOSPREY), a more effective model. Initial comparisons between OSPREY and DGOSPREY show that, while OSPREY captures the initial breakthrough time reasonably well, it displays far too much numerical dispersion to accurately capture the real shape of the breakthrough curves. DGOSPREY is a much better tool as it utilizes a more stable set of numerical methods. In addition, DGOSPREY has shown the capability to capture complex, multispecies adsorption behavior, while OSPREY currently only works for a single adsorbing species. This capability makes DGOSPREY ultimately a more practical tool for real world simulations involving many different gas species. While DGOSPREY has initially performed very well, there is still a need for improvement. The current state of DGOSPREY does not include any micro-scale adsorption kinetics and therefore assumes instantaneous adsorption. This is a major source of error in predicting water vapor breakthrough because the kinetics of that adsorption mechanism is particularly slow. However, this deficiency can be remedied by building kinetic kernels into DGOSPREY. Another source of error in DGOSPREY stems from data gaps in single-species isotherms, such as those for Kr and Xe. Since isotherm data for each gas is currently available at a single temperature, the model is unable to predict adsorption at temperatures outside of the set of data currently available. Thus, in order to improve the predictive capabilities of the model, there is a need for more single-species adsorption isotherms at different temperatures, in addition to extending the model to include adsorption kinetics. This report provides background information about the modeling process and a path forward for further model improvement in terms of accuracy and user interface.

  5. In-flight Evaluation of Aerodynamic Predictions of an Air-launched Space Booster

    NASA Technical Reports Server (NTRS)

    Curry, Robert E.; Mendenhall, Michael R.; Moulton, Bryan

    1992-01-01

    Several analytical aerodynamic design tools that were applied to the Pegasus (registered trademark) air-launched space booster were evaluated using flight measurements. The study was limited to existing codes and was conducted with limited computational resources. The flight instrumentation was constrained to have minimal impact on the primary Pegasus missions. Where appropriate, the flight measurements were compared with computational data. Aerodynamic performance and trim data from the first two flights were correlated with predictions. Local measurements in the wing and wing-body interference region were correlated with analytical data. This complex flow region includes the effect of aerothermal heating magnification caused by the presence of a corner vortex and interaction of the wing leading edge shock and fuselage boundary layer. The operation of the first two missions indicates that the aerodynamic design approach for Pegasus was adequate, and data show that acceptable margins were available. Additionally, the correlations provide insight into the capabilities of these analytical tools for more complex vehicles in which the design margins may be more stringent.

  6. In-flight evaluation of aerodynamic predictions of an air-launched space booster

    NASA Technical Reports Server (NTRS)

    Curry, Robert E.; Mendenhall, Michael R.; Moulton, Bryan

    1993-01-01

    Several analytical aerodynamic design tools that were applied to the Pegasus air-launched space booster were evaluated using flight measurements. The study was limited to existing codes and was conducted with limited computational resources. The flight instrumentation was constrained to have minimal impact on the primary Pegasus missions. Where appropriate, the flight measurements were compared with computational data. Aerodynamic performance and trim data from the first two flights were correlated with predictions. Local measurements in the wing and wing-body interference region were correlated with analytical data. This complex flow region includes the effect of aerothermal heating magnification caused by the presence of a corner vortex and interaction of the wing leading edge shock and fuselage boundary layer. The operation of the first two missions indicates that the aerodynamic design approach for Pegasus was adequate, and data show that acceptable margins were available. Additionally, the correlations provide insight into the capabilities of these analytical tools for more complex vehicles in which design margins may be more stringent.

  7. Computational Aerothermodynamic Design Issues for Hypersonic Vehicles

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.; Weilmuenster, K. James; Hamilton, H. Harris, II; Olynick, David R.; Venkatapathy, Ethiraj

    1997-01-01

    A brief review of the evolutionary progress in computational aerothermodynamics is presented. The current status of computational aerothermodynamics is then discussed, with emphasis on its capabilities and limitations for contributions to the design process of hypersonic vehicles. Some topics to be highlighted include: (1) aerodynamic coefficient predictions with emphasis on high temperature gas effects; (2) surface heating and temperature predictions for thermal protection system (TPS) design in a high temperature, thermochemical nonequilibrium environment; (3) methods for extracting and extending computational fluid dynamic (CFD) solutions for efficient utilization by all members of a multidisciplinary design team; (4) physical models; (5) validation process and error estimation; and (6) gridding and solution generation strategies. Recent experiences in the design of X-33 will be featured. Computational aerothermodynamic contributions to Mars Pathfinder, METEOR, and Stardust (Comet Sample return) will also provide context for this discussion. Some of the barriers that currently limit computational aerothermodynamics to a predominantly reactive mode in the design process will also be discussed, with the goal of providing focus for future research.

  8. Computational Aerothermodynamic Design Issues for Hypersonic Vehicles

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.; Weilmuenster, K. James; Hamilton, H. Harris, II; Olynick, David R.; Venkatapathy, Ethiraj

    2005-01-01

    A brief review of the evolutionary progress in computational aerothermodynamics is presented. The current status of computational aerothermodynamics is then discussed, with emphasis on its capabilities and limitations for contributions to the design process of hypersonic vehicles. Some topics to be highlighted include: (1) aerodynamic coefficient predictions with emphasis on high temperature gas effects; (2) surface heating and temperature predictions for thermal protection system (TPS) design in a high temperature, thermochemical nonequilibrium environment; (3) methods for extracting and extending computational fluid dynamic (CFD) solutions for efficient utilization by all members of a multidisciplinary design team; (4) physical models; (5) validation process and error estimation; and (6) gridding and solution generation strategies. Recent experiences in the design of X-33 will be featured. Computational aerothermodynamic contributions to Mars Pathfinder, METEOR, and Stardust (Comet Sample return) will also provide context for this discussion. Some of the barriers that currently limit computational aerothermodynamics to a predominantly reactive mode in the design process will also be discussed, with the goal of providing focus for future research.

  9. Computational Aerothermodynamic Design Issues for Hypersonic Vehicles

    NASA Technical Reports Server (NTRS)

    Olynick, David R.; Venkatapathy, Ethiraj

    2004-01-01

    A brief review of the evolutionary progress in computational aerothermodynamics is presented. The current status of computational aerothermodynamics is then discussed, with emphasis on its capabilities and limitations for contributions to the design process of hypersonic vehicles. Some topics to be highlighted include: (1) aerodynamic coefficient predictions with emphasis on high temperature gas effects; (2) surface heating and temperature predictions for thermal protection system (TPS) design in a high temperature, thermochemical nonequilibrium environment; (3) methods for extracting and extending computational fluid dynamic (CFD) solutions for efficient utilization by all members of a multidisciplinary design team; (4) physical models; (5) validation process and error estimation; and (6) gridding and solution generation strategies. Recent experiences in the design of X-33 will be featured. Computational aerothermodynamic contributions to Mars Pathfinder, METEOR, and Stardust (Comet Sample return) will also provide context for this discussion. Some of the barriers that currently limit computational aerothermodynamics to a predominantly reactive mode in the design process will also be discussed, with the goal of providing focus for future research.

  10. Using early biomarker data to predict long-term bone mineral density: application of semi-mechanistic bone cycle model on denosumab data.

    PubMed

    Zheng, Jenny; van Schaick, Erno; Wu, Liviawati Sutjandra; Jacqmin, Philippe; Perez Ruixo, Juan Jose

    2015-08-01

    Osteoporosis is a chronic skeletal disease characterized by low bone strength resulting in increased fracture risk. New treatments for osteoporosis are still an unmet medical need because currently available treatments have various limitations. Bone mineral density (BMD) is an important endpoint for evaluating new osteoporosis treatments; however, the BMD response is often slower and less profound than that of bone turnover markers (BTMs). If the relationship between BTMs and BMD can be quantified, the BMD response can be predicted by the changes in BTM after a single dose; therefore, a decision based on BMD changes can be informed early. We have applied a bone cycle model to a phase 2 denosumab dose-ranging study in osteopenic women to quantitatively link serum denosumab pharmacokinetics, BTMs, and lumbar spine (LS) BMD. The data from two phase 3 denosumab studies in patients with low bone mass, FREEDOM and DEFEND, were used for external validation. Both internal and external visual predictive checks demonstrated that the model was capable of predicting LS BMD at the denosumab regimen of 60 mg every 6 months. It has been demonstrated that the model, in combination with the changes in BTMs observed from a single-dose study in men, is capable of predicting long-term BMD outcomes (e.g., LS BMD response in men after 1 year of treatment) in different populations. We propose that this model can be used to inform drug development decisions for osteoporosis treatment early by evaluating the LS BMD response when BTM data become available in early trials.

  11. A Man-Machine System for Contemporary Counseling Practice: Diagnosis and Prediction.

    ERIC Educational Resources Information Center

    Roach, Arthur J.

    This paper looks at present and future capabilities for diagnosis and prediction in computer-based guidance efforts and reviews the problems and potentials which will accompany the implementation of such capabilities. In addition to necessary procedural refinement in prediction, future developments in computer-based educational and career…

  12. Development of a Higher Fidelity Model for the Cascade Distillation Subsystem (CDS)

    NASA Technical Reports Server (NTRS)

    Perry, Bruce; Anderson, Molly

    2014-01-01

    Significant improvements have been made to the ACM model of the CDS, enabling accurate predictions of dynamic operations with fewer assumptions. The model has been utilized to predict how CDS performance would be impacted by changing operating parameters, revealing performance trade-offs and possibilities for improvement. CDS efficiency is driven by the THP coefficient of performance, which in turn is dependent on heat transfer within the system. Based on the remaining limitations of the simulation, priorities for further model development include: relaxing the assumption of total condensation; incorporating dynamic simulation capability for the buildup of dissolved inert gases in condensers; examining CDS operation with more complex feeds; and extending heat transfer analysis to all surfaces.

  13. Predictive searching algorithm for Fourier ptychography

    NASA Astrophysics Data System (ADS)

    Li, Shunkai; Wang, Yifan; Wu, Weichen; Liang, Yanmei

    2017-12-01

    By capturing a set of low-resolution images under different illumination angles and stitching them together in the Fourier domain, Fourier ptychography (FP) is capable of providing high-resolution images with a large field of view. Despite its validity, the long acquisition time limits its real-time application. In this paper we propose an incomplete sampling scheme, termed the predictive searching algorithm, to shorten the acquisition and recovery time. Informative sub-regions of the sample’s spectrum are searched and the corresponding images of the most informative directions are captured for spectrum expansion. Its effectiveness is validated by both simulated and experimental results: the data requirement is reduced by ~64% to ~90% without sacrificing image reconstruction quality compared with the conventional FP method.

  14. Use of Smoothed Measured Winds to Predict and Assess Launch Environments

    NASA Technical Reports Server (NTRS)

    Cordova, Henry S.; Leahy, Frank; Adelfang, Stanley; Roberts, Barry; Starr, Brett; Duffin, Paul; Pueri, Daniel

    2011-01-01

    Since many of the larger launch vehicles are operated near their design limits during the ascent phase of flight to optimize payload to orbit, it often becomes necessary to verify that the vehicle will remain within certification limits during the ascent phase as part of the go/no-go review made prior to launch. This paper describes the approach used to predict Ares I-X launch vehicle structural air loads and controllability prior to launch which represents a distinct departure from the methodology of the Space Shuttle and Evolved Expendable Launch Vehicle (EELV) programs. Protection for uncertainty of key environment and trajectory parameters is added to the nominal assessment of launch capability to ensure that critical launch trajectory variables would be within the integrated vehicle certification envelopes. This process was applied by the launch team as a key element of the launch day go/no-go recommendation. Pre-launch assessments of vehicle launch capability for NASA's Space Shuttle and the EELV heavy lift versions require the use of a high-resolution wind profile measurements, which have relatively small sample size compared with low-resolution profile databases (which include low-resolution balloons and radar wind profilers). The approach described in this paper has the potential to allow the pre-launch assessment team to use larger samples of wind measurements from low-resolution wind profile databases that will improve the accuracy of pre-launch assessments of launch availability with no degradation of mission assurance or launch safety.

  15. Inverse models: A necessary next step in ground-water modeling

    USGS Publications Warehouse

    Poeter, E.P.; Hill, M.C.

    1997-01-01

    Inverse models using, for example, nonlinear least-squares regression, provide capabilities that help modelers take full advantage of the insight available from ground-water models. However, lack of information about the requirements and benefits of inverse models is an obstacle to their widespread use. This paper presents a simple ground-water flow problem to illustrate the requirements and benefits of the nonlinear least-squares regression method of inverse modeling and discusses how these attributes apply to field problems. The benefits of inverse modeling include: (1) expedited determination of best fit parameter values; (2) quantification of the (a) quality of calibration, (b) data shortcomings and needs, and (c) confidence limits on parameter estimates and predictions; and (3) identification of issues that are easily overlooked during nonautomated calibration.
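
    A minimal illustration of the regression machinery (not the authors' ground-water code): fitting two parameters of a simple drawdown-like forward model to observations with scipy.optimize.least_squares and reading approximate confidence limits from the Jacobian at the optimum. The forward model and data are assumed for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy forward model standing in for a ground-water simulation: drawdown-like
# response as a function of radius, with two parameters (T and log10(S)).
def forward(params, r):
    transmissivity, log10_storage = params
    storage = 10.0 ** log10_storage
    return (1.0 / transmissivity) * np.log(2.25 * transmissivity / (r**2 * storage))

r_obs = np.array([10.0, 25.0, 50.0, 100.0, 200.0])
true_params = np.array([0.8, -5.0])
rng = np.random.default_rng(5)
h_obs = forward(true_params, r_obs) + rng.normal(0.0, 0.05, r_obs.size)

def residuals(params):
    return forward(params, r_obs) - h_obs

fit = least_squares(residuals, x0=[0.4, -4.0],
                    bounds=([0.01, -8.0], [10.0, -2.0]))

# Approximate parameter covariance and 95% limits from the Jacobian at the optimum
dof = r_obs.size - fit.x.size
s2 = 2.0 * fit.cost / dof                  # least_squares cost = 0.5 * sum(res**2)
cov = s2 * np.linalg.inv(fit.jac.T @ fit.jac)
ci95 = 1.96 * np.sqrt(np.diag(cov))
print("estimates:", fit.x, " approx 95% limits: +/-", ci95)
```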

  16. Open Rotor Noise Prediction at NASA Langley - Capabilities, Research and Development

    NASA Technical Reports Server (NTRS)

    Farassat, Fereidoun

    2010-01-01

    The high fuel prices of recent years have caused the operating cost of the airlines to soar. In an effort to bring down the fuel consumption, the major aircraft engine manufacturers are now taking a fresh look at open rotors for the propulsion of future airliners. Open rotors, also known as propfans or unducted fans, can offer up to 30 per cent improvement in efficiency compared to high bypass engines of 1980 vintage currently in use in most civilian aircraft. NASA Langley researchers have contributed significantly to the development of aeroacoustic technology of open rotors. This report discusses the current noise prediction technology at Langley and reviews the input data requirements, strengths and limitations of each method as well as the associated problems in need of attention by the researchers. We present a brief history of research on the aeroacoustics of rotating blade machinery at Langley Research Center. We then discuss the available noise prediction codes for open rotors developed at NASA Langley and their capabilities. In particular, we present the two useful formulations used for the computation of noise from subsonic and supersonic surfaces. Here we discuss the open rotor noise prediction code ASSPIN and one based on the Ffowcs Williams-Hawkings equation with a penetrable data surface (FW-Hpds). The scattering of sound from surfaces near the rotor is calculated using the fast scattering code (FSC), which is also discussed in this report. Plans for further improvements of these codes are given.

  17. Knowledge Based Cloud FE Simulation of Sheet Metal Forming Processes.

    PubMed

    Zhou, Du; Yuan, Xi; Gao, Haoxiang; Wang, Ailing; Liu, Jun; El Fakir, Omer; Politis, Denis J; Wang, Liliang; Lin, Jianguo

    2016-12-13

    The use of Finite Element (FE) simulation software to adequately predict the outcome of sheet metal forming processes is crucial to enhancing the efficiency and lowering the development time of such processes, whilst reducing costs involved in trial-and-error prototyping. Recent focus on the substitution of steel components with aluminum alloy alternatives in the automotive and aerospace sectors has increased the need to simulate the forming behavior of such alloys for ever more complex component geometries. However these alloys, and in particular their high strength variants, exhibit limited formability at room temperature, and high temperature manufacturing technologies have been developed to form them. Consequently, advanced constitutive models are required to reflect the associated temperature and strain rate effects. Simulating such behavior is computationally very expensive using conventional FE simulation techniques. This paper presents a novel Knowledge Based Cloud FE (KBC-FE) simulation technique that combines advanced material and friction models with conventional FE simulations in an efficient manner thus enhancing the capability of commercial simulation software packages. The application of these methods is demonstrated through two example case studies, namely: the prediction of a material's forming limit under hot stamping conditions, and the tool life prediction under multi-cycle loading conditions.

  18. Knowledge Based Cloud FE Simulation of Sheet Metal Forming Processes

    PubMed Central

    Zhou, Du; Yuan, Xi; Gao, Haoxiang; Wang, Ailing; Liu, Jun; El Fakir, Omer; Politis, Denis J.; Wang, Liliang; Lin, Jianguo

    2016-01-01

    The use of Finite Element (FE) simulation software to adequately predict the outcome of sheet metal forming processes is crucial to enhancing the efficiency and lowering the development time of such processes, whilst reducing costs involved in trial-and-error prototyping. Recent focus on the substitution of steel components with aluminum alloy alternatives in the automotive and aerospace sectors has increased the need to simulate the forming behavior of such alloys for ever more complex component geometries. However these alloys, and in particular their high strength variants, exhibit limited formability at room temperature, and high temperature manufacturing technologies have been developed to form them. Consequently, advanced constitutive models are required to reflect the associated temperature and strain rate effects. Simulating such behavior is computationally very expensive using conventional FE simulation techniques. This paper presents a novel Knowledge Based Cloud FE (KBC-FE) simulation technique that combines advanced material and friction models with conventional FE simulations in an efficient manner thus enhancing the capability of commercial simulation software packages. The application of these methods is demonstrated through two example case studies, namely: the prediction of a material's forming limit under hot stamping conditions, and the tool life prediction under multi-cycle loading conditions. PMID:28060298

  19. A variable capacitance based modeling and power capability predicting method for ultracapacitor

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Wang, Yujie; Chen, Zonghai; Ling, Qiang

    2018-01-01

    Methods of accurate modeling and power capability predicting for ultracapacitors are of great significance in management and application of lithium-ion battery/ultracapacitor hybrid energy storage system. To overcome the simulation error coming from constant capacitance model, an improved ultracapacitor model based on variable capacitance is proposed, where the main capacitance varies with voltage according to a piecewise linear function. A novel state-of-charge calculation approach is developed accordingly. After that, a multi-constraint power capability prediction is developed for ultracapacitor, in which a Kalman-filter-based state observer is designed for tracking ultracapacitor's real-time behavior. Finally, experimental results verify the proposed methods. The accuracy of the proposed model is verified by terminal voltage simulating results under different temperatures, and the effectiveness of the designed observer is proved by various test conditions. Additionally, the power capability prediction results of different time scales and temperatures are compared, to study their effects on ultracapacitor's power capability.
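
    A compact sketch of the variable-capacitance idea: the differential capacitance is a piecewise-linear function of voltage, state of charge follows from integrating the stored charge, and a constant-current discharge is simulated with the voltage-dependent capacitance. The breakpoints and series resistance are assumed values, not parameters identified in the paper.

```python
import numpy as np

# Piecewise-linear differential capacitance C(v) (assumed breakpoints, in farads)
V_KNOTS = np.array([0.0, 1.0, 2.0, 2.7])
C_KNOTS = np.array([280.0, 300.0, 340.0, 360.0])
R_SERIES = 0.35e-3            # assumed equivalent series resistance, ohms
V_RATED = 2.7

def capacitance(v):
    return np.interp(v, V_KNOTS, C_KNOTS)

def charge(v, n=400):
    """Stored charge q(v) = integral of C(u) du from 0 to v (simple Riemann sum)."""
    grid = np.linspace(0.0, v, n)
    return float(np.sum(capacitance(grid)) * (grid[1] - grid[0]))

def state_of_charge(v):
    return charge(v) / charge(V_RATED)

def simulate_discharge(v0, current, dt, steps):
    """Constant-current discharge: dv/dt = -I / C(v); terminal voltage = v - I*R."""
    v = v0
    for _ in range(steps):
        v -= current * dt / capacitance(v)
    return v, v - current * R_SERIES

v_oc, v_term = simulate_discharge(v0=2.7, current=50.0, dt=0.1, steps=100)
print(f"open-circuit {v_oc:.3f} V, terminal {v_term:.3f} V, SOC {state_of_charge(v_oc):.2f}")
```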

  20. Optimal design and critical analysis of a high resolution video plenoptic demonstrator

    NASA Astrophysics Data System (ADS)

    Drazic, Valter; Sacré, Jean-Jacques; Bertrand, Jérôme; Schubert, Arno; Blondé, Etienne

    2011-03-01

    A plenoptic camera is a natural multi-view acquisition device also capable of measuring distances by correlating a set of images acquired under different parallaxes. Its single lens and single sensor architecture has two downsides: limited resolution and depth sensitivity. As a first step, in order to circumvent those shortcomings, we investigated how the basic design parameters of a plenoptic camera optimize both the resolution of each view and its depth-measuring capability. In a second step, we built a prototype based on a very high resolution Red One® movie camera with an external plenoptic adapter and a relay lens. The prototype delivered 5 video views of 820x410. The main limitation in our prototype is view cross talk due to optical aberrations which reduce the depth accuracy performance. We have simulated some limiting optical aberrations and predicted their impact on the performance of the camera. In addition, we developed adjustment protocols based on a simple pattern and analyzing programs which investigate the view mapping and amount of parallax crosstalk on the sensor on a pixel basis. The results of these developments enabled us to adjust the lenslet array with sub-micrometer precision and to mark the pixels of the sensor where the views do not register properly.

  1. A Deep Space Orbit Determination Software: Overview and Event Prediction Capability

    NASA Astrophysics Data System (ADS)

    Kim, Youngkwang; Park, Sang-Young; Lee, Eunji; Kim, Minsik

    2017-06-01

    This paper presents an overview of deep space orbit determination software (DSODS), as well as validation and verification results on its event prediction capabilities. DSODS was developed in the MATLAB object-oriented programming environment to support the Korea Pathfinder Lunar Orbiter (KPLO) mission. DSODS has three major capabilities: celestial event prediction for spacecraft, orbit determination with deep space network (DSN) tracking data, and DSN tracking data simulation. To achieve its functionality requirements, DSODS consists of four modules: orbit propagation (OP), event prediction (EP), data simulation (DS), and orbit determination (OD) modules. This paper explains the highest-level data flows between modules in event prediction, orbit determination, and tracking data simulation processes. Furthermore, to address the event prediction capability of DSODS, this paper introduces the OP and EP modules. The role of the OP module is to handle time and coordinate system conversions, to propagate spacecraft trajectories, and to handle the ephemerides of spacecraft and celestial bodies. Currently, the OP module utilizes the General Mission Analysis Tool (GMAT) as a third-party software component for high-fidelity deep space propagation, as well as time and coordinate system conversions. The role of the EP module is to predict celestial events, including eclipses, and ground station visibilities, and this paper presents the functionality requirements of the EP module. The validation and verification results show that, for most cases, event prediction errors were less than 10 milliseconds when compared with flight-proven mission analysis tools such as GMAT and Systems Tool Kit (STK). Thus, we conclude that DSODS is capable of predicting events for the KPLO in real mission applications.

  2. Evaluation of classifier topologies for the real-time classification of simultaneous limb motions.

    PubMed

    Ortiz-Catalan, Max; Branemark, Rickard; Hakansson, Bo

    2013-01-01

    The prediction of motion intent through the decoding of myoelectric signals has the potential to improve the functionality of limb prostheses. Considerable research on individual motion classifiers has been done to exploit this idea. A drawback with the individual prediction approach, however, is its limitation to serial control, which is slow, cumbersome, and unnatural. In this work, different classifier topologies suitable for the decoding of mixed classes, and thus capable of predicting simultaneous motions, were investigated in real time. These topologies resulted in higher offline accuracies than previously achieved, but more importantly, positive indications of their suitability for real-time systems were found. Furthermore, in order to facilitate further development, benchmarking, and cooperation, the algorithms and data generated in this study are freely available as part of BioPatRec, an open source framework for the development of advanced prosthetic control strategies.

  3. Estimating the Reliability of Electronic Parts in High Radiation Fields

    NASA Technical Reports Server (NTRS)

    Everline, Chester; Clark, Karla; Man, Guy; Rasmussen, Robert; Johnston, Allan; Kohlhase, Charles; Paulos, Todd

    2008-01-01

    Radiation effects on materials and electronic parts constrain the lifetime of flight systems visiting Europa. Understanding mission lifetime limits is critical to the design and planning of such a mission. Therefore, the operational aspects of radiation dose are a mission success issue. To predict and manage mission lifetime in a high radiation environment, system engineers need capable tools to trade radiation design choices against system design and reliability, and science achievements. Conventional tools and approaches provided past missions with conservative designs without the ability to predict their lifetime beyond the baseline mission. This paper describes a more systematic approach to understanding spacecraft design margin, allowing better prediction of spacecraft lifetime. This is possible because of newly available electronic parts radiation effects statistics and an enhanced spacecraft system reliability methodology. This new approach can be used in conjunction with traditional approaches for mission design. This paper describes the fundamentals of the new methodology.

  4. Working memory capacity as controlled attention in tactical decision making.

    PubMed

    Furley, Philip A; Memmert, Daniel

    2012-06-01

    The controlled attention theory of working memory capacity (WMC, Engle 2002) suggests that WMC represents a domain free limitation in the ability to control attention and is predictive of an individual's capability of staying focused, avoiding distraction and impulsive errors. In the present paper we test the predictive power of WMC in computer-based sport decision-making tasks. Experiment 1 demonstrated that high-WMC athletes were better able at focusing their attention on tactical decision making while blocking out irrelevant auditory distraction. Experiment 2 showed that high-WMC athletes were more successful at adapting their tactical decision making according to the situation instead of relying on prepotent inappropriate decisions. The present results provide additional but also unique support for the controlled attention theory of WMC by demonstrating that WMC is predictive of controlling attention in complex settings among different modalities and highlight the importance of working memory in tactical decision making.

  5. Fatigue life and crack growth prediction methodology

    NASA Technical Reports Server (NTRS)

    Newman, J. C., Jr.; Phillips, E. P.; Everett, R. A., Jr.

    1993-01-01

    The capabilities of a plasticity-induced crack-closure model and life-prediction code to predict fatigue crack growth and fatigue lives of metallic materials are reviewed. Crack-tip constraint factors, to account for three-dimensional effects, were selected to correlate large-crack growth rate data as a function of the effective-stress-intensity factor range (ΔK_eff) under constant-amplitude loading. Some modifications to the ΔK_eff-rate relations were needed in the near threshold regime to fit small-crack growth rate behavior and endurance limits. The model was then used to calculate small- and large-crack growth rates, and in some cases total fatigue lives, for several aluminum and titanium alloys under constant-amplitude, variable-amplitude, and spectrum loading. Fatigue lives were calculated using the crack growth relations and microstructural features like those that initiated cracks. Results from the tests and analyses agreed well.
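
    A schematic of how a ΔK_eff-rate relation turns into a life prediction: integrating da/dN for a small surface crack using an assumed power-law relation, a constant closure factor, and a simple geometry factor. The coefficients are illustrative and do not represent the plasticity-induced closure model reviewed here.

```python
import numpy as np

# Assumed Paris-type relation da/dN = C * (dK_eff)^m (illustrative coefficients,
# not the crack-closure model described in the paper)
C_COEF, M_EXP = 2.0e-11, 3.0          # da/dN in m/cycle, dK_eff in MPa*sqrt(m)
GEOM = 0.73                           # assumed geometry factor for a small surface crack
U_CLOSURE = 0.85                      # assumed fraction of dK that is "effective"

def dk_eff(delta_stress, a):
    return U_CLOSURE * GEOM * delta_stress * np.sqrt(np.pi * a)

def cycles_to_failure(delta_stress, a0=20e-6, a_final=5e-3, n_steps=4000):
    """Integrate da/dN from the initial micro-crack size to a final crack size."""
    a_grid = np.logspace(np.log10(a0), np.log10(a_final), n_steps)
    rate = C_COEF * dk_eff(delta_stress, a_grid) ** M_EXP      # da/dN at each size
    # N = integral of da / (da/dN), trapezoidal rule on the log-spaced grid
    return np.sum(0.5 * (1.0 / rate[1:] + 1.0 / rate[:-1]) * np.diff(a_grid))

for ds in (100.0, 150.0, 200.0):      # constant-amplitude stress ranges, MPa
    print(f"delta-sigma {ds:>5} MPa: ~{cycles_to_failure(ds):,.0f} cycles")
```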

  6. Improved Geothermometry Through Multivariate Reaction-path Modeling and Evaluation of Geomicrobiological Influences on Geochemical Temperature Indicators: Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mattson, Earl; Smith, Robert; Fujita, Yoshiko

    2015-03-01

    The project was aimed at demonstrating that geothermometric predictions can be improved through the application of multi-element reaction-path modeling that accounts for lithologic and tectonic settings, while also accounting for biological influences on geochemical temperature indicators. The limited use of chemical signatures by individual traditional geothermometers in developing reservoir temperature estimates may have constrained their reliability for evaluating potential geothermal resources. This project was therefore intended to build a geothermometry tool that integrates multi-component reaction-path modeling with a process-optimization capability and that can be applied to dilute, low-temperature water samples to consistently predict reservoir temperature within ±30 °C. The project was also intended to evaluate the extent to which microbiological processes can modulate the geochemical signals in some thermal waters and influence the geothermometric predictions.

  7. Predicting New Materials for Hydrogen Storage Application

    PubMed Central

    Vajeeston, Ponniah; Ravindran, Ponniah; Fjellvåg, Helmer

    2009-01-01

    Knowledge about the ground-state crystal structure is a prerequisite for the rational understanding of solid-state properties of new materials. To act as an efficient energy carrier, hydrogen should be absorbed and desorbed in materials easily and in high quantities. Owing to the complexity of structural arrangements and the difficulties involved in establishing hydrogen positions by x-ray diffraction methods, the structural information on hydrides is very limited compared to other classes of materials (like oxides, intermetallics, etc.). This can be overcome by conducting computational simulations combined with selected experimental studies, which can save the environment, money, and manpower. The predictive capability of first-principles density functional theory (DFT) is already well recognized, and in many cases structural and thermodynamic properties of single- and multi-component systems have been predicted. This review focuses on possible new classes of materials that have high hydrogen content, demonstrates the ability of DFT to predict crystal structure, and searches for potential metastable phases. Stabilization of such metastable phases is also discussed.

  8. Exploring predictive performance: A reanalysis of the geospace model transition challenge

    NASA Astrophysics Data System (ADS)

    Welling, D. T.; Anderson, B. J.; Crowley, G.; Pulkkinen, A. A.; Rastätter, L.

    2017-01-01

    The Pulkkinen et al. (2013) study evaluated the ability of five different geospace models to predict surface dB/dt as a function of upstream solar drivers. This was an important step in the assessment of research models for predicting and ultimately preventing the damaging effects of geomagnetically induced currents. Many questions remain concerning the capabilities of these models. This study presents a reanalysis of the Pulkkinen et al. (2013) results in an attempt to better understand the models' performance. The range of validity of the models is determined by examining the conditions corresponding to the empirical input data. It is found that the empirical conductance models on which global magnetohydrodynamic models rely are frequently used outside the limits of their input data. The prediction error for the models is sorted as a function of solar driving and geomagnetic activity. It is found that all models show a bias toward underprediction, especially during active times. These results have implications for future research aimed at improving operational forecast models.

  9. Life prediction technologies for aeronautical propulsion systems

    NASA Technical Reports Server (NTRS)

    Mcgaw, Michael A.

    1987-01-01

    Fatigue and fracture problems continue to occur in aeronautical gas turbine engines. Components whose useful life is limited by these failure modes include turbine hot-section blades, vanes, and disks. Safety considerations dictate that catastrophic failures be avoided, while economic considerations dictate that noncatastrophic failures occur as infrequently as possible. The design decision therefore lies in making the tradeoff between engine performance and durability. The NASA Lewis Research Center has contributed to the aeropropulsion industry in the area of life prediction technology for 30 years, developing creep and fatigue life prediction methodologies for hot-section materials. Emphasis is placed on the development of methods capable of handling both thermal and mechanical fatigue under severe environments. Recent accomplishments include the development of more accurate creep-fatigue life prediction methods such as the total strain version of Lewis' Strainrange Partitioning (SRP) and the HOST-developed Cyclic Damage Accumulation (CDA) model. Other examples include the Double Damage Curve Approach (DDCA), which provides greatly improved accuracy for cumulative fatigue design rules.

  10. Risk Factors Analysis and Death Prediction in Some Life-Threatening Ailments Using Chi-Square Case-Based Reasoning (χ2 CBR) Model.

    PubMed

    Adeniyi, D A; Wei, Z; Yang, Y

    2018-01-30

    A wealth of data is available within the health care system; however, effective analysis tools for exploring the hidden patterns in these datasets are lacking. To alleviate this limitation, this paper proposes a simple but promising hybrid predictive model that suitably combines the Chi-square distance measurement with the case-based reasoning technique. The study presents the realization of an automated risk calculator and death prediction in some life-threatening ailments using a Chi-square case-based reasoning (χ2 CBR) model. The proposed predictive engine is capable of reducing runtime and speeding up execution through the use of a critical χ2 distribution value. This work also showcases the development of a novel feature selection method referred to as the frequent item based rule (FIBR) method. This FIBR method is used for selecting the best features for the proposed χ2 CBR model at the preprocessing stage of the predictive procedure. The implementation of the proposed risk calculator is achieved through an in-house developed PHP program, with an XAMP/Apache HTTP server used as the hosting server. The process of data acquisition and case-base development is implemented using the MySQL application. Performance comparison between our system, the NBY, the ED-KNN, the ANN, the SVM, the Random Forest, and the traditional CBR techniques shows that the quality of predictions produced by our system outperforms the baseline methods studied. The results of our experiment show that the precision rate and predictive quality of our system in most cases are equal to or greater than 70%. Our results also show that the proposed system executes faster than the baseline methods studied. Therefore, the proposed risk calculator is capable of providing useful, consistent, fast, accurate, and efficient risk level prediction to both patients and physicians at any time, online and on a real-time basis.
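
    As an illustration of the kind of chi-square distance retrieval described above (a minimal sketch, not the authors' implementation; the feature vectors, outcomes, and critical threshold below are hypothetical placeholders):

        import numpy as np

        def chi_square_distance(query, case, eps=1e-12):
            """Chi-square distance between two non-negative feature vectors."""
            q, c = np.asarray(query, float), np.asarray(case, float)
            return 0.5 * np.sum((q - c) ** 2 / (q + c + eps))

        def retrieve_outcome(query, case_base, outcomes, critical_value):
            """Return the outcome of the nearest stored case, or None when no
            case falls within the critical chi-square distance."""
            dists = [chi_square_distance(query, c) for c in case_base]
            best = int(np.argmin(dists))
            return outcomes[best] if dists[best] <= critical_value else None

        # Hypothetical case base: three stored cases with risk-level outcomes.
        case_base = [[2, 5, 1], [7, 1, 0], [3, 4, 2]]
        outcomes = ["high risk", "low risk", "high risk"]
        print(retrieve_outcome([3, 5, 1], case_base, outcomes, critical_value=3.84))

    In this sketch the critical chi-square value simply acts as a cut-off on retrieval distance, mirroring the abstract's use of a critical distribution value to limit the search.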

  11. Aerodynamics and thermal physics of helicopter ice accretion

    NASA Astrophysics Data System (ADS)

    Han, Yiqiang

    Ice accretion on aircraft introduces significant losses in airfoil performance. A reduced lift-to-drag ratio reduces the vehicle's capability to maintain altitude and also limits its maneuverability. Current ice accretion performance degradation modeling approaches are calibrated only to a limited envelope of liquid water content, impact velocity, temperature, and water droplet size; consequently, inaccurate aerodynamic performance degradations are estimated. The reduced ice accretion prediction capabilities in the glaze ice regime are primarily due to a lack of knowledge of the surface roughness induced by ice accretion. A comprehensive understanding of ice roughness effects on airfoil heat transfer, ice accretion shapes, and ultimately aerodynamic performance is critical for the design of ice protection systems. Surface roughness effects on both heat transfer and aerodynamic performance degradation on airfoils have been experimentally evaluated. Novel techniques, such as ice molding and casting methods and transient heat transfer measurement using non-intrusive thermal imaging, were developed at the Adverse Environment Rotor Test Stand (AERTS) facility at Penn State. A novel heat transfer scaling method specifically for the turbulent flow regime was also conceived. A heat transfer scaling parameter, labeled the Coefficient of Stanton and Reynolds Number (CSR = St_x/Re_x^-0.2), has been validated against reference data found in the literature for rough flat plates with Reynolds number (Re) up to 1x10^7, for rough cylinders with Re ranging from 3x10^4 to 4x10^6, and for turbine blades with Re from 7.5x10^5 to 7x10^6. This is the first time that the effect of Reynolds number has been shown to be successfully eliminated from heat transfer magnitudes measured on rough surfaces. Analytical models for ice roughness distribution, heat transfer prediction, and aerodynamic performance degradation due to ice accretion have also been developed. The ice roughness prediction model was developed based on a set of 82 experimental measurements and was also compared to existing prediction tools. Two reference predictions found in the literature yielded 76% and 54% discrepancy with respect to experimental testing, whereas the proposed ice roughness prediction model resulted in a 31% discrepancy in prediction. It must be noted that the accuracy of the proposed model is within the ice shape reproduction uncertainty of icing facilities. Based on the new ice roughness prediction model and the CSR heat transfer scaling method, an icing heat transfer model was developed. The approach achieved high accuracy in heat transfer prediction compared to experiments conducted at the AERTS facility. The discrepancy between predictions and experimental results was within +/-15%, which was within the measurement uncertainty range of the facility. By combining both the ice roughness and heat transfer predictions, and incorporating the modules into an existing ice prediction tool (LEWICE), improved prediction capability was obtained, especially for the glaze regime. With the available ice shapes accreted at the AERTS facility and additional experiments found in the literature, 490 sets of experimental ice shapes and corresponding aerodynamic testing data were available. A physics-based performance degradation empirical tool was developed and achieved a mean absolute deviation of 33% when compared to the entire experimental dataset, whereas 60% to 243% discrepancies were observed using legacy drag penalty prediction tools. Rotor torque predictions coupling Blade Element Momentum Theory and the proposed drag performance degradation tool were conducted on a total of 17 validation cases. The coupled prediction tool achieved a 10% prediction error for clean rotor conditions and a 16% error for iced rotor conditions. It was shown that additional roughness elements could affect the measured drag by up to 25% during experimental testing, emphasizing the need for realistic ice structures during aerodynamic modeling and testing for ice accretion.
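
    As a small numerical illustration of the scaling parameter quoted above, the sketch below evaluates CSR = St_x/Re_x^-0.2 from assumed local flow quantities; the property values and convective coefficient are placeholders, not AERTS data.

        def stanton(h, rho, U, cp):
            """Local Stanton number St_x = h / (rho * U * cp)."""
            return h / (rho * U * cp)

        def reynolds(rho, U, x, mu):
            """Local Reynolds number Re_x = rho * U * x / mu."""
            return rho * U * x / mu

        def csr(h, rho, U, cp, x, mu):
            """Coefficient of Stanton and Reynolds Number, CSR = St_x / Re_x**-0.2."""
            return stanton(h, rho, U, cp) / reynolds(rho, U, x, mu) ** -0.2

        # Placeholder air properties and an assumed convective coefficient.
        print(csr(h=250.0, rho=1.2, U=60.0, cp=1005.0, x=0.15, mu=1.8e-5))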

  12. ProTSAV: A protein tertiary structure analysis and validation server.

    PubMed

    Singh, Ankita; Kaushik, Rahul; Mishra, Avinash; Shanker, Asheesh; Jayaram, B

    2016-01-01

    Quality assessment of predicted model structures of proteins is as important as the protein tertiary structure prediction itself. A highly efficient quality assessment of predicted model structures directs further research on function. Here we present a new server, ProTSAV, capable of evaluating predicted model structures based on several popular online servers and standalone tools. ProTSAV furnishes the user with a single quality score in the case of an individual protein structure, along with a graphical representation and ranking in the case of multiple protein structure assessment. The server is validated on ~64,446 protein structures, including experimental structures from RCSB and predicted model structures for CASP targets and from public decoy sets. ProTSAV succeeds in predicting the quality of protein structures with a specificity of 100% and a sensitivity of 98% on experimentally solved structures, and achieves a specificity of 88% and a sensitivity of 91% on predicted protein structures of CASP11 targets under 2 Å. The server overcomes the limitations of any single server/method and is robust in aiding quality assessment. ProTSAV is freely available at http://www.scfbio-iitd.res.in/software/proteomics/protsav.jsp. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Optimization of Gas Metal Arc Welding (GMAW) Process for Maximum Ballistic Limit in MIL A46100 Steel Welded All-Metal Armor

    NASA Astrophysics Data System (ADS)

    Grujicic, M.; Ramaswami, S.; Snipes, J. S.; Yavari, R.; Yen, C.-F.; Cheeseman, B. A.

    2015-01-01

    Our recently developed multi-physics computational model for the conventional gas metal arc welding (GMAW) joining process has been upgraded with respect to its predictive capabilities regarding the process optimization for the attainment of maximum ballistic limit within the weld. The original model consists of six modules, each dedicated to handling a specific aspect of the GMAW process, i.e., (a) electro-dynamics of the welding gun; (b) radiation-/convection-controlled heat transfer from the electric arc to the workpiece and mass transfer from the filler metal consumable electrode to the weld; (c) prediction of the temporal evolution and the spatial distribution of thermal and mechanical fields within the weld region during the GMAW joining process; (d) the resulting temporal evolution and spatial distribution of the material microstructure throughout the weld region; (e) spatial distribution of the as-welded material mechanical properties; and (f) spatial distribution of the material ballistic limit. In the present work, the model is upgraded through the introduction of the seventh module in recognition of the fact that identification of the optimum GMAW process parameters relative to the attainment of the maximum ballistic limit within the weld region entails the use of advanced optimization and statistical sensitivity analysis methods and tools. The upgraded GMAW process model is next applied to the case of butt welding of MIL A46100 (a prototypical high-hardness armor-grade martensitic steel) workpieces using filler metal electrodes made of the same material. The predictions of the upgraded GMAW process model pertaining to the spatial distribution of the material microstructure and ballistic limit-controlling mechanical properties within the MIL A46100 butt weld are found to be consistent with general expectations and prior observations.

  14. Helicopter noise regulations: An industry perspective

    NASA Technical Reports Server (NTRS)

    Wagner, R. A.

    1978-01-01

    A review of FAA helicopter noise measurement programs and noise reduction/economic studies is given, along with a critique of a study addressing the economic impact of helicopter noise reduction. Modification of several helicopters to reduce noise and demonstrate the economic impact of applying current state-of-the-art technology is discussed. Specific helicopters described include the Boeing Vertol 347, the Hughes OH-6, and the Hughes 269C. Other topics covered include: (1) noise trends and possible noise limits; (2) accuracy of helicopter noise prediction techniques; (3) limited change possibilities of derivatives; and (4) rotor impulsive noise. The unique operational capabilities of helicopters and the implications relative to noise regulations and certification are discussed.

  15. Modeling users' activity on Twitter networks: validation of Dunbar's number

    NASA Astrophysics Data System (ADS)

    Goncalves, Bruno; Perra, Nicola; Vespignani, Alessandro

    2012-02-01

    Microblogging and mobile devices appear to augment human social capabilities, which raises the question of whether they remove cognitive or biological constraints on human communication. In this paper we analyze a dataset of Twitter conversations collected across six months involving 1.7 million individuals and test the theoretical cognitive limit on the number of stable social relationships known as Dunbar's number. We find that the data are in agreement with Dunbar's result; users can entertain a maximum of 100-200 stable relationships. Thus, the "economy of attention" is limited in the online world by cognitive and biological constraints, as predicted by Dunbar's theory. We propose a simple model of users' behavior, based on finite priority queuing and time resources, that reproduces the observed social behavior.

  16. Confident Surgical Decision Making in Temporal Lobe Epilepsy by Heterogeneous Classifier Ensembles

    PubMed Central

    Fakhraei, Shobeir; Soltanian-Zadeh, Hamid; Jafari-Khouzani, Kourosh; Elisevich, Kost; Fotouhi, Farshad

    2015-01-01

    In medical domains with low tolerance for invalid predictions, classification confidence is highly important, and traditional performance measures such as overall accuracy cannot provide adequate insight into classification reliability. In this paper, a confident-prediction rate (CPR), which measures the upper limit of confident predictions, is proposed based on receiver operating characteristic (ROC) curves. It is shown that a heterogeneous ensemble of classifiers improves this measure. This ensemble approach has been applied to lateralization of focal epileptogenicity in temporal lobe epilepsy (TLE) and prediction of surgical outcomes. A goal of this study is to reduce the requirement for extraoperative electrocorticography (eECoG), the practice of using electrodes placed directly on the exposed surface of the brain. We have shown that such a goal is achievable with the application of data mining techniques. Furthermore, not all TLE surgical operations result in complete relief from seizures, and it is not always possible for human experts to identify such unsuccessful cases prior to surgery. This study demonstrates the capability of data mining techniques in predicting undesirable outcomes for a portion of such cases. PMID:26609547

  17. Development of polyparameter linear free energy relationship models for octanol-air partition coefficients of diverse chemicals.

    PubMed

    Jin, Xiaochen; Fu, Zhiqiang; Li, Xuehua; Chen, Jingwen

    2017-03-22

    The octanol-air partition coefficient (K_OA) is a key parameter describing the partition behavior of organic chemicals between air and environmental organic phases. As the experimental determination of K_OA is costly, time-consuming, and sometimes limited by the availability of authentic chemical standards for the compounds to be determined, it becomes necessary to develop credible predictive models for K_OA. In this study, a polyparameter linear free energy relationship (pp-LFER) model for predicting K_OA at 298.15 K and a novel model incorporating pp-LFERs with temperature (pp-LFER-T model) were developed from 795 log K_OA values for 367 chemicals at different temperatures (263.15-323.15 K), and were evaluated with the OECD guidelines on QSAR model validation and applicability domain description. Statistical results show that both models are well-fitted, robust, and have good predictive capabilities. In particular, the pp-LFER model shows a strong predictive ability for polyfluoroalkyl substances and organosilicon compounds, and the pp-LFER-T model maintains high predictive accuracy over a wide temperature range (263.15-323.15 K).
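
    The descriptors entering the authors' models are not listed in this abstract. As an illustration only, a commonly used Abraham-type pp-LFER for the octanol-air partition coefficient takes the form

        \log K_{\mathrm{OA}} = c + eE + sS + aA + bB + lL,

    where E, S, A, B, and L are solute descriptors and the lower-case letters are fitted system coefficients; a pp-LFER-T variant would additionally let those coefficients depend on temperature.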

  18. Theory of Maxwell's fish eye with mutually interacting sources and drains

    NASA Astrophysics Data System (ADS)

    Leonhardt, Ulf; Sahebdivan, Sahar

    2015-11-01

    Maxwell's fish eye is predicted to image with a resolution not limited by the wavelength of light. However, interactions between sources and drains may ruin the subwavelength imaging capabilities of this and similar absolute optical instruments. Nevertheless, as we show in this paper, at resonance frequencies of the device, an array of drains may resolve a single source, or alternatively, a single drain may scan an array of sources, no matter how narrowly spaced they are. It seems that near-field information can be obtained from far-field distances.

  19. FORMAC integration program: A special applications package used in developing techniques of orbital decay and long term ephemeris prediction for satellites in earth orbit

    NASA Technical Reports Server (NTRS)

    Rowe, C. K.

    1971-01-01

    The symbolic manipulation capabilities of the FORMAC (Formula Manipulation Compiler) language are employed to expand and analytically evaluate integrals. The program integration is effected by expanding the integral(s) into a series of subintegrals and then substituting a pre-derived and pre-coded solution for that particular subintegral. Derivation of the integral solutions necessary for precoding is included, as is a discussion of the FORMAC system limitations encountered in the programming effort.

  20. A new bead-spring model for simulation of semi-flexible macromolecules

    NASA Astrophysics Data System (ADS)

    Saadat, Amir; Khomami, Bamin

    2016-11-01

    A bead-spring model for semi-flexible macromolecules is developed to overcome the deficiencies of the current coarse-grained bead-spring models. Specifically, model improvements are achieved through incorporation of a bending potential. The new model is designed to accurately describe the correlation along the backbone of the chain, segmental length, and force-extension behavior of the macromolecule even at the limit of 1 Kuhn step per spring. The relaxation time of different Rouse modes is used to demonstrate the capabilities of the new model in predicting chain dynamics.

  1. Load Index Metrics for an Optimized Management of Web Services: A Systematic Evaluation

    PubMed Central

    Souza, Paulo S. L.; Santana, Regina H. C.; Santana, Marcos J.; Zaluska, Ed; Faical, Bruno S.; Estrella, Julio C.

    2013-01-01

    A lack of precision in predicting service performance through load indices may lead to wrong decisions regarding the use of web services, compromising service performance and raising platform cost unnecessarily. This paper presents experimental studies to qualify the behaviour of load indices in the web service context. The experiments consider three services that generate controlled and significant server demands, four levels of workload for each service, and six distinct execution scenarios. The evaluation considers three relevant perspectives: the capability to represent recent workloads, the capability to predict near-future performance, and stability. Eight different load indices were analysed, including the JMX Average Time index (proposed in this paper), specifically designed to address the limitations of the other indices. A systematic approach is applied to evaluate the different load indices, considering a multiple linear regression model based on the stepwise-AIC method. The results show that the load indices studied represent the workload to some extent; however, in contrast to expectations, most of them do not exhibit a coherent correlation with service performance, and this can result in stability problems. The JMX Average Time index is an exception, showing stable behaviour which is tightly coupled to the service runtime for all executions. Load indices are used to predict the service runtime, and therefore their inappropriate use can lead to decisions that will impact negatively on both service performance and execution cost. PMID:23874776
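
    The sketch below illustrates AIC-based comparison of linear regression models of the kind mentioned above; it is not the authors' stepwise-AIC procedure, and the index names and data are synthetic placeholders.

        import numpy as np

        def ols_aic(X, y):
            """Fit y = X b by ordinary least squares and return the Gaussian AIC."""
            X = np.column_stack([np.ones(len(y)), X])   # add intercept column
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            rss = float(np.sum((y - X @ beta) ** 2))
            n, k = len(y), X.shape[1]
            return n * np.log(rss / n) + 2 * k

        rng = np.random.default_rng(0)
        runtime = rng.normal(size=100)                        # synthetic service runtime
        candidates = {"jmx_average_time": runtime + rng.normal(scale=0.5, size=100),
                      "queue_length": rng.normal(size=100)}   # uninformative index

        # A lower AIC indicates a load index more useful for predicting runtime.
        for name, index in candidates.items():
            print(name, round(ols_aic(index.reshape(-1, 1), runtime), 1))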

  2. Independent Qualification of the CIAU Tool Based on the Uncertainty Estimate in the Prediction of Angra 1 NPP Inadvertent Load Rejection Transient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borges, Ronaldo C.; D'Auria, Francesco; Alvim, Antonio Carlos M.

    2002-07-01

    The Code with the capability of Internal Assessment of Uncertainty (CIAU) is a tool proposed by the 'Dipartimento di Ingegneria Meccanica, Nucleare e della Produzione (DIMNP)' of the University of Pisa. Other institutions, including the Brazilian nuclear regulatory body 'Comissao Nacional de Energia Nuclear', contributed to the development of the tool. The CIAU aims at providing the currently available Relap5/Mod3.2 system code with the integrated capability of performing not only relevant transient calculations but also the related estimates of uncertainty bands. The Uncertainty Methodology based on Accuracy Extrapolation (UMAE) is used to characterize the uncertainty in the prediction of system code calculations for light water reactors and is internally coupled with the above system code. Following an overview of the CIAU development, the present paper deals with the independent qualification of the tool. The qualification test is performed by estimating the uncertainty bands that should envelope the prediction of the Angra 1 NPP transient RES-11.99, originated by an inadvertent complete load rejection that caused the reactor scram when the unit was operating at 99% of nominal power. The current limitation of the 'error' database implemented in the CIAU prevented a final demonstration of the qualification. However, all the steps of the qualification process are demonstrated. (authors)

  3. A method for testing whether model predictions fall within a prescribed factor of true values, with an application to pesticide leaching

    USGS Publications Warehouse

    Parrish, Rudolph S.; Smith, Charles N.

    1990-01-01

    A quantitative method is described for testing whether model predictions fall within a specified factor of true values. The technique is based on classical theory for confidence regions on unknown population parameters and can be related to hypothesis testing in both univariate and multivariate situations. A capability index is defined that can be used as a measure of predictive capability of a model, and its properties are discussed. The testing approach and the capability index should facilitate model validation efforts and permit comparisons among competing models. An example is given for a pesticide leaching model that predicts chemical concentrations in the soil profile.
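
    A simplified check in the spirit of the described test is sketched below, assuming the prescribed factor applies to the ratio of predicted to true values; it uses a t-based confidence interval on the mean log ratio rather than the authors' exact confidence-region construction, and the concentration values are hypothetical.

        import numpy as np
        from scipy import stats

        def within_factor(pred, true, factor=2.0, alpha=0.05):
            """Test whether predictions fall within a prescribed factor of true
            values, via a confidence interval on the mean of log(pred/true)."""
            r = np.log(np.asarray(pred, float) / np.asarray(true, float))
            n = len(r)
            half = stats.t.ppf(1 - alpha / 2, n - 1) * r.std(ddof=1) / np.sqrt(n)
            lo, hi = r.mean() - half, r.mean() + half
            return -np.log(factor) <= lo and hi <= np.log(factor)

        # Hypothetical predicted vs. observed pesticide concentrations.
        print(within_factor([1.2, 0.8, 2.1, 1.6], [1.0, 1.1, 1.8, 1.5], factor=3.0))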

  4. Building a Predictive Capability for Decision-Making that Supports MultiPEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carmichael, Joshua Daniel

    Multi-phenomenological explosion monitoring (multiPEM) is a developing science that uses multiple geophysical signatures of explosions to better identify and characterize their sources. MultiPEM researchers seek to integrate explosion signatures together to provide stronger detection, parameter estimation, or screening capabilities between different sources or processes. This talk will address forming a predictive capability for screening waveform explosion signatures to support multiPEM.

  5. Predictive modeling capabilities from incident powder and laser to mechanical properties for laser directed energy deposition

    NASA Astrophysics Data System (ADS)

    Shin, Yung C.; Bailey, Neil; Katinas, Christopher; Tan, Wenda

    2018-05-01

    This paper presents an overview of vertically integrated, comprehensive predictive modeling capabilities for directed energy deposition processes, which have been developed at Purdue University. The overall predictive model consists of several vertically integrated modules, including a powder flow model, a molten pool model, a microstructure prediction model, and a residual stress model, which can be used for predicting the mechanical properties of parts additively manufactured by directed energy deposition with blown powder, as well as by other additive manufacturing processes. Critical governing equations of each model and how the various modules are connected are illustrated. Representative results, along with corresponding experimental validation, are presented to illustrate the capabilities and fidelity of the models. The good correlations with experimental results show that the integrated models can be used to design metal additive manufacturing processes and predict the resultant microstructure and mechanical properties.

  7. Predicting Risk for Suicide: A Preliminary Examination of Non-Suicidal Self-Injury and the Acquired Capability Construct in a College Sample.

    PubMed

    Brackman, Emily H; Morris, Blair W; Andover, Margaret S

    2016-01-01

    The interpersonal psychological theory of suicide (IPTS) provides a useful framework for considering the relationship between non-suicidal self-injury (NSSI) and suicide. Researchers propose that NSSI increases acquired capability for suicide. We predicted that both NSSI frequency and the IPTS acquired capability construct (decreased fear of death and increased pain tolerance) would separately interact with suicidal ideation to predict suicide attempts. Undergraduate students (N = 113) completed self-report questionnaires, and a subsample (n = 66) also completed a pain sensitivity task. NSSI frequency significantly moderated the association between suicidal ideation and suicide attempts. However, in a separate model, acquired capability did not moderate this relationship. Our understanding of the relationship between suicidal ideation and suicidal behavior can be enhanced by factors associated with NSSI that are distinct from the acquired capability construct.

  8. Connectotyping: Model Based Fingerprinting of the Functional Connectome

    PubMed Central

    Miranda-Dominguez, Oscar; Mills, Brian D.; Carpenter, Samuel D.; Grant, Kathleen A.; Kroenke, Christopher D.; Nigg, Joel T.; Fair, Damien A.

    2014-01-01

    A better characterization of how an individual’s brain is functionally organized will likely bring dramatic advances to many fields of study. Here we show a model-based approach to characterizing resting state functional connectivity MRI (rs-fcMRI) that is capable of identifying a so-called “connectotype”, or functional fingerprint, in individual participants. The approach rests on a simple linear model proposing that the activity of a given brain region can be described by the weighted sum of the activity of its functionally neighboring regions. The resulting coefficients correspond to a personalized, model-based connectivity matrix that is capable of predicting the timeseries of each subject. Importantly, the model itself is subject specific and has the ability to predict an individual at a later date using a limited number of non-sequential frames. While we show that there is a significant amount of shared variance between models across subjects, the model’s ability to discriminate an individual is driven by unique connections in higher-order control regions in frontal and parietal cortices. Furthermore, we show that the connectotype is present in non-human primates as well, highlighting the translational potential of the approach. PMID:25386919
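
    A minimal sketch of the described linear model, using synthetic data rather than rs-fcMRI timeseries, is given below: each region is regressed on all other regions, and the resulting coefficients form a model-based connectivity matrix.

        import numpy as np

        def connectotype(ts):
            """ts: array of shape (n_regions, n_timepoints). Returns an
            (n_regions x n_regions) matrix with zero diagonal, where row i holds
            the weights predicting region i from all other regions."""
            n_regions = ts.shape[0]
            coefs = np.zeros((n_regions, n_regions))
            for i in range(n_regions):
                others = np.delete(np.arange(n_regions), i)
                beta, *_ = np.linalg.lstsq(ts[others].T, ts[i], rcond=None)
                coefs[i, others] = beta
            return coefs

        rng = np.random.default_rng(1)
        timeseries = rng.normal(size=(10, 200))   # 10 regions, 200 frames (synthetic)
        model = connectotype(timeseries)
        reconstructed = model @ timeseries        # model-based prediction of each region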

  9. Environmental Conditions Associated with Elevated Vibrio parahaemolyticus Concentrations in Great Bay Estuary, New Hampshire

    PubMed Central

    Urquhart, Erin A.; Jones, Stephen H.; Yu, Jong W.; Schuster, Brian M.; Marcinkiewicz, Ashley L.; Whistler, Cheryl A.; Cooper, Vaughn S.

    2016-01-01

    Reports from state health departments and the Centers for Disease Control and Prevention indicate that the annual number of reported human vibriosis cases in New England has increased in the past decade. Concurrently, there has been a shift in both the spatial distribution and seasonal detection of Vibrio spp. throughout the region, based on limited monitoring data. To determine environmental factors that may underlie these emerging conditions, this study focuses on a long-term database of Vibrio parahaemolyticus concentrations in oyster samples collected from the Great Bay Estuary, New Hampshire over a period of seven consecutive years. Oyster samples from two distinct sites were analyzed for V. parahaemolyticus abundance, noting significant relationships with various biotic and abiotic factors measured during the same period of study. We developed a predictive modeling tool capable of estimating the likelihood of V. parahaemolyticus presence in coastal New Hampshire oysters. Results show that the inclusion of chlorophyll a concentration in an empirical model otherwise employing only temperature and salinity offers improved predictive capability for modeling the likelihood of V. parahaemolyticus in the Great Bay Estuary. PMID:27144925
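
    The sketch below illustrates the type of empirical presence model described, with chlorophyll a added to temperature and salinity as predictors; the data are synthetic placeholders rather than Great Bay measurements, and the logistic-regression form is an assumption.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)
        n = 300
        temp = rng.uniform(5, 30, n)     # water temperature, deg C (synthetic)
        sal = rng.uniform(10, 32, n)     # salinity, psu (synthetic)
        chla = rng.uniform(0, 20, n)     # chlorophyll a, ug/L (synthetic)
        # Synthetic presence labels loosely tied to warm, chlorophyll-rich water.
        presence = (0.3 * temp + 0.1 * chla + rng.normal(scale=2, size=n)) > 8

        X = np.column_stack([temp, sal, chla])
        model = LogisticRegression().fit(X, presence)
        # Estimated likelihood of presence for one hypothetical oyster sample.
        print(model.predict_proba([[24.0, 25.0, 12.0]])[0, 1])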

  10. Compilation of reinforced carbon-carbon transatlantic abort landing arc jet test results

    NASA Technical Reports Server (NTRS)

    Milhoan, James D.; Pham, Vuong T.; Yuen, Eric H.

    1993-01-01

    This document consists of the entire test database generated to support the Reinforced Carbon-Carbon (RCC) Transatlantic Abort Landing Study. RCC components used for orbiter nose cap and wing leading edge thermal protection were originally designed to have a multi-mission entry capability of 2800 F. Increased orbiter range capability required a predicted capability in excess of 3300 F. Three test series were conducted. Test series #1 used ENKA-based RCC specimens coated with silicon carbide, treated with tetraethyl orthosilicate (TEOS), sealed with Type A surface enhancement, and tested at 3000-3400 F with surface pressures of 60-101 psf. Series #2 used ENKA- or AVTEX-based RCC, with and without silicon carbide, with Type A or double Type AA surface enhancement, all impregnated with TEOS, and tested at temperatures from 1440-3350 F with pressures from 100-350 psf. Series #3 tested ENKA-based RCC, with and without silicon carbide coating; no specimens were treated with TEOS or sealed with Type A. Surface temperatures ranged from 2690-3440 F and pressures ranged from 313-400 psf. These combined test results provided the database for establishing the RCC material single-mission temperature limit and for developing surface recession correlations used to predict mass loss under abort conditions.

  11. Undergraduate Health Students' Intention to Use Evidence-Based Practice After Graduation: A Systematic Review of Predictive Modeling Studies.

    PubMed

    Ramis, Mary-Anne; Chang, Anne; Nissen, Lisa

    2018-04-01

    Incorporating evidence-based practice (EBP) into clinical decision making and professional practice is a requirement for many health disciplines, yet research across health disciplines on factors that influence and predict student intention to use EBP following graduation has not been previously synthesized. To synthesize research on factors that influence the development of EBP behaviors and subsequently predict undergraduate students' intention toward EBP uptake. A systematic review of prediction modeling studies was conducted according to a protocol previously published on the Prospero database: https://www.crd.york.ac.uk/PROSPERO/. The outcome variable was undergraduate students' future use or intention to use EBP. Evidence synthesis methods were guided by resources from the Cochrane Methods Prognosis Group Web site (https://prognosismethods.cochrane.org). Only three studies were found to meet the inclusion criteria for the review. Factors relating to EBP capability, EBP attitudes, and clinical and academic support were identified as influential on students' intention to use evidence in practice. Heterogeneity limited data pooling; consequently, results are presented in narrative and tabular form. Although it uses a developing method, this review presents a unique contribution to further discussions regarding students' intention to use EBP following graduation. Despite limitations, consideration of the identified factors in undergraduate curricula could support students' intention to use EBP in their respective clinical environments. © 2017 Sigma Theta Tau International.

  12. Benchmark data sets for structure-based computational target prediction.

    PubMed

    Schomburg, Karen T; Rarey, Matthias

    2014-08-25

    Structure-based computational target prediction methods identify potential targets for a bioactive compound. Methods based on protein-ligand docking still face many challenges, the greatest of which is probably the ranking of true targets in a large data set of protein structures. Currently, no standard data sets for evaluation exist, rendering comparison and demonstration of method improvements cumbersome. Therefore, we propose two data sets and evaluation strategies for a meaningful evaluation of new target prediction methods, i.e., a small data set consisting of three target classes for detailed proof-of-concept and selectivity studies, and a large data set consisting of 7992 protein structures and 72 drug-like ligands allowing statistical evaluation with performance metrics on a drug-like chemical space. Both data sets are built from openly available resources, and any information needed to perform the described experiments is reported. We describe the composition of the data sets, the setup of screening experiments, and the evaluation strategy. Performance metrics capable of measuring early recognition of enrichment, such as AUC, BEDROC, and NSLR, are proposed. We apply a sequence-based target prediction method to the large data set to analyze its content of nontrivial evaluation cases. The proposed data sets are used for method evaluation of our new inverse screening method iRAISE. The small data set reveals the method's capability, and its limitations, in selectively distinguishing between rather similar protein structures. The large data set simulates real target identification scenarios. iRAISE achieves excellent or good enrichment in 55% of cases, a median AUC of 0.67, and RMSDs below 2.0 Å for 74% of cases, and it predicted the first true target in the top 2% of the protein data set of about 8000 structures in 59 out of 72 cases.

  13. The Evolving Landscape of HIV Drug Resistance Diagnostics for Expanding Testing in Resource-Limited Settings.

    PubMed

    Inzaule, Seth C; Hamers, Ralph L; Paredes, Roger; Yang, Chunfu; Schuurman, Rob; Rinke de Wit, Tobias F

    2017-01-01

    Global scale-up of antiretroviral treatment has dramatically changed the prospects of HIV/AIDS disease, rendering life-long chronic care and treatment a reality for millions of HIV-infected patients. Affordable technologies to monitor antiretroviral treatment are needed to ensure the long-term durability of the limited available drug regimens. HIV drug resistance tests can complement existing strategies in optimizing clinical decision-making for patients with treatment failure, in addition to facilitating population-based surveillance of HIV drug resistance. This review assesses the current landscape of HIV drug resistance technologies and discusses the strengths and limitations of existing assays available for expanding testing in resource-limited settings. These include sequencing-based assays (Sanger sequencing assays and next-generation sequencing), point mutation assays, and genotype-free data-based prediction systems. Sanger assays are currently considered the gold-standard genotyping technology, but they are available at only a limited number of reference and regional laboratories in resource-limited settings, and high capital and test costs have limited their wider expansion. Point mutation assays present opportunities for simplified laboratory assays, but HIV genetic variability, extensive codon redundancy at or near the mutation target sites, and limited multiplexing capability have restricted their utility. Next-generation sequencing, despite high costs, may have the potential to reduce testing costs significantly through multiplexing in high-throughput facilities, although the data analysis currently requires substantial bioinformatics expertise, remains expensive, and lacks standardization. Web-based genotype-free prediction systems may provide enhanced antiretroviral treatment decision-making without the need for laboratory testing, but they require further clinical field evaluation and implementation science research in resource-limited settings.

  14. The United States should forego a damage-limitation capability against China

    NASA Astrophysics Data System (ADS)

    Glaser, Charles L.

    2017-11-01

    Bottom Lines:
    • THE KEY STRATEGIC NUCLEAR CHOICE. Whether to attempt to preserve its damage-limitation capability against China is the key strategic nuclear choice facing the United States. The answer is much less clear-cut than when the United States faced the Soviet Union during the Cold War.
    • FEASIBILITY OF DAMAGE LIMITATION. Although technology has advanced significantly over the past three decades, future military competition between the U.S. and Chinese forces will favor large-scale nuclear retaliation over significant damage limitation.
    • BENEFITS AND RISKS OF A DAMAGE-LIMITATION CAPABILITY. The benefits provided by a modest damage-limitation capability would be small, because the United States can meet its most important regional deterrent requirements without one. In comparison, the risks, which include an increased probability of accidental and unauthorized Chinese attacks as well as strained U.S.-China relations, would be large.
    • FOREGO DAMAGE LIMITATION. These twin findings, the poor prospects for prevailing in the military competition and the small benefits and likely overall decrease in U.S. security, call for a U.S. policy that foregoes efforts to preserve or enhance its damage-limitation capability.

  15. Atomically informed nonlocal semi-discrete variational Peierls-Nabarro model for planar core dislocations

    PubMed Central

    Liu, Guisen; Cheng, Xi; Wang, Jian; Chen, Kaiguo; Shen, Yao

    2017-01-01

    Prediction of Peierls stress associated with dislocation glide is of fundamental concern in understanding and designing the plasticity and mechanical properties of crystalline materials. Here, we develop a nonlocal semi-discrete variational Peierls-Nabarro (SVPN) model by incorporating the nonlocal atomic interactions into the semi-discrete variational Peierls framework. The nonlocal kernel is simplified by limiting the nonlocal atomic interaction in the nearest neighbor region, and the nonlocal coefficient is directly computed from the dislocation core structure. Our model is capable of accurately predicting the displacement profile, and the Peierls stress, of planar-extended core dislocations in face-centered cubic structures. Our model could be extended to study more complicated planar-extended core dislocations, such as <110> {111} dislocations in Al-based and Ti-based intermetallic compounds. PMID:28252102

  16. Owens-Illinois liquid solar collector materials assessment

    NASA Technical Reports Server (NTRS)

    Nichols, R. L.

    1978-01-01

    From the beginning, it was noted that the baseline drawings for the liquid solar collector exhibited a distinct weakness concerning materials specification where elastomers, plastics, and foam insulation materials were utilized. A relatively small effort by a competent design organization would alleviate this deficiency. Based on results obtained from boilout and stagnation tests on the solar simulator, it was concluded that proof testing of the collector tubes prior to use helps to predict their performance for limited service life. Fracture mechanics data are desirable for predicting extended service life and establishing a minimum proof pressure level requirement. The temperature capability of this collector system was increased as the design matured and the coating efficiency improved. This higher temperature demands the use of higher temperature materials at critical locations in the collector.

  17. The Earth Science Vision

    NASA Technical Reports Server (NTRS)

    Schoeberl, Mark; Rychekewkitsch, Michael; Andrucyk, Dennis; McConaughy, Gail; Meeson, Blanche; Hildebrand, Peter; Einaudi, Franco (Technical Monitor)

    2000-01-01

    NASA's Earth Science Enterprise's long range vision is to enable the development of a national proactive environmental predictive capability through targeted scientific research and technological innovation. Proactive environmental prediction means the prediction of environmental events and their secondary consequences. These consequences range from disasters and disease outbreak to improved food production and reduced transportation, energy and insurance costs. The economic advantage of this predictive capability will greatly outweigh the cost of development. Developing this predictive capability requires a greatly improved understanding of the earth system and the interaction of the various components of that system. It also requires a change in our approach to gathering data about the earth and a change in our current methodology in processing that data including its delivery to the customers. And, most importantly, it requires a renewed partnership between NASA and its sister agencies. We identify six application themes that summarize the potential of proactive environmental prediction. We also identify four technology themes that articulate our approach to implementing proactive environmental prediction.

  18. Experimental and Numerical Simulations of Phase Transformations Occurring During Continuous Annealing of DP Steel Strips

    NASA Astrophysics Data System (ADS)

    Wrożyna, Andrzej; Pernach, Monika; Kuziak, Roman; Pietrzyk, Maciej

    2016-04-01

    Due to their exceptional strength properties combined with good workability, Advanced High-Strength Steels (AHSS) are commonly used in the automotive industry. Manufacturing of these steels is a complex process which requires precise control of technological parameters during thermo-mechanical treatment. The design of these processes can be significantly improved by numerical models of phase transformations. Evaluation of the predictive capabilities of such models, as far as their applicability to simulating the thermal cycles used for AHSS is concerned, was the objective of the paper. Two models were considered. The former was an upgrade of the JMAK equation, while the latter was an upgrade of the Leblond model. The models can be applied to any AHSS, though the examples quoted in the paper refer to a Dual Phase (DP) steel. Three series of experimental simulations were performed. The first included various thermal cycles going beyond the limitations of continuous annealing lines; the objective was to validate the models' behavior under more complex cooling conditions. The second set of tests included experimental simulations of the thermal cycle characteristic of continuous annealing lines, and the capability of the models to properly describe phase transformations in this process was evaluated. The third set included data from an industrial continuous annealing line. Validation and verification confirmed the models' good predictive capabilities. Since it does not require application of the additivity rule, the upgrade of the Leblond model was selected as the better one for simulation of industrial processes in AHSS production.
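
    The upgraded model forms are not given in this abstract. For reference, the classical isothermal JMAK relation on which the first model builds is

        X(t) = 1 - \exp\left(-k\,t^{n}\right),

    where X is the transformed phase fraction, k a temperature-dependent rate constant, and n the Avrami exponent; applying such a relation to non-isothermal cycles normally relies on the additivity rule, which, as noted above, the Leblond-type upgrade avoids.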

  19. Comparison of nozzle and afterbody surface pressures from wind tunnel and flight test of the YF-17 aircraft

    NASA Technical Reports Server (NTRS)

    Lucas, E. J.; Fanning, A. E.; Steers, L. I.

    1978-01-01

    Results are reported from the initial phase of an effort to provide an adequate technical capability to accurately predict the full-scale, flight-vehicle nozzle-afterbody performance of future aircraft based on partial-scale wind tunnel testing. The primary emphasis of this initial effort is to assess the current capability and identify the causes of limitations on this capability. A direct comparison of surface pressure data is made between the results from a 0.1-scale model wind tunnel investigation and a full-scale flight test program to evaluate current subscale testing techniques. These data were acquired at Mach numbers 0.6, 0.8, 0.9, 1.2, and 1.5 on four nozzle configurations at various vehicle pitch attitudes. Support system interference increments were also documented during the wind tunnel investigation. In general, the results presented indicate good agreement in the trend and level of the surface pressures when corrective increments are applied for known effects and surface differences between the two articles under investigation.

  20. Modeling AWSoM CMEs with EEGGL: A New Approach for Space Weather Forecasting

    NASA Astrophysics Data System (ADS)

    Jin, M.; Manchester, W.; van der Holst, B.; Sokolov, I.; Toth, G.; Vourlidas, A.; de Koning, C. A.; Gombosi, T. I.

    2015-12-01

    The major source of destructive space weather is coronal mass ejections (CMEs). However, our understanding of CMEs and their propagation in the heliosphere is limited by insufficient observations. Therefore, the development of first-principles numerical models plays a vital role both in theoretical investigation and in providing space weather forecasts. Here, we present results of the simulation of CME propagation from the Sun to 1 AU by combining the analytical Gibson & Low (GL) flux rope model with the state-of-the-art solar wind model AWSoM. We also provide an approach for transferring this research model to a space weather forecasting tool by demonstrating how the free parameters of the GL flux rope can be prescribed based on remote observations via the new Eruptive Event Generator by Gibson-Low (EEGGL) toolkit. This capability allows us to predict the long-term evolution of the CME in interplanetary space. We perform proof-of-concept case studies to show the capability of the model to capture physical processes that determine CME evolution while also reproducing many observed features both in the corona and at 1 AU. We discuss the potential and limitations of this model as a future space weather forecasting tool.

  1. Safe Exploration Algorithms for Reinforcement Learning Controllers.

    PubMed

    Mannucci, Tommaso; van Kampen, Erik-Jan; de Visser, Cornelis; Chu, Qiping

    2018-04-01

    Self-learning approaches, such as reinforcement learning, offer new possibilities for autonomous control of uncertain or time-varying systems. However, exploring an unknown environment under limited prediction capabilities is a challenge for a learning agent. If the environment is dangerous, free exploration can result in physical damage or in otherwise unacceptable behavior. With respect to existing methods, the main contribution of this paper is the definition of a new approach that does not require global safety functions, nor specific formulations of the dynamics or of the environment, but relies on interval estimation of the agent's dynamics during the exploration phase, assuming a limited capability of the agent to perceive the presence of incoming fatal states. Two algorithms based on this approach are presented. The first is the Safety Handling Exploration with Risk Perception Algorithm (SHERPA), which provides safety by identifying temporary safety functions, called backups. SHERPA is demonstrated in a simulated, simplified quadrotor task, in which dangerous states are avoided. The second algorithm, OptiSHERPA, uses safety metrics to safely handle more dynamically complex systems for which SHERPA is not sufficient. An application of OptiSHERPA is simulated on an aircraft altitude control task.

  2. Measurement Of Trailing Edge Noise using Directional Array and Coherent Output Power Methods

    NASA Technical Reports Server (NTRS)

    Hutcheson, Florence V.; Brooks, Thomas F.

    2002-01-01

    The use of a directional array of microphones for the measurement of trailing edge (TE) noise is described. The capabilities of this method are evaluated via measurements of TE noise from a NACA 63-215 airfoil model and from a cylindrical rod. This TE noise measurement approach is compared to one that is based on the cross-spectral analysis of output signals from a pair of microphones (the COP method). Advantages and limitations of both methods are examined. It is shown that the microphone array accurately measures TE noise and captures its two-dimensional characteristic over a large frequency range for any TE configuration, as long as noise contamination from extraneous sources is within bounds. The COP method is shown to also accurately measure TE noise, but over a more limited frequency range that narrows with increased TE thickness. Finally, the applicability and generality of an airfoil self-noise prediction method were evaluated via comparison to the experimental data obtained using the COP and array measurement methods. The predicted and experimental results are shown to agree over large frequency ranges.

  3. Dark Energy Camera for Blanco

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Binder, Gary A.; /Caltech /SLAC

    2010-08-25

    In order to make accurate measurements of dark energy, a system is needed to monitor the focus and alignment of the Dark Energy Camera (DECam) to be located on the Blanco 4m Telescope for the upcoming Dark Energy Survey. One new approach under development is to fit out-of-focus star images to a point spread function, from which information about the focus and tilt of the camera can be obtained. As a first test of a new algorithm using this idea, simulated star images produced from a model of DECam in the optics software Zemax were fitted. Then, real images from the Mosaic II imager currently installed on the Blanco telescope were used to investigate the algorithm's capabilities. A number of problems with the algorithm were found, and more work is needed to understand its limitations and improve its capabilities so it can reliably predict camera alignment and focus.

  4. The WFIRST Interim Design Reference Mission: Capabilities, Constraints, and Open Questions

    NASA Technical Reports Server (NTRS)

    Kruk, Jeffrey W.

    2012-01-01

    The Project Office and Science Definition Team for the Wide-Field Infrared Survey Telescope (WFIRST) are in the midst of a pre-Phase A study to establish a Design Reference Mission (DRM). An Interim report was released in June 2011, with a final report due later in 2012. The predicted performance of the Interim DRM Observatory will be described, including optical quality, observing efficiency, and sensitivity for representative observing scenarios. Observing constraints and other limitations on performance will also be presented, with an emphasis on potential Guest Observer programs. Finally, a brief status update will be provided on open trade studies of interest to the scientific community. The final DRM may differ from the Interim DRM presented here. However, the underlying requirements of the scientific programs are not expected to change, hence the capabilities of the IDRM are likely to be maintained even if the implementation changes in significant ways.

  5. A Kolsky tension bar technique using a hollow incident tube

    NASA Astrophysics Data System (ADS)

    Guzman, O.; Frew, D. J.; Chen, W.

    2011-04-01

    Load control of the incident pulse profiles in compression Kolsky bar experiments has been widely used to subject the specimen to optimal testing conditions. Tension Kolsky bars have been used to determine dynamic material behavior since the 1960s with limited capability to shape the loading pulses due to the pulse-generating mechanisms. We developed a modified Kolsky tension bar where a hollow incident tube is used to carry the incident stress waves. The incident tube also acts as a gas gun barrel that houses the striker for impact. The main advantage of this new design is that the striker impacts on an impact cap of the incident tube. Compression pulse shapers can be attached to the impact cap, thus fully utilizing the predictive compression pulse-shaping capability in tension experiments. Using this new testing technique, the dynamic tensile material behavior for Al 6061-T6511 and TRIP 800 (transformation-induced plasticity) steel has been obtained.

  6. Effect of a timebase mismatch in two-way optical frequency transfer

    NASA Astrophysics Data System (ADS)

    Tampellini, Anna; Clivati, Cecilia; Levi, Filippo; Mura, Alberto; Calonico, Davide

    2017-12-01

    Two-way frequency transfer on optical fibers is a powerful technique for the comparison of distant clocks over long and ultra-long hauls. In contrast to traditional Doppler noise cancellation, it is capable of sustaining higher link attenuation, mitigating the need for optical amplification and regeneration and thus reducing the setup complexity. We investigate the ultimate limitations of the two-way approach on a 300 km multiplexed fiber haul, considering fully independent setups and acquisition systems at the two link ends. We derive a theoretical model to predict the performance deterioration due to poor synchronisation of the measurements, which is confirmed by experimental results. This study demonstrates that two-way optical frequency transfer is a reliable and high-performing technique, capable of sustaining remote clock comparisons at the 10^-19 resolution, and is relevant for the development of a fiber network of continental scale for frequency metrology in Europe.

  7. TRAC-PF1/MOD1 pretest predictions of MIST experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyack, B.E.; Steiner, J.L.; Siebe, D.A.

    Los Alamos National Laboratory is a participant in the Integral System Test (IST) program initiated in June 1983 to provide integral system test data on specific issues and phenomena relevant to post small-break loss-of-coolant accidents (SBLOCAs) in Babcock and Wilcox plant designs. The Multi-Loop Integral System Test (MIST) facility is the largest single component in the IST program. During Fiscal Year 1986, Los Alamos performed five MIST pretest analyses. The five experiments were chosen on the basis of their potential either to approach the facility limits or to challenge the predictive capability of the TRAC-PF1/MOD1 code. Three SBLOCA tests were examined which included nominal test conditions, throttled auxiliary feedwater and asymmetric steam-generator cooldown, and reduced high-pressure-injection (HPI) capacity, respectively. Also analyzed were two ''feed-and-bleed'' cooling tests with reduced HPI and delayed HPI initiation. Results of the tests showed that the MIST facility limits would not be approached in the five tests considered. Early comparisons with preliminary test data indicate that the TRAC-PF1/MOD1 code is correctly calculating the dominant phenomena occurring in the MIST facility during the tests. Posttest analyses are planned to provide a quantitative assessment of the code's ability to predict MIST transients.

  8. An investigation of the interactive effects of the capability for suicide and acute agitation on suicidality in a military sample.

    PubMed

    Ribeiro, Jessica D; Bender, Theodore W; Buchman, Jennifer M; Nock, Matthew K; Rudd, M David; Bryan, Craig J; Lim, Ingrid C; Baker, Monty T; Knight, Chadwick; Gutierrez, Peter M; Joiner, Thomas E

    2015-01-01

    According to the interpersonal theory of suicide (1, 2), the difficulties inherently associated with death by suicide deter many individuals from engaging in suicidal behavior. Consistent with the notion that suicide is fearsome, acute states of heightened arousal are commonly observed in individuals immediately prior to lethal and near-lethal suicidal behavior. We suggest that among individuals who possess elevated levels of the capability for suicide, the heightened state of arousal experienced during periods of acute agitation may facilitate suicidal behavior in part because it would provide the necessary energy to approach a potentially lethal stimulus. Among individuals who are low on capability, the arousal experienced during agitation may result in further avoidance. In the present project we examine how acute agitation may interact with the capability for suicide to predict suicidality in a large military sample (n = 1,208) using hierarchical multiple regression. Results were in line with a priori hypotheses: among individuals high on capability, suicidality increases as agitation increases, whereas among individuals low on capability, suicidality decreases as agitation increases. Results held beyond the effects of thwarted belongingness, perceived burdensomeness, and suicidal cognitions. Beyond further substantiating the link between agitation and suicide, the findings of the present study provide evidence for the construct validity of the acquired capability and offer initial evidence for the moderating role of capability in the effect of agitation on suicide. Limitations of the current study highlight a need for future research that improves upon the techniques used in the present study. Implications for science and practice are discussed. © 2014 Wiley Periodicals, Inc.
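    As an illustration of the kind of moderation analysis described above, the following sketch fits a hierarchical regression with and without a capability-by-agitation product term on synthetic data; the variable names, effect sizes, and results are hypothetical and do not reproduce the study's data.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 1208                                   # sample size mirroring the abstract
      agitation  = rng.normal(size=n)
      capability = rng.normal(size=n)
      # Synthetic outcome built to contain an interaction, purely for illustration.
      suicidality = (0.1 * agitation + 0.2 * capability
                     + 0.4 * agitation * capability + rng.normal(size=n))

      def fit_ols(X, y):
          """Ordinary least squares with an intercept; returns coefficients and RSS."""
          X = np.column_stack([np.ones(len(y)), X])
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          resid = y - X @ beta
          return beta, float(resid @ resid)

      # Step 1: main effects only.  Step 2: add the capability-by-agitation product term.
      _, rss1 = fit_ols(np.column_stack([agitation, capability]), suicidality)
      beta2, rss2 = fit_ols(np.column_stack([agitation, capability,
                                             agitation * capability]), suicidality)

      r2_change = (rss1 - rss2) / (n * suicidality.var())
      print(f"interaction coefficient = {beta2[3]:+.3f}, R^2 change = {r2_change:.3f}")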

  9. Energy-absorption capability of composite tubes and beams. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Farley, Gary L.; Jones, Robert M.

    1989-01-01

    In this study the objective was to develop a method of predicting the energy-absorption capability of composite subfloor beam structures. Before it is possible to develop such an analysis capability, an in-depth understanding of the crushing process of composite materials must be achieved. Many variables affect the crushing process of composite structures, such as the constituent materials' mechanical properties, specimen geometry, and crushing speed. A comprehensive experimental evaluation of tube specimens was conducted to develop insight into how composite structural elements crush and what the controlling mechanisms are. In this study the four characteristic crushing modes, transverse shearing, brittle fracturing, lamina bending, and local buckling, were identified, and the mechanisms that control the crushing process were defined. An in-depth understanding was developed of how material properties affect energy-absorption capability. For example, an increase in fiber and matrix stiffness and failure strain can, depending upon the configuration of the tube, increase energy-absorption capability. An analysis to predict the energy-absorption capability of composite tube specimens was developed and verified. Good agreement between experiment and prediction was obtained.

  10. Modeling Users' Activity on Twitter Networks: Validation of Dunbar's Number

    PubMed Central

    Gonçalves, Bruno; Perra, Nicola; Vespignani, Alessandro

    2011-01-01

    Microblogging and mobile devices appear to augment human social capabilities, which raises the question whether they remove cognitive or biological constraints on human communication. In this paper we analyze a dataset of Twitter conversations collected across six months involving 1.7 million individuals and test the theoretical cognitive limit on the number of stable social relationships known as Dunbar's number. We find that the data are in agreement with Dunbar's result; users can entertain a maximum of 100–200 stable relationships. Thus, the ‘economy of attention’ is limited in the online world by cognitive and biological constraints as predicted by Dunbar's theory. We propose a simple model for users' behavior that includes finite priority queuing and time resources that reproduces the observed social behavior. PMID:21826200

  11. Scaling Effects of Cr(VI) Reduction Kinetics. The Role of Geochemical Heterogeneity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Li; Li, Li

    2015-10-22

    The natural subsurface is highly heterogeneous with minerals distributed in different spatial patterns. Fundamental understanding of how mineral spatial distribution patterns regulate sorption processes is important for predicting the transport and fate of chemicals. Existing studies of sorption were carried out in well-mixed batch reactors or uniformly packed columns, with few data available on the effects of spatial heterogeneities. As a result, there is a lack of data and understanding on how spatial heterogeneities control sorption processes. In this project, we aim to understand and develop modeling capabilities to predict the sorption of Cr(VI), an omnipresent contaminant in natural systems due to its natural occurrence and industrial utilization. We systematically examine the role of spatial patterns of illite, a common clay, in determining the extent of transport limitation and scaling effects associated with Cr(VI) sorption capacity and kinetics using column experiments and reactive transport modeling. Our results showed that the sorbed mass and rates can differ by an order of magnitude because of the illite spatial heterogeneities and transport limitation. With constraints from data, we also developed the capabilities of modeling Cr(VI) in heterogeneous media. The developed model is then utilized to understand the general principles that govern the relationship between sorption and connectivity, a key measure of the spatial pattern characteristics. This correlation can be used to estimate Cr(VI) sorption characteristics in heterogeneous porous media. Insights gained here bridge gaps between laboratory and field application in hydrogeology and geochemistry, and advance predictive understanding of reactive transport processes in the natural heterogeneous subsurface. We believe that these findings will be of interest to a large number of environmental geochemists and engineers, hydrogeologists, and those interested in contaminant fate and transport, water quality and water composition, and natural attenuation processes in natural systems.

  12. Occultation Predictions Using CCD Strip-Scanning Astrometry

    NASA Technical Reports Server (NTRS)

    Dunham, Edward W.; Ford, C. H.; Stone, R. P. S.; McDonald, S. W.; Olkin, C. B.; Elliot, J. L.; Witteborn, Fred C. (Technical Monitor)

    1994-01-01

    We are developing the method of CCD strip-scanning astrometry for the purpose of deriving reliable advance predictions for occultations involving small objects in the outer solar system. We are using a camera system based on a Ford/Loral 2Kx2K CCD with the Crossley telescope at Lick Observatory for this work. The columns of the CCD are aligned East-West, the telescope drive is stopped, and the CCD is clocked at the same rate that the stars drift across it. In this way we obtain arbitrary length strip images 20 arcmin wide with 0.58" pixels. Since planets move mainly in RA, it is possible to obtain images of the planet and star to be occulted on the same strip well before the occultation occurs. The strip-to-strip precision (i.e. reproducibility) of positions is limited by atmospheric image motion to about 0.1" rms per strip. However, for objects that are nearby in R.A., the image motion is highly correlated and their relative positions are good to 0.02" rms per strip. We will show that the effects of atmospheric image motion on a given strip can be removed if a sufficient number of strips of a given area have been obtained. Thus, it is possible to reach an rms precision of 0.02" per strip, corresponding to about 0.3 of Pluto or Triton's angular radius. The ultimate accuracy of a prediction based on strip-scanning astrometry is currently limited by the accuracy of the positions of the stars in the astrometric network used and by systematic errors most likely due to the optical system. We will show the results of the prediction of some recent occultations as examples of the current capabilities and limitations of this technique.
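    The gain from working with relative rather than absolute positions can be illustrated with a small simulation: when two objects on the same strip share a common (correlated) image-motion term, differencing their measured positions cancels it, leaving only the much smaller independent centroiding noise. The numbers below are illustrative placeholders, not the observatory's data.

      import numpy as np

      rng = np.random.default_rng(1)
      n_strips = 50
      common = rng.normal(scale=0.10, size=n_strips)            # correlated image motion (arcsec)
      star   = common + rng.normal(scale=0.02, size=n_strips)   # measured reference-star position
      target = common + rng.normal(scale=0.02, size=n_strips)   # measured target position

      print("absolute scatter (single object):", round(np.std(star), 3), "arcsec")
      print("relative scatter (target - star):", round(np.std(target - star), 3), "arcsec")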

  13. THE IDEA IS TO USE MODIS IN CONJUNCTION WITH THE CURRENT LIMITED LANDSAT CAPABILITY, COMMERCIAL SATELLITES, AND UNMANNED AERIAL VEHICLES (UAV), IN A MULTI-STAGE APPROACH TO MEET EPA INFORMATION NEEDS. REMOTE SENSING OVERVIEW: EPA CAPABILITIES, PRIORITY AGENCY APPLICATIONS, SENSOR/AIRCRAFT CAPABILITIES, COST CONSIDERATIONS, SPECTRAL AND SPATIAL RESOLUTIONS, AND TEMPORAL CONSIDERATIONS

    EPA Science Inventory

    EPA remote sensing capabilities include applied research for priority applications and technology support for operational assistance to clients across the Agency. The idea is to use MODIS in conjunction with the current limited Landsat capability, commercial satellites, and Unma...

  14. Auralization Architectures for NASA's Next Generation Aircraft Noise Prediction Program

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Lopes, Leonard V.; Burley, Casey L.; Aumann, Aric R.

    2013-01-01

    Aircraft community noise is a significant concern due to continued growth in air traffic, increasingly stringent environmental goals, and operational limitations imposed by airport authorities. The assessment of human response to noise from future aircraft can only be afforded through laboratory testing using simulated flyover noise. Recent work by the authors demonstrated the ability to auralize predicted flyover noise for a state-of-the-art reference aircraft and a future hybrid wing body aircraft concept. This auralization used source noise predictions from NASA's Aircraft NOise Prediction Program (ANOPP) as input. The results from this process demonstrated that auralization based upon system noise predictions is consistent with, and complementary to, system noise predictions alone. To further develop and validate the auralization process, improvements to the interfaces between the synthesis capability and the system noise tools are required. This paper describes the key elements required for accurate noise synthesis and introduces auralization architectures for use with the next-generation ANOPP (ANOPP2). The architectures are built around a new auralization library and its associated Application Programming Interface (API) that utilize ANOPP2 APIs to access data required for auralization. The architectures are designed to make the process of auralizing flyover noise a common element of system noise prediction.

  15. Strong pinning regimes explored with large-scale Ginzburg-Landau simulations

    NASA Astrophysics Data System (ADS)

    Willa, Roland; Koshelev, Alexei E.

    Improving the current-carrying capability of superconductors requires a deep understanding of vortex pinning. Within the theory of (3D) strong pinning an ideal vortex lattice is weakly deformed by a low density np of strong defects. In this limit the critical current jc is expected to grow linearly with np and to decrease with the field B according to B^(-α) with α ≈ 0.5. In the small-field limit the (1D) strong pinning theory of isolated vortices predicts jc ∝ np^0.5, independent of B. We explore strong pinning by low defect densities using time-dependent Ginzburg-Landau simulations. Our numerical results suggest the existence of a wide regime, where the lattice order is destroyed and yet interactions between vortices are important. In particular, for large defects we found an extended range of power-law decay of jc(B) with α ≈ 0.3, smaller than predicted. This regime requires the development of new analytical models. Exploring the behavior of jc for various defect densities and sizes, we will establish pinning regimes and applicability limits of the conventional theory. This work is supported by the U.S. Department of Energy, Office of Science, Materials Sciences and Engineering Division. R. W. acknowledges support from the Swiss National Science Foundation through the SNSF Early Postdoc Mobility Fellowship.
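    A hedged sketch of how a field-decay exponent of this kind could be extracted from simulation output: fit a straight line to log jc versus log B. The data below are synthetic, generated with an exponent of 0.3 purely for illustration, and are unrelated to the Ginzburg-Landau simulations themselves.

      import numpy as np

      B = np.logspace(-1, 1, 20)                  # applied field (arbitrary units)
      noise = 1 + 0.05 * np.random.default_rng(2).normal(size=B.size)
      jc = 2.0 * B**-0.3 * noise                  # synthetic critical-current data

      # Power-law fit: slope of log(jc) vs log(B) gives -alpha.
      slope, intercept = np.polyfit(np.log(B), np.log(jc), 1)
      print(f"fitted exponent alpha = {-slope:.2f}")   # expect roughly 0.3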

  16. Performance evaluation capabilities for the design of physical systems

    NASA Technical Reports Server (NTRS)

    Pilkey, W. D.; Wang, B. P.

    1972-01-01

    The results are presented of a study aimed at developing and formulating a capability for the limiting performance of large steady state systems. The accomplishments reported include: (1) development of a theory of limiting performance of large systems subject to steady state inputs; (2) application and modification of PERFORM, the computational capability for the limiting performance of systems with transient inputs; and (3) demonstration that use of an inherently smooth control force for a limiting performance calculation improves the system identification phase of the design process for physical systems subjected to transient loading.

  17. Calibration of limited-area ensemble precipitation forecasts for hydrological predictions

    NASA Astrophysics Data System (ADS)

    Diomede, Tommaso; Marsigli, Chiara; Montani, Andrea; Nerozzi, Fabrizio; Paccagnella, Tiziana

    2015-04-01

    The main objective of this study is to investigate the impact of calibration for limited-area ensemble precipitation forecasts, to be used for driving discharge predictions up to 5 days in advance. A reforecast dataset, which spans 30 years, based on the Consortium for Small Scale Modeling Limited-Area Ensemble Prediction System (COSMO-LEPS) was used for testing the calibration strategy. Three calibration techniques were applied: quantile-to-quantile mapping, linear regression, and analogs. The performance of these methodologies was evaluated in terms of statistical scores for the precipitation forecasts operationally provided by COSMO-LEPS in the years 2003-2007 over Germany, Switzerland, and the Emilia-Romagna region (northern Italy). The analog-based method seemed to be preferable because of its capability to correct position errors and spread deficiencies. A suitable spatial domain for the analog search can help to handle model spatial errors as systematic errors. However, the performance of the analog-based method may degrade in cases where a limited training dataset is available. A sensitivity test on the length of the training dataset over which to perform the analog search has been performed. The quantile-to-quantile mapping and linear regression methods were less effective, mainly because the forecast-analysis relation was not so strong for the available training dataset. A comparison between the calibration based on the deterministic reforecast and the calibration based on the full operational ensemble used as training dataset has been considered, with the aim of evaluating whether reforecasts are really worthwhile for calibration, given that their computational cost is remarkable. The verification of the calibration process was then performed by coupling ensemble precipitation forecasts with a distributed rainfall-runoff model. This test was carried out for a medium-sized catchment located in Emilia-Romagna, showing a beneficial impact of the analog-based method on the reduction of missed events for discharge predictions.
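    Of the three calibration techniques mentioned, quantile-to-quantile mapping is the simplest to sketch: each forecast value is replaced by the observed value occupying the same quantile in the training climatology. The example below uses synthetic gamma-distributed precipitation, not the COSMO-LEPS reforecast data.

      import numpy as np

      rng = np.random.default_rng(3)
      # Hypothetical training data: the model systematically over-forecasts precipitation.
      obs_train  = rng.gamma(shape=2.0, scale=3.0, size=5000)
      fcst_train = 1.3 * rng.gamma(shape=2.0, scale=3.0, size=5000)

      def quantile_map(fcst_new, fcst_train, obs_train):
          """Map each new forecast value onto the observed climatology by matching
          empirical quantiles (linear interpolation between training quantiles)."""
          quantiles = np.linspace(0, 1, 101)
          f_q = np.quantile(fcst_train, quantiles)
          o_q = np.quantile(obs_train, quantiles)
          return np.interp(fcst_new, f_q, o_q)

      raw = np.array([5.0, 20.0, 60.0])
      print("raw forecasts       :", raw)
      print("calibrated forecasts:", np.round(quantile_map(raw, fcst_train, obs_train), 1))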

  18. Validation metrics for turbulent plasma transport

    DOE PAGES

    Holland, C.

    2016-06-22

    Developing accurate models of plasma dynamics is essential for confident predictive modeling of current and future fusion devices. In modern computer science and engineering, formal verification and validation processes are used to assess model accuracy and establish confidence in the predictive capabilities of a given model. This paper provides an overview of the key guiding principles and best practices for the development of validation metrics, illustrated using examples from investigations of turbulent transport in magnetically confined plasmas. Particular emphasis is given to the importance of uncertainty quantification and its inclusion within the metrics, and the need for utilizing synthetic diagnostics to enable quantitatively meaningful comparisons between simulation and experiment. As a starting point, the structure of commonly used global transport model metrics and their limitations is reviewed. An alternate approach is then presented, which focuses upon comparisons of predicted local fluxes, fluctuations, and equilibrium gradients against observation. Furthermore, the utility of metrics based upon these comparisons is demonstrated by applying them to gyrokinetic predictions of turbulent transport in a variety of discharges performed on the DIII-D tokamak, as part of a multi-year transport model validation activity.
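    As a hedged illustration of the kind of uncertainty-aware comparison discussed (not the paper's exact formulation), the sketch below normalises the prediction-observation discrepancy by the combined uncertainties, so that values near or below one indicate agreement within error bars.

      import numpy as np

      def validation_metric(pred, obs, sigma_pred, sigma_obs):
          """Uncertainty-normalised distance between prediction and observation:
          values near or below 1 indicate agreement within combined uncertainties."""
          d = np.abs(pred - obs) / np.sqrt(sigma_pred**2 + sigma_obs**2)
          return float(np.mean(d))

      # Hypothetical heat-flux comparison at three radial locations (arbitrary units).
      pred     = np.array([1.10, 0.85, 0.40])
      obs      = np.array([1.00, 0.90, 0.55])
      sig_pred = np.array([0.15, 0.10, 0.08])
      sig_obs  = np.array([0.10, 0.10, 0.05])
      print("metric:", round(validation_metric(pred, obs, sig_pred, sig_obs), 2))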

  19. Validation metrics for turbulent plasma transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holland, C.

    Developing accurate models of plasma dynamics is essential for confident predictive modeling of current and future fusion devices. In modern computer science and engineering, formal verification and validation processes are used to assess model accuracy and establish confidence in the predictive capabilities of a given model. This paper provides an overview of the key guiding principles and best practices for the development of validation metrics, illustrated using examples from investigations of turbulent transport in magnetically confined plasmas. Particular emphasis is given to the importance of uncertainty quantification and its inclusion within the metrics, and the need for utilizing synthetic diagnostics to enable quantitatively meaningful comparisons between simulation and experiment. As a starting point, the structure of commonly used global transport model metrics and their limitations is reviewed. An alternate approach is then presented, which focuses upon comparisons of predicted local fluxes, fluctuations, and equilibrium gradients against observation. Furthermore, the utility of metrics based upon these comparisons is demonstrated by applying them to gyrokinetic predictions of turbulent transport in a variety of discharges performed on the DIII-D tokamak, as part of a multi-year transport model validation activity.

  20. Chemical vapor deposition modeling: An assessment of current status

    NASA Technical Reports Server (NTRS)

    Gokoglu, Suleyman A.

    1991-01-01

    The shortcomings of earlier approaches that assumed thermochemical equilibrium and used chemical vapor deposition (CVD) phase diagrams are pointed out. Significant advancements in predictive capabilities due to recent computational developments, especially those for deposition rates controlled by gas phase mass transport, are demonstrated. The importance of using the proper boundary conditions is stressed, and the availability and reliability of gas phase and surface chemical kinetic information are emphasized as the most limiting factors. Future directions for CVD are proposed on the basis of current needs for efficient and effective progress in CVD process design and optimization.

  1. Re-Tooling the Agency's Engineering Predictive Practices for Durability and Damage Tolerance

    NASA Technical Reports Server (NTRS)

    Piascik, Robert S.; Knight, Norman F., Jr.

    2017-01-01

    Over the past decade, the Agency has placed less emphasis on testing and has increasingly relied on computational methods to assess durability and damage tolerance (D&DT) behavior when evaluating design margins for fracture-critical components. With increased emphasis on computational D&DT methods as the standard practice, it is paramount that capabilities of these methods are understood, the methods are used within their technical limits, and validation by well-designed tests confirms understanding. The D&DT performance of a component is highly dependent on parameters in the neighborhood of the damage. This report discusses D&DT method vulnerabilities.

  2. Analysis of the Capability and Limitations of Relativistic Gravity Measurements Using Radio Astronomy Methods

    NASA Technical Reports Server (NTRS)

    Shapiro, I. I.; Counselman, C. C., III

    1975-01-01

    The uses of radar observations of planets and very-long-baseline radio interferometric observations of extragalactic objects to test theories of gravitation are described in detail with special emphasis on sources of error. The accuracy achievable in these tests with data already obtained can be summarized in terms of: retardation of signal propagation (radar), deflection of radio waves (interferometry), advance of planetary perihelia (radar), gravitational quadrupole moment of the sun (radar), and time variation of the gravitational constant (radar). The analyses completed to date have yielded no significant disagreement with the predictions of general relativity.

  3. A Perspective on Computational Aerothermodynamics at NASA

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2007-01-01

    The evolving role of computational aerothermodynamics (CA) within NASA over the past 20 years is reviewed. The presentation highlights contributions to understanding the Space Shuttle pitching moment anomaly observed in the first shuttle flight, prediction of a static instability for Mars Pathfinder, and the use of CA for damage assessment in post-Columbia mission support. In the view forward, several current challenges in computational fluid dynamics and aerothermodynamics for hypersonic vehicle applications are discussed. Example simulations are presented to illustrate capabilities and limitations. Opportunities to advance the state-of-art in algorithms, grid generation and adaptation, and code validation are identified.

  4. Pitot pressure analyses in CO2 condensing rarefied hypersonic flows

    NASA Astrophysics Data System (ADS)

    Ozawa, T.; Suzuki, T.; Fujita, K.

    2016-11-01

    In order to improve the accuracy of rarefied aerodynamic prediction, a hypersonic rarefied wind tunnel (HRWT) was developed at Japan Aerospace Exploration Agency. While this wind tunnel has been limited to inert gases, such as nitrogen or argon, we recently extended the capability of HRWT to CO2 hypersonic flows for several Mars missions. Compared to our previous N2 cases, the condensation effect may not be negligible for CO2 rarefied aerodynamic measurements. Thus, in this work, we have utilized both experimental and numerical approaches to investigate the condensation and rarefaction effects in CO2 hypersonic nozzle flows.

  5. Landslide susceptibility assessment in the Uttarakhand area (India) using GIS: a comparison study of prediction capability of naïve Bayes, multilayer perceptron neural networks, and functional trees methods

    NASA Astrophysics Data System (ADS)

    Pham, Binh Thai; Tien Bui, Dieu; Pourghasemi, Hamid Reza; Indra, Prakash; Dholakia, M. B.

    2017-04-01

    The objective of this study is to make a comparison of the prediction performance of three techniques, Functional Trees (FT), Multilayer Perceptron Neural Networks (MLP Neural Nets), and Naïve Bayes (NB), for landslide susceptibility assessment at the Uttarakhand Area (India). Firstly, a landslide inventory map with 430 landslide locations in the study area was constructed from various sources. Landslide locations were then randomly split into two parts: (i) 70 % of landslide locations being used for training the models, and (ii) 30 % of landslide locations being employed for the validation process. Secondly, a total of eleven landslide conditioning factors including slope angle, slope aspect, elevation, curvature, lithology, soil, land cover, distance to roads, distance to lineaments, distance to rivers, and rainfall were used in the analysis to elucidate the spatial relationship between these factors and landslide occurrences. Feature selection using the Linear Support Vector Machine (LSVM) algorithm was employed to assess the prediction capability of these conditioning factors on landslide models. Subsequently, the NB, MLP Neural Nets, and FT models were constructed using the training dataset. Finally, success rate and predictive rate curves were employed to validate and compare the predictive capability of the three models. Overall, all three models performed very well for landslide susceptibility assessment. Out of these models, the MLP Neural Nets and FT models had almost the same predictive capability, with the MLP Neural Nets model (AUC = 0.850) slightly better than the FT model (AUC = 0.849). The NB model (AUC = 0.838) had the lowest predictive capability of the three. Landslide susceptibility maps were finally developed using these three models. These maps would be helpful to planners and engineers for development activities and land-use planning.
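    A minimal sketch of this kind of classifier comparison by AUC, using scikit-learn on synthetic stand-in data for the conditioning factors; the dataset, model settings, and resulting AUC values are illustrative and unrelated to the study's.

      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.naive_bayes import GaussianNB
      from sklearn.neural_network import MLPClassifier
      from sklearn.metrics import roc_auc_score

      # Synthetic stand-in for 11 conditioning factors and landslide/non-landslide labels.
      X, y = make_classification(n_samples=860, n_features=11, n_informative=6, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      for name, model in [("Naive Bayes", GaussianNB()),
                          ("MLP", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                                random_state=0))]:
          model.fit(X_tr, y_tr)
          auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
          print(f"{name:12s} AUC = {auc:.3f}")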

  6. Sixty-five years of the long march in protein secondary structure prediction: the final stretch?

    PubMed Central

    Yang, Yuedong; Gao, Jianzhao; Wang, Jihua; Heffernan, Rhys; Hanson, Jack; Paliwal, Kuldip; Zhou, Yaoqi

    2018-01-01

    Protein secondary structure prediction began in 1951 when Pauling and Corey predicted helical and sheet conformations for the protein polypeptide backbone even before the first protein structure was determined. Sixty-five years later, powerful new methods breathe new life into this field. The highest three-state accuracy without relying on structure templates is now at 82–84%, a number unthinkable just a few years ago. These improvements came from increasingly larger databases of protein sequences and structures for training, the use of template secondary structure information and more powerful deep learning techniques. As we approach the theoretical limit of three-state prediction (88–90%), alternatives to secondary structure prediction (prediction of backbone torsion angles and Cα-atom-based angles and torsion angles) not only have more room for further improvement but also allow direct prediction of three-dimensional fragment structures with constantly improving accuracy. About 20% of all 40-residue fragments in a database of 1199 non-redundant proteins have <6 Å root-mean-squared distance from the native conformations by SPIDER2. More powerful deep learning methods with improved capability of capturing long-range interactions begin to emerge as the next generation of techniques for secondary structure prediction. The time has come to finish off the final stretch of the long march towards protein secondary structure prediction. PMID:28040746

  7. Nano-QSPR Modelling of Carbon-Based Nanomaterials Properties.

    PubMed

    Salahinejad, Maryam

    2015-01-01

    Evaluation of chemical and physical properties of nanomaterials is of critical importance in a broad variety of nanotechnology research. There is an increasing interest in computational methods capable of predicting properties of new and modified nanomaterials in the absence of time-consuming and costly experimental studies. Quantitative Structure-Property Relationship (QSPR) approaches are progressive tools in the modelling and prediction of many physicochemical properties of nanomaterials, and are also known as nano-QSPR. This review provides insight into the concepts, challenges and applications of QSPR modelling of carbon-based nanomaterials. First, we try to provide a general overview of QSPR implications, by focusing on the difficulties and limitations at each step of the QSPR modelling of nanomaterials. This is followed by the most significant achievements of QSPR methods in modelling the properties of carbon-based nanomaterials and their recent applications to generate predictive models. This review specifically addresses the QSPR modelling of physicochemical properties of carbon-based nanomaterials including fullerenes, single-walled carbon nanotubes (SWNT), multi-walled carbon nanotubes (MWNT) and graphene.

  8. Bankruptcy Prevention: New Effort to Reflect on Legal and Social Changes.

    PubMed

    Kliestik, Tomas; Misankova, Maria; Valaskova, Katarina; Svabova, Lucia

    2018-04-01

    Every corporation has an economic and moral responsibility to its stockholders to perform well financially. However, the number of bankruptcies in Slovakia has been growing for several years without an apparent macroeconomic cause. To prevent a rapid denigration and to prevent the outflow of foreign capital, various efforts are being zealously implemented. Robust analysis using conventional bankruptcy prediction tools revealed that the existing models are adaptable to local conditions, particularly local legislation. Furthermore, it was confirmed that most of these outdated tools have sufficient capability to warn of impending financial problems several years in advance. A novel bankruptcy prediction tool that outperforms the conventional models was developed. However, it is increasingly challenging to predict bankruptcy risk as corporations have become more global and more complex and as they have developed sophisticated schemes to hide their actual situations under the guise of "optimization" for tax authorities. Nevertheless, scepticism remains because economic engineers have established bankruptcy as a strategy to limit the liability resulting from court-imposed penalties.

  9. Pyrolysis Model Development for a Multilayer Floor Covering

    PubMed Central

    McKinnon, Mark B.; Stoliarov, Stanislav I.

    2015-01-01

    Comprehensive pyrolysis models that are integral to computational fire codes have improved significantly over the past decade as the demand for improved predictive capabilities has increased. High fidelity pyrolysis models may improve the design of engineered materials for better fire response, the design of the built environment, and may be used in forensic investigations of fire events. A major limitation to widespread use of comprehensive pyrolysis models is the large number of parameters required to fully define a material and the lack of effective methodologies for measurement of these parameters, especially for complex materials. The work presented here details a methodology used to characterize the pyrolysis of a low-pile carpet tile, an engineered composite material that is common in commercial and institutional occupancies. The studied material includes three distinct layers of varying composition and physical structure. The methodology utilized a comprehensive pyrolysis model (ThermaKin) to conduct inverse analyses on data collected through several experimental techniques. Each layer of the composite was individually parameterized to identify its contribution to the overall response of the composite. The set of properties measured to define the carpet composite were validated against mass loss rate curves collected at conditions outside the range of calibration conditions to demonstrate the predictive capabilities of the model. The mean error between the predicted curve and the mean experimental mass loss rate curve was calculated as approximately 20% on average for heat fluxes ranging from 30 to 70 kW·m−2, which is within the mean experimental uncertainty. PMID:28793556

  10. Remote sensing of sagebrush canopy nitrogen

    USGS Publications Warehouse

    Mitchell, Jessica J.; Glenn, Nancy F.; Sankey, Temuulen T.; Derryberry, DeWayne R.; Germino, Matthew J.

    2012-01-01

    This paper presents a combination of techniques suitable for remotely sensing foliar Nitrogen (N) in semiarid shrublands – a capability that would significantly improve our limited understanding of vegetation functionality in dryland ecosystems. The ability to estimate foliar N distributions across arid and semi-arid environments could help answer process-driven questions related to topics such as controls on canopy photosynthesis, the influence of N on carbon cycling behavior, nutrient pulse dynamics, and post-fire recovery. Our study determined that further exploration into estimating sagebrush canopy N concentrations from an airborne platform is warranted, despite remote sensing challenges inherent to open canopy systems. Hyperspectral data transformed using standard derivative analysis were capable of quantifying sagebrush canopy N concentrations using partial least squares (PLS) regression with an R2 value of 0.72 and an R2 predicted value of 0.42 (n = 35). Subsetting the dataset to minimize the influence of bare ground (n = 19) increased R2 to 0.95 (R2 predicted = 0.56). Ground-based estimates of canopy N using leaf mass per unit area measurements (LMA) yielded consistently better model fits than ground-based estimates of canopy N using cover and height measurements. The LMA approach is likely a method that could be extended to other semiarid shrublands. Overall, the results of this study are encouraging for future landscape scale N estimates and represent an important step in addressing the confounding influence of bare ground, which we found to be a major influence on predictions of sagebrush canopy N from an airborne platform.

  11. 3D ECG- and respiratory-gated non-contrast-enhanced (CE) perfusion MRI for postoperative lung function prediction in non-small-cell lung cancer patients: A comparison with thin-section quantitative computed tomography, dynamic CE-perfusion MRI, and perfusion scan.

    PubMed

    Ohno, Yoshiharu; Seki, Shinichiro; Koyama, Hisanobu; Yoshikawa, Takeshi; Matsumoto, Sumiaki; Takenaka, Daisuke; Kassai, Yoshimori; Yui, Masao; Sugimura, Kazuro

    2015-08-01

    To compare predictive capabilities of non-contrast-enhanced (CE)- and dynamic CE-perfusion MRIs, thin-section multidetector computed tomography (CT) (MDCT), and perfusion scan for postoperative lung function in non-small cell lung cancer (NSCLC) patients. Sixty consecutive pathologically diagnosed NSCLC patients were included and prospectively underwent thin-section MDCT, non-CE-, and dynamic CE-perfusion MRIs and perfusion scan, and had their pre- and postoperative forced expiratory volume in one second (FEV1) measured. Postoperative percent FEV1 (po%FEV1) was then predicted from the fractional lung volume determined on semiquantitatively assessed non-CE- and dynamic CE-perfusion MRIs, from the functional lung volumes determined on quantitative CT, from the number of segments observed on qualitative CT, and from uptakes detected on perfusion scans within total and resected lungs. Predicted po%FEV1s were then correlated with actual po%FEV1s, which were %FEV1s measured postoperatively. The limits of agreement were also determined. All predicted po%FEV1s showed significant correlation (0.73 ≤ r ≤ 0.93, P < 0.0001) and limits of agreement with actual po%FEV1 (non-CE-perfusion MRI: 0.3 ± 10.0%, dynamic CE-perfusion MRI: 1.0 ± 10.8%, perfusion scan: 2.2 ± 14.1%, quantitative CT: 1.2 ± 9.0%, qualitative CT: 1.5 ± 10.2%). Non-CE-perfusion MRI may be able to predict postoperative lung function more accurately than qualitatively assessed MDCT and perfusion scan. © 2014 Wiley Periodicals, Inc.
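    For context, segment-counting estimates of the kind underlying the "qualitative CT" approach are commonly written as the preoperative value scaled by the fraction of functioning segments that remain after resection; the sketch below uses this generic formula with hypothetical numbers, and the paper's exact formulation may differ.

      def predicted_po_fev1(pre_fev1_pct, resected_segments, functioning_segments=19):
          """Segment-counting estimate: postoperative %FEV1 scales with the fraction
          of functioning lung segments that remain after resection."""
          remaining_fraction = 1.0 - resected_segments / functioning_segments
          return pre_fev1_pct * remaining_fraction

      # Hypothetical patient: preoperative %FEV1 of 85 %, right upper lobectomy (3 segments).
      print(round(predicted_po_fev1(85.0, 3), 1), "% predicted postoperative FEV1")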

  12. Application of various FLD modelling approaches

    NASA Astrophysics Data System (ADS)

    Banabic, D.; Aretz, H.; Paraianu, L.; Jurco, P.

    2005-07-01

    This paper focuses on a comparison between different modelling approaches to predict the forming limit diagram (FLD) for sheet metal forming under a linear strain path using the recently introduced orthotropic yield criterion BBC2003 (Banabic D et al 2005 Int. J. Plasticity 21 493-512). The FLD models considered here are a finite element based approach, the well known Marciniak-Kuczynski model, the modified maximum force criterion according to Hora et al (1996 Proc. Numisheet'96 Conf. (Dearborn/Michigan) pp 252-6), Swift's diffuse (Swift H W 1952 J. Mech. Phys. Solids 1 1-18) and Hill's classical localized necking approach (Hill R 1952 J. Mech. Phys. Solids 1 19-30). The FLD of an AA5182-O aluminium sheet alloy has been determined experimentally in order to quantify the predictive capabilities of the models mentioned above.

  13. Parametric bicubic spline and CAD tools for complex targets shape modelling in physical optics radar cross section prediction

    NASA Astrophysics Data System (ADS)

    Delogu, A.; Furini, F.

    1991-09-01

    Increasing interest in radar cross section (RCS) reduction is placing new demands on theoretical, computational, and graphic techniques for calculating the scattering properties of complex targets. In particular, computer codes capable of predicting the RCS of an entire aircraft at high frequency and of achieving RCS control with modest structural changes are becoming of paramount importance in stealth design. A computer code that evaluates the RCS of arbitrarily shaped, computer-aided design (CAD) generated metallic objects, together with its validation against measurements carried out using ALENIA RCS test facilities, is presented. The code, based on the physical optics method, is characterized by an efficient integration algorithm with error control, in order to contain the computer time within acceptable limits, and by an accurate parametric representation of the target surface in terms of bicubic splines.

  14. Anisotropic constitutive modeling for nickel-base single crystal superalloys. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Sheh, Michael Y.

    1988-01-01

    An anisotropic constitutive model was developed based on crystallographic slip theory for nickel-base single crystal superalloys. The constitutive equations developed utilize drag stress and back stress state variables to model the local inelastic flow. Specially designed experiments were conducted to evaluate the existence of back stress in the single crystal superalloy Rene N4 at 982 C. The results suggest that: (1) the back stress is orientation dependent; and (2) the back stress state variable is required for the current model to predict material anelastic recovery behavior. The model was evaluated for its predictive capability on single crystal material behavior including orientation dependent stress-strain response, tension/compression asymmetry, strain rate sensitivity, anelastic recovery behavior, cyclic hardening and softening, stress relaxation, creep and associated crystal lattice rotation. Limitations and future development needs are discussed.

  15. Current Trends in Modeling Research for Turbulent Aerodynamic Flows

    NASA Technical Reports Server (NTRS)

    Gatski, Thomas B.; Rumsey, Christopher L.; Manceau, Remi

    2007-01-01

    The engineering tools of choice for the computation of practical engineering flows have begun to migrate from those based on the traditional Reynolds-averaged Navier-Stokes approach to methodologies capable, in theory if not in practice, of accurately predicting some instantaneous scales of motion in the flow. The migration has largely been driven by both the success of Reynolds-averaged methods over a wide variety of flows as well as the inherent limitations of the method itself. Practitioners, emboldened by their ability to predict a wide variety of statistically steady, equilibrium turbulent flows, have now turned their attention to flow control and non-equilibrium flows, that is, separation control. This review gives some current priorities in traditional Reynolds-averaged modeling research as well as some methodologies being applied to a new class of turbulent flow control problems.

  16. An investigation of gear mesh failure prediction techniques. M.S. Thesis - Cleveland State Univ.

    NASA Technical Reports Server (NTRS)

    Zakrajsek, James J.

    1989-01-01

    A study was performed in which several gear failure prediction methods were investigated and applied to experimental data from a gear fatigue test apparatus. The primary objective was to provide a baseline understanding of the prediction methods and to evaluate their diagnostic capabilities. The methods investigated use the signal average in both the time and frequency domain to detect gear failure. Data from eleven gear fatigue tests were recorded at periodic time intervals as the gears were run from initiation to failure. Four major failure modes, consisting of heavy wear, tooth breakage, single pits, and distributed pitting were observed among the failed gears. Results show that the prediction methods were able to detect only those gear failures which involved heavy wear or distributed pitting. None of the methods could predict fatigue cracks, which resulted in tooth breakage, or single pits. It is suspected that the fatigue cracks were not detected because of limitations in data acquisition rather than in methodology. Additionally, the frequency response between the gear shaft and the transducer was found to significantly affect the vibration signal. The specific frequencies affected were filtered out of the signal average prior to application of the methods.
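    A hedged sketch of the time-domain signal average on which such methods are built: the vibration record is folded into shaft revolutions and averaged, and a simple condition indicator (here, kurtosis of the residual after removing the regular mesh tone) is computed. The signal, geometry, and indicator are illustrative and are not the specific methods evaluated in the thesis.

      import numpy as np

      rng = np.random.default_rng(4)
      samples_per_rev, n_revs, n_teeth = 256, 200, 28
      t = np.arange(samples_per_rev * n_revs)

      # Synthetic vibration: gear-mesh tone + a once-per-revolution fault impulse + noise.
      mesh   = np.sin(2 * np.pi * n_teeth * t / samples_per_rev)
      fault  = ((t % samples_per_rev) == 10).astype(float) * 2.0
      signal = mesh + fault + rng.normal(scale=1.0, size=t.size)

      # Time-synchronous average: fold the record into revolutions and average,
      # which suppresses noise and non-synchronous components.
      tsa = signal.reshape(n_revs, samples_per_rev).mean(axis=0)

      # A simple illustrative condition indicator: kurtosis of the residual after
      # removing the regular gear-mesh tone from the averaged signal.
      residual = tsa - mesh[:samples_per_rev]
      kurtosis = np.mean((residual - residual.mean())**4) / np.var(residual)**2
      print(f"residual kurtosis = {kurtosis:.1f}  (high values flag localized damage)")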

  17. Applicability of a panel method, which includes nonlinear effects, to a forward-swept-wing aircraft

    NASA Technical Reports Server (NTRS)

    Ross, J. C.

    1984-01-01

    The ability of a lower order panel method VSAERO, to accurately predict the lift and pitching moment of a complete forward-swept-wing/canard configuration was investigated. The program can simulate nonlinear effects including boundary-layer displacement thickness, wake roll up, and to a limited extent, separated wakes. The predictions were compared with experimental data obtained using a small-scale model in the 7- by 10- Foot Wind Tunnel at NASA Ames Research Center. For the particular configuration under investigation, wake roll up had only a small effect on the force and moment predictions. The effect of the displacement thickness modeling was to reduce the lift curve slope slightly, thus bringing the predicted lift into good agreement with the measured value. Pitching moment predictions were also improved by the boundary-layer simulation. The separation modeling was found to be sensitive to user inputs, but appears to give a reasonable representation of a separated wake. In general, the nonlinear capabilities of the code were found to improve the agreement with experimental data. The usefullness of the code would be enhanced by improving the reliability of the separated wake modeling and by the addition of a leading edge separation model.

  18. A pan-African medium-range ensemble flood forecast system

    NASA Astrophysics Data System (ADS)

    Thiemig, Vera; Bisselink, Bernard; Pappenberger, Florian; Thielen, Jutta

    2015-04-01

    The African Flood Forecasting System (AFFS) is a probabilistic flood forecast system for medium- to large-scale African river basins, with lead times of up to 15 days. The key components are the hydrological model LISFLOOD, the African GIS database, the meteorological ensemble predictions of the ECMWF and critical hydrological thresholds. In this study the predictive capability is investigated, to estimate AFFS' potential as an operational flood forecasting system for the whole of Africa. This is done in a hindcast mode, by reproducing pan-African hydrological predictions for the whole year of 2003 where important flood events were observed. Results were analysed in two ways, each with its individual objective. The first part of the analysis is of paramount importance for the assessment of AFFS as a flood forecasting system, as it focuses on the detection and prediction of flood events. Here, results were verified with reports of various flood archives such as Dartmouth Flood Observatory, the Emergency Event Database, the NASA Earth Observatory and Reliefweb. The number of hits, false alerts and missed alerts as well as the Probability of Detection, False Alarm Rate and Critical Success Index were determined for various conditions (different regions, flood durations, average amount of annual precipitations, size of affected areas and mean annual discharge). The second part of the analysis complements the first by giving a basic insight into the prediction skill of the general streamflow. For this, hydrological predictions were compared against observations at 36 key locations across Africa and the Continuous Rank Probability Skill Score (CRPSS), the limit of predictability and reliability were calculated. Results showed that AFFS detected around 70 % of the reported flood events correctly. In particular, the system showed good performance in predicting riverine flood events of long duration (> 1 week) and large affected areas (> 10 000 km2) well in advance, whereas AFFS showed limitations for small-scale and short duration flood events. Also the forecasts showed on average a good reliability, and the CRPSS helped identifying regions to focus on for future improvements. The case study for the flood event in March 2003 in the Sabi Basin (Zimbabwe and Mozambique) illustrated the good performance of AFFS in forecasting timing and severity of the floods, gave an example of the clear and concise output products, and showed that the system is capable of producing flood warnings even in ungauged river basins. Hence, from a technical perspective, AFFS shows a good prospective as an operational system, as it has demonstrated its significant potential to contribute to the reduction of flood-related losses in Africa by providing national and international aid organizations timely with medium-range flood forecast information. However, issues related to the practical implication will still need to be investigated.
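    The categorical skill scores named above are computed from a simple contingency table of hits, false alarms, and misses; the sketch below uses hypothetical counts roughly consistent with a detection rate of about 70 %, not the study's verification results.

      def contingency_scores(hits, false_alarms, misses):
          pod = hits / (hits + misses)                   # Probability of Detection
          far = false_alarms / (hits + false_alarms)     # False Alarm Ratio
          csi = hits / (hits + misses + false_alarms)    # Critical Success Index
          return pod, far, csi

      # Hypothetical event counts, purely for illustration.
      pod, far, csi = contingency_scores(hits=35, false_alarms=12, misses=15)
      print(f"POD={pod:.2f}  FAR={far:.2f}  CSI={csi:.2f}")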

  19. High Fidelity Ion Beam Simulation of High Dose Neutron Irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Was, Gary; Wirth, Brian; Motta, Arthur

    The objective of this proposal is to demonstrate the capability to predict the evolution of microstructure and properties of structural materials in-reactor and at high doses, using ion irradiation as a surrogate for reactor irradiations. “Properties” includes both physical properties (irradiated microstructure) and the mechanical properties of the material. Demonstration of the capability to predict properties has two components. One is ion irradiation of a set of alloys to yield an irradiated microstructure and corresponding mechanical behavior that are substantially the same as results from neutron exposure in the appropriate reactor environment. Second is the capability to predict the irradiated microstructure and corresponding mechanical behavior on the basis of improved models, validated against both ion and reactor irradiations and verified against ion irradiations. Taken together, achievement of these objectives will yield an enhanced capability for simulating the behavior of materials in reactor irradiations.

  20. Comparing multiple statistical methods for inverse prediction in nuclear forensics applications

    DOE PAGES

    Lewis, John R.; Zhang, Adah; Anderson-Cook, Christine Michaela

    2017-10-29

    Forensic science seeks to predict source characteristics using measured observables. Statistically, this objective can be thought of as an inverse problem where interest is in the unknown source characteristics or factors (X) of some underlying causal model producing the observables or responses (Y = g(X) + error). Here, this paper reviews several statistical methods for use in inverse problems and demonstrates that comparing results from multiple methods can be used to assess predictive capability. Motivation for assessing inverse predictions comes from the desired application to historical and future experiments involving nuclear material production for forensics research in which inverse predictions, along with an assessment of predictive capability, are desired.
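    One simple inverse-prediction strategy consistent with the Y = g(X) + error framing is sketched below: fit a forward model on training pairs, then estimate the source characteristic for a new observable by minimising the forward-model discrepancy over a grid. The data and forward model are synthetic placeholders, not a nuclear-forensics model or one of the paper's specific methods.

      import numpy as np

      rng = np.random.default_rng(5)
      # Training data from a hypothetical forward process y = g(x) + error.
      x_train = rng.uniform(0, 10, size=200)
      y_train = 2.0 * np.sqrt(x_train) + rng.normal(scale=0.1, size=x_train.size)

      # Forward model: a simple polynomial fit standing in for g(.).
      g_hat = np.poly1d(np.polyfit(x_train, y_train, deg=3))

      def inverse_predict(y_obs, grid=np.linspace(0, 10, 2001)):
          """Grid-search the source characteristic x whose predicted response
          best matches the new observation y_obs."""
          return grid[np.argmin((g_hat(grid) - y_obs)**2)]

      y_new = 4.4                     # new measured observable
      print("estimated source characteristic x =", round(inverse_predict(y_new), 2))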

  1. Comparing multiple statistical methods for inverse prediction in nuclear forensics applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, John R.; Zhang, Adah; Anderson-Cook, Christine Michaela

    Forensic science seeks to predict source characteristics using measured observables. Statistically, this objective can be thought of as an inverse problem where interest is in the unknown source characteristics or factors (X) of some underlying causal model producing the observables or responses (Y = g(X) + error). Here, this paper reviews several statistical methods for use in inverse problems and demonstrates that comparing results from multiple methods can be used to assess predictive capability. Motivation for assessing inverse predictions comes from the desired application to historical and future experiments involving nuclear material production for forensics research in which inverse predictions, along with an assessment of predictive capability, are desired.

  2. Evaluation of prediction capability, robustness, and sensitivity in non-linear landslide susceptibility models, Guantánamo, Cuba

    NASA Astrophysics Data System (ADS)

    Melchiorre, C.; Castellanos Abella, E. A.; van Westen, C. J.; Matteucci, M.

    2011-04-01

    This paper describes a procedure for landslide susceptibility assessment based on artificial neural networks, and focuses on the estimation of the prediction capability, robustness, and sensitivity of susceptibility models. The study is carried out in the Guantanamo Province of Cuba, where 186 landslides were mapped using photo-interpretation. Twelve conditioning factors were mapped including geomorphology, geology, soils, landuse, slope angle, slope direction, internal relief, drainage density, distance from roads and faults, rainfall intensity, and ground peak acceleration. A methodology was used that subdivided the database in 3 subsets. A training set was used for updating the weights. A validation set was used to stop the training procedure when the network started losing generalization capability, and a test set was used to calculate the performance of the network. A 10-fold cross-validation was performed in order to show that the results are repeatable. The prediction capability, the robustness analysis, and the sensitivity analysis were tested on 10 mutually exclusive datasets. The results show that by means of artificial neural networks it is possible to obtain models with high prediction capability and high robustness, and that an exploration of the effect of the individual variables is possible, even if they are considered as a black-box model.

  3. Toward a Public Toxicogenomics Capability for Supporting Predictive Toxicology: Survey of Current Resources and Chemical Indexing of Experiments in GEO and ArrayExpress

    EPA Science Inventory

    A publicly available toxicogenomics capability for supporting predictive toxicology and meta-analysis depends on availability of gene expression data for chemical treatment scenarios, the ability to locate and aggregate such information by chemical, and broad data coverage within...

  4. Probing Dynamics of 2-D Granular Media via X-Ray Imaging

    NASA Astrophysics Data System (ADS)

    Crum, Ryan; Akin, Minta; Herbold, Eric; Lind, Jon; Homel, Mike; Hurley, Ryan

    2017-06-01

    Granular systems are ever present in our everyday world and influence many dynamic scientific problems including mine blasting, projectile penetration, astrophysical collisions, and dynamic compaction. Despite its significance, a fundamental understanding of granular media's behavior falls well short of that of its solid counterpart, limiting predictive capabilities. The kinematics of granular media is complex in part due to the intricate interplay between numerous degrees of freedom not present in its solid equivalent. Previous dynamic studies in granular media primarily use VISAR or PDV, macro-scale diagnostics that only focus on the aggregate effect of the many degrees of freedom, leaving the principal interactions of these multiple degrees of freedom too entangled to elucidate. To isolate the significance of individual grain-to-grain interactions, this study uses in-situ X-ray imaging to probe a 2-D array of granular media subjected to high strain rate gas gun loading. Analyses include evaluating displacement fields and grain fracture as a function of both saturation and impactor velocity. X-ray imaging analyses feed directly into our concurrent granular media modeling efforts to enhance our predictive capabilities. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  5. Early Estimation of Solar Activity Cycle: Potential Capability and Limits

    NASA Technical Reports Server (NTRS)

    Kitiashvili, Irina N.; Collins, Nancy S.

    2017-01-01

    The variable solar magnetic activity known as the 11-year solar cycle has the longest history of solar observations. These cycles dramatically affect conditions in the heliosphere and the Earth's space environment. Our current understanding of the physical processes that make up global solar dynamics and the dynamo that generates the magnetic fields is sketchy, resulting in unrealistic descriptions of the solar cycles in theoretical and numerical models. The absence of long-term observations of solar interior dynamics and photospheric magnetic fields hinders the development of accurate dynamo models and their calibration. In such situations, mathematical data assimilation methods provide an optimal approach for combining the available observational data and their uncertainties with theoretical models in order to estimate the state of the solar dynamo and predict future cycles. In this presentation, we will discuss the implementation and performance of an Ensemble Kalman Filter data assimilation method based on the Parker migratory dynamo model, complemented by the equation of magnetic helicity conservation and long-term sunspot data series. This approach has allowed us to reproduce the general properties of solar cycles and has already demonstrated a good predictive capability for the current cycle, 24. We will discuss further development of this approach, which includes a more sophisticated dynamo model and synoptic magnetogram data, and employs DART, the Data Assimilation Research Testbed.

  6. A Revised Validation Process for Ice Accretion Codes

    NASA Technical Reports Server (NTRS)

    Wright, William B.; Porter, Christopher E.

    2017-01-01

    A research project is underway at NASA Glenn to produce computer software that can accurately predict ice growth under any meteorological conditions for any aircraft surface. This report will present results from the latest LEWICE release, version 3.5. This program differs from previous releases in its ability to model mixed phase and ice crystal conditions such as those encountered inside an engine. It also has expanded capability to use structured grids and a new capability to use results from unstructured grid flow solvers. A quantitative comparison of the results against a database of ice shapes that have been generated in the NASA Glenn Icing Research Tunnel (IRT) has also been performed. This paper will extend the comparison of ice shapes between LEWICE 3.5 and experimental data from a previous paper. Comparisons of lift and drag are made between experimentally collected data from experimentally obtained ice shapes and simulated (CFD) data on simulated (LEWICE) ice shapes. Comparisons are also made between experimentally collected and simulated performance data on select experimental ice shapes to ensure the CFD solver, FUN3D, is valid within the flight regime. The results show that the predicted results are within the accuracy limits of the experimental data for the majority of cases.

  7. Validation Process for LEWICE by Use of a Navier-Stokes Solver

    NASA Technical Reports Server (NTRS)

    Wright, William B.; Porter, Christopher E.

    2017-01-01

    A research project is underway at NASA Glenn to produce computer software that can accurately predict ice growth under any meteorological conditions for any aircraft surface. This report will present results from the latest LEWICE release, version 3.5. This program differs from previous releases in its ability to model mixed phase and ice crystal conditions such as those encountered inside an engine. It also has expanded capability to use structured grids and a new capability to use results from unstructured grid flow solvers. A quantitative comparison of the results against a database of ice shapes that have been generated in the NASA Glenn Icing Research Tunnel (IRT) has also been performed. This paper will extend the comparison of ice shapes between LEWICE 3.5 and experimental data from a previous paper. Comparisons of lift and drag are made between experimentally collected data from experimentally obtained ice shapes and simulated (CFD) data on simulated (LEWICE) ice shapes. Comparisons are also made between experimentally collected and simulated performance data on select experimental ice shapes to ensure the CFD solver, FUN3D, is valid within the flight regime. The results show that the predicted results are within the accuracy limits of the experimental data for the majority of cases.

  8. Features and applications of the Groove Analysis Program (GAP)

    NASA Technical Reports Server (NTRS)

    Ku, Jentung; Nguyen, Tu M.; Brennan, Patrick J.

    1995-01-01

    An IBM Personal Computer (PC) version of the Groove Analysis Program (GAP) was developed to predict the steady state heat transport capability of an axially grooved heat pipe for a specified groove geometry and working fluid. In the model, the capillary limit is determined by the numerical solution of the differential equation for momentum conservation with the appropriate boundary conditions. This governing equation accounts for the hydrodynamic losses due to friction in liquid and vapor flows and due to liquid/vapor shear interaction. Back-pumping in both 0-g and 1-g is accounted for in the boundary condition at the condenser end. Slug formation in 0-g and puddle flow in 1-g are also considered in the model. At the user's discretion, the code will perform the analysis for various fluid inventories (undercharge, nominal charge, overcharge, or a fixed fluid charge) and heat pipe elevations. GAP will also calculate the minimum required heat pipe wall thickness for pressure containment at design temperatures that are greater than or lower than the critical temperature of the working fluid. This paper discusses the theory behind the development of the GAP model. It also presents the many useful and powerful capabilities of the model. Furthermore, a correlation of flight test performance data with predictions from GAP is presented and discussed.

  9. Two Machine Learning Approaches for Short-Term Wind Speed Time-Series Prediction.

    PubMed

    Ak, Ronay; Fink, Olga; Zio, Enrico

    2016-08-01

    The increasing liberalization of European electricity markets, the growing proportion of intermittent renewable energy being fed into the energy grids, and also new challenges in the patterns of energy consumption (such as electric mobility) require flexible and intelligent power grids capable of providing efficient, reliable, economical, and sustainable energy production and distribution. From the supplier side, particularly, the integration of renewable energy sources (e.g., wind and solar) into the grid imposes an engineering and economic challenge because of the limited ability to control and dispatch these energy sources due to their intermittent characteristics. Time-series prediction of wind speed for wind power production is a particularly important and challenging task, wherein prediction intervals (PIs) are preferable results of the prediction, rather than point estimates, because they provide information on the confidence in the prediction. In this paper, two different machine learning approaches to assess PIs of time-series predictions are considered and compared: 1) multilayer perceptron neural networks trained with a multiobjective genetic algorithm and 2) extreme learning machines combined with the nearest neighbors approach. The proposed approaches are applied for short-term wind speed prediction from a real data set of hourly wind speed measurements for the region of Regina in Saskatchewan, Canada. Both approaches demonstrate good prediction precision and provide complementary advantages with respect to different evaluation criteria.
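
    One simple nearest-neighbours flavour of interval estimation, loosely in the spirit of the second approach above, is to build a point forecaster and then form an interval from the empirical residual quantiles of the k nearest training samples. The sketch below uses synthetic hourly wind data and an ordinary MLP regressor rather than an extreme learning machine, so it illustrates only the idea, not the paper's methods.

    ```python
    # Minimal sketch: point forecast plus nearest-neighbour residual quantiles
    # as a prediction interval (PI). Data and model choices are placeholders.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    t = np.arange(2000)
    wind = 8 + 3*np.sin(2*np.pi*t/24) + rng.normal(0, 1.5, size=t.size)  # synthetic hourly wind (m/s)

    # Lagged features: predict the next hour from the previous 6 hours.
    lags = 6
    X = np.column_stack([wind[i:len(wind) - lags + i] for i in range(lags)])
    y = wind[lags:]

    split = 1500
    X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    model.fit(X_tr, y_tr)
    resid = y_tr - model.predict(X_tr)                  # training residuals

    # Interval = point forecast + residual quantiles of the 50 nearest training samples.
    nn = NearestNeighbors(n_neighbors=50).fit(X_tr)
    _, idx = nn.kneighbors(X_te)
    point = model.predict(X_te)
    lo = point + np.quantile(resid[idx], 0.05, axis=1)
    hi = point + np.quantile(resid[idx], 0.95, axis=1)

    coverage = np.mean((y_te >= lo) & (y_te <= hi))
    print("empirical 90% PI coverage:", round(float(coverage), 3))
    ```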

  10. Advanced in-production hotspot prediction and monitoring with micro-topography

    NASA Astrophysics Data System (ADS)

    Fanton, P.; Hasan, T.; Lakcher, A.; Le-Gratiet, B.; Prentice, C.; Simiz, J.-G.; La Greca, R.; Depre, L.; Hunsche, S.

    2017-03-01

    At the 28nm technology node and below, hotspot prediction and process window control across production wafers have become increasingly critical to prevent hotspots from becoming yield-limiting defects. We previously established proof of concept for a systematic approach to identify the most critical pattern locations, i.e. hotspots, in a reticle layout by computational lithography, combining the process window characteristics of these patterns with across-wafer process variation data to predict where hotspots may become yield-impacting defects [1,2]. The current paper establishes the impact of micro-topography on a 28nm metal layer, and its correlation with hotspot best focus variations across a production chip layout. Detailed topography measurements are obtained from an offline tool, and pattern-dependent best focus (BF) shifts are determined from litho simulations that include mask-3D effects. We also establish hotspot metrology and defect verification by SEM image contour extraction and contour analysis. This enables detection of catastrophic defects as well as quantitative characterization of pattern variability, i.e. local and global CD uniformity, across a wafer to establish hotspot defect and variability maps. Finally, we combine defect prediction and verification capabilities for process monitoring by on-product, guided hotspot metrology, i.e. with sampling locations determined from the defect prediction model, achieving a prediction accuracy (capture rate) of around 75%.

  11. Towards cheminformatics-based estimation of drug therapeutic index: Predicting the protective index of anticonvulsants using a new quantitative structure-index relationship approach.

    PubMed

    Chen, Shangying; Zhang, Peng; Liu, Xin; Qin, Chu; Tao, Lin; Zhang, Cheng; Yang, Sheng Yong; Chen, Yu Zong; Chui, Wai Keung

    2016-06-01

    The overall efficacy and safety profile of a new drug is partially evaluated by the therapeutic index in clinical studies and by the protective index (PI) in preclinical studies. In-silico predictive methods may facilitate the assessment of these indicators. Although QSAR and QSTR models can be used for predicting PI, their predictive capability has not been evaluated. To test this capability, we developed QSAR and QSTR models for predicting the activity and toxicity of anticonvulsants at accuracy levels above the literature-reported threshold (LT) of good QSAR models as tested by both the internal 5-fold cross validation and external validation method. These models showed significantly compromised PI predictive capability due to the cumulative errors of the QSAR and QSTR models. Therefore, in this investigation a new quantitative structure-index relationship (QSIR) model was devised and it showed improved PI predictive capability that superseded the LT of good QSAR models. The QSAR, QSTR and QSIR models were developed using support vector regression (SVR) method with the parameters optimized by using the greedy search method. The molecular descriptors relevant to the prediction of anticonvulsant activities, toxicities and PIs were analyzed by a recursive feature elimination method. The selected molecular descriptors are primarily associated with the drug-like, pharmacological and toxicological features and those used in the published anticonvulsant QSAR and QSTR models. This study suggested that QSIR is useful for estimating the therapeutic index of drug candidates. Copyright © 2016. Published by Elsevier Inc.
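
    A minimal sketch of the modelling pipeline named in this abstract (support vector regression with recursive feature elimination and a hyperparameter search) is given below. It assumes a linear-kernel SVR so that RFE can rank descriptors by weight, uses a plain grid search in place of the greedy search, and runs on placeholder data; it is not the authors' configuration.

    ```python
    # Sketch of SVR + recursive feature elimination on synthetic "descriptors".
    import numpy as np
    from sklearn.feature_selection import RFE
    from sklearn.model_selection import GridSearchCV, KFold
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))                             # 50 molecular descriptors (synthetic)
    y = X[:, :5].sum(axis=1) + rng.normal(0, 0.5, size=200)    # synthetic protective-index values

    # Recursive feature elimination keeps the 10 most informative descriptors.
    selector = RFE(SVR(kernel="linear"), n_features_to_select=10).fit(X, y)
    X_sel = selector.transform(X)

    # A simple grid search stands in for the greedy parameter optimisation.
    grid = GridSearchCV(SVR(kernel="linear"),
                        {"C": [0.1, 1, 10], "epsilon": [0.01, 0.1, 0.5]},
                        cv=KFold(n_splits=5, shuffle=True, random_state=0))
    grid.fit(X_sel, y)

    print("selected descriptors:", np.flatnonzero(selector.support_))
    print("best parameters:", grid.best_params_)
    ```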

  12. A correlational approach to predicting operator status

    NASA Technical Reports Server (NTRS)

    Shingledecker, Clark A.

    1988-01-01

    This paper discusses a research approach for identifying and validating candidate physiological and behavioral parameters which can be used to predict the performance capabilities of aircrew and other system operators. In this methodology, concurrent and advance correlations are computed between predictor values and criterion performance measures. Continuous performance and sleep loss are used as stressors to promote performance variation. Preliminary data are presented which suggest dependence of prediction capability on the resource allocation policy of the operator.

  13. Probabilistic migration modelling focused on functional barrier efficiency and low migration concepts in support of risk assessment.

    PubMed

    Brandsch, Rainer

    2017-10-01

    Migration modelling provides reliable migration estimates from food-contact materials (FCM) to food or food simulants based on mass-transfer parameters like diffusion and partition coefficients related to individual materials. In most cases, mass-transfer parameters are not readily available from the literature and for this reason are estimated with a given uncertainty. Historically, uncertainty was accounted for by introducing upper-limit concepts, which turned out to be of limited applicability because they highly overestimated migration. Probabilistic migration modelling makes it possible to account for uncertainty in the mass-transfer parameters as well as in other model inputs. With respect to a functional barrier, the most important parameters are the diffusion properties of the functional barrier and its thickness. A software tool that accepts distributions as inputs and applies Monte Carlo methods, i.e., random sampling from the input distributions of the relevant parameters (diffusion coefficient and layer thickness), predicts migration results together with their uncertainty and confidence intervals. The capabilities of probabilistic migration modelling are presented in view of three case studies: (1) sensitivity analysis, (2) functional barrier efficiency and (3) validation by experimental testing. Based on the migration predicted by probabilistic modelling and the related exposure estimates, safety evaluation of new materials in the context of existing or new packaging concepts is possible, as is the identification of associated migration risks and potential safety concerns at an early stage of packaging development. Furthermore, dedicated selection of materials exhibiting the required functional barrier efficiency under application conditions becomes feasible. Validation of the migration risk assessment by probabilistic migration modelling through a minimum of dedicated experimental testing is strongly recommended.
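
    The Monte Carlo idea described above can be illustrated in a few lines: sample the uncertain inputs (diffusion coefficient and functional-barrier thickness) from assumed distributions, propagate them through a model, and read off percentiles. Here the classical diffusion lag time t_lag = L²/(6D) is used purely as a stand-in barrier-efficiency indicator; it is not the migration model of any particular software, and all distribution parameters are hypothetical.

    ```python
    # Monte Carlo propagation of uncertainty in D and L through a simple
    # barrier-breakthrough indicator (diffusion lag time). Illustrative only.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    # Diffusion coefficient (cm^2/s): lognormal to reflect order-of-magnitude uncertainty.
    D = rng.lognormal(mean=np.log(1e-12), sigma=0.7, size=n)
    # Barrier thickness (cm): normal around a 20 micrometre nominal value.
    L = rng.normal(loc=20e-4, scale=2e-4, size=n)

    t_lag_days = L**2 / (6.0 * D) / 86400.0   # breakthrough lag time in days

    p5, p50, p95 = np.percentile(t_lag_days, [5, 50, 95])
    print(f"lag time: median {p50:.0f} d, 90% interval [{p5:.0f}, {p95:.0f}] d")
    ```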

  14. Characterization of the temperate phage vB_RleM_PPF1 and its site-specific integration into the Rhizobium leguminosarum F1 genome.

    PubMed

    Halmillawewa, Anupama P; Restrepo-Córdoba, Marcela; Perry, Benjamin J; Yost, Christopher K; Hynes, Michael F

    2016-02-01

    Bacteriophages may play an important role in regulating population size and diversity of the root nodule symbiont Rhizobium leguminosarum, as well as participating in horizontal gene transfer. Although phages that infect this species have been isolated in the past, our knowledge of their molecular biology, and especially of genome composition, is extremely limited, and this lack of information impacts on the ability to assess phage population dynamics and limits potential agricultural applications of rhizobiophages. To help address this deficit in available sequence and biological information, the complete genome sequence of the Myoviridae temperate phage PPF1 that infects R. leguminosarum biovar viciae strain F1 was determined. The genome is 54,506 bp in length with an average G+C content of 61.9 %. The genome contains 94 putative open reading frames (ORFs) and 74.5 % of these predicted ORFs share homology at the protein level with previously reported sequences in the database. However, putative functions could only be assigned to 25.5 % (24 ORFs) of the predicted genes. PPF1 was capable of efficiently lysogenizing its rhizobial host R. leguminosarum F1. The site-specific recombination system of the phage targets an integration site that lies within a putative tRNA-Pro (CGG) gene in R. leguminosarum F1. Upon integration, the phage is capable of restoring the disrupted tRNA gene, owing to the 50 bp homologous sequence (att core region) it shares with its rhizobial host genome. Phage PPF1 is the first temperate phage infecting members of the genus Rhizobium for which a complete genome sequence, as well as other biological data such as the integration site, is available.

  15. NOAA Climate Program Office Contributions to National ESPC

    NASA Astrophysics Data System (ADS)

    Higgins, W.; Huang, J.; Mariotti, A.; Archambault, H. M.; Barrie, D.; Lucas, S. E.; Mathis, J. T.; Legler, D. M.; Pulwarty, R. S.; Nierenberg, C.; Jones, H.; Cortinas, J. V., Jr.; Carman, J.

    2016-12-01

    NOAA is one of five federal agencies (DOD, DOE, NASA, NOAA, and NSF) which signed an updated charter in 2016 to partner on the National Earth System Prediction Capability (ESPC). Situated within NOAA's Office of Oceanic and Atmospheric Research (OAR), NOAA Climate Program Office (CPO) programs contribute significantly to the National ESPC goals and activities. This presentation will provide an overview of CPO contributions to National ESPC. First, we will discuss selected CPO research and transition activities that directly benefit the ESPC coupled model prediction capability, including: the North American Multi-Model Ensemble (NMME) seasonal prediction system; the Subseasonal Experiment (SubX) project to test real-time subseasonal ensemble prediction systems; and improvements to the NOAA operational Climate Forecast System (CFS), including software infrastructure and data assimilation. Next, we will show how CPO's foundational research activities are advancing future ESPC capabilities. Highlights will include: the Tropical Pacific Observing System (TPOS) to provide the basis for predicting climate on subseasonal to decadal timescales; Subseasonal-to-Seasonal (S2S) processes and predictability studies to improve understanding, modeling and prediction of the MJO; an Arctic Research Program to address urgent needs for advancing monitoring and prediction capabilities in this major area of concern; and advances towards building an experimental multi-decadal prediction system through studies on the Atlantic Meridional Overturning Circulation (AMOC). Finally, CPO has embraced Integrated Information Systems (IISs) that build on the innovation of programs such as the National Integrated Drought Information System (NIDIS) to develop and deliver end-to-end environmental information for key societal challenges (e.g. extreme heat; coastal flooding). These contributions will help the National ESPC better understand and address societal needs and decision support requirements.

  16. USM3D Analysis of Low Boom Configuration

    NASA Technical Reports Server (NTRS)

    Carter, Melissa B.; Campbell, Richard L.; Nayani, Sudheer N.

    2011-01-01

    In the past few years, considerable improvement has been made in NASA's in-house boom prediction capability. As part of this improved capability, the USM3D Navier-Stokes flow solver, when combined with a suitable unstructured grid, went from accurately predicting boom signatures at 1 body length to 10 body lengths. Since that time, the research emphasis has shifted from analysis to the design of supersonic configurations with boom signature mitigation. In order to design an aircraft, the techniques for accurately predicting boom and drag need to be determined. This paper compares CFD results with the wind tunnel experimental results conducted on a Gulfstream reduced boom and drag configuration. Two different wind-tunnel models were designed and tested for drag and boom data. The goal of this study was to assess USM3D capability for predicting both boom and drag characteristics. Overall, USM3D coupled with a grid that was sheared and stretched was able to reasonably predict the boom signature. The computational drag polar matched the experimental results for lift coefficients above 0.1, despite some mismatch in the predicted lift-curve slope.

  17. Towards predicting the encoding capability of MR fingerprinting sequences.

    PubMed

    Sommer, K; Amthor, T; Doneva, M; Koken, P; Meineke, J; Börnert, P

    2017-09-01

    Sequence optimization and appropriate sequence selection remain an unmet need in magnetic resonance fingerprinting (MRF). The main challenge in MRF sequence design is the lack of an appropriate measure of a sequence's encoding capability. To find such a measure, three different candidates for judging encoding capability were investigated: local and global dot-product-based measures judging dictionary entry similarity, as well as a Monte Carlo method that evaluates the noise propagation properties of an MRF sequence. The consistency of these measures for different sequence lengths, as well as their capability to predict actual sequence performance in both phantom and in vivo measurements, was analyzed. While the dot-product-based measures yielded inconsistent results for different sequence lengths, the Monte Carlo method was in good agreement with phantom experiments. In particular, the Monte Carlo method could accurately predict the performance of different flip angle patterns in actual measurements. The proposed Monte Carlo method provides an appropriate measure of MRF sequence encoding capability and may be used for sequence optimization. Copyright © 2017 Elsevier Inc. All rights reserved.
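
    The Monte Carlo measure sketched in this abstract amounts to repeatedly perturbing a known dictionary signal with noise, matching it back to the dictionary, and examining the spread of the recovered parameter. The toy example below uses a simple exponential signal model instead of a Bloch-simulated MRF dictionary, so it only illustrates the noise-propagation idea; all numbers are hypothetical.

    ```python
    # Toy Monte Carlo estimate of parameter spread under dictionary matching.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.01, 3.0, 200)                 # "sequence" time points (s)
    T1_grid = np.linspace(0.2, 2.0, 181)            # dictionary parameter grid (s)

    # Dictionary of normalized signal evolutions, one row per T1 value.
    dictionary = 1.0 - np.exp(-t[None, :] / T1_grid[:, None])
    dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

    true_T1 = 1.0
    truth = 1.0 - np.exp(-t / true_T1)

    n_trials, sigma = 2000, 0.05
    estimates = np.empty(n_trials)
    for i in range(n_trials):
        noisy = truth + rng.normal(0.0, sigma, size=t.size)
        noisy /= np.linalg.norm(noisy)
        estimates[i] = T1_grid[np.argmax(dictionary @ noisy)]   # dot-product match

    print(f"bias {estimates.mean() - true_T1:+.3f} s, "
          f"std {estimates.std():.3f} s  (lower std = better encoding)")
    ```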

  18. Measurements and Modeling of Nitric Oxide Formation in Counterflow, Premixed CH4/O2/N2 Flames

    NASA Technical Reports Server (NTRS)

    Thomsen, D. Douglas; Laurendeau, Normand M.

    2000-01-01

    Laser-induced fluorescence (LIF) measurements of NO concentration in a variety of CH4/O2/N2 flames are used to evaluate the chemical kinetics of NO formation. The analysis begins with previous measurements in flat, laminar, premixed CH4/O2/N2 flames stabilized on a water-cooled McKenna burner at pressures ranging from 1 to 14.6 atm, equivalence ratios from 0.5 to 1.6, and volumetric nitrogen/oxygen dilution ratios of 2.2, 3.1 and 3.76. These measured results are compared to predictions to determine the capabilities and limitations of the comprehensive kinetic mechanism developed by the Gas Research Institute (GRI), version 2.11. The model is shown to predict well the qualitative trends of NO formation in lean-premixed flames, while quantitatively underpredicting NO concentration by 30-50%. For rich flames, the model is unable to even qualitatively match the experimental results. These flames were found to be limited by low temperatures and an inability to separate the flame from the burner surface. In response to these limitations, a counterflow burner was designed for use in opposed premixed flame studies. A new LIF calibration technique was developed and applied to obtain quantitative measurements of NO concentration in laminar, counterflow premixed, CH4/O2/N2 flames at pressures ranging from 1 to 5.1 atm, equivalence ratios of 0.6 to 1.5, and an N2/O2 dilution ratio of 3.76. The counterflow premixed flame measurements are combined with measurements in burner-stabilized premixed flames and counterflow diffusion flames to build a comprehensive database for analysis of the GRI kinetic mechanism. Pathways, quantitative reaction path and sensitivity analyses are applied to the GRI mechanism for these flame conditions. The prompt NO mechanism is found to severely underpredict the amount of NO formed in rich premixed and nitrogen-diluted diffusion flames. This underprediction is traced to uncertainties in the CH kinetics as well as in the nitrogen oxidation chemistry. Suggestions are made which significantly improve the predictive capability of the GRI mechanism in near-stoichiometric, rich, premixed flames and in atmospheric-pressure, diffusion flames. However, the modified reaction mechanism is unable to model the formation of NO in ultra-rich, premixed or in high-pressure, nonpremixed flames, thus indicating the need for additional study under these conditions.

  19. An Assessment of Current Fan Noise Prediction Capability

    NASA Technical Reports Server (NTRS)

    Envia, Edmane; Woodward, Richard P.; Elliott, David M.; Fite, E. Brian; Hughes, Christopher E.; Podboy, Gary G.; Sutliff, Daniel L.

    2008-01-01

    In this paper, the results of an extensive assessment exercise carried out to establish the current state of the art for predicting fan noise at NASA are presented. Representative codes in the empirical, analytical, and computational categories were exercised and assessed against a set of benchmark acoustic data obtained from wind tunnel tests of three model scale fans. The chosen codes were ANOPP, representing an empirical capability, RSI, representing an analytical capability, and LINFLUX, representing a computational aeroacoustics capability. The selected benchmark fans cover a wide range of fan pressure ratios and fan tip speeds, and are representative of modern turbofan engine designs. The assessment results indicate that the ANOPP code can predict the fan noise spectrum to within 4 dB of the measurement uncertainty band on a third-octave basis for the low and moderate tip speed fans, except at extreme aft emission angles. The RSI code can predict the fan broadband noise spectrum to within 1.5 dB of the experimental uncertainty band provided the rotor-only contribution is taken into account. The LINFLUX code can predict interaction tone power levels to within experimental uncertainties at low and moderate fan tip speeds, but could deviate by as much as 6.5 dB outside the experimental uncertainty band at the highest tip speeds in some cases.

  20. Developing a predictive tropospheric ozone model for Tabriz

    NASA Astrophysics Data System (ADS)

    Khatibi, Rahman; Naghipour, Leila; Ghorbani, Mohammad A.; Smith, Michael S.; Karimi, Vahid; Farhoudi, Reza; Delafrouz, Hadi; Arvanaghi, Hadi

    2013-04-01

    Predictive ozone models are becoming indispensable tools by providing a capability for pollution alerts to serve people who are vulnerable to the risks. We have developed a tropospheric ozone prediction capability for Tabriz, Iran, using five modeling strategies: three regression-type methods, Multiple Linear Regression (MLR), Artificial Neural Networks (ANNs), and Gene Expression Programming (GEP); and two auto-regression-type models, Nonlinear Local Prediction (NLP), which implements chaos theory, and Auto-Regressive Integrated Moving Average (ARIMA) models. The regression-type strategies explain ozone in terms of temperature, solar radiation, dew point temperature, and wind speed, while the auto-regression-type models regress present ozone values on their past values. The ozone time series are available at various time intervals, including hourly intervals, from August 2010 to March 2011. The results for the MLR, ANN, and GEP models are not particularly good, but those produced by NLP and ARIMA are promising for establishing a forecasting capability.
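
    For the auto-regressive side of such a study, a minimal ARIMA forecast might look like the sketch below. The hourly ozone series is synthetic and the ARIMA(2,0,1) order is a placeholder, not the order selected for the Tabriz data.

    ```python
    # Minimal ARIMA forecast of a synthetic hourly ozone series.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(0)
    hours = np.arange(24 * 200)
    ozone = 40 + 15*np.sin(2*np.pi*hours/24) + rng.normal(0, 5, hours.size)  # synthetic hourly ozone (ppb)

    train, test = ozone[:-24], ozone[-24:]
    model = ARIMA(train, order=(2, 0, 1)).fit()      # placeholder ARIMA(2,0,1) order
    forecast = model.forecast(steps=24)              # next-day hourly forecast

    print("24 h forecast MAE:", round(float(np.abs(forecast - test).mean()), 1))
    ```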

  1. A Formal Messaging Notation for Alaskan Aviation Data

    NASA Technical Reports Server (NTRS)

    Rios, Joseph L.

    2015-01-01

    Data exchange is an increasingly important aspect of the National Airspace System. While many data communication channels have become more capable of sending and receiving data at higher throughput rates, there is still a need to use communication channels with limited throughput efficiently. The limitation can be based on technological issues, financial considerations, or both. This paper provides a complete description of several important aviation weather data types in Abstract Syntax Notation format. By doing so, data providers can take advantage of Abstract Syntax Notation's ability to encode data in a highly compressed format. When data such as pilot weather reports, surface weather observations, and various weather predictions are compressed in such a manner, it allows for the efficient use of throughput-limited communication channels. This paper provides details on the Abstract Syntax Notation One (ASN.1) implementation for Alaskan aviation data and demonstrates its use on real-world aviation weather data samples, as Alaska has sparse terrestrial data infrastructure and data are often sent via relatively costly satellite channels.

  2. Evaluating the habitat capability model for Merriam's turkeys

    Treesearch

    Mark A. Rumble; Stanley H. Anderson

    1995-01-01

    Habitat capability (HABCAP) models for wildlife assist land managers in predicting the consequences of their management decisions. Models must be tested and refined prior to using them in management planning. We tested the predicted patterns of habitat selection of the R2 HABCAP model using observed patterns of habitats selected by radio-marked Merriam’s turkey (

  3. Assessment of Current Process Modeling Approaches to Determine Their Limitations, Applicability and Developments Needed for Long-Fiber Thermoplastic Injection Molded Composites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Ba Nghiep; Holbery, Jim; Smith, Mark T.

    2006-11-30

    This report describes the status of the current process modeling approaches to predict the behavior and flow of fiber-filled thermoplastics under injection molding conditions. Previously, models have been developed to simulate the injection molding of short-fiber thermoplastics, and an as-formed composite part or component can then be predicted that contains a microstructure resulting from the constituents' material properties and characteristics as well as the processing parameters. Our objective is to assess these models in order to determine their capabilities and limitations, and the developments needed for long-fiber injection-molded thermoplastics (LFTs). First, the concentration regimes are summarized to facilitate the understanding of different types of fiber-fiber interaction that can occur for a given fiber volume fraction. After the formulation of the fiber suspension flow problem and the simplification leading to the Hele-Shaw approach, the interaction mechanisms are discussed. Next, the establishment of the rheological constitutive equation is presented that reflects the coupled flow/orientation nature. The decoupled flow/orientation approach is also discussed which constitutes a good simplification for many applications involving flows in thin cavities. Finally, before outlining the necessary developments for LFTs, some applications of the current orientation model and the so-called modified Folgar-Tucker model are illustrated through the fiber orientation predictions for selected LFT samples.

  4. Numerical Investigations of Capabilities and Limits of Photospheric Data Driven Magnetic Flux Emergence

    NASA Astrophysics Data System (ADS)

    Linton, Mark; Leake, James; Schuck, Peter W.

    2016-05-01

    The magnetic field of the solar atmosphere is the primary driver of solar activity. Understanding the magnetic state of the solar atmosphere is therefore of key importance to predicting solar activity. One promising means of studying the magnetic atmosphere is to dynamically build up and evolve this atmosphere from the time evolution of the magnetic field at the photosphere, where it can be measured with current solar vector magnetograms at high temporal and spatial resolution. We report here on a series of numerical experiments investigating the capabilities and limits of magnetohydrodynamical simulations of such a process, where a magnetic corona is dynamically built up and evolved from a time series of synthetic photospheric data. These synthetic data are composed of photospheric slices taken from self consistent convection zone to corona simulations of flux emergence. The driven coronae are then quantitatively compared against the coronae of the original simulations. We investigate and report on the fidelity of these driven simulations, both as a function of the emergence timescale of the magnetic flux, and as a function of the driving cadence of the input data. This work was supported by the Chief of Naval Research and the NASA Living with a Star and Heliophysics Supporting Research programs.

  5. An Extremely Wide Bandwidth, Low Noise SIS Heterodyne Receiver Design for Millimeter and Submillimeter Observations

    NASA Technical Reports Server (NTRS)

    Zmuidzinas, J.

    2004-01-01

    Our group has designed a heterodyne submillimeter receiver that offers a very wide IF bandwidth of 12 GHz, while still maintaining a low noise temperature. The 180-300 GHz double-sideband design uses a single SIS device excited by a full bandwidth, fixed-tuned waveguide probe on a silicon substrate. The IF output frequency (limited by the MMIC low noise IF preamplifier) is 6-18 GHz, providing an instantaneous RF bandwidth of 24 GHz (double-sideband). Intensive simulations predict that the junction will achieve a conversion loss better than 1-2 dB and a mixer noise temperature of less than 20 K across the band (twice the quantum limit). The single sideband receiver noise temperature goal is 70 K. The wide instantaneous bandwidth and low noise will result in an instrument capable of a variety of important astrophysical and environmental observations beyond the capabilities of current instruments. Lab testing of the receiver will begin this summer, and first light on the CSO should be in the Spring of 2003. At the CSO, we plan to use the receiver with WASP2, a wideband spectrometer, to search for spectral lines from SCUBA sources. This approach should allow us to rapidly develop a catalog of redshifts for these objects.

  6. Prediction of Tissue Outcome and Assessment of Treatment Effect in Acute Ischemic Stroke Using Deep Learning.

    PubMed

    Nielsen, Anne; Hansen, Mikkel Bo; Tietze, Anna; Mouridsen, Kim

    2018-06-01

    Treatment options for patients with acute ischemic stroke depend on the volume of salvageable tissue. This volume assessment is currently based on fixed thresholds and single imaging modalities, limiting accuracy. We wish to develop and validate a predictive model capable of automatically identifying and combining acute imaging features to accurately predict final lesion volume. Using acute magnetic resonance imaging, we developed and trained a deep convolutional neural network (CNN_deep) to predict final imaging outcome. A total of 222 patients were included, of which 187 were treated with rtPA (recombinant tissue-type plasminogen activator). The performance of CNN_deep was compared with a shallow CNN based on the perfusion-weighted imaging biomarker Tmax (CNN_Tmax), a shallow CNN based on a combination of 9 different biomarkers (CNN_shallow), a generalized linear model, and thresholding of the diffusion-weighted imaging biomarker apparent diffusion coefficient (ADC) at 600×10⁻⁶ mm²/s (ADC_thres). To assess whether CNN_deep is capable of differentiating outcomes with and without intravenous rtPA, patients not receiving intravenous rtPA were included to train CNN_deep,-rtPA and assess a treatment effect. The networks' performances were evaluated using visual inspection, area under the receiver operating characteristic curve (AUC), and contrast. CNN_deep yields significantly better performance in predicting final outcome (AUC=0.88±0.12) than the generalized linear model (AUC=0.78±0.12; P=0.005), CNN_Tmax (AUC=0.72±0.14; P<0.003), and ADC_thres (AUC=0.66±0.13; P<0.0001), and a substantially better performance than CNN_shallow (AUC=0.85±0.11; P=0.063). Measured by contrast, CNN_deep improves the predictions significantly, showing superiority to all other methods (P≤0.003). CNN_deep also seems to be able to differentiate outcomes based on treatment strategy, with the volume of final infarct being significantly different (P=0.048). The considerable improvement in prediction accuracy over the current state of the art increases the potential for automated decision support in providing recommendations for personalized treatment plans. © 2018 American Heart Association, Inc.

  7. 40 CFR 60.2110 - What operating limits must I meet and by when?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... for Commercial and Industrial Solid Waste Incineration Units Emission Limitations and Operating Limits... capable of reading PM concentrations from zero to a level equivalent to at least two times your allowable... of the instrument must be capable of reading PM concentration from zero to a level equivalent to two...

  8. Genomics of antibiotic-resistance prediction in Pseudomonas aeruginosa.

    PubMed

    Jeukens, Julie; Freschi, Luca; Kukavica-Ibrulj, Irena; Emond-Rheault, Jean-Guillaume; Tucker, Nicholas P; Levesque, Roger C

    2017-06-02

    Antibiotic resistance is a worldwide health issue spreading quickly among human and animal pathogens, as well as environmental bacteria. Misuse of antibiotics has an impact on the selection of resistant bacteria, thus contributing to an increase in the occurrence of resistant genotypes that emerge via spontaneous mutation or are acquired by horizontal gene transfer. There is a specific and urgent need not only to detect antimicrobial resistance but also to predict antibiotic resistance in silico. We now have the capability to sequence hundreds of bacterial genomes per week, including assembly and annotation. Novel and forthcoming bioinformatics tools can predict the resistome and the mobilome with a level of sophistication not previously possible. Coupled with bacterial strain collections and databases containing strain metadata, prediction of antibiotic resistance and the potential for virulence are moving rapidly toward a novel approach in molecular epidemiology. Here, we present a model system in antibiotic-resistance prediction, along with its promises and limitations. As it is commonly multidrug resistant, Pseudomonas aeruginosa causes infections that are often difficult to eradicate. We review novel approaches for genotype prediction of antibiotic resistance. We discuss the generation of microbial sequence data for real-time patient management and the prediction of antimicrobial resistance. © 2017 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals Inc. on behalf of The New York Academy of Sciences.

  9. An approximate theoretical method for modeling the static thrust performance of non-axisymmetric two-dimensional convergent-divergent nozzles. M.S. Thesis - George Washington Univ.

    NASA Technical Reports Server (NTRS)

    Hunter, Craig A.

    1995-01-01

    An analytical/numerical method has been developed to predict the static thrust performance of non-axisymmetric, two-dimensional convergent-divergent exhaust nozzles. Thermodynamic nozzle performance effects due to over- and underexpansion are modeled using one-dimensional compressible flow theory. Boundary layer development and skin friction losses are calculated using an approximate integral momentum method based on the classic Kármán-Pohlhausen solution. Angularity effects are included with these two models in a computational Nozzle Performance Analysis Code, NPAC. In four different case studies, results from NPAC are compared to experimental data obtained from subscale nozzle testing to demonstrate the capabilities and limitations of the NPAC method. In several cases, the NPAC prediction matched experimental gross thrust efficiency data to within 0.1 percent at the design NPR, and to within 0.5 percent at off-design conditions.
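
    The one-dimensional compressible-flow part of such a method reduces to textbook isentropic relations: choked mass flow at the throat, the exit Mach number from the area ratio, and gross thrust including the pressure-area term, normalized by the ideally expanded thrust. The sketch below implements only that textbook core (no boundary-layer or angularity losses, so it is not NPAC itself), with illustrative inputs.

    ```python
    # One-dimensional isentropic nozzle thrust sketch (ideal gas, no losses).
    import numpy as np
    from scipy.optimize import brentq

    g = 1.4        # ratio of specific heats
    R = 287.0      # gas constant, J/(kg K)

    def area_ratio(M):
        """Isentropic A/A* as a function of Mach number."""
        return (1.0/M) * ((2.0/(g+1.0)) * (1.0 + 0.5*(g-1.0)*M**2))**((g+1.0)/(2.0*(g-1.0)))

    def gross_thrust(p0, T0, pa, A_throat, A_exit):
        # Supersonic exit Mach number from the area ratio A_exit/A_throat.
        Me = brentq(lambda M: area_ratio(M) - A_exit/A_throat, 1.0001, 10.0)
        pe = p0 / (1.0 + 0.5*(g-1.0)*Me**2)**(g/(g-1.0))
        Te = T0 / (1.0 + 0.5*(g-1.0)*Me**2)
        Ve = Me * np.sqrt(g*R*Te)
        # Choked mass flow through the throat.
        mdot = p0*A_throat/np.sqrt(T0) * np.sqrt(g/R) * (2.0/(g+1.0))**((g+1.0)/(2.0*(g-1.0)))
        return mdot*Ve + (pe - pa)*A_exit, mdot

    def ideal_thrust(p0, T0, pa, mdot):
        # Fully expanded (pe = pa) exit velocity from energy conservation.
        cp = g*R/(g-1.0)
        Vi = np.sqrt(2.0*cp*T0*(1.0 - (pa/p0)**((g-1.0)/g)))
        return mdot*Vi

    p0, T0, pa = 400e3, 300.0, 101.325e3      # chamber pressure (Pa), temperature (K), ambient (Pa)
    A_t, A_e = 0.01, 0.016                    # throat and exit areas (m^2)
    F, mdot = gross_thrust(p0, T0, pa, A_t, A_e)
    print(f"gross thrust efficiency Cfg = {F/ideal_thrust(p0, T0, pa, mdot):.3f}")
    ```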

  10. A Machine Learns to Predict the Stability of Tightly Packed Planetary Systems

    NASA Astrophysics Data System (ADS)

    Tamayo, Daniel; Silburt, Ari; Valencia, Diana; Menou, Kristen; Ali-Dib, Mohamad; Petrovich, Cristobal; Huang, Chelsea X.; Rein, Hanno; van Laerhoven, Christa; Paradise, Adiv; Obertas, Alysa; Murray, Norman

    2016-12-01

    The requirement that planetary systems be dynamically stable is often used to vet new discoveries or set limits on unconstrained masses or orbital elements. This is typically carried out via computationally expensive N-body simulations. We show that characterizing the complicated and multi-dimensional stability boundary of tightly packed systems is amenable to machine-learning methods. We find that training an XGBoost machine-learning algorithm on physically motivated features yields an accurate classifier of stability in packed systems. On the stability timescale investigated (10^7 orbits), it is three orders of magnitude faster than direct N-body simulations. Optimized machine-learning classifiers for dynamical stability may thus prove useful across the discipline, e.g., to characterize the exoplanet sample discovered by the upcoming Transiting Exoplanet Survey Satellite. This proof of concept motivates investing computational resources to train algorithms capable of predicting stability over longer timescales and over broader regions of phase space.
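
    The workflow described above reduces to supervised classification: compute physically motivated features for each system, label it stable or unstable from an N-body integration, and train a gradient-boosted classifier. The sketch below uses synthetic features and labels as placeholders for that training set and assumes the xgboost package is installed.

    ```python
    # Gradient-boosted stability classifier on synthetic placeholder features.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score
    from xgboost import XGBClassifier

    rng = np.random.default_rng(0)
    n = 5000
    # Hypothetical features, e.g. minimum Hill-radius separations, eccentricities,
    # short-integration variability measures; here simply random numbers.
    X = rng.normal(size=(n, 8))
    y = (X[:, 0] + 0.5*X[:, 1] + rng.normal(0, 0.5, n) > 0).astype(int)  # synthetic stability labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
    clf.fit(X_tr, y_tr)

    print("hold-out AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
    ```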

  11. A resolution measure for three-dimensional microscopy

    PubMed Central

    Chao, Jerry; Ram, Sripad; Abraham, Anish V.; Ward, E. Sally; Ober, Raimund J.

    2009-01-01

    A three-dimensional (3D) resolution measure for the conventional optical microscope is introduced which overcomes the drawbacks of the classical 3D (axial) resolution limit. Formulated within the context of a parameter estimation problem and based on the Cramer-Rao lower bound, this 3D resolution measure indicates the accuracy with which a given distance between two objects in 3D space can be determined from the acquired image. It predicts that, given enough photons from the objects of interest, arbitrarily small distances of separation can be estimated with prespecified accuracy. Using simulated images of point source pairs, we show that the maximum likelihood estimator is capable of attaining the accuracy predicted by the resolution measure. We also demonstrate how different factors, such as extraneous noise sources and the spatial orientation of the imaged object pair, can affect the accuracy with which a given distance of separation can be determined. PMID:20161040

  12. Predictive and Reactive Distribution of Vaccines and Antivirals during Cross-Regional Pandemic Outbreaks

    PubMed Central

    Uribe-Sánchez, Andrés; Savachkin, Alex

    2011-01-01

    As recently pointed out by the Institute of Medicine, the existing pandemic mitigation models lack the dynamic decision support capability. We develop a large-scale simulation-driven optimization model for generating dynamic predictive distribution of vaccines and antivirals over a network of regional pandemic outbreaks. The model incorporates measures of morbidity, mortality, and social distancing, translated into the cost of lost productivity and medical expenses. The performance of the strategy is compared to that of the reactive myopic policy, using a sample outbreak in Florida, USA, with an affected population of over four million. The comparison is implemented at different levels of vaccine and antiviral availability and administration capacity. Sensitivity analysis is performed to assess the impact of variability of some critical factors on policy performance. The model is intended to support public health policy making for effective distribution of limited mitigation resources. PMID:23074658

  13. What can we learn about the dynamics of DO-events from studying the high resolution ice core records?

    NASA Astrophysics Data System (ADS)

    Ditlevsen, Peter

    2017-04-01

    The causes of rapid climate changes, and the extent to which they can be predicted, are poorly understood. The most pronounced changes observed, besides the glacial terminations, are the Dansgaard-Oeschger events. Present-day general circulation climate models simulating glacial conditions are not capable of reproducing these rapid shifts. It is thus not known whether they are due to bifurcations in the structural stability of the climate or whether they are induced by stochastic fluctuations. By analyzing a high resolution ice core record we exclude the bifurcation scenario, which strongly suggests that they are noise induced and thus have very limited predictability. Ref: Peter Ditlevsen, "Tipping points in the climate system", in Nonlinear and Stochastic Climate Dynamics, Cambridge University Press (C. Franzke and T. O'Kane, eds.) (2016); P. D. Ditlevsen and S. Johnsen, "Tipping points: Early warning and wishful thinking", Geophys. Res. Lett., 37, L19703, 2010.

  14. Applying Differential Coercion and Social Support Theory to Intimate Partner Violence.

    PubMed

    Zavala, Egbert; Kurtz, Don L

    2017-09-01

    A review of the current body of literature on intimate partner violence (IPV) shows that the most common theories used to explain this public health issue are social learning theory, a general theory of crime, general strain theory, or a combination of these perspectives. Other criminological theories have received less empirical attention. Therefore, the purpose of this study is to apply Differential Coercion and Social Support (DCSS) theory to test its capability to explain IPV. Data collected from two public universities ( N = 492) shows that three out of four measures of coercion (i.e., physical abuse, emotional abuse, and anticipated strain) predicted IPV perpetration, whereas social support was not found to be significant. Only two social-psychological deficits (anger and self-control) were found to be positive and significant in predicting IPV. Results, as well as the study's limitations and suggestions for future research, are discussed.

  15. Measurement of Trailing Edge Noise Using Directional Array and Coherent Output Power Methods

    NASA Technical Reports Server (NTRS)

    Hutcheson, Florence V.; Brooks, Thomas F.

    2002-01-01

    The use of a directional (or phased) array of microphones for the measurement of trailing edge (TE) noise is described and tested. The capabilities of this method are evaluated via measurements of TE noise from a NACA 63-215 airfoil model and from a cylindrical rod. This TE noise measurement approach is compared to one that is based on the cross-spectral analysis of output signals from a pair of microphones placed on opposite sides of an airframe model (COP method). Advantages and limitations of both methods are examined. It is shown that the microphone array can accurately measure TE noise and capture its two-dimensional character over a large frequency range for any TE configuration, as long as noise contamination from extraneous sources is within bounds. The COP method is shown to also accurately measure TE noise, but over a more limited frequency range that narrows with increased TE thickness. Finally, the applicability and generality of an airfoil self-noise prediction method was evaluated via comparison to the experimental data obtained using the COP and array measurement methods. The predicted and experimental results are shown to agree over large frequency ranges.

  16. Large Eddy Simulation Modeling of Flashback and Flame Stabilization in Hydrogen-Rich Gas Turbines Using a Hierarchical Validation Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clemens, Noel

    This project was a combined computational and experimental effort to improve predictive capability for boundary layer flashback of premixed swirl flames relevant to gas-turbine power plants operating with high-hydrogen-content fuels. During the course of this project, significant progress in modeling was made on four major fronts: 1) use of direct numerical simulation of turbulent flames to understand the coupling between the flame and the turbulent boundary layer; 2) improved modeling capability for flame propagation in stratified pre-mixtures; 3) improved portability of computer codes using the OpenFOAM platform to facilitate transfer to industry and other researchers; and 4) application of LES to flashback in swirl combustors, and a detailed assessment of its capabilities and limitations for predictive purposes. A major component of the project was an experimental program that focused on developing a rich experimental database of boundary layer flashback in swirl flames. Both methane and high-hydrogen fuels, including effects of elevated pressure (1 to 5 atm), were explored. For this project, a new model swirl combustor was developed. Kilohertz-rate stereoscopic PIV and chemiluminescence imaging were used to investigate the flame propagation dynamics. In addition to the planar measurements, a technique capable of detecting the instantaneous, time-resolved 3D flame front topography was developed and applied successfully to investigate the flow-flame interaction. The UT measurements and legacy data were used in a hierarchical validation approach where flows with increasingly complex physics were used for validation. First, component models were validated with DNS and literature data in simplified configurations; this was followed by validation with the UT 1-atm flashback cases, and then the UT high-pressure flashback cases. The new models and portable code represent a major improvement over what was available before this project was initiated.

  17. Machine learning and predictive data analytics enabling metrology and process control in IC fabrication

    NASA Astrophysics Data System (ADS)

    Rana, Narender; Zhang, Yunlin; Wall, Donald; Dirahoui, Bachir; Bailey, Todd C.

    2015-03-01

    Integrated circuit (IC) technology is going through multiple changes in terms of patterning techniques (multiple patterning, EUV and DSA), device architectures (FinFET, nanowire, graphene) and patterning scale (a few nanometers). These changes require tight controls on processes and measurements to achieve the required device performance, and challenge metrology and process control in terms of capability and quality. Multivariate data with complex nonlinear trends and correlations generally cannot be described well by mathematical or parametric models, but can be relatively easily learned by computing machines and used to predict or extrapolate. This paper introduces the predictive metrology approach, which has been applied to three different applications. Machine learning and predictive analytics have been leveraged to accurately predict dimensions of EUV resist patterns down to 18 nm half pitch by exploiting resist shrinkage patterns; these patterns could not be directly and accurately measured due to metrology tool limitations. Machine learning has also been applied to predict electrical performance early in the process pipeline for deep trench capacitance and metal line resistance. As the wafer goes through various processes, its associated cost multiplies, and it may take days to weeks to get the electrical performance readout. Predicting the electrical performance early on can be very valuable in enabling timely, actionable decisions such as rework, scrap, or feeding predicted information (or information derived from it) forward or back to improve or monitor processes. This paper provides a general overview of machine learning and advanced analytics applications in advanced semiconductor development and manufacturing.

  18. New Tool Released for Engine-Airframe Blade-Out Structural Simulations

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles

    2004-01-01

    Researchers at the NASA Glenn Research Center have enhanced a general-purpose finite element code, NASTRAN, for engine-airframe structural simulations during steady-state and transient operating conditions. For steady-state simulations, the code can predict critical operating speeds, natural modes of vibration, and forced response (e.g., cabin noise and component fatigue). The code can be used to perform static analysis to predict engine-airframe response and component stresses due to maneuver loads. For transient response, the simulation code can be used to predict response due to bladeoff events and subsequent engine shutdown and windmilling conditions. In addition, the code can be used as a pretest analysis tool to predict the results of the bladeout test required for FAA certification of new and derivative aircraft engines. Before the present analysis code was developed, all the major aircraft engine and airframe manufacturers in the United States and overseas were performing similar types of analyses to ensure the structural integrity of engine-airframe systems. Although there were many similarities among the analysis procedures, each manufacturer was developing and maintaining its own structural analysis capabilities independently. This situation led to high software development and maintenance costs, complications with manufacturers exchanging models and results, and limitations in predicting the structural response to the desired degree of accuracy. An industry-NASA team was formed to overcome these problems by developing a common analysis tool that would satisfy all the structural analysis needs of the industry and that would be available and supported by a commercial software vendor so that the team members would be relieved of maintenance and development responsibilities. Input from all the team members was used to ensure that everyone's requirements were satisfied and that the best technology was incorporated into the code. Furthermore, because the code would be distributed by a commercial software vendor, it would be more readily available to engine and airframe manufacturers, as well as to nonaircraft companies that did not previously have access to this capability.

  19. Knotty: Efficient and Accurate Prediction of Complex RNA Pseudoknot Structures.

    PubMed

    Jabbari, Hosna; Wark, Ian; Montemagno, Carlo; Will, Sebastian

    2018-06-01

    The computational prediction of RNA secondary structure by free energy minimization has become an important tool in RNA research. However, in practice, energy minimization is mostly limited to pseudoknot-free structures or rather simple pseudoknots, not covering many biologically important structures such as kissing hairpins. Algorithms capable of predicting sufficiently complex pseudoknots (for sequences of length n) used to have extreme complexities, e.g. Pknots (Rivas and Eddy, 1999) has O(n^6) time and O(n^4) space complexity. The algorithm CCJ (Chen et al., 2009) dramatically improves the asymptotic run time for predicting complex pseudoknots (handling almost all relevant pseudoknots, while being slightly less general than Pknots), but this came at the cost of large constant factors in space and time, which strongly limited its practical application (∼200 bases already require 256 GB of space). We present a CCJ-type algorithm, Knotty, that handles the same comprehensive pseudoknot class of structures as CCJ with improved space complexity of Θ(n^3 + Z); due to the applied technique of sparsification, the number of "candidates", Z, appears to grow significantly slower than n^4 on our benchmark set (which includes pseudoknotted RNAs up to 400 nucleotides). In terms of run time over this benchmark, Knotty clearly outperforms Pknots and the original CCJ implementation, CCJ 1.0; Knotty's space consumption fundamentally improves over CCJ 1.0, being on a par with the space-economic Pknots. By comparing to CCJ 2.0, our unsparsified Knotty variant, we demonstrate the isolated effect of sparsification. Moreover, Knotty employs the state-of-the-art energy model of "HotKnots DP09", which results in superior prediction accuracy over Pknots. Our software is available at https://github.com/HosnaJabbari/Knotty. will@tbi.unvie.ac.at. Supplementary data are available at Bioinformatics online.

  20. A Vision for the Future of Environmental Research: Creating Environmental Intelligence Centers

    NASA Astrophysics Data System (ADS)

    Barron, E. J.

    2002-12-01

    The nature of the environmental issues facing our nation demands a capability that allows us to enhance economic vitality, maintain environmental quality, and limit threats to life and property through more fundamental understanding of the Earth. It is "advanced" knowledge of how the system may respond that gives environmental information most of its power and utility. This fact is evident in the demand for new forecasting products, involving air quality, energy demand, water quality and quantity, ultraviolet radiation, and human health indexes. As we demonstrate feasibility and benefit, society is likely to demand a growing number of new operational forecast products on prediction time scales of days to decades into the future. The driving forces that govern our environment are widely recognized, involving primarily weather and climate, patterns of land use and land cover, and resource use with its associated waste products. The importance of these driving forces has been demonstrated by a decade of research on greenhouse gas emissions, ozone depletion and deforestation, and through the birth of Earth System Science. But, there are also major challenges. We find the strongest intersection between human activity, environmental stresses, system interactions and human decision-making in regional analysis coupled to larger spatial scales. In addition, most regions are influenced by multiple-stresses. Multiple, cumulative, and interactive stresses are clearly the most difficult to understand and hence the most difficult to assess and to manage. Currently, we are incapable of addressing these issues in a truly integrated fashion at global scales. The lack of an ability to combine global and regional forcing and to assess the response of the system to multiple stresses at the spatial and temporal scales of interest to humans limits our ability to assess the impacts of specific human perturbations, to assess advantages and risks, and to enhance economic and societal well being in the context of global, national and regional stewardship. These societal needs lead to a vision that uses a regional framework as a stepping-stone to a comprehensive national or global capability. The development of a comprehensive regional framework depends on a new approach to environmental research - the creation of regional Environmental Intelligence Centers. A key objective is to bring a demanding level of discipline to "forecasting" in a broad arena of environmental issues. The regional vision described above is designed to address a broad range of current and future environmental issues by creating a capability based on integrating diverse observing systems, making data readily accessible, developing an increasingly comprehensive predictive capability at the spatial and temporal scales appropriate for examining societal issues, and creating a vigorous intersection with decision-makers. With demonstrated success over a few large-scale regions of the U.S., this strategy will very likely grow into a national capability that far exceeds current capabilities.

  1. Potential capabilities of Reynolds stress turbulence model in the COMMIX-RSM code

    NASA Technical Reports Server (NTRS)

    Chang, F. C.; Bottoni, M.

    1994-01-01

    A Reynolds stress turbulence model has been implemented in the COMMIX code, together with transport equations describing turbulent heat fluxes, variance of temperature fluctuations, and dissipation of turbulence kinetic energy. The model has been verified partially by simulating homogeneous turbulent shear flow, and stable and unstable stratified shear flows with strong buoyancy-suppressing or enhancing turbulence. This article outlines the model, explains the verifications performed thus far, and discusses potential applications of the COMMIX-RSM code in several domains, including, but not limited to, analysis of thermal striping in engineering systems, simulation of turbulence in combustors, and predictions of bubbly and particulate flows.
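
    For orientation, the modelled second-moment transport can be written symbolically in the standard form below, where the terms on the right are production by mean shear, buoyancy production, pressure-strain redistribution, dissipation, and diffusive transport. This is the generic Reynolds-stress transport equation; the specific closure terms and coefficients used in COMMIX-RSM are not reproduced here.

    ```latex
    \frac{\partial \overline{u_i' u_j'}}{\partial t}
    + U_k \frac{\partial \overline{u_i' u_j'}}{\partial x_k}
    = P_{ij} + G_{ij} + \Phi_{ij} - \varepsilon_{ij} + D_{ij}
    ```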

  2. Development of design information for molecular-sieve type regenerative CO2-removal systems

    NASA Technical Reports Server (NTRS)

    Wright, R. M.; Ruder, J. M.; Dunn, V. B.; Hwang, K. C.

    1973-01-01

    Experimental and analytic studies were conducted with molecular sieve sorbents to provide basic design information, and to develop a system design technique for regenerable CO2-removal systems for manned spacecraft. Single sorbate equilibrium data were obtained over a wide range of conditions for CO2, water, nitrogen, and oxygen on several molecular sieve and silica gel sorbents. The coadsorption of CO2 with water preloads, and with oxygen and nitrogen was experimentally evaluated. Mass-transfer, and some limited heat-transfer performance evaluations were accomplished under representative operating conditions, including the coadsorption of CO2 and water. CO2-removal system performance prediction capability was derived.

  3. An openstack-based flexible video transcoding framework in live

    NASA Astrophysics Data System (ADS)

    Shi, Qisen; Song, Jianxin

    2017-08-01

    With the rapid development of the mobile live-streaming business, transcoding HD video is often a challenge for mobile devices due to their limited processing capability and bandwidth-constrained network connections. For live service providers, it is wasteful to deploy large numbers of transcoding servers, since some of them sit idle at times. To deal with this issue, this paper proposes an OpenStack-based flexible transcoding framework to achieve real-time video adaptation for mobile devices and to use computing resources efficiently. To this end, we introduce a special method of video stream splitting and VM resource scheduling based on access-pressure prediction, which is forecast by an AR model.
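
    As a rough illustration of the access-pressure forecasting step, the sketch below fits an autoregressive AR(p) model to a series of request counts by ordinary least squares and predicts the next value. The model order, the series, and the scaling decisions are assumptions, since the paper's exact AR configuration is not given here.

    ```python
    import numpy as np

    def fit_ar(series, p):
        """Least-squares fit of an AR(p) model: x[t] ~ c + sum_k a_k * x[t-k]."""
        x = np.asarray(series, dtype=float)
        rows = [np.r_[1.0, x[t - p:t][::-1]] for t in range(p, len(x))]
        X, y = np.vstack(rows), x[p:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coef  # [c, a_1, ..., a_p]

    def forecast_next(series, coef):
        p = len(coef) - 1
        lags = np.asarray(series[-p:], dtype=float)[::-1]
        return float(coef[0] + coef[1:] @ lags)

    # Hypothetical per-minute access counts for one live channel.
    counts = [120, 135, 150, 160, 158, 170, 185, 190, 205, 210, 220, 232]
    coef = fit_ar(counts, p=3)
    print("predicted next-minute load:", round(forecast_next(counts, coef), 1))
    ```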

  4. Sexual differentiation in the distribution potential of northern jaguars (Panthera onca)

    USGS Publications Warehouse

    Boydston, Erin E.; Lopez Gonzalez, Carlos A.

    2005-01-01

    We estimated the potential geographic distribution of jaguars in the southwestern United States and northwestern Mexico by modeling the jaguar ecological niche from occurrence records. We modeled separately the distributions of males and females, assuming records of females probably represented established home ranges while male records likely included dispersal movements. The predicted distribution for males was larger than that for females. Eastern Sonora appeared capable of supporting male and female jaguars, with potential range expansion into southeastern Arizona. New Mexico and Chihuahua contained environmental characteristics primarily limited to the male niche and thus may be areas into which males occasionally disperse.

  5. RF model of the distribution system as a communication channel, phase 2. Volume 1: Summary Report

    NASA Technical Reports Server (NTRS)

    Rustay, R. C.; Gajjar, J. T.; Rankin, R. W.; Wentz, R. C.; Wooding, R.

    1982-01-01

    The design, implementation, and verification of a computerized model for predicting the steady-state sinusoidal response of radial (tree) configured distribution feeders was undertaken. That work demonstrated the feasibility and validity based on verification measurements made on a limited size portion of an actual live feeder. On that basis a follow-on effort concerned with (1) extending the verification based on a greater variety of situations and network size, (2) extending the model capabilities for reverse direction propagation, (3) investigating parameter sensitivities, (4) improving transformer models, and (5) investigating procedures/fixes for ameliorating propagation trouble spots was conducted. Results are summarized.

  6. DNA-Based Methods in the Immunohematology Reference Laboratory

    PubMed Central

    Denomme, Gregory A

    2010-01-01

    Although hemagglutination serves the immunohematology reference laboratory well, when used alone, it has limited capability to resolve complex problems. This overview discusses how molecular approaches can be used in the immunohematology reference laboratory. In order to apply molecular approaches to immunohematology, knowledge of genes, DNA-based methods, and the molecular bases of blood groups are required. When applied correctly, DNA-based methods can predict blood groups to resolve ABO/Rh discrepancies, identify variant alleles, and screen donors for antigen-negative units. DNA-based testing in immunohematology is a valuable tool used to resolve blood group incompatibilities and to support patients in their transfusion needs. PMID:21257350

  7. Drought Predictability and Prediction in a Changing Climate: Assessing Current Predictive Knowledge and Capabilities, User Requirements and Research Priorities

    NASA Technical Reports Server (NTRS)

    Schubert, Siegfried

    2011-01-01

    Drought is fundamentally the result of an extended period of reduced precipitation lasting anywhere from a few weeks to decades and even longer. As such, addressing drought predictability and prediction in a changing climate requires foremost that we make progress on the ability to predict precipitation anomalies on subseasonal and longer time scales. From the perspective of the users of drought forecasts and information, drought is however most directly viewed through its impacts (e.g., on soil moisture, streamflow, crop yields). As such, the question of the predictability of drought must extend to those quantities as well. In order to make progress on these issues, the WCRP drought information group (DIG), with the support of WCRP, the Catalan Institute of Climate Sciences, the La Caixa Foundation, the National Aeronautics and Space Administration, the National Oceanic and Atmospheric Administration, and the National Science Foundation, has organized a workshop to focus on: (1) user requirements for drought prediction information on sub-seasonal to centennial time scales; (2) current understanding of the mechanisms and predictability of drought on sub-seasonal to centennial time scales; (3) current drought prediction/projection capabilities on sub-seasonal to centennial time scales; and (4) advancing regional drought prediction capabilities for variables and scales most relevant to user needs on sub-seasonal to centennial time scales. This introductory talk provides an overview of these goals, and outlines the occurrence and mechanisms of drought world-wide.

  8. Miniaturized orb-weaving spiders: behavioural precision is not limited by small size

    PubMed Central

    Eberhard, William G

    2007-01-01

    The special problems confronted by very small animals in nervous system design that may impose limitations on their behaviour and evolution are reviewed. Previous attempts to test for such behavioural limitations have suffered from lack of detail in behavioural observations of tiny species and unsatisfactory measurements of their behavioural capacities. This study presents partial solutions to both problems. The orb-web construction behaviour of spiders provided data on the comparative behavioural capabilities of tiny animals in heretofore unparalleled detail; species ranged about five orders of magnitude in weight, from approximately 50–100 mg down to some of the smallest spiders known (less than 0.005 mg), whose small size is a derived trait. Previous attempts to quantify the ‘complexity’ of behaviour were abandoned in favour of using comparisons of behavioural imprecision in performing the same task. The prediction of the size limitation hypothesis that very small spiders would have a reduced ability to repeat one particular behaviour pattern precisely was not confirmed. The anatomical and physiological mechanisms by which these tiny animals achieve this precision and the possibility that they are more limited in the performance of higher-order behaviour patterns await further investigation. PMID:17609181

  9. Orbital Debris Modeling

    NASA Technical Reports Server (NTRS)

    Liou, J. C.

    2012-01-01

    Presentation outline: (1) The NASA Orbital Debris (OD) Engineering Model -- a mathematical model capable of predicting OD impact risks for the ISS and other critical space assets; (2) The NASA OD Evolutionary Model -- a physical model capable of predicting the future debris environment based on user-specified scenarios; (3) The NASA Standard Satellite Breakup Model -- a model describing the outcome of a satellite breakup (explosion or collision).

  10. Web-based applications for building, managing and analysing kinetic models of biological systems.

    PubMed

    Lee, Dong-Yup; Saha, Rajib; Yusufi, Faraaz Noor Khan; Park, Wonjun; Karimi, Iftekhar A

    2009-01-01

    Mathematical modelling and computational analysis play an essential role in improving our capability to elucidate the functions and characteristics of complex biological systems such as metabolic, regulatory and cell signalling pathways. The modelling and concomitant simulation render it possible to predict the cellular behaviour of systems under various genetically and/or environmentally perturbed conditions. This motivates systems biologists/bioengineers/bioinformaticians to develop new tools and applications, allowing non-experts to easily conduct such modelling and analysis. However, among a multitude of systems biology tools developed to date, only a handful of projects have adopted a web-based approach to kinetic modelling. In this report, we evaluate the capabilities and characteristics of current web-based tools in systems biology and identify desirable features, limitations and bottlenecks for further improvements in terms of usability and functionality. A short discussion on software architecture issues involved in web-based applications and the approaches taken by existing tools is included for those interested in developing their own simulation applications.

  11. Logistics Modeling for Lunar Exploration Systems

    NASA Technical Reports Server (NTRS)

    Andraschko, Mark R.; Merrill, R. Gabe; Earle, Kevin D.

    2008-01-01

    The extensive logistics required to support extended crewed operations in space make effective modeling of logistics requirements and deployment critical to predicting the behavior of human lunar exploration systems. This paper discusses the software that has been developed as part of the Campaign Manifest Analysis Tool in support of strategic analysis activities under the Constellation Architecture Team - Lunar. The described logistics module enables definition of logistics requirements across multiple surface locations and allows for the transfer of logistics between those locations. A key feature of the module is the loading algorithm that is used to efficiently load logistics by type into carriers and then onto landers. Attention is given to the capabilities and limitations of this loading algorithm, particularly with regard to surface transfers. These capabilities are described within the context of the object-oriented software implementation, with details provided on the applicability of using this approach to model other human exploration scenarios. Some challenges of incorporating probabilistic analysis into this type of logistics model are discussed at a high level.
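
    The loading algorithm itself is not specified in this summary; as a purely hypothetical sketch of the general idea, the code below greedily packs logistics items, grouped by type, into capacity-limited carriers using a first-fit policy. All names, capacities, and the packing policy are illustrative assumptions, not the Campaign Manifest Analysis Tool's actual method.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Carrier:
        capacity: float                      # usable mass capacity, kg (assumed)
        items: list = field(default_factory=list)
        def used(self):
            return sum(m for _, m in self.items)
        def try_load(self, item_type, mass):
            if self.used() + mass <= self.capacity:
                self.items.append((item_type, mass))
                return True
            return False

    def pack_by_type(demands, carrier_capacity):
        """Greedy first-fit: demands is {type: [item masses]}; returns loaded carriers.
        Items heavier than a single carrier would need special handling (not shown)."""
        carriers = []
        for item_type, masses in demands.items():
            for mass in sorted(masses, reverse=True):   # largest items first
                if not any(c.try_load(item_type, mass) for c in carriers):
                    c = Carrier(carrier_capacity)
                    c.try_load(item_type, mass)
                    carriers.append(c)
        return carriers

    demands = {"crew consumables": [120, 80, 60], "spares": [200, 90], "science": [50, 40]}
    for i, c in enumerate(pack_by_type(demands, carrier_capacity=300.0)):
        print(f"carrier {i}: {c.items} ({c.used()} kg)")
    ```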

  12. Optical performance analysis of plenoptic camera systems

    NASA Astrophysics Data System (ADS)

    Langguth, Christin; Oberdörster, Alexander; Brückner, Andreas; Wippermann, Frank; Bräuer, Andreas

    2014-09-01

    Adding an array of microlenses in front of the sensor transforms the capabilities of a conventional camera to capture both spatial and angular information within a single shot. This plenoptic camera is capable of obtaining depth information and providing it for a multitude of applications, e.g. artificial re-focusing of photographs. Without the need of active illumination it represents a compact and fast optical 3D acquisition technique with reduced effort in system alignment. Since the extent of the aperture limits the range of detected angles, the observed parallax is reduced compared to common stereo imaging systems, which results in a decreased depth resolution. Besides, the gain of angular information implies a degraded spatial resolution. This trade-off requires a careful choice of the optical system parameters. We present a comprehensive assessment of possible degrees of freedom in the design of plenoptic systems. Utilizing a custom-built simulation tool, the optical performance is quantified with respect to particular starting conditions. Furthermore, a plenoptic camera prototype is demonstrated in order to verify the predicted optical characteristics.

  13. Bioinformatics approaches to predict target genes from transcription factor binding data.

    PubMed

    Essebier, Alexandra; Lamprecht, Marnie; Piper, Michael; Bodén, Mikael

    2017-12-01

    Transcription factors regulate gene expression and play an essential role in development by maintaining proliferative states, driving cellular differentiation and determining cell fate. Transcription factors are capable of regulating multiple genes over potentially long distances making target gene identification challenging. Currently available experimental approaches to detect distal interactions have multiple weaknesses that have motivated the development of computational approaches. Although an improvement over experimental approaches, existing computational approaches are still limited in their application, with different weaknesses depending on the approach. Here, we review computational approaches with a focus on data dependency, cell type specificity and usability. With the aim of identifying transcription factor target genes, we apply available approaches to typical transcription factor experimental datasets. We show that approaches are not always capable of annotating all transcription factor binding sites; binding sites should be treated disparately; and a combination of approaches can increase the biological relevance of the set of genes identified as targets. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Integration of nitrogen dynamics into the Noah-MP land surface model v1.1 for climate and environmental predictions

    NASA Astrophysics Data System (ADS)

    Cai, X.; Yang, Z.-L.; Fisher, J. B.; Zhang, X.; Barlage, M.; Chen, F.

    2016-01-01

    Climate and terrestrial biosphere models consider nitrogen an important factor in limiting plant carbon uptake, while operational environmental models view nitrogen as the leading pollutant causing eutrophication in water bodies. The community Noah land surface model with multi-parameterization options (Noah-MP) is unique in that it is the next-generation land surface model for the Weather Research and Forecasting meteorological model and for the operational weather/climate models in the National Centers for Environmental Prediction. In this study, we add a capability to Noah-MP to simulate nitrogen dynamics by coupling the Fixation and Uptake of Nitrogen (FUN) plant model and the Soil and Water Assessment Tool (SWAT) soil nitrogen dynamics. This model development incorporates FUN's state-of-the-art concept of carbon cost theory and SWAT's strength in representing the impacts of agricultural management on the nitrogen cycle. Parameterizations for direct root and mycorrhizal-associated nitrogen uptake, leaf retranslocation, and symbiotic biological nitrogen fixation are employed from FUN, while parameterizations for nitrogen mineralization, nitrification, immobilization, volatilization, atmospheric deposition, and leaching are based on SWAT. The coupled model is then evaluated at the Kellogg Biological Station - a Long Term Ecological Research site within the US Corn Belt. Results show that the model performs well in capturing the major nitrogen state/flux variables (e.g., soil nitrate and nitrate leaching). Furthermore, the addition of nitrogen dynamics improves the modeling of net primary productivity and evapotranspiration. The model improvement is expected to advance the capability of Noah-MP to simultaneously predict weather and water quality in fully coupled Earth system models.

  15. Multi-Hypothesis Modelling Capabilities for Robust Data-Model Integration

    NASA Astrophysics Data System (ADS)

    Walker, A. P.; De Kauwe, M. G.; Lu, D.; Medlyn, B.; Norby, R. J.; Ricciuto, D. M.; Rogers, A.; Serbin, S.; Weston, D. J.; Ye, M.; Zaehle, S.

    2017-12-01

    Large uncertainty is often inherent in model predictions due to imperfect knowledge of how to describe the mechanistic processes (hypotheses) that a model is intended to represent. Yet this model hypothesis uncertainty (MHU) is often overlooked or informally evaluated, as methods to quantify and evaluate MHU are limited. MHU increases as models become more complex because each additional process added to a model comes with inherent MHU as well as parametric uncertainty. With the current trend of adding more processes to Earth System Models (ESMs), we are adding uncertainty that can be quantified for parameters but not for model hypotheses. Model inter-comparison projects do allow for some consideration of hypothesis uncertainty, but in an ad hoc and non-independent fashion. This has stymied efforts to evaluate ecosystem models against data and interpret the results mechanistically, because it is not simple to determine exactly why a model produces the results it does, or to identify which model assumptions are key, since such models combine many sub-systems and processes, each of which may be conceptualised and represented mathematically in various ways. We present a novel modelling framework, the multi-assumption architecture and testbed (MAAT), that automates the combination, generation, and execution of a model ensemble built with different representations of process. We will present the argument that multi-hypothesis modelling needs to be considered in conjunction with other capabilities (e.g. the Predictive Ecosystem Analyser; PecAn) and statistical methods (e.g. sensitivity analysis, data assimilation) to aid efforts in robust data-model integration to enhance our predictive understanding of biological systems.
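
    As a schematic illustration of multi-hypothesis ensemble generation (not the actual MAAT code), the sketch below enumerates every combination of alternative process representations with itertools.product and runs a toy model for each; the process functions and their forms are invented for illustration only.

    ```python
    from itertools import product

    # Alternative representations (hypotheses) for two processes of a toy model.
    photosynthesis_options = {
        "linear":     lambda light: 0.05 * light,
        "saturating": lambda light: 10.0 * light / (light + 200.0),
    }
    respiration_options = {
        "constant": lambda temp: 1.0,
        "q10":      lambda temp: 1.0 * 2.0 ** ((temp - 20.0) / 10.0),
    }

    def run_model(photo_fn, resp_fn, light=800.0, temp=25.0):
        """Toy net flux = photosynthesis - respiration."""
        return photo_fn(light) - resp_fn(temp)

    # Build and run the full multi-hypothesis ensemble.
    ensemble = {}
    for (p_name, p_fn), (r_name, r_fn) in product(
            photosynthesis_options.items(), respiration_options.items()):
        ensemble[(p_name, r_name)] = run_model(p_fn, r_fn)

    for combo, net_flux in ensemble.items():
        print(combo, round(net_flux, 2))
    ```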

  16. CFD-RANS prediction of individual exposure from continuous release of hazardous airborne materials in complex urban environments

    NASA Astrophysics Data System (ADS)

    Efthimiou, G. C.; Andronopoulos, S.; Bartzis, J. G.; Berbekar, E.; Harms, F.; Leitl, B.

    2017-02-01

    One of the key issues of recent research on the dispersion inside complex urban environments is the ability to predict individual exposure (maximum dosages) of an airborne material which is released continuously from a point source. The present work addresses the question whether the computational fluid dynamics (CFD)-Reynolds-averaged Navier-Stokes (RANS) methodology can be used to predict individual exposure for various exposure times. This is feasible by providing the two RANS concentration moments (mean and variance) and a turbulent time scale to a deterministic model. The whole effort is focused on the prediction of individual exposure inside a complex real urban area. The capabilities of the proposed methodology are validated against wind-tunnel data (CUTE experiment). The present simulations were performed 'blindly', i.e. the modeller had limited information for the inlet boundary conditions and the results were kept unknown until the end of the COST Action ES1006. Thus, a high uncertainty of the results was expected. The general performance of the methodology due to this 'blind' strategy is good. The validation metrics fulfil the acceptance criteria. The effect of the grid and the turbulence model on the model performance is examined.

  17. A hierarchical spatial model for well yield in complex aquifers

    NASA Astrophysics Data System (ADS)

    Montgomery, J.; O'sullivan, F.

    2017-12-01

    Efficiently siting and managing groundwater wells requires reliable estimates of the amount of water that can be produced, or the well yield. This can be challenging to predict in highly complex, heterogeneous fractured aquifers due to the uncertainty around local hydraulic properties. Promising statistical approaches have been advanced in recent years. For instance, kriging and multivariate regression analysis have been applied to well test data with limited but encouraging levels of prediction accuracy. Additionally, some analytical solutions to diffusion in homogeneous porous media have been used to infer "effective" properties consistent with observed flow rates or drawdown. However, this is an under-specified inverse problem with substantial and irreducible uncertainty. We describe a flexible machine learning approach capable of combining diverse datasets with constraining physical and geostatistical models for improved well yield prediction accuracy and uncertainty quantification. Our approach can be implemented within a hierarchical Bayesian framework using Markov Chain Monte Carlo, which allows for additional sources of information to be incorporated in priors to further constrain and improve predictions and reduce the model order. We demonstrate the usefulness of this approach using data from over 7,000 wells in a fractured bedrock aquifer.
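
    The hierarchical idea can be illustrated with a deliberately simplified sketch: log well yields within a region share a regional mean, and regional means are drawn from a state-wide distribution, so sparsely sampled regions are shrunk toward the overall mean. The closed-form partial-pooling update below assumes known variances and is only a cartoon of the full MCMC-based framework described above; all data and parameter values are hypothetical.

    ```python
    import numpy as np

    def partial_pooling_means(yields_by_region, sigma=0.8, tau=0.5):
        """Posterior mean of each region's mean log-yield under a normal
        hierarchical model: y_ij ~ N(theta_j, sigma^2), theta_j ~ N(mu, tau^2).
        sigma (within-region sd) and tau (between-region sd) are assumed known."""
        log_data = {r: np.log(np.asarray(v, dtype=float)) for r, v in yields_by_region.items()}
        mu = np.mean(np.concatenate(list(log_data.values())))   # crude grand mean
        posterior = {}
        for region, y in log_data.items():
            n = len(y)
            precision = n / sigma**2 + 1 / tau**2
            posterior[region] = (n * y.mean() / sigma**2 + mu / tau**2) / precision
        return posterior

    # Hypothetical well yields (gallons per minute) by region; regionB is data-poor.
    wells = {"regionA": [12, 20, 8, 15, 30, 25, 18], "regionB": [5, 60]}
    for region, mean_log_yield in partial_pooling_means(wells).items():
        print(region, "posterior mean yield ~", round(float(np.exp(mean_log_yield)), 1), "gpm")
    ```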

  18. Arsenic concentrations, related environmental factors, and the predicted probability of elevated arsenic in groundwater in Pennsylvania

    USGS Publications Warehouse

    Gross, Eliza L.; Low, Dennis J.

    2013-01-01

    Logistic regression models were created to predict and map the probability of elevated arsenic concentrations in groundwater statewide in Pennsylvania and in three intrastate regions to further improve predictions for those three regions (glacial aquifer system, Gettysburg Basin, Newark Basin). Although the Pennsylvania and regional predictive models retained some different variables, they have common characteristics that can be grouped by (1) geologic and soils variables describing arsenic sources and mobilizers, (2) geochemical variables describing the geochemical environment of the groundwater, and (3) locally specific variables that are unique to each of the three regions studied and not applicable to statewide analysis. Maps of Pennsylvania and the three intrastate regions were produced that illustrate that areas most at risk are those with geology and soils capable of functioning as an arsenic source or mobilizer and geochemical groundwater conditions able to facilitate redox reactions. The models have limitations because they may not characterize areas that have localized controls on arsenic mobility. The probability maps associated with this report are intended for regional-scale use and may not be accurate for use at the field scale or when considering individual wells.
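
    For readers unfamiliar with this type of model, the sketch below fits a logistic regression to hypothetical explanatory variables and converts the fitted log-odds into a probability of exceeding an arsenic threshold. The predictors, threshold, and data are invented stand-ins for the geologic, soil, and geochemical variables used in the report.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 500
    # Hypothetical predictors: a geology indicator, soil pH, and a groundwater redox proxy.
    geology = rng.integers(0, 2, n)           # 1 = arsenic-bearing unit (assumed)
    soil_ph = rng.normal(6.5, 0.8, n)
    redox = rng.normal(0.0, 1.0, n)           # lower = more reducing (assumed)
    logit = -3.0 + 1.5 * geology + 0.4 * (soil_ph - 6.5) - 1.2 * redox
    elevated = rng.random(n) < 1 / (1 + np.exp(-logit))   # True = arsenic above threshold

    X = np.column_stack([geology, soil_ph, redox])
    model = LogisticRegression().fit(X, elevated)

    # Predicted probability of elevated arsenic at a new hypothetical well site.
    site = np.array([[1, 7.0, -0.5]])
    print("P(elevated arsenic) ~", round(model.predict_proba(site)[0, 1], 2))
    ```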

  19. Validation metrics for turbulent plasma transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holland, C., E-mail: chholland@ucsd.edu

    Developing accurate models of plasma dynamics is essential for confident predictive modeling of current and future fusion devices. In modern computer science and engineering, formal verification and validation processes are used to assess model accuracy and establish confidence in the predictive capabilities of a given model. This paper provides an overview of the key guiding principles and best practices for the development of validation metrics, illustrated using examples from investigations of turbulent transport in magnetically confined plasmas. Particular emphasis is given to the importance of uncertainty quantification and its inclusion within the metrics, and the need for utilizing synthetic diagnostics to enable quantitatively meaningful comparisons between simulation and experiment. As a starting point, the structure of commonly used global transport model metrics and their limitations is reviewed. An alternate approach is then presented, which focuses upon comparisons of predicted local fluxes, fluctuations, and equilibrium gradients against observation. The utility of metrics based upon these comparisons is demonstrated by applying them to gyrokinetic predictions of turbulent transport in a variety of discharges performed on the DIII-D tokamak [J. L. Luxon, Nucl. Fusion 42, 614 (2002)], as part of a multi-year transport model validation activity.

  20. Surfactant enhanced recovery of tetrachloroethylene from a porous medium containing low permeability lenses. 2. Numerical simulation.

    PubMed

    Rathfelder, K M; Abriola, L M; Taylor, T P; Pennell, K D

    2001-04-01

    A numerical model of surfactant enhanced solubilization was developed and applied to the simulation of nonaqueous phase liquid recovery in two-dimensional heterogeneous laboratory sand tank systems. Model parameters were derived from independent, small-scale, batch and column experiments. These parameters included viscosity, density, solubilization capacity, surfactant sorption, interfacial tension, permeability, capillary retention functions, and interphase mass transfer correlations. Model predictive capability was assessed for the evaluation of the micellar solubilization of tetrachloroethylene (PCE) in the two-dimensional systems. Predicted effluent concentrations and mass recovery agreed reasonably well with measured values. Accurate prediction of enhanced solubilization behavior in the sand tanks was found to require the incorporation of pore-scale, system-dependent, interphase mass transfer limitations, including an explicit representation of specific interfacial contact area. Predicted effluent concentrations and mass recovery were also found to depend strongly upon the initial NAPL entrapment configuration. Numerical results collectively indicate that enhanced solubilization processes in heterogeneous, laboratory sand tank systems can be successfully simulated using independently measured soil parameters and column-measured mass transfer coefficients, provided that permeability and NAPL distributions are accurately known. This implies that the accuracy of model predictions at the field scale will be constrained by our ability to quantify soil heterogeneity and NAPL distribution.

  1. Comprehensive Micromechanics-Analysis Code - Version 4.0

    NASA Technical Reports Server (NTRS)

    Arnold, S. M.; Bednarcyk, B. A.

    2005-01-01

    Version 4.0 of the Micromechanics Analysis Code With Generalized Method of Cells (MAC/GMC) has been developed as an improved means of computational simulation of advanced composite materials. The previous version of MAC/GMC was described in "Comprehensive Micromechanics-Analysis Code" (LEW-16870), NASA Tech Briefs, Vol. 24, No. 6 (June 2000), page 38. To recapitulate: MAC/GMC is a computer program that predicts the elastic and inelastic thermomechanical responses of continuous and discontinuous composite materials with arbitrary internal microstructures and reinforcement shapes. The predictive capability of MAC/GMC rests on a model known as the generalized method of cells (GMC) - a continuum-based model of micromechanics that provides closed-form expressions for the macroscopic response of a composite material in terms of the properties, sizes, shapes, and responses of the individual constituents or phases that make up the material. Enhancements in version 4.0 include a capability for modeling thermomechanically and electromagnetically coupled ("smart") materials; a more-accurate (high-fidelity) version of the GMC; a capability to simulate discontinuous plies within a laminate; additional constitutive models of materials; expanded yield-surface-analysis capabilities; and expanded failure-analysis and life-prediction capabilities on both the microscopic and macroscopic scales.

  2. Investigating the Interaction Between Sleep Symptoms of Arousal and Acquired Capability in Predicting Suicidality.

    PubMed

    Hochard, Kevin D; Heym, Nadja; Townsend, Ellen

    2017-06-01

    Heightened arousal significantly interacts with acquired capability to predict suicidality. We explore this interaction with insomnia and nightmares independently of waking state arousal symptoms, and test predictions of the Interpersonal Theory of Suicide (IPTS) and Escape Theory in relation to these sleep arousal symptoms. Findings from our e-survey (n = 540) supported the IPTS over models of Suicide as Escape. Sleep-specific measurements of arousal (insomnia and nightmares) showed no main effect, yet interacted with acquired capability to predict increased suicidality. The explained variance in suicidality by the interaction (1%-2%) using sleep-specific measures was comparable to variance explained by interactions previously reported in the literature using measurements composed of a mix of waking and sleep state arousal symptoms. Similarly, when entrapment (inability to escape) was included in models, main effects of sleep symptoms arousal were not detected yet interacted with entrapment to predict suicidality. We discuss findings in relation to treatment options suggesting that sleep-specific interventions be considered for the long-term management of at-risk individuals. © 2016 The American Association of Suicidology.

  3. Cloud-Based Numerical Weather Prediction for Near Real-Time Forecasting and Disaster Response

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew; Case, Jonathan; Venners, Jason; Schroeder, Richard; Checchi, Milton; Zavodsky, Bradley; Limaye, Ashutosh; O'Brien, Raymond

    2015-01-01

    The use of cloud computing resources continues to grow within the public and private sector components of the weather enterprise as users become more familiar with cloud-computing concepts, and competition among service providers continues to reduce costs and other barriers to entry. Cloud resources can also provide capabilities similar to high-performance computing environments, supporting multi-node systems required for near real-time, regional weather predictions. Referred to as "Infrastructure as a Service", or IaaS, the use of cloud-based computing hardware in an on-demand payment system allows for rapid deployment of a modeling system in environments lacking access to a large, supercomputing infrastructure. Use of IaaS capabilities to support regional weather prediction may be of particular interest to developing countries that have not yet established large supercomputing resources, but would otherwise benefit from a regional weather forecasting capability. Recently, collaborators from NASA Marshall Space Flight Center and Ames Research Center have developed a scripted, on-demand capability for launching the NOAA/NWS Science and Training Resource Center (STRC) Environmental Modeling System (EMS), which includes pre-compiled binaries of the latest version of the Weather Research and Forecasting (WRF) model. The WRF-EMS provides scripting for downloading appropriate initial and boundary conditions from global models, along with higher-resolution vegetation, land surface, and sea surface temperature data sets provided by the NASA Short-term Prediction Research and Transition (SPoRT) Center. This presentation will provide an overview of the modeling system capabilities and benchmarks performed on the Amazon Elastic Compute Cloud (EC2) environment. In addition, the presentation will discuss future opportunities to deploy the system in support of weather prediction in developing countries supported by NASA's SERVIR Project, which provides capacity building activities in environmental monitoring and prediction across a growing number of regional hubs throughout the world. Capacity-building applications that extend numerical weather prediction to developing countries are intended to provide near real-time applications to benefit public health, safety, and economic interests, but may have a greater impact during disaster events by providing a source for local predictions of weather-related hazards, or impacts that local weather events may have during the recovery phase.
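
    To give a flavour of the "Infrastructure as a Service" workflow described above, the sketch below uses boto3 to launch an EC2 instance whose user-data script would bootstrap a WRF-EMS style run. The AMI ID, instance type, key name, region, and bootstrap commands are placeholders, not the actual SPoRT/Ames deployment scripts.

    ```python
    import boto3

    # Placeholder user-data: in a real deployment this would install or launch the
    # modeling environment and fetch initial and boundary condition files.
    user_data = """#!/bin/bash
    echo "bootstrapping hypothetical WRF-EMS run" >> /var/log/wrf_bootstrap.log
    """

    ec2 = boto3.client("ec2", region_name="us-east-1")
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical pre-built modeling AMI
        InstanceType="c5.9xlarge",         # compute-optimized; choice is an assumption
        MinCount=1,
        MaxCount=1,
        KeyName="my-forecast-key",         # hypothetical key pair
        UserData=user_data,
    )
    print("launched:", response["Instances"][0]["InstanceId"])
    ```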

  4. Modeling of solid-state and excimer laser processes for 3D micromachining

    NASA Astrophysics Data System (ADS)

    Holmes, Andrew S.; Onischenko, Alexander I.; George, David S.; Pedder, James E.

    2005-04-01

    An efficient simulation method has recently been developed for multi-pulse ablation processes. This is based on pulse-by-pulse propagation of the machined surface according to one of several phenomenological models for the laser-material interaction. The technique allows quantitative predictions to be made about the surface shapes of complex machined parts, given only a minimal set of input data for parameter calibration. In the case of direct-write machining of polymers or glasses with ns-duration pulses, this data set can typically be limited to the surface profiles of a small number of standard test patterns. The use of phenomenological models for the laser-material interaction, calibrated by experimental feedback, allows fast simulation, and can achieve a high degree of accuracy for certain combinations of material, laser and geometry. In this paper, the capabilities and limitations of the approach are discussed, and recent results are presented for structures machined in SU8 photoresist.
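
    A common phenomenological choice for ns-pulse ablation of polymers is a logarithmic depth-per-pulse law, d = (1/alpha) * ln(F/F_th) above a threshold fluence F_th. The sketch below propagates a 1-D surface profile pulse by pulse under a Gaussian beam using that law; the parameter values are arbitrary, and the specific calibrated models used in the paper are not reproduced here.

    ```python
    import numpy as np

    def ablate(profile, x, n_pulses, beam_center=0.0, beam_waist=20e-6,
               peak_fluence=2.0, f_threshold=0.5, alpha_eff=2.0e6):
        """Pulse-by-pulse surface propagation with a logarithmic ablation law.
        Depth removed per pulse: (1/alpha_eff) * ln(F/F_threshold) where F > F_threshold."""
        depth = np.array(profile, dtype=float)      # removed depth, m (positive = deeper)
        fluence = peak_fluence * np.exp(-2.0 * ((x - beam_center) / beam_waist) ** 2)
        per_pulse = np.where(fluence > f_threshold,
                             np.log(np.maximum(fluence / f_threshold, 1.0)) / alpha_eff,
                             0.0)
        for _ in range(n_pulses):
            depth += per_pulse          # surface recedes by the same increment each pulse
        return depth

    x = np.linspace(-60e-6, 60e-6, 241)             # lateral coordinate, m
    crater = ablate(np.zeros_like(x), x, n_pulses=50)
    print("max crater depth ~ %.1f um" % (crater.max() * 1e6))
    ```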

  5. Applications of LANCE Data at SPoRT

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew

    2014-01-01

    Short-term Prediction Research and Transition (SPoRT) Center. Mission: apply NASA and NOAA measurement systems and unique Earth science research to improve the accuracy of short-term weather prediction at the regional/local scale. Goals: (1) evaluate and assess the utility of NASA and NOAA Earth science data, products, and unique research capabilities to address operational weather forecast problems; (2) provide an environment which enables the development and testing of new capabilities to improve short-term weather forecasts on a regional scale; and (3) help ensure the successful transition of new capabilities to operational weather entities for the benefit of society.

  6. Allogeneic Cell Therapy Bioprocess Economics and Optimization: Single-Use Cell Expansion Technologies

    PubMed Central

    Simaria, Ana S; Hassan, Sally; Varadaraju, Hemanthram; Rowley, Jon; Warren, Kim; Vanek, Philip; Farid, Suzanne S

    2014-01-01

    For allogeneic cell therapies to reach their therapeutic potential, challenges related to achieving scalable and robust manufacturing processes will need to be addressed. A particular challenge is producing lot-sizes capable of meeting commercial demands of up to 10^9 cells/dose for large patient numbers due to the current limitations of expansion technologies. This article describes the application of a decisional tool to identify the most cost-effective expansion technologies for different scales of production as well as current gaps in the technology capabilities for allogeneic cell therapy manufacture. The tool integrates bioprocess economics with optimization to assess the economic competitiveness of planar and microcarrier-based cell expansion technologies. Visualization methods were used to identify the production scales where planar technologies will cease to be cost-effective and where microcarrier-based bioreactors become the only option. The tool outputs also predict that for the industry to be sustainable for high demand scenarios, significant increases will likely be needed in the performance capabilities of microcarrier-based systems. These data are presented using a technology S-curve as well as windows of operation to identify the combination of cell productivities and scale of single-use bioreactors required to meet future lot sizes. The modeling insights can be used to identify where future R&D investment should be focused to improve the performance of the most promising technologies so that they become a robust and scalable option that enables the cell therapy industry to reach commercially relevant lot sizes. The tool outputs can facilitate decision-making very early on in development and be used to predict, and better manage, the risk of process changes needed as products proceed through the development pathway. Biotechnol. Bioeng. 2014;111: 69–83. © 2013 Wiley Periodicals, Inc. PMID:23893544

  8. Aircraft noise prediction program user's manual

    NASA Technical Reports Server (NTRS)

    Gillian, R. E.

    1982-01-01

    The Aircraft Noise Prediction Program (ANOPP) predicts aircraft noise with the best methods available. This manual is designed to give the user an understanding of the capabilities of ANOPP and to show how to formulate problems and obtain solutions by using these capabilities. Sections within the manual document basic ANOPP concepts, ANOPP usage, ANOPP functional modules, the ANOPP control statement procedure library, and the ANOPP permanent data base. Appendixes to the manual include information on preparing job decks for the operating systems in use, error diagnostics and recovery techniques, and a glossary of ANOPP terms.

  9. High-Fidelity Aerostructural Design Optimization of Transport Aircraft with Continuous Morphing Trailing Edge Technology

    NASA Astrophysics Data System (ADS)

    Burdette, David A., Jr.

    Adaptive morphing trailing edge technology offers the potential to decrease the fuel burn of transonic commercial transport aircraft by allowing wings to dynamically adjust to changing flight conditions. Current configurations allow flap and aileron droop; however, this approach provides limited degrees of freedom and increased drag produced by gaps in the wing's surface. Leading members in the aeronautics community including NASA, AFRL, Boeing, and a number of academic institutions have extensively researched morphing technology for its potential to improve aircraft efficiency. With modern computational tools it is possible to accurately and efficiently model aircraft configurations in order to quantify the efficiency improvements offered by morphing technology. Coupled high-fidelity aerodynamic and structural solvers provide the capability to model and thoroughly understand the nuanced trade-offs involved in aircraft design. This capability is important for a detailed study of the capabilities of morphing trailing edge technology. Gradient-based multidisciplinary design optimization provides the ability to efficiently traverse design spaces and optimize the trade-offs associated with the design. This thesis presents a number of optimization studies comparing optimized configurations with and without morphing trailing edge devices. The baseline configuration used throughout this work is the NASA Common Research Model. The first optimization comparison considers the optimal fuel burn predicted by the Breguet range equation at a single cruise point. This initial single-point optimization comparison demonstrated a limited fuel burn savings of less than 1%. Given the effectiveness of the passive aeroelastic tailoring in the optimized non-morphing wing, the single-point optimization offered limited potential for morphing technology to provide any benefit. To provide a more appropriate comparison, a number of multi-point optimizations were performed. With a 3-point stencil, the morphing wing burned 2.53% less fuel than its optimized non-morphing counterpart. Expanding further to a 7-point stencil, the morphing wing used 5.04% less fuel. Additional studies demonstrate that the size of the morphing device can be reduced without sizable performance reductions, and that as aircraft wings' aspect ratios increase, the effectiveness of morphing trailing edge devices increases. The final set of studies in this thesis considers mission analysis, including climb, multi-altitude cruise, and descent. These mission analyses were performed with a number of surrogate models, trained with O(100) optimizations. These optimizations demonstrated fuel burn reductions as large as 5% at off-design conditions. The fuel burn predicted by the mission analysis was up to 2.7% lower for the morphing wing compared to the conventional configuration.
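
    For reference, the Breguet range equation mentioned above relates cruise fuel burn to aerodynamic and propulsive efficiency. Written for range R with cruise speed V, thrust-specific fuel consumption c, lift-to-drag ratio L/D, and initial and final weights W_i and W_f, the standard textbook form (not a result specific to this thesis) is:

    ```latex
    R = \frac{V}{c}\,\frac{L}{D}\,\ln\!\frac{W_i}{W_f}
    \qquad\Longrightarrow\qquad
    W_{\text{fuel}} = W_i - W_f = W_f\left[\exp\!\left(\frac{R\,c}{V\,(L/D)}\right) - 1\right]
    ```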

  10. Linear prediction data extrapolation superresolution radar imaging

    NASA Astrophysics Data System (ADS)

    Zhu, Zhaoda; Ye, Zhenru; Wu, Xiaoqing

    1993-05-01

    Range resolution and cross-range resolution of range-doppler imaging radars are related to the effective bandwidth of transmitted signal and the angle through which the object rotates relatively to the radar line of sight (RLOS) during the coherent processing time, respectively. In this paper, linear prediction data extrapolation discrete Fourier transform (LPDEDFT) superresolution imaging method is investigated for the purpose of surpassing the limitation imposed by the conventional FFT range-doppler processing and improving the resolution capability of range-doppler imaging radar. The LPDEDFT superresolution imaging method, which is conceptually simple, consists of extrapolating observed data beyond the observation windows by means of linear prediction, and then performing the conventional IDFT of the extrapolated data. The live data of a metalized scale model B-52 aircraft mounted on a rotating platform in a microwave anechoic chamber and a flying Boeing-727 aircraft were processed. It is concluded that, compared to the conventional Fourier method, either higher resolution for the same effective bandwidth of transmitted signals and total rotation angle of the object or equal-quality images from smaller bandwidth and total angle may be obtained by LPDEDFT.
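
    As a simplified illustration of the LPDEDFT idea (not the exact processing chain used for the radar data above), the sketch below fits forward linear-prediction coefficients to an observed complex data window by least squares, extrapolates beyond the window, and then applies the conventional DFT to the extended record. The model order and signal parameters are arbitrary choices.

    ```python
    import numpy as np

    def lp_extrapolate(x, order, n_extra):
        """Fit forward linear prediction x[n] ~ sum_k a_k x[n-k] and extrapolate."""
        x = np.asarray(x, dtype=complex)
        rows = [x[n - order:n][::-1] for n in range(order, len(x))]
        A, b = np.vstack(rows), x[order:]
        a, *_ = np.linalg.lstsq(A, b, rcond=None)        # prediction coefficients
        y = list(x)
        for _ in range(n_extra):
            y.append(np.dot(a, np.asarray(y[-order:])[::-1]))
        return np.asarray(y)

    # Two closely spaced complex sinusoids observed over a short window.
    n = np.arange(64)
    x = np.exp(2j * np.pi * 0.20 * n) + np.exp(2j * np.pi * 0.21 * n)

    spec_short = np.abs(np.fft.fft(x, 512))              # conventional DFT of raw window
    spec_lp = np.abs(np.fft.fft(lp_extrapolate(x, order=16, n_extra=192), 512))
    # Indices of the two largest spectral bins before and after extrapolation.
    print("top bins (raw vs extrapolated):", np.argsort(spec_short)[-2:], np.argsort(spec_lp)[-2:])
    ```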

  11. Investigating the Relationship between Ocean Surface Currents and Seasonal Precipitation in the Western United States

    NASA Astrophysics Data System (ADS)

    Chiang, F.; AghaKouchak, A.

    2017-12-01

    While many studies have explored the predictive capabilities of teleconnections associated with North American climate, currently established teleconnections offer limited predictability for rainfall in the Western United States. A recent example was the 2015-16 California drought, in which a strong ENSO signal did not lead to above-average precipitation as was expected. From an exploration of climate and ocean variables available from satellite data, we hypothesize that ocean currents can provide additional information to explain precipitation variability and improve seasonal predictability on the West Coast. Since ocean currents are influenced by surface wind and temperatures, characterizing connections between currents and precipitation patterns has the potential to further our understanding of coastal weather patterns. For the study, we generated gridded point correlation maps to identify ocean areas with high correlation to precipitation time series corresponding to climate regions in the West Coast region. We also used other statistical measures to evaluate ocean "hot spot" regions with significant correlation to West Coast precipitation. Preliminary results show that strong correlations can be found in the tropical regions of the globe.
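
    A gridded point-correlation map of the kind described here can be computed cell by cell. The toy sketch below correlates each ocean grid cell's anomaly time series with a single precipitation-index time series and flags cells exceeding an arbitrary threshold; the data, grid, and threshold are synthetic placeholders, not the satellite products used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_years, n_lat, n_lon = 30, 20, 40

    # Synthetic seasonal precipitation index and ocean-variable anomaly field.
    precip = rng.normal(size=n_years)
    ocean = rng.normal(size=(n_years, n_lat, n_lon))
    ocean[:, 5:8, 10:14] += 0.8 * precip[:, None, None]      # planted "hot spot"

    def point_correlation_map(field, index):
        """Pearson correlation of each grid cell's time series with the index."""
        f = field - field.mean(axis=0)
        i = index - index.mean()
        num = (f * i[:, None, None]).sum(axis=0)
        den = np.sqrt((f ** 2).sum(axis=0) * (i ** 2).sum())
        return num / den

    corr = point_correlation_map(ocean, precip)
    hot = np.argwhere(np.abs(corr) > 0.5)
    print("cells with |r| > 0.5:", len(hot), "max r = %.2f" % corr.max())
    ```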

  12. Polarization modeling and predictions for Daniel K. Inouye Solar Telescope part 1: telescope and example instrument configurations

    NASA Astrophysics Data System (ADS)

    Harrington, David M.; Sueoka, Stacey R.

    2017-01-01

    We outline polarization performance calculations and predictions for the Daniel K. Inouye Solar Telescope (DKIST) optics and show Mueller matrices for two of the first light instruments. Telescope polarization is due to polarization-dependent mirror reflectivity and rotations between groups of mirrors as the telescope moves in altitude and azimuth. The Zemax optical modeling software has polarization ray-trace capabilities and predicts system performance given a coating prescription. We develop a model coating formula that approximates measured witness sample polarization properties. Estimates show the DKIST telescope Mueller matrix as functions of wavelength, azimuth, elevation, and field angle for the cryogenic near infra-red spectro-polarimeter (CryoNIRSP) and visible spectro-polarimeter. Footprint variation is substantial and shows vignetted field points will have strong polarization effects. We estimate 2% variation of some Mueller matrix elements over the 5-arc min CryoNIRSP field. We validate the Zemax model by showing limiting cases for flat mirrors in collimated and powered designs that compare well with theoretical approximations and are testable with lab ellipsometers.

  13. Fatigue-Life Prediction Methodology Using Small-Crack Theory

    NASA Technical Reports Server (NTRS)

    Newmann, James C., Jr.; Phillips, Edward P.; Swain, M. H.

    1997-01-01

    This paper reviews the capabilities of a plasticity-induced crack-closure model to predict fatigue lives of metallic materials using 'small-crack theory' for various materials and loading conditions. Crack-tip constraint factors, to account for three-dimensional state-of-stress effects, were selected to correlate large-crack growth rate data as a function of the effective-stress-intensity factor range (delta K(eff)) under constant-amplitude loading. Some modifications to the delta K(eff)-rate relations were needed in the near-threshold regime to fit measured small-crack growth rate behavior and fatigue endurance limits. The model was then used to calculate small- and large-crack growth rates, and to predict total fatigue lives, for notched and un-notched specimens made of two aluminum alloys and a steel under constant-amplitude and spectrum loading. Fatigue lives were calculated using the crack-growth relations and microstructural features like those that initiated cracks for the aluminum alloys and steel for edge-notched specimens. An equivalent-initial-flaw-size concept was used to calculate fatigue lives in other cases. Results from the tests and analyses agreed well.
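
    The crack-growth correlation underlying such life predictions is commonly written as a power law in the effective stress-intensity factor range. In the standard Paris-type form, with C and m as fitted material constants and K_open the crack-opening stress intensity (the paper's specific relations and near-threshold modifications are not reproduced here):

    ```latex
    \frac{da}{dN} = C\,\left(\Delta K_{\mathrm{eff}}\right)^{m},
    \qquad
    \Delta K_{\mathrm{eff}} = K_{\max} - K_{\mathrm{open}}
    ```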

  14. The fate of memory: Reconsolidation and the case of Prediction Error.

    PubMed

    Fernández, Rodrigo S; Boccia, Mariano M; Pedreira, María E

    2016-09-01

    The ability to make predictions based on stored information is a general coding strategy. A Prediction-Error (PE) is a mismatch between expected and current events. It was proposed as the process by which memories are acquired. But our memories, like ourselves, are subject to change. Thus, an acquired memory can become active and update its content or strength by a labilization-reconsolidation process. Within the reconsolidation framework, PE drives the updating of consolidated memories. Moreover, memory features, such as strength and age, are crucial boundary conditions that limit the initiation of the reconsolidation process. In order to disentangle these boundary conditions, we review the role of surprise, classical models of conditioning, and their neural correlates. Several forms of PE were found to be capable of inducing memory labilization-reconsolidation. Notably, many of the PE findings mirror those of memory reconsolidation, suggesting a strong link between these signals and memory processes. Altogether, the aim of the present work is to integrate a psychological and neuroscientific analysis of PE into a general framework for memory reconsolidation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Evaluation of a habitat capability model for nongame birds in the Black Hills, South Dakota

    Treesearch

    Todd R. Mills; Mark A. Rumble; Lester D. Flake

    1996-01-01

    Habitat models, used to predict consequences of land management decisions on wildlife, can have considerable economic effect on management decisions. The Black Hills National Forest uses such a habitat capability model (HABCAP), but its accuracy is largely unknown. We tested this model’s predictive accuracy for nongame birds in 13 vegetative structural stages of...

  16. Ballistic-Failure Mechanisms in Gas Metal Arc Welds of Mil A46100 Armor-Grade Steel: A Computational Investigation

    NASA Astrophysics Data System (ADS)

    Grujicic, M.; Snipes, J. S.; Galgalikar, R.; Ramaswami, S.; Yavari, R.; Yen, C.-F.; Cheeseman, B. A.

    2014-09-01

    In our recent work, a multi-physics computational model for the conventional gas metal arc welding (GMAW) joining process was introduced. The model is of a modular type and comprises five modules, each designed to handle a specific aspect of the GMAW process, i.e.: (i) electro-dynamics of the welding-gun; (ii) radiation-/convection-controlled heat transfer from the electric-arc to the workpiece and mass transfer from the filler-metal consumable electrode to the weld; (iii) prediction of the temporal evolution and the spatial distribution of thermal and mechanical fields within the weld region during the GMAW joining process; (iv) the resulting temporal evolution and spatial distribution of the material microstructure throughout the weld region; and (v) spatial distribution of the as-welded material mechanical properties. In the present work, the GMAW process model has been upgraded with respect to its predictive capabilities regarding the spatial distribution of the mechanical properties controlling the ballistic-limit (i.e., penetration-resistance) of the weld. The model is upgraded through the introduction of the sixth module in the present work in recognition of the fact that in thick steel GMAW weldments, the overall ballistic performance of the armor may become controlled by the (often inferior) ballistic limits of its weld (fusion and heat-affected) zones. To demonstrate the utility of the upgraded GMAW process model, it is next applied to the case of butt-welding of a prototypical high-hardness armor-grade martensitic steel, MIL A46100. The model predictions concerning the spatial distribution of the material microstructure and ballistic-limit-controlling mechanical properties within the MIL A46100 butt-weld are found to be consistent with prior observations and general expectations.

  17. Design of the Next Generation Aircraft Noise Prediction Program: ANOPP2

    NASA Technical Reports Server (NTRS)

    Lopes, Leonard V., Dr.; Burley, Casey L.

    2011-01-01

    The requirements, constraints, and design of NASA's next generation Aircraft NOise Prediction Program (ANOPP2) are introduced. Similar to its predecessor (ANOPP), ANOPP2 provides the U.S. Government with an independent aircraft system noise prediction capability that can be used as a stand-alone program or within larger trade studies that include performance, emissions, and fuel burn. The ANOPP2 framework is designed to facilitate the combination of acoustic approaches of varying fidelity for the analysis of noise from conventional and unconventional aircraft. ANOPP2 integrates noise prediction and propagation methods, including those found in ANOPP, into a unified system that is compatible for use within general aircraft analysis software. The design of the system is described in terms of its functionality and capability to perform predictions accounting for distributed sources, installation effects, and propagation through a non-uniform atmosphere including refraction and the influence of terrain. The philosophy of mixed fidelity noise prediction through the use of nested Ffowcs Williams and Hawkings surfaces is presented and specific issues associated with its implementation are identified. Demonstrations for a conventional twin-aisle and an unconventional hybrid wing body aircraft configuration are presented to show the feasibility and capabilities of the system. Isolated model-scale jet noise predictions are also presented using high-fidelity and reduced order models, further demonstrating ANOPP2's ability to provide predictions for model-scale test configurations.

  18. Automated System Checkout to Support Predictive Maintenance for the Reusable Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, Ann; Deb, Somnath; Kulkarni, Deepak; Wang, Yao; Lau, Sonie (Technical Monitor)

    1998-01-01

    The Propulsion Checkout and Control System (PCCS) is a predictive maintenance software system. The real-time checkout procedures and diagnostics are designed to detect components that need maintenance based on their condition, rather than using more conventional approaches such as scheduled or reliability centered maintenance. Predictive maintenance can reduce turn-around time and cost and increase safety as compared to conventional maintenance approaches. Real-time sensor validation, limit checking, statistical anomaly detection, and failure prediction based on simulation models are employed. Multi-signal models, useful for testability analysis during system design, are used during the operational phase to detect and isolate degraded or failed components. The TEAMS-RT real-time diagnostic engine was developed to utilize the multi-signal models by Qualtech Systems, Inc. Capability of predicting the maintenance condition was successfully demonstrated with a variety of data, from simulation to actual operation on the Integrated Propulsion Technology Demonstrator (IPTD) at Marshall Space Flight Center (MSFC). Playback of IPTD valve actuations for feature recognition updates identified an otherwise undetectable Main Propulsion System 12 inch prevalve degradation. The algorithms were loaded into the Propulsion Checkout and Control System for further development and are the first known application of predictive Integrated Vehicle Health Management to an operational cryogenic testbed. The software performed successfully in real-time, meeting the required performance goal of 1 second cycle time.

  19. Extension to Higher Mass Numbers of an Improved Knockout-Ablation-Coalescence Model for Secondary Neutron and Light Ion Production in Cosmic Ray Interactions

    NASA Astrophysics Data System (ADS)

    Sriprisan, Sirikul; Townsend, Lawrence; Cucinotta, Francis A.; Miller, Thomas M.

    Purpose: An analytical knockout-ablation-coalescence model capable of making quantitative predictions of the neutron spectra from high-energy nucleon-nucleus and nucleus-nucleus collisions is being developed for use in space radiation protection studies. The FORTRAN computer code that implements this model is called UBERNSPEC. The knockout or abrasion stage of the model is based on Glauber multiple scattering theory. The ablation part of the model uses the classical evaporation model of Weisskopf-Ewing. In earlier work, the knockout-ablation model was extended to incorporate important coalescence effects into the formalism. Recently, alpha coalescence has been incorporated, adding the ability to predict light ion spectra with the coalescence model. The earlier versions were limited to nuclei with mass numbers less than 69. In this work, the UBERNSPEC code has been extended to make predictions of secondary neutron and light ion production from the interactions of heavy charged particles with higher mass numbers (as large as 238). The predictions are compared with published measurements of neutron and light ion energy spectra for a variety of collision pairs. Furthermore, the predicted spectra from this work are compared with the predictions from the recently developed heavy ion event generator incorporated in the Monte Carlo radiation transport code HETC-HEDS.

  20. Accurate prediction of severe allergic reactions by a small set of environmental parameters (NDVI, temperature).

    PubMed

    Notas, George; Bariotakis, Michail; Kalogrias, Vaios; Andrianaki, Maria; Azariadis, Kalliopi; Kampouri, Errika; Theodoropoulou, Katerina; Lavrentaki, Katerina; Kastrinakis, Stelios; Kampa, Marilena; Agouridakis, Panagiotis; Pirintsos, Stergios; Castanas, Elias

    2015-01-01

    Severe allergic reactions of unknown etiology, necessitating a hospital visit, have an important impact on the life of affected individuals and impose a major economic burden on societies. The prediction of clinically severe allergic reactions would be of great importance, but current attempts have been limited by the lack of a well-founded applicable methodology and the wide spatiotemporal distribution of allergic reactions. The valid prediction of severe allergies (and especially those needing hospital treatment) in a region could alert health authorities and implicated individuals to take appropriate preemptive measures. In the present report we have collected visits for serious allergic reactions of unknown etiology from two major hospitals on the island of Crete, for two distinct time periods (validation and test sets). We have used the Normalized Difference Vegetation Index (NDVI), a satellite-based, freely available measurement, which is an indicator of live green vegetation in a given geographic area, and a set of meteorological data to develop a model capable of describing and predicting severe allergic reaction frequency. Our analysis has retained NDVI and temperature as accurate identifiers and predictors of increased hospital visits for severe allergic reactions. Our approach may contribute towards the development of satellite-based modules for the prediction of severe allergic reactions in specific, well-defined geographical areas. It could also probably be used for the prediction of other environment-related diseases and conditions.
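
    As a hedged illustration of the kind of model the abstract describes, the Python sketch below relates weekly counts of severe allergic-reaction visits to NDVI and mean temperature with a Poisson regression. The data are synthetic and the published model's exact form and coefficients are not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic weekly series standing in for two years of observations.
rng = np.random.default_rng(1)
weeks = 104
ndvi = 0.4 + 0.2 * np.sin(2 * np.pi * np.arange(weeks) / 52) + rng.normal(0, 0.02, weeks)
temp = 18 + 8 * np.sin(2 * np.pi * (np.arange(weeks) - 10) / 52) + rng.normal(0, 1, weeks)
lam = np.exp(0.5 + 2.0 * ndvi - 0.03 * temp)      # assumed "true" relationship
visits = rng.poisson(lam)

# Poisson GLM: expected visit count as a log-linear function of NDVI and temperature.
X = sm.add_constant(np.column_stack([ndvi, temp]))
model = sm.GLM(visits, X, family=sm.families.Poisson()).fit()
print(model.summary())

# Predicted expected visits for a new (NDVI, temperature) pair (constant term first).
print(model.predict([[1.0, 0.55, 22.0]]))
```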

  1. Accurate Prediction of Severe Allergic Reactions by a Small Set of Environmental Parameters (NDVI, Temperature)

    PubMed Central

    Andrianaki, Maria; Azariadis, Kalliopi; Kampouri, Errika; Theodoropoulou, Katerina; Lavrentaki, Katerina; Kastrinakis, Stelios; Kampa, Marilena; Agouridakis, Panagiotis; Pirintsos, Stergios; Castanas, Elias

    2015-01-01

    Severe allergic reactions of unknown etiology, necessitating a hospital visit, have an important impact on the life of affected individuals and impose a major economic burden on societies. The prediction of clinically severe allergic reactions would be of great importance, but current attempts have been limited by the lack of a well-founded applicable methodology and the wide spatiotemporal distribution of allergic reactions. The valid prediction of severe allergies (and especially those needing hospital treatment) in a region could alert health authorities and implicated individuals to take appropriate preemptive measures. In the present report we have collected visits for serious allergic reactions of unknown etiology from two major hospitals on the island of Crete, for two distinct time periods (validation and test sets). We have used the Normalized Difference Vegetation Index (NDVI), a satellite-based, freely available measurement, which is an indicator of live green vegetation in a given geographic area, and a set of meteorological data to develop a model capable of describing and predicting severe allergic reaction frequency. Our analysis has retained NDVI and temperature as accurate identifiers and predictors of increased hospital visits for severe allergic reactions. Our approach may contribute towards the development of satellite-based modules for the prediction of severe allergic reactions in specific, well-defined geographical areas. It could also probably be used for the prediction of other environment-related diseases and conditions. PMID:25794106

  2. Acoustic Prediction State of the Art Assessment

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.

    2007-01-01

    The acoustic assessment task for both the Subsonic Fixed Wing and the Supersonic projects under NASA's Fundamental Aeronautics Program was designed to assess the current state of the art in noise prediction capability and to establish baselines for gauging future progress. The documentation of our current capabilities included quantifying the differences between predictions of noise from computer codes and measurements of noise from experimental tests. Quantifying the accuracy of both the computed and experimental results further enhanced the credibility of the assessment. This presentation gives sample results from codes representative of NASA's capabilities in aircraft noise prediction both for systems and components. These include semi-empirical, statistical, analytical, and numerical codes. System level results are shown for both aircraft and engines. Component level results are shown for a landing gear prototype, for fan broadband noise, for jet noise from a subsonic round nozzle, and for propulsion airframe aeroacoustic interactions. Additional results are shown for modeling of the acoustic behavior of duct acoustic lining and the attenuation of sound in lined ducts with flow.

  3. A benchmark system to optimize our defense against an attack on the US food supply using the Risk Reduction Effectiveness and Capabilities Assessment Program.

    PubMed

    Hodoh, Ofia; Dallas, Cham E; Williams, Paul; Jaine, Andrew M; Harris, Curt

    2015-01-01

    A predictive system was developed and tested in a series of exercises with the objective of evaluating the preparedness and effectiveness of the multiagency response to food terrorism attacks. A computerized simulation model, the Risk Reduction Effectiveness and Capabilities Assessment Program (RRECAP), was developed to identify the key factors that influence the outcomes of an attack and quantify the relative reduction of such outcomes caused by each factor. The model was evaluated in a set of Tabletop and Full-Scale Exercises simulating biological and chemical attacks on the food system. More than 300 participants, representing more than 60 federal, state, local, and private sector agencies and organizations, took part. The exercises showed that agencies could use RRECAP to identify and prioritize their advance preparation to mitigate such attacks with minimal expense. RRECAP also demonstrated the relative utility and limitations of medical resources for treating patients if responders do not recognize and mitigate the attack rapidly, and the exercise results showed that proper advance preparation would reduce these deficiencies. Using computer simulation to predict the medical outcomes of food supply attacks, identify optimal remediation activities, and quantify the benefits of various measures provides a significant tool to agencies in both the public and private sectors as they seek to prepare for such an attack.

  4. Recent Developments in the Formability of Aluminum Alloys

    NASA Astrophysics Data System (ADS)

    Banabic, Dorel; Cazacu, Oana; Paraianu, Liana; Jurco, Paul

    2005-08-01

    The paper presents a few recent contributions brought by the authors in the field of the formability of aluminum alloys. A new concept for calculating Forming Limit Diagrams (FLD) using the finite element method is presented. The article presents a new strategy for calculating both branches of an FLD, using a Hutchinson - Neale model implemented in a finite element code. The simulations have been performed with Abaqus/Standard. The constitutive model has been implemented using a UMAT subroutine. The plastic anisotropy of the sheet metal is described by the Cazacu-Barlat and the BBC2003 yield criteria. The theoretical predictions have been compared with the results given by the classical Hutchinson - Neale method and also with experimental data for different aluminum alloys. The comparison proves the capability of the finite element method to predict the strain localization. A computer program used for interactive calculation and graphical representation of different Yield Loci and Forming Limit Diagrams has also been developed. The program is based on a Hutchinson-Neale model. Different yield criteria (Hill 1948, Barlat-Lian and BBC 2003) are implemented in this model. The program consists of three modules: a graphical interface for input, a module for the identification and visualization of the yield surfaces, and a module for calculating and visualizing the forming limit curves. A useful facility offered by the program is the possibility to perform sensitivity analysis for both the yield surface and the forming limit curves. The numerical results can be compared with experimental data, using the import/export facilities included in the program.

  5. Recent Developments in the Formability of Aluminum Alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banabic, Dorel; Paraianu, Liana; Jurco, Paul

    The paper presents a few recent contributions brought by the authors in the field of the formability of aluminum alloys. A new concept for calculating Forming Limit Diagrams (FLD) using the finite element method is presented. The article presents a new strategy for calculating both branches of an FLD, using a Hutchinson - Neale model implemented in a finite element code. The simulations have been performed with Abaqus/Standard. The constitutive model has been implemented using a UMAT subroutine. The plastic anisotropy of the sheet metal is described by the Cazacu-Barlat and the BBC2003 yield criteria. The theoretical predictions have been compared with the results given by the classical Hutchinson - Neale method and also with experimental data for different aluminum alloys. The comparison proves the capability of the finite element method to predict the strain localization. A computer program used for interactive calculation and graphical representation of different Yield Loci and Forming Limit Diagrams has also been developed. The program is based on a Hutchinson-Neale model. Different yield criteria (Hill 1948, Barlat-Lian and BBC 2003) are implemented in this model. The program consists of three modules: a graphical interface for input, a module for the identification and visualization of the yield surfaces, and a module for calculating and visualizing the forming limit curves. A useful facility offered by the program is the possibility to perform sensitivity analysis for both the yield surface and the forming limit curves. The numerical results can be compared with experimental data, using the import/export facilities included in the program.

  6. Application Of A New Semi-Empirical Model For Forming Limit Prediction Of Sheet Material Including Superposed Loads Of Bending And Shearing

    NASA Astrophysics Data System (ADS)

    Held, Christian; Liewald, Mathias; Schleich, Ralf; Sindel, Manfred

    2010-06-01

    The use of lightweight materials offers substantial strength and weight advantages in car body design. Unfortunately, such sheet materials are more susceptible to wrinkling, springback and fracture during press shop operations. For characterizing the capability of sheet material dedicated to deep drawing processes in the automotive industry, mainly Forming Limit Diagrams (FLD) are used. However, new investigations at the Institute for Metal Forming Technology have shown that high strength steel sheet materials and aluminum alloys show increased formability when bending loads are superposed on stretching loads. Likewise, superposing shearing on in-plane uniaxial or biaxial tension changes formability because of the material's crystallographic texture. Such mixed stress and strain conditions, including bending and shearing effects, can occur in deep-drawing processes of complex car body parts as well as in subsequent forming operations like flanging. These changes in formability cannot be described using the conventional FLC; hence, failure criteria for these strain conditions, needed to improve failure prediction in numerical simulation codes, are missing. To define a suitable failure criterion that is easy to implement in FEA, a new semi-empirical model has been developed that considers the effect of bending and shearing on sheet metal formability. This failure criterion combines the so-called cFLC (combined Forming Limit Curve), which considers superposed bending load conditions, and the SFLC (Shear Forming Limit Curve), which includes the effect of shearing on sheet metal formability.

  7. Prediction of circulation control performance characteristics for Super STOL and STOL applications

    NASA Astrophysics Data System (ADS)

    Naqvi, Messam Abbas

    The rapid growth of air travel during the last three decades has resulted in runway congestion at major airports. The current airport infrastructure will not be able to support the rapid growth expected in the next decade. Changes or upgrades in infrastructure alone would not satisfy the growth requirements, and new airplane concepts such as the NASA-proposed Super Short Takeoff and Landing and Extremely Short Takeoff and Landing (ESTOL) are being vigorously pursued. Aircraft noise pollution during takeoff and landing is another serious concern, and efforts are aimed at reducing the airframe noise produced by conventional high lift devices. Circulation control technology has the prospect of being a good alternative for resolving both of these issues. Circulation control airfoils are not only capable of producing very high values of lift (Cl values in excess of 8.0) at zero angle of attack, but also eliminate the noise generated by conventional high lift devices, their associated weight penalty, and their complex operation and storage. This will ensure not only short takeoff and landing distances, but also a minimal acoustic signature in accordance with FAA requirements. Circulation control relies on the tendency of an emanating wall jet to independently control the circulation and lift on an airfoil. Unlike a conventional airfoil, where the rear stagnation point is located at the sharp trailing edge, circulation control airfoils possess a round trailing edge, so the rear stagnation point is free to move. The location of the rear stagnation point is controlled by the blown jet momentum. This provides a secondary control, in the form of jet momentum, with which the generated lift can be controlled, rather than the only control available for conventional airfoils, namely incidence (angle of attack). Despite its promising potential, the use of circulation control has been limited to research applications due to the lack of a simple prediction capability. This research effort was focused on the creation of a rapid prediction capability for circulation control aerodynamic characteristics that could help designers with rapid performance estimates for design space exploration. A morphological matrix was created with the available set of options that could be chosen to create this prediction capability, ranging from purely analytical physics-based modeling to high-fidelity CFD codes. Based on the available constraints and desired accuracy, meta-models have been created around two-dimensional circulation control performance results computed using the Navier-Stokes equations (Computational Fluid Dynamics). DSS2, a two-dimensional RANS code written by Professor Lakshmi Sankar, was utilized for circulation control airfoil characteristics. The CFD code was first applied to the NCCR 1510-7607N airfoil to validate the model against available experimental results. It was then applied to compute the results of a fractional factorial design of experiments array. Metamodels were formulated by applying neural networks to the results obtained from the Design of Experiments. Additional validation runs were performed to validate the model predictions. Metamodels are not only capable of rapid performance prediction, but also help reveal the trends of response metrics with control variables and capture the complex interactions between control variables.
Quantitative as well as qualitative assessments of the results were performed by computing aerodynamic forces and moments and by flow field visualization. Wing characteristics in three dimensions were obtained by integration over the whole wing using Prandtl's wing theory. The baseline Super STOL configuration [3] was then analyzed with the application of circulation control technology. The values of lift and drag required to achieve the target takeoff and landing performance were compared with the optimal configurations obtained by the model. The same optimal configurations were then subjected to Super STOL cruise conditions to perform a trade-off analysis between takeoff and cruise performance. Supercritical airfoils modified for circulation control were also thoroughly analyzed for takeoff and cruise performance and may constitute a viable option for Super STOL and STOL designs. The prediction capability produced by this research effort can be integrated with current conceptual aircraft modeling and simulation frameworks. The prediction tool is applicable within the selected ranges of each variable, but the methodology and formulation scheme adopted can be applied to any other design space exploration.
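
    A hedged sketch of the metamodeling step described above: a small neural-network surrogate fit to design-of-experiments results. The input variables (jet momentum coefficient, angle of attack), their assumed ranges, and the synthetic lift response stand in for the DSS2 CFD outputs; none of the numbers come from the thesis.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic design-of-experiments results standing in for CFD runs.
rng = np.random.default_rng(2)
n = 120
c_mu = rng.uniform(0.0, 0.3, n)          # blowing momentum coefficient (assumed range)
alpha = rng.uniform(-2.0, 6.0, n)        # angle of attack in degrees (assumed range)
cl = 0.1 * alpha + 15.0 * c_mu - 18.0 * c_mu**2 + rng.normal(0, 0.05, n)

# Neural-network metamodel (surrogate) of the lift response.
X = np.column_stack([c_mu, alpha])
surrogate = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16, 16),
                                       max_iter=5000, random_state=0))
surrogate.fit(X, cl)

# Rapid performance estimate at a new design point (C_mu = 0.15, alpha = 3 deg).
print(surrogate.predict([[0.15, 3.0]]))
```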

  8. Development of a generally applicable morphokinetic algorithm capable of predicting the implantation potential of embryos transferred on Day 3

    PubMed Central

    Petersen, Bjørn Molt; Boel, Mikkel; Montag, Markus; Gardner, David K.

    2016-01-01

    STUDY QUESTION Can a generally applicable morphokinetic algorithm suitable for Day 3 transfers of time-lapse monitored embryos originating from different culture conditions and fertilization methods be developed for the purpose of supporting the embryologist's decision on which embryo to transfer back to the patient in assisted reproduction? SUMMARY ANSWER The algorithm presented here can be used independently of culture conditions and fertilization method and provides predictive power not surpassed by other published algorithms for ranking embryos according to their blastocyst formation potential. WHAT IS KNOWN ALREADY Generally applicable algorithms have so far been developed only for predicting blastocyst formation. A number of clinics have reported validated implantation prediction algorithms, which have been developed based on clinic-specific culture conditions and clinical environment. However, a generally applicable embryo evaluation algorithm based on actual implantation outcome has not yet been reported. STUDY DESIGN, SIZE, DURATION Retrospective evaluation of data extracted from a database of known implantation data (KID) originating from 3275 embryos transferred on Day 3 conducted in 24 clinics between 2009 and 2014. The data represented different culture conditions (reduced and ambient oxygen with various culture medium strategies) and fertilization methods (IVF, ICSI). The capability to predict blastocyst formation was evaluated on an independent set of morphokinetic data from 11 218 embryos which had been cultured to Day 5. PARTICIPANTS/MATERIALS, SETTING, METHODS The algorithm was developed by applying automated recursive partitioning to a large number of annotation types and derived equations, progressing to a five-fold cross-validation test of the complete data set and a validation test of different incubation conditions and fertilization methods. The results were expressed as receiver operating characteristics curves using the area under the curve (AUC) to establish the predictive strength of the algorithm. MAIN RESULTS AND THE ROLE OF CHANCE By applying the here developed algorithm (KIDScore), which was based on six annotations (the number of pronuclei equals 2 at the 1-cell stage, time from insemination to pronuclei fading at the 1-cell stage, time from insemination to the 2-cell stage, time from insemination to the 3-cell stage, time from insemination to the 5-cell stage and time from insemination to the 8-cell stage) and ranking the embryos in five groups, the implantation potential of the embryos was predicted with an AUC of 0.650. On Day 3 the KIDScore algorithm was capable of predicting blastocyst development with an AUC of 0.745 and blastocyst quality with an AUC of 0.679. In a comparison of blastocyst prediction including six other published algorithms and KIDScore, only KIDScore and one more algorithm surpassed an algorithm constructed on conventional Alpha/ESHRE consensus timings in terms of predictive power. LIMITATIONS, REASONS FOR CAUTION Some morphological assessments were not available and consequently three of the algorithms in the comparison were not used in full and may therefore have been put at a disadvantage. Algorithms based on implantation data from Day 3 embryo transfers require adjustments to be capable of predicting the implantation potential of Day 5 embryo transfers. The current study is restricted by its retrospective nature and absence of live birth information. 
Prospective Randomized Controlled Trials should be used in future studies to establish the value of time-lapse technology and morphokinetic evaluation. WIDER IMPLICATIONS OF THE FINDINGS Algorithms applicable to different culture conditions can be developed if based on large data sets of heterogeneous origin. STUDY FUNDING/COMPETING INTEREST(S) This study was funded by Vitrolife A/S, Denmark and Vitrolife AB, Sweden. B.M.P.’s company BMP Analytics is performing consultancy for Vitrolife A/S. M.B. is employed at Vitrolife A/S. M.M.’s company ilabcomm GmbH received honorarium for consultancy from Vitrolife AB. D.K.G. received research support from Vitrolife AB. PMID:27609980
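
    The sketch below shows, under stated assumptions, how a ranking score such as KIDScore is evaluated against known implantation data with a receiver operating characteristic AUC. The scores, outcome probabilities, and sample size are synthetic; only the evaluation mechanics follow the abstract.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for known implantation data (KID): each embryo gets a rank
# group 1 (low) to 5 (high) and a binary implantation outcome.
rng = np.random.default_rng(3)
n = 500
score = rng.integers(1, 6, n)                 # ranking score, e.g., five groups
p_implant = 0.05 + 0.05 * (score - 1)         # assumed monotone score-outcome link
implanted = rng.random(n) < p_implant

# AUC of the ranking score against actual outcomes; the paper reports 0.650 for KIDScore.
auc = roc_auc_score(implanted, score)
print(f"AUC = {auc:.3f}")
```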

  9. Towards a Seamless Framework for Drought Analysis and Prediction from Seasonal to Climate Change Time Scales (Plinius Medal Lecture)

    NASA Astrophysics Data System (ADS)

    Sheffield, Justin

    2013-04-01

    Droughts arguably cause the most impacts of all natural hazards in terms of the number of people affected and the long-term economic costs and ecosystem stresses. Recent droughts worldwide have caused humanitarian and economic problems such as food insecurity across the Horn of Africa, agricultural economic losses across the central US and loss of livelihoods in rural western India. The prospect of future increases in drought severity and duration driven by projected changes in precipitation patterns and increasing temperatures is worrisome. Some evidence for climate change impacts on drought is already being seen for some regions, such as the Mediterranean and east Africa. Mitigation of the impacts of drought requires advance warning of developing conditions and enactment of drought plans to reduce vulnerability. A key element of this is a drought early warning system that at its heart is the capability to monitor evolving hydrological conditions and water resources storage, and provide reliable and robust predictions out to several months, as well as the capacity to act on this information. At longer time scales, planning and policy-making need to consider the potential impacts of climate change and its impact on drought risk, and do this within the context of natural climate variability, which is likely to dominate any climate change signal over the next few decades. There are several challenges that need to be met to advance our capability to provide both early warning at seasonal time scales and risk assessment under climate change, regionally and globally. Advancing our understanding of drought predictability and risk requires knowledge of drought at all time scales. This includes understanding of past drought occurrence, from the paleoclimate record to the recent past, and understanding of drought mechanisms, from initiation, through persistence to recovery and translation of this understanding to predictive models. Current approaches to monitoring and predicting drought are limited in many parts of the world, and especially in developing countries where national capacity is limited. Evaluation of past droughts and their mechanisms is limited by data availability and especially before the instrumental period of the last 50-100 years, for which there is reliance on incomplete spatial proxy data, such as tree rings. Seasonal predictability is currently mainly limited to tropical and sub-tropical regions through connections with sea surface temperature variations such as ENSO. Predictability in mid-latitudes is low and especially for precipitation, although dynamical model predictions appear to be edging statistical models in many aspects of seasonal prediction. This presentation describes ongoing research on evaluation of drought risk and drought mechanisms at regional to global scales with the eventual goal of developing a seamless monitoring and prediction framework at all time scales. Such a framework would allow consistent assessment of drought from historic to current conditions, and from seasonal and decadal predictions to climate change projections. At the center of the framework is an experimental global drought monitoring and seasonal forecast system that has evolved out of regional and continental systems for the US and Africa. The system is based on land surface hydrological modeling that is driven by satellite remote sensing precipitation to predict current hydrological conditions and the state of drought. 
Seasonal climate model forecasts are downscaled and bias-corrected to drive the land surface model to provide hydrological forecasts and drought products out 6-9 months. The system relies on historic reconstructions of drought variability over the 20th century, which forms the background climatology to which current conditions can be assessed and drought mechanisms can be diagnosed. Future drought risk is quantified based on bias-corrected and downscaled climate model projections that are used to drive the land surface models. Current research is focused on several aspects, including: 1) quantifying the uncertainties in historic drought reconstructions; 2) analysis of drought propagation through the coupled hydrological/vegetation system; 3) the utility of new data sources such as on the ground sensors and new satellite products for terrestrial hydrology and vegetation, for improved monitoring and prediction, especially in poorly observed regions; 4) advancing predictive skill for all aspects of drought occurrence through diagnosis of the driving mechanisms and feedbacks of historic droughts; and 5) quantification and reduction of uncertainties in future projections of drought under climate change. The steps towards the development of a seamless framework for analysis and prediction in the context of this research are discussed.
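
    The abstract notes that seasonal climate model forecasts are downscaled and bias-corrected before driving the land surface model. The Python sketch below shows one common bias-correction technique, empirical quantile mapping; the operational system's exact method is not specified here, so treat this as an illustrative assumption with synthetic climatologies.

```python
import numpy as np

def quantile_map(forecast, model_clim, obs_clim):
    """Empirical quantile mapping: map each forecast value through the model
    climatology's empirical CDF onto the observed climatology."""
    forecast = np.atleast_1d(np.asarray(forecast, dtype=float))
    model_sorted = np.sort(model_clim)
    obs_sorted = np.sort(obs_clim)
    # Percentile of each forecast value within the model climatology.
    ranks = np.searchsorted(model_sorted, forecast, side="right") / len(model_sorted)
    ranks = np.clip(ranks, 0.0, 1.0)
    # Read the same percentile off the observed climatology.
    return np.quantile(obs_sorted, ranks)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    obs = rng.gamma(2.0, 30.0, 1000)           # observed monthly precipitation (synthetic)
    model = rng.gamma(2.0, 40.0, 1000)         # model climatology with a wet bias (synthetic)
    raw_forecast = np.array([60.0, 120.0, 200.0])
    print(quantile_map(raw_forecast, model, obs))   # bias-corrected values
```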

  10. Predictions of the electro-mechanical response of conductive CNT-polymer composites

    NASA Astrophysics Data System (ADS)

    Matos, Miguel A. S.; Tagarielli, Vito L.; Baiz-Villafranca, Pedro M.; Pinho, Silvestre T.

    2018-05-01

    We present finite element simulations to predict the conductivity, elastic response and strain-sensing capability of conductive composites comprising a polymeric matrix and carbon nanotubes. Realistic representative volume elements (RVE) of the microstructure are generated and both constituents are modelled as linear elastic solids, with resistivity independent of strain; the electrical contact between nanotubes is represented by a new element which accounts for quantum tunnelling effects and captures the sensitivity of conductivity to separation. Monte Carlo simulations are conducted and the sensitivity of the predictions to RVE size is explored. Predictions of modulus and conductivity are found in good agreement with published results. The strain-sensing capability of the material is explored for multiaxial strain states.
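
    The contact element described above captures the sensitivity of conductivity to nanotube separation via quantum tunnelling. As a hedged illustration, the sketch below evaluates a simplified Simmons-type expression for tunnelling resistance that is widely used in CNT-composite percolation studies; the paper's own element formulation may differ, and the barrier height and contact area below are assumed values.

```python
import numpy as np

H = 6.626e-34        # Planck constant, J s
M_E = 9.109e-31      # electron mass, kg
E_CH = 1.602e-19     # elementary charge, C

def tunnel_resistance(d, barrier_eV=1.0, area=1e-18):
    """Simplified Simmons-type tunnelling resistance (ohm) between two nanotubes
    separated by d (m), for an assumed barrier height (eV) and contact area (m^2)."""
    lam = barrier_eV * E_CH                                  # barrier height in joules
    pref = H**2 * d / (area * E_CH**2 * np.sqrt(2.0 * M_E * lam))
    return pref * np.exp(4.0 * np.pi * d / H * np.sqrt(2.0 * M_E * lam))

if __name__ == "__main__":
    # The exponential dependence on separation is what makes the composite strain-sensitive.
    for d_nm in (0.5, 1.0, 1.5, 2.0):
        print(f"{d_nm:.1f} nm -> {tunnel_resistance(d_nm * 1e-9):.3e} ohm")
```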

  11. Material Stream Strategy for Lithium and Inorganics (U)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Safarik, Douglas Joseph; Dunn, Paul Stanton; Korzekwa, Deniece Rochelle

    Design Agency Responsibilities: Manufacturing Support to meet Stockpile Stewardship goals for maintaining the nuclear stockpile through experimental and predictive modeling capability. Development and maintenance of Manufacturing Science expertise to assess material specifications and performance boundaries, and their relationship to processing parameters. Production Engineering Evaluations with competence in design requirements, material specifications, and manufacturing controls. Maintenance and enhancement of Aging Science expertise to support Stockpile Stewardship predictive science capability.

  12. Progress in Finite Element Modeling of the Lower Extremities

    DTIC Science & Technology

    2015-06-01

    bending and subsequent injury, e.g., the distal tibia motion results in bending of the tibia rather than the tibia rotating about the knee joint... layers, rich anisotropy, and wide variability. Developing a model for predictive injury capability, therefore, needs to be versatile and flexible to... injury capability presents many challenges, the first of which is identifying the types of conditions where injury prediction is needed. Our focus

  13. State-of-the-art radiological techniques improve the assessment of postoperative lung function in patients with non-small cell lung cancer.

    PubMed

    Ohno, Yoshiharu; Koyama, Hisanobu; Nogami, Munenobu; Takenaka, Daisuke; Onishi, Yumiko; Matsumoto, Keiko; Matsumoto, Sumiaki; Maniwa, Yoshimasa; Yoshimura, Masahiro; Nishimura, Yoshihiro; Sugimura, Kazuro

    2011-01-01

    The purpose of this study was to compare the predictive capabilities for postoperative lung function in non-small cell lung cancer (NSCLC) patients of state-of-the-art radiological methods, including perfusion MRI, quantitative CT and SPECT/CT, with those of the anatomical method (i.e., qualitative CT) and traditional nuclear medicine methods such as planar imaging and SPECT. Perfusion MRI, CT, nuclear medicine study and measurements of %FEV(1) before and after lung resection were performed for 229 NSCLC patients (125 men and 104 women). For perfusion MRI, postoperative %FEV(1) (po%FEV(1)) was predicted from semi-quantitatively assessed blood volumes within total and resected lungs; for quantitative CT, it was predicted from the functional lung volumes within total and resected lungs; for qualitative CT, from the number of segments of total and resected lungs; and for nuclear medicine studies, from uptakes within total and resected lungs. All SPECTs were automatically co-registered with CTs for preparation of SPECT/CTs. Predicted po%FEV(1)s were then correlated with actual po%FEV(1)s, which were the %FEV(1)s measured after the operation. The limits of agreement were also evaluated. All predicted po%FEV(1)s showed good correlation with actual po%FEV(1)s (0.83≤r≤0.88, p<0.0001). Perfusion MRI, quantitative CT and SPECT/CT demonstrated better correlation than other methods. The limits of agreement of perfusion MRI (4.4±14.2%), quantitative CT (4.7±14.2%) and SPECT/CT (5.1±14.7%) were less than those of qualitative CT (6.0±17.4%), planar imaging (5.8±18.2%), and SPECT (5.5±16.8%). State-of-the-art radiological methods can predict postoperative lung function in NSCLC patients more accurately than traditional methods. Copyright © 2009 Elsevier Ireland Ltd. All rights reserved.
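
    All of the compared methods reduce to the same arithmetic: scale the preoperative value by the fraction of lung function expected to remain after resection. The sketch below shows this generic calculation; the 19-segment count in the example is the conventional segment-counting value, used here as an assumption rather than a figure from the study.

```python
def predicted_postop_fev1(pre_fev1_pct, resected_fraction):
    """Generic form of the prediction shared by the methods compared above:
    scale the preoperative %FEV1 by the fraction of function that remains.
    'resected_fraction' may come from segment counts (qualitative CT),
    functional lung volume (quantitative CT), blood volume (perfusion MRI)
    or uptake (planar imaging, SPECT, SPECT/CT)."""
    return pre_fev1_pct * (1.0 - resected_fraction)

# Example: segment-counting estimate for a lobectomy removing 3 of the
# conventional 19 functional segments.
pre = 85.0                                   # preoperative %FEV1
print(predicted_postop_fev1(pre, 3 / 19))    # ~71.6% predicted postoperative FEV1
```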

  14. Modeling evapotranspiration based on plant hydraulic theory can predict spatial variability across an elevation gradient and link to biogeochemical fluxes

    NASA Astrophysics Data System (ADS)

    Mackay, D. S.; Frank, J.; Reed, D.; Whitehouse, F.; Ewers, B. E.; Pendall, E.; Massman, W. J.; Sperry, J. S.

    2012-04-01

    In woody plant systems transpiration is often the dominant component of total evapotranspiration, and so it is key to understanding water and energy cycles. Moreover, transpiration is tightly coupled to carbon and nutrient fluxes, and so it is also vital to understanding spatial variability of biogeochemical fluxes. However, the spatial variability of transpiration and its links to biogeochemical fluxes, within- and among-ecosystems, has been a challenge to constrain because of complex feedbacks between physical and biological controls. Plant hydraulics provides an emerging theory with the rigor needed to develop testable hypotheses and build useful models for scaling these coupled fluxes from individual plants to regional scales. This theory predicts that vegetative controls over water, energy, carbon, and nutrient fluxes can be determined from the limitation of plant water transport through the soil-xylem-stomata pathway. Limits to plant water transport can be predicted from measurable plant structure and function (e.g., vulnerability to cavitation). We present a next-generation coupled transpiration-biogeochemistry model based on this emerging theory. The model, TREEScav, is capable of predicting transpiration, along with carbon and nutrient flows, constrained by plant structure and function. The model incorporates tightly coupled mechanisms of the demand and supply of water through the soil-xylem-stomata system, with the feedbacks to photosynthesis and utilizable carbohydrates. The model is evaluated by testing it against transpiration and carbon flux data along an elevation gradient of woody plants comprising sagebrush steppe, mid-elevation lodgepole pine forests, and subalpine spruce/fir forests in the Rocky Mountains. The model accurately predicts transpiration and carbon fluxes as measured from gas exchange, sap flux, and eddy covariance towers. The results of this work demonstrate that credible spatial predictions of transpiration and related biogeochemical fluxes will be possible at regional scales using relatively easily obtained vegetation structural and functional information.

  15. Transforming Atmospheric and Remotely-Sensed Information to Hydrologic Predictability in South Asia

    NASA Astrophysics Data System (ADS)

    Hopson, T. M.; Riddle, E. E.; Broman, D.; Brakenridge, G. R.; Birkett, C. M.; Kettner, A.; Sampson, K. M.; Boehnert, J.; Priya, S.; Collins, D. C.; Rostkier-Edelstein, D.; Young, W.; Singh, D.; Islam, A. S.

    2017-12-01

    South Asia is a flashpoint for natural disasters with profound societal impacts for the region and globally. Although close to 40% of the world's population depends on the Greater Himalaya's great rivers, $20 billion of GDP is affected by river floods each year. The frequent occurrence of floods, combined with large and rapidly growing populations with high levels of poverty, makes South Asia highly susceptible to humanitarian disasters. The challenges of mitigating such devastating disasters are exacerbated by the limited availability of real-time rain and stream gauge measuring stations and transboundary data sharing, and by constrained institutional commitments to overcome these challenges. To overcome such limitations, India and the World Bank have committed resources to the National Hydrology Project III, with the development objective to improve the extent, quality, and accessibility of water resources information and to strengthen the capacity of targeted water resources management institutions in India. The availability and application of remote sensing products and weather forecasts from ensemble prediction systems (EPS) have transformed river forecasting capability over the last decade, and are of interest to India. In this talk, we review the potential predictability of river flow contributed by some of the freely available remotely sensed and weather forecasting products within the framework of the physics of water migration through a watershed. Our specific geographical context is the Ganges, Brahmaputra, and Meghna river basin and a newly available set of stream gauge measurements located over the region. We focus on satellite rainfall estimation, river height and width estimation, and EPS weather forecasts. For the latter, we utilize the THORPEX-TIGGE dataset of global forecasts, and discuss how atmospheric predictability, as measured by an EPS, is transformed into hydrometeorological predictability. We provide an overview of the strengths and weaknesses of each of these data sets for the river flow prediction problem, generalizing their utility across spatial and temporal scales, and highlight the benefits of joint utilization and multi-modeling to minimize uncertainty and enhance operational robustness.

  16. Satellite Observations of Coastal Processes from a Geostationary Orbit: Application to estuarine, coastal, and ocean resource management

    NASA Astrophysics Data System (ADS)

    Tzortziou, M.; Mannino, A.; Schaeffer, B. A.

    2016-02-01

    Coastal areas are among the most vulnerable yet economically valuable ecosystems on Earth. Estuaries and coastal oceans are critically important as essential habitat for marine life, as highly productive ecosystems and a rich source of food for human consumption, as a strong economic driver for coastal communities, and as a highly dynamic interface between land and ocean carbon and nutrient cycles. Still, our present capabilities to remotely observe coastal ocean processes from space are limited in their temporal, spatial, and spectral resolution. These limitations, in turn, constrain our ability to observe and understand biogeochemical processes in highly dynamic coastal ecosystems, or predict their response and resilience to current and future pressures including sea level rise, coastal urbanization, and anthropogenic pollution. On a geostationary orbit, and with high spatial resolution and hyper-spectral capabilities, NASA's Decadal Survey mission GEO-CAPE (GEOstationary Coastal and Air Pollution Events) will provide, for the first time, a satellite view of the short-term changes and evolution of processes along the economically invaluable but, simultaneously, particularly vulnerable near-shore waters of the United States. GEO-CAPE will observe U.S. lakes, estuaries, and coastal regions at sufficient temporal and spatial scales to resolve near-shore processes, tides, coastal fronts, and eddies, track sediments and pollutants, capture diurnal biogeochemical processes and rates of transformation, monitor harmful algal blooms and large oil spills, observe episodic events and coastal hazards. Here we discuss the GEO-CAPE applications program and the new capabilities afforded by this future satellite mission, to identify potential user communities, incorporate end-user needs into future mission planning, and allow integration of science and management at the coastal interface.

  17. Satellite Observations of Coastal Processes from a Geostationary Orbit: Application to estuarine, coastal, and ocean resource management

    NASA Astrophysics Data System (ADS)

    Tzortziou, M.; Mannino, A.; Schaeffer, B. A.

    2016-12-01

    Coastal areas are among the most vulnerable yet economically valuable ecosystems on Earth. Estuaries and coastal oceans are critically important as essential habitat for marine life, as highly productive ecosystems and a rich source of food for human consumption, as a strong economic driver for coastal communities, and as a highly dynamic interface between land and ocean carbon and nutrient cycles. Still, our present capabilities to remotely observe coastal ocean processes from space are limited in their temporal, spatial, and spectral resolution. These limitations, in turn, constrain our ability to observe and understand biogeochemical processes in highly dynamic coastal ecosystems, or predict their response and resilience to current and future pressures including sea level rise, coastal urbanization, and anthropogenic pollution. On a geostationary orbit, and with high spatial resolution and hyper-spectral capabilities, NASA's Decadal Survey mission GEO-CAPE (GEOstationary Coastal and Air Pollution Events) will provide, for the first time, a satellite view of the short-term changes and evolution of processes along the economically invaluable but, simultaneously, particularly vulnerable near-shore waters of the United States. GEO-CAPE will observe U.S. lakes, estuaries, and coastal regions at sufficient temporal and spatial scales to resolve near-shore processes, tides, coastal fronts, and eddies, track sediments and pollutants, capture diurnal biogeochemical processes and rates of transformation, monitor harmful algal blooms and large oil spills, observe episodic events and coastal hazards. Here we discuss the GEO-CAPE applications program and the new capabilities afforded by this future satellite mission, to identify potential user communities, incorporate end-user needs into future mission planning, and allow integration of science and management at the coastal interface.

  18. ISSM-SESAW v1.0: mesh-based computation of gravitationally consistent sea-level and geodetic signatures caused by cryosphere and climate driven mass change

    NASA Astrophysics Data System (ADS)

    Adhikari, Surendra; Ivins, Erik R.; Larour, Eric

    2016-03-01

    A classical Green's function approach for computing gravitationally consistent sea-level variations associated with mass redistribution on the earth's surface, employed in contemporary sea-level models, naturally suits spectral methods for numerical evaluation. The capability of these methods to resolve high wave number features such as small glaciers is limited by the need for large numbers of pixels and high-degree (associated Legendre) series truncation. Incorporating a spectral model into (components of) earth system models that generally operate on a mesh system also requires repetitive forward and inverse transforms. In order to overcome these limitations, we present a method that functions efficiently on an unstructured mesh, thus capturing the physics operating at kilometer scale yet capable of simulating geophysical observables that are inherently of global scale with minimal computational cost. The goal of the current version of this model is to provide high-resolution solid-earth, gravitational, sea-level and rotational responses for earth system models operating in the domain of the earth's outer fluid envelope on timescales less than about 1 century, when viscous effects can largely be ignored over most of the globe. The model has numerous important geophysical applications. For example, we compute time-varying global geodetic and sea-level signatures associated with recent ice-sheet changes that are derived from space gravimetry observations. We also demonstrate the capability of our model to simultaneously resolve kilometer-scale sources of the earth's time-varying surface mass transport, derived from high-resolution modeling of polar ice sheets, and predict the corresponding local and global geodetic signatures.

  19. Verification of bubble tracking method and DNS examinations of single- and two-phase turbulent channel flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tryggvason, Gretar; Bolotnov, Igor; Fang, Jun

    2017-03-30

    Direct numerical simulation (DNS) has been regarded as a reliable data source for the development and validation of turbulence models along with experiments. The realization of DNS usually involves a very fine mesh that should be able to resolve all relevant turbulence scales down to the Kolmogorov scale [1]. As the most computationally expensive approach compared to other CFD techniques, DNS applications used to be limited to flow studies at very low Reynolds numbers. Thanks to the tremendous growth of computing power over the past decades, the simulation capability of DNS has now started overlapping with some of the most challenging engineering problems. One of those examples in nuclear engineering is the turbulent coolant flow inside reactor cores. Coupled with interface tracking methods (ITM), the simulation capability of DNS can be extended to more complicated two-phase flow regimes. Departure from nucleate boiling (DNB) is the limiting critical heat flux phenomenon for the majority of accidents that are postulated to occur in pressurized water reactors (PWR) [2]. As one of the major modeling and simulation (M&S) challenges pursued by CASL, the prediction capability is being developed for the onset of DNB utilizing a multiphase-CFD (M-CFD) approach. DNS (coupled with ITM) can be employed to provide closure law information for the multiphase flow modeling at CFD scale. In the presented work, research groups at NCSU and UND will focus on applying different ITM to different geometries. Higher void fraction flow analysis at reactor prototypical conditions will be performed, and novel analysis methods will be developed, implemented and verified for the challenging flow conditions.

  20. Using connectome-based predictive modeling to predict individual behavior from brain connectivity

    PubMed Central

    Shen, Xilin; Finn, Emily S.; Scheinost, Dustin; Rosenberg, Monica D.; Chun, Marvin M.; Papademetris, Xenophon; Constable, R Todd

    2017-01-01

    Neuroimaging is a fast developing research area where anatomical and functional images of human brains are collected using techniques such as functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), and electroencephalography (EEG). Technical advances and large-scale datasets have allowed for the development of models capable of predicting individual differences in traits and behavior using brain connectivity measures derived from neuroimaging data. Here, we present connectome-based predictive modeling (CPM), a data-driven protocol for developing predictive models of brain-behavior relationships from connectivity data using cross-validation. This protocol includes the following steps: 1) feature selection, 2) feature summarization, 3) model building, and 4) assessment of prediction significance. We also include suggestions for visualizing the most predictive features (i.e., brain connections). The final result should be a generalizable model that takes brain connectivity data as input and generates predictions of behavioral measures in novel subjects, accounting for a significant amount of the variance in these measures. It has been demonstrated that the CPM protocol performs as well as or better than most of the existing approaches in brain-behavior prediction. However, because CPM focuses on linear modeling and a purely data-driven approach, neuroscientists with limited or no experience in machine learning or optimization would find it easy to implement the protocols. Depending on the volume of data to be processed, the protocol can take 10–100 minutes for model building, 1–48 hours for permutation testing, and 10–20 minutes for visualization of results. PMID:28182017
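
    A minimal sketch of the four protocol steps listed above (feature selection, feature summarization, model building, assessment of prediction), using leave-one-out cross-validation and only the positively correlated network. The synthetic data, the p-value threshold, and the simple linear fit are assumptions; the published protocol includes further options such as negative networks and permutation testing.

```python
import numpy as np
from scipy import stats

def cpm_loocv(conn, behav, p_thresh=0.01):
    """Connectome-based predictive modeling sketch (positive network only).

    conn  : (n_subjects, n_edges) connectivity matrix, edges already vectorized
    behav : (n_subjects,) behavioral scores
    Returns cross-validated predictions, one per subject.
    """
    n_sub, n_edges = conn.shape
    preds = np.zeros(n_sub)
    for i in range(n_sub):
        train = np.delete(np.arange(n_sub), i)
        r = np.zeros(n_edges)
        p = np.ones(n_edges)
        # 1) feature selection: correlate each edge with behavior in training subjects.
        for e in range(n_edges):
            r[e], p[e] = stats.pearsonr(conn[train, e], behav[train])
        pos_edges = (p < p_thresh) & (r > 0)
        # 2) feature summarization: sum selected edge strengths per subject.
        strength_train = conn[train][:, pos_edges].sum(axis=1)
        # 3) model building: linear fit of behavior on network strength.
        slope, intercept = np.polyfit(strength_train, behav[train], 1)
        # 4) prediction for the held-out subject.
        preds[i] = slope * conn[i, pos_edges].sum() + intercept
    return preds

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    edges = rng.normal(size=(40, 200))
    behavior = edges[:, :5].sum(axis=1) + rng.normal(0, 1, 40)
    predictions = cpm_loocv(edges, behavior)
    print(stats.pearsonr(predictions, behavior))   # prediction-observation correlation
```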

  1. RNA 3D Structure Modeling by Combination of Template-Based Method ModeRNA, Template-Free Folding with SimRNA, and Refinement with QRNAS.

    PubMed

    Piatkowski, Pawel; Kasprzak, Joanna M; Kumar, Deepak; Magnus, Marcin; Chojnowski, Grzegorz; Bujnicki, Janusz M

    2016-01-01

    RNA encompasses an essential part of all known forms of life. The functions of many RNA molecules are dependent on their ability to form complex three-dimensional (3D) structures. However, experimental determination of RNA 3D structures is laborious and challenging, and therefore, the majority of known RNAs remain structurally uncharacterized. To address this problem, computational structure prediction methods were developed that either utilize information derived from known structures of other RNA molecules (by way of template-based modeling) or attempt to simulate the physical process of RNA structure formation (by way of template-free modeling). All computational methods suffer from various limitations that make theoretical models less reliable than high-resolution experimentally determined structures. This chapter provides a protocol for computational modeling of RNA 3D structure that overcomes major limitations by combining two complementary approaches: template-based modeling that is capable of predicting global architectures based on similarity to other molecules but often fails to predict local unique features, and template-free modeling that can predict the local folding, but is limited to modeling the structure of relatively small molecules. Here, we combine the use of a template-based method ModeRNA with a template-free method SimRNA. ModeRNA requires a sequence alignment of the target RNA sequence to be modeled with a template of the known structure; it generates a model that predicts the structure of a conserved core and provides a starting point for modeling of variable regions. SimRNA can be used to fold small RNAs (<80 nt) without any additional structural information, and to refold parts of models for larger RNAs that have a correctly modeled core. ModeRNA can be either downloaded, compiled and run locally or run through a web interface at http://genesilico.pl/modernaserver/ . SimRNA is currently available to download for local use as a precompiled software package at http://genesilico.pl/software/stand-alone/simrna and as a web server at http://genesilico.pl/SimRNAweb . For model optimization we use QRNAS, available at http://genesilico.pl/qrnas .

  2. Macroscale hydrologic modeling of ecologically relevant flow metrics

    NASA Astrophysics Data System (ADS)

    Wenger, Seth J.; Luce, Charles H.; Hamlet, Alan F.; Isaak, Daniel J.; Neville, Helen M.

    2010-09-01

    Stream hydrology strongly affects the structure of aquatic communities. Changes to air temperature and precipitation driven by increased greenhouse gas concentrations are shifting timing and volume of streamflows potentially affecting these communities. The variable infiltration capacity (VIC) macroscale hydrologic model has been employed at regional scales to describe and forecast hydrologic changes but has been calibrated and applied mainly to large rivers. An important question is how well VIC runoff simulations serve to answer questions about hydrologic changes in smaller streams, which are important habitat for many fish species. To answer this question, we aggregated gridded VIC outputs within the drainage basins of 55 streamflow gages in the Pacific Northwest United States and compared modeled hydrographs and summary metrics to observations. For most streams, several ecologically relevant aspects of the hydrologic regime were accurately modeled, including center of flow timing, mean annual and summer flows and frequency of winter floods. Frequencies of high and low flows in the summer were not well predicted, however. Predictions were worse for sites with strong groundwater influence, and some sites showed errors that may result from limitations in the forcing climate data. Higher resolution (1/16th degree) modeling provided small improvements over lower resolution (1/8th degree). Despite some limitations, the VIC model appears capable of representing several ecologically relevant hydrologic characteristics in streams, making it a useful tool for understanding the effects of hydrology in delimiting species distributions and predicting the potential effects of climate shifts on aquatic organisms.
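
    For concreteness, the sketch below computes two of the ecologically relevant summary metrics mentioned above, center of flow timing and mean annual flow, from one year of daily streamflow. The metric definitions are common ones and may differ in detail from those used in the study; the hydrograph is synthetic.

```python
import numpy as np

def flow_metrics(daily_flow):
    """Return (center of flow timing in days, mean annual flow) for one water year
    of daily flows, with index 0 taken as the first day of the water year."""
    q = np.asarray(daily_flow, dtype=float)
    days = np.arange(1, len(q) + 1)
    center_of_timing = (days * q).sum() / q.sum()   # flow-weighted mean day of year
    mean_annual_flow = q.mean()
    return center_of_timing, mean_annual_flow

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    # Synthetic snowmelt-dominated hydrograph with a late-spring peak.
    doy = np.arange(365)
    flow = 5 + 40 * np.exp(-0.5 * ((doy - 240) / 25.0) ** 2) + rng.gamma(1.0, 1.0, 365)
    ct, maf = flow_metrics(flow)
    print(f"center of flow timing: day {ct:.0f}, mean annual flow: {maf:.1f}")
```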

  3. Future directions in high-pressure neutron diffraction

    NASA Astrophysics Data System (ADS)

    Guthrie, M.

    2015-04-01

    The ability to manipulate structure and properties using pressure has been well known for many centuries. Diffraction provides the unique ability to observe these structural changes in fine detail on lengthscales spanning atomic to nanometre dimensions. Amongst the broad suite of diffraction tools available today, neutrons provide unique capabilities of fundamental importance. However, to date, the growth of neutron diffraction under extremes of pressure has been limited by the weakness of available sources. In recent years, substantial government investments have led to the construction of a new generation of neutron sources while existing facilities have been revitalized by upgrades. The timely convergence of these bright facilities with new pressure-cell technologies suggests that the field of high-pressure (HP) neutron science is on the cusp of substantial growth. Here, the history of HP neutron research is examined with the hope of gleaning an accurate prediction of where some of these revolutionary capabilities will lead in the near future. In particular, a dramatic expansion of current pressure-temperature range is likely, with corresponding increased scope for extreme-conditions science with neutron diffraction. This increase in coverage will be matched with improvements in data quality. Furthermore, we can also expect broad new capabilities beyond diffraction, including in neutron imaging, small angle scattering and inelastic spectroscopy.

  4. A Process for Assessing NASA's Capability in Aircraft Noise Prediction Technology

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.

    2008-01-01

    An acoustic assessment is being conducted by NASA that has been designed to assess the current state of the art in NASA's capability to predict aircraft-related noise and to establish baselines for gauging future progress in the field. The process for determining NASA's current capabilities includes quantifying the differences between noise predictions and measurements of noise from experimental tests. The computed noise predictions are being obtained from semi-empirical, analytical, statistical, and numerical codes. In addition, errors and uncertainties are being identified and quantified both in the predictions and in the measured data to further enhance the credibility of the assessment. The content of this paper contains preliminary results, since the assessment project has not been fully completed, based on the contributions of many researchers and shows a select sample of the types of results obtained regarding the prediction of aircraft noise at both the system and component levels. The system level results are for engines and aircraft. The component level results are for fan broadband noise, for jet noise from a variety of nozzles, and for airframe noise from flaps and landing gear parts. There are also sample results for sound attenuation in lined ducts with flow and the behavior of acoustic lining in ducts.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giuliani, Sarah E; Frank, Ashley M; Corgliano, Danielle M

    Background: Transporter proteins are one of an organism's primary interfaces with the environment. The expressed set of transporters mediates cellular metabolic capabilities and influences signal transduction pathways and regulatory networks. The functional annotation of most transporters is currently limited to general classification into families. The development of capabilities to map ligands with specific transporters would improve our knowledge of the function of these proteins, improve the annotation of related genomes, and facilitate predictions for their role in cellular responses to environmental changes. Results: To improve the utility of the functional annotation for ABC transporters, we expressed and purified the set of solute binding proteins from Rhodopseudomonas palustris and characterized their ligand-binding specificity. Our approach utilized ligand libraries consisting of environmental and cellular metabolic compounds, and fluorescence thermal shift based high throughput ligand binding screens. This process resulted in the identification of specific binding ligands for approximately 64% of the purified and screened proteins. The collection of binding ligands is representative of common functionalities associated with many bacterial organisms as well as specific capabilities linked to the ecological niche occupied by R. palustris. Conclusion: The functional screen identified specific ligands that bound to ABC transporter periplasmic binding subunits from R. palustris. These assignments provide unique insight for the metabolic capabilities of this organism and are consistent with the ecological niche of strain isolation. This functional insight can be used to improve the annotation of related organisms and provides a route to evaluate the evolution of this important and diverse group of transporter proteins.

  6. Artificial neural network model for ozone concentration estimation and Monte Carlo analysis

    NASA Astrophysics Data System (ADS)

    Gao, Meng; Yin, Liting; Ning, Jicai

    2018-07-01

    Air pollution in the urban atmosphere directly affects public health; therefore, it is essential to predict air pollutant concentrations. Air quality is a complex function of emissions, meteorology and topography, and artificial neural networks (ANNs) provide a sound framework for relating these variables. In this study, we investigated the feasibility of using an ANN model with meteorological parameters as input variables to predict the ozone concentration in the urban area of Jinan, a metropolis in Northern China. We first found that the architecture of the network of neurons had little effect on the predictive capability of the ANN model. A parsimonious ANN model with 6 routinely monitored meteorological parameters and one temporal covariate (the category of day, i.e. working day, legal holiday or regular weekend) as input variables was identified, where the 7 input variables were selected following a forward selection procedure. Compared with the benchmarking ANN model with 9 meteorological and photochemical parameters as input variables, the predictive capability of the parsimonious ANN model was acceptable. Its predictive capability was also verified in terms of the warning success ratio during pollution episodes. Finally, uncertainty and sensitivity analyses were performed based on Monte Carlo simulations (MCS). It was concluded that the ANN could properly predict the ambient ozone level. Maximum temperature, atmospheric pressure, sunshine duration and maximum wind speed were identified as the predominant input variables significantly influencing the prediction of ambient ozone concentrations.
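
    As a rough illustration of the kind of parsimonious feed-forward network described above, the sketch below trains a small multilayer perceptron on meteorological inputs to predict an ozone concentration. The synthetic data, the seven placeholder predictors, and the single 10-neuron hidden layer are assumptions for illustration; they are not the study's data or its exact architecture.

    ```python
    # Sketch of a small feed-forward ANN for ozone prediction from meteorological inputs.
    # Synthetic data and hyperparameters are illustrative assumptions only.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 500
    # Hypothetical predictors: max temperature, pressure, sunshine duration,
    # max wind speed, humidity, precipitation, and a coded category of day.
    X = rng.normal(size=(n, 7))
    y = 40 + 8 * X[:, 0] - 3 * X[:, 1] + 5 * X[:, 2] + rng.normal(scale=5, size=n)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                                       random_state=0))
    model.fit(X_train, y_train)
    print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))
    ```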

  7. Harnessing atomistic simulations to predict the rate at which dislocations overcome obstacles

    NASA Astrophysics Data System (ADS)

    Saroukhani, S.; Nguyen, L. D.; Leung, K. W. K.; Singh, C. V.; Warner, D. H.

    2016-05-01

    Predicting the rate at which dislocations overcome obstacles is key to understanding the microscopic features that govern the plastic flow of modern alloys. In this spirit, the current manuscript examines the rate at which an edge dislocation overcomes an obstacle in aluminum. Predictions were made using different popular variants of Harmonic Transition State Theory (HTST) and compared to those of direct Molecular Dynamics (MD) simulations. The HTST predictions were found to be grossly inaccurate due to the large entropy barrier associated with the dislocation-obstacle interaction. Considering the importance of finite temperature effects, the utility of the Finite Temperature String (FTS) method was then explored. While this approach was found capable of identifying a prominent reaction tube, it was not capable of computing the free energy profile along the tube. Lastly, the utility of the Transition Interface Sampling (TIS) approach was explored, which does not need a free energy profile and is known to be less reliant on the choice of reaction coordinate. The TIS approach was found capable of accurately predicting the rate, relative to direct MD simulations. This finding was utilized to examine the temperature and load dependence of the dislocation-obstacle interaction in a simple periodic cell configuration. An attractive rate prediction approach combining TST and simple continuum models is identified, and the strain rate sensitivity of individual dislocation obstacle interactions is predicted.
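
    For context, the harmonic TST variants discussed above estimate the escape rate from the static saddle-point barrier and the normal-mode frequencies at the minimum and at the saddle point. The expression below is the standard Vineyard form of that rate, quoted here as background rather than taken from the manuscript; ΔE is the static energy barrier and the ν are the real normal-mode frequencies of the N-atom system.

    ```latex
    % Standard harmonic transition state theory (Vineyard) rate expression.
    k_{\mathrm{HTST}}
      = \frac{\prod_{i=1}^{3N} \nu_i^{\mathrm{min}}}
             {\prod_{j=1}^{3N-1} \nu_j^{\mathrm{saddle}}}
        \exp\!\left(-\frac{\Delta E}{k_{\mathrm{B}} T}\right)
    ```

    The large entropy barrier reported above is precisely what this harmonic prefactor fails to capture for the dislocation-obstacle interaction.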

  8. The Coastal Ocean Prediction Systems program: Understanding and managing our coastal ocean

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eden, H.F.; Mooers, C.N.K.

    1990-06-01

    The goal of COPS is to couple a program of regular observations to numerical models, through techniques of data assimilation, in order to provide a predictive capability for the US coastal ocean including the Great Lakes, estuaries, and the entire Exclusive Economic Zone (EEZ). The objectives of the program include: determining the predictability of the coastal ocean and the processes that govern the predictability; developing efficient prediction systems for the coastal ocean based on the assimilation of real-time observations into numerical models; and coupling the predictive systems for the physical behavior of the coastal ocean to predictive systems for biological, chemical, and geological processes to achieve an interdisciplinary capability. COPS will provide the basis for effective monitoring and prediction of coastal ocean conditions by optimizing the use of increased scientific understanding, improved observations, advanced computer models, and computer graphics to make the best possible estimates of sea level, currents, temperatures, salinities, and other properties of entire coastal regions.

  9. CARES/Life Ceramics Durability Evaluation Software Used for Mars Microprobe Aeroshell

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.

    1998-01-01

    The CARES/Life computer program, which was developed at the NASA Lewis Research Center, predicts the probability of a monolithic ceramic component's failure as a function of time in service. The program has many features and options for materials evaluation and component design. It couples commercial finite element programs, which resolve a component's temperature and stress distribution, to reliability evaluation and fracture mechanics routines for modeling strength-limiting defects. These routines are based on calculations of the probabilistic nature of the brittle material's strength. The capability, flexibility, and uniqueness of CARES/Life have attracted many users representing a broad range of interests and have resulted in numerous awards for technological achievements and technology transfer.
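
    The reliability routines mentioned above rest on Weibull statistics for brittle strength. As a point of reference (a standard textbook form of the kind such routines employ, not a line reproduced from the CARES/Life documentation), the two-parameter Weibull failure probability of a uniformly stressed volume V is:

    ```latex
    % Two-parameter Weibull failure probability for a uniformly stressed volume V.
    % sigma_0: characteristic strength, m: Weibull modulus, V_0: reference volume.
    P_f = 1 - \exp\!\left[-\frac{V}{V_0}\left(\frac{\sigma}{\sigma_0}\right)^{m}\right]
    ```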

  10. Minimal two-sphere model of the generation of fluid flow at low Reynolds numbers.

    PubMed

    Leoni, M; Bassetti, B; Kotar, J; Cicuta, P; Cosentino Lagomarsino, M

    2010-03-01

    Locomotion and generation of flow at low Reynolds number are subject to severe limitations due to the irrelevance of inertia: the "scallop theorem" requires that the system have at least two degrees of freedom, which move in non-reciprocal fashion, i.e. breaking time-reversal symmetry. We show here that a minimal model consisting of just two spheres driven by harmonic potentials is capable of generating flow. In this pump system the two degrees of freedom are the mean and relative positions of the two spheres. We have performed and compared analytical predictions, numerical simulation and experiments, showing that a time-reversible drive is sufficient to induce flow.

  11. Fuel Aging in Storage and Transportation (FAST): Accelerated Characterization and Performance Assessment of the Used Nuclear Fuel Storage System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDeavitt, Sean

    2016-08-02

    This Integrated Research Project (IRP) was established to characterize key limiting phenomena related to the performance of used nuclear fuel (UNF) storage systems. This was an applied engineering project with a specific application in view (i.e., UNF dry storage). The completed tasks made use of a mixture of basic science and engineering methods. The overall objective was to create, or enable the creation of, predictive tools in the form of observation methods, phenomenological models, and databases that will enable the design, installation, and licensing of dry UNF storage systems that will be capable of containing UNF for extended periods of time.

  12. Chaotic ultra-wideband radio generator based on an optoelectronic oscillator with a built-in microwave photonic filter.

    PubMed

    Wang, Li Xian; Zhu, Ning Hua; Zheng, Jian Yu; Liu, Jian Guo; Li, Wei

    2012-05-20

    We introduce a microwave photonic bandpass filter into an optoelectronic oscillator to generate a chaotic ultra-wideband signal in both the optical and electrical domains. The theoretical analysis and numerical simulation indicate that this system is capable of generating band-limited high-dimensional chaos. Experimental results coincide well with the theoretical prediction and show that the power spectrum of the generated chaotic signal basically meets the Federal Communications Commission indoor mask. The generated chaotic carrier is further intensity modulated by a 10 MHz square wave, and the waveform of the output ultra-wideband signal is measured to demonstrate chaotic on-off keying modulation.

  13. Evaluation of the three-dimensional parabolic flow computer program SHIP

    NASA Technical Reports Server (NTRS)

    Pan, Y. S.

    1978-01-01

    The three-dimensional parabolic flow program SHIP, designed for predicting supersonic combustor flow fields, is evaluated to determine its capabilities. The mathematical foundation and numerical procedure are reviewed; simplifications are pointed out and commented upon. The program is then evaluated numerically by applying it to several subsonic and supersonic, turbulent, reacting and nonreacting flow problems. Computational results are compared with available experimental or other analytical data. Good agreement is obtained when the simplifications on which the program is based are justified. Limitations of the program and the need for improvement and extension are pointed out. The present three-dimensional parabolic flow program appears to be potentially useful for the development of supersonic combustors.

  14. Flapping Wings of an Inclined Stroke Angle: Experiments and Reduced-Order Models in Dual Aerial/Aquatic Flight

    NASA Astrophysics Data System (ADS)

    Izraelevitz, Jacob; Triantafyllou, Michael

    2016-11-01

    Flapping wings in nature demonstrate a large force actuation envelope, with capabilities beyond the limits of static airfoil section coefficients. Puffins, guillemots, and other auks particularly showcase this mechanism, as they are able to generate both enough thrust to swim and enough lift to fly, using the same wing, by changing the wing motion trajectory. The wing trajectory is therefore an additional design criterion to be optimized along with traditional aircraft parameters, and could possibly enable dual aerial/aquatic flight. We showcase finite aspect-ratio flapping wing experiments, dynamic similarity arguments, and reduced-order models for predicting the performance of flapping wings that carry out complex motion trajectories.

  15. Application of Partial Least Square (PLS) Analysis on Fluorescence Data of 8-Anilinonaphthalene-1-Sulfonic Acid, a Polarity Dye, for Monitoring Water Adulteration in Ethanol Fuel.

    PubMed

    Kumar, Keshav; Mishra, Ashok Kumar

    2015-07-01

    The fluorescence characteristics of 8-anilinonaphthalene-1-sulfonic acid (ANS) in ethanol-water mixtures, in combination with partial least squares (PLS) analysis, were used to propose a simple and sensitive analytical procedure for monitoring the adulteration of ethanol by water. The proposed analytical procedure was found to be capable of detecting even small levels of adulteration of ethanol by water. The robustness of the procedure is evident from statistical parameters such as the square of the correlation coefficient (R(2)), the root mean square error of calibration (RMSEC) and the root mean square error of prediction (RMSEP), which were found to be well within the acceptable limits.
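
    A minimal sketch of the PLS calibration/validation workflow described above is given below using scikit-learn; the synthetic spectra, the water-content range, and the two-component model are assumptions for illustration, not the study's measurements.

    ```python
    # Sketch of PLS calibration of water content from fluorescence spectra.
    # Synthetic spectra stand in for the ANS emission data; RMSEC/RMSEP are
    # computed as in standard chemometric practice.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n_samples, n_wavelengths = 60, 200
    water_pct = rng.uniform(0, 10, n_samples)  # assumed % water in ethanol
    spectra = (np.outer(water_pct, np.linspace(1.0, 0.2, n_wavelengths))
               + rng.normal(scale=0.05, size=(n_samples, n_wavelengths)))

    X_cal, X_val, y_cal, y_val = train_test_split(spectra, water_pct, random_state=1)
    pls = PLSRegression(n_components=2).fit(X_cal, y_cal)

    rmsec = mean_squared_error(y_cal, pls.predict(X_cal)) ** 0.5
    rmsep = mean_squared_error(y_val, pls.predict(X_val)) ** 0.5
    print(f"R^2(cal) = {pls.score(X_cal, y_cal):.3f}, "
          f"RMSEC = {rmsec:.3f}, RMSEP = {rmsep:.3f}")
    ```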

  16. The plane strain shear fracture of the advanced high strength steels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Li, E-mail: li.sun@gm.com

    2013-12-16

    The “shear fracture” which occurs at the high-curvature die radii in sheet metal forming has been reported to remarkably limit the application of advanced high strength steels (AHSS) in the automobile industry. However, this unusual fracture behavior generally cannot be predicted by the traditional forming limit diagram (FLD). In this research, a new experimental system was developed in order to simulate the shear fracture, especially at the plane strain state, which is the most common state in the auto industry and difficult to achieve in the lab due to sample size. Furthermore, the system has the capability to operate in a strain rate range from the quasi-static state to the industrial forming state. One kind of AHSS, Quenching-Partitioning (QP) steel, was tested in this system, and the results show that the limiting fracture strain is related to the bending ratio and strain rate. The experimental data support that deformation-induced heating is an important cause of the “shear fracture” phenomenon for AHSS: deformation-induced heating caused by a smaller bending ratio and a high strain rate produces a smaller limiting plane strain and leads to “shear fracture” in the component.

  17. Thermal niche estimators and the capability of poor dispersal species to cope with climate change

    NASA Astrophysics Data System (ADS)

    Sánchez-Fernández, David; Rizzo, Valeria; Cieslak, Alexandra; Faille, Arnaud; Fresneda, Javier; Ribera, Ignacio

    2016-03-01

    For management strategies in the context of global warming, accurate predictions of species response are mandatory. However, to date most predictions are based on niche (bioclimatic) models that usually overlook biotic interactions, behavioral adjustments or adaptive evolution, and assume that species can disperse freely without constraints. The deep subterranean environment minimises these uncertainties, as it is simple, homogeneous and has constant environmental conditions. It is thus an ideal model system to study the effect of global change on species with poor dispersal capabilities. We assess the potential fate of a lineage of troglobitic beetles under global change predictions using different approaches to estimate their thermal niche: bioclimatic models, rates of thermal niche change estimated from a molecular phylogeny, and data from physiological studies. Using bioclimatic models, at most 60% of the species were predicted to have suitable conditions in 2080. Considering the rates of thermal niche change did not improve this prediction. However, physiological data suggest that subterranean species have a broad thermal tolerance, allowing them to withstand temperatures never experienced through their evolutionary history. These results stress the need for experimental approaches to assess the capability of poor dispersal species to cope with temperatures outside those they currently experience.

  18. Orchestration of Molecular Information through Higher Order Chemical Recognition

    NASA Astrophysics Data System (ADS)

    Frezza, Brian M.

    Broadly defined, higher order chemical recognition is the process whereby discrete chemical building blocks capable of specifically binding to cognate moieties are covalently linked into oligomeric chains. These chains, or sequences, are then able to recognize and bind to their cognate sequences with a high degree of cooperativity. Principally speaking, DNA and RNA are the most readily obtained examples of this chemical phenomenon, and function via Watson-Crick cognate pairing: guanine pairs with cytosine and adenine with thymine (DNA) or uracil (RNA), in an anti-parallel manner. While the theoretical principles, techniques, and equations derived herein apply generally to any higher-order chemical recognition system, in practice we utilize DNA oligomers as a model building material to experimentally investigate and validate our hypotheses. Historically, general purpose information processing has been a task limited to semiconductor electronics. Molecular computing, on the other hand, has been limited to ad hoc approaches designed to solve highly specific and unique computation problems, often involving components or techniques that cannot be applied generally in a manner suitable for precise and predictable engineering. Herein, we provide a fundamental framework for harnessing higher-order recognition in a modular and programmable fashion to synthesize molecular information processing networks of arbitrary construction and complexity. This document provides a solid foundation for routinely embedding computational capability into chemical and biological systems where semiconductor electronics are unsuitable for practical application.

  19. Numerical simulations of rough contacts between viscoelastic materials

    NASA Astrophysics Data System (ADS)

    Spinu, S.; Cerlinca, D.

    2017-08-01

    The durability of the mechanical contact is often plagued by surface-related phenomena like rolling contact fatigue, wear or crack propagation, which are linked to the important gradients of stress arising in the contacting bodies due to interaction at the asperity level. The semi-analytical computational approach adopted in this paper is based on a previously reported algorithm capable of simulating the contact between bodies with arbitrary limiting surfaces and viscoelastic behaviour, which is enhanced and adapted for the contact of real surfaces with microtopography. As steep slopes at the asperity level inevitably lead to localized plastic deformation at the tip of the asperities that are first brought into contact, the viscoelastic behaviour is amended by limiting the maximum value of the pressure on the contact area to that of the material hardness, according to the Tabor equation. In this manner, plasticity is considered in a simplified manner that assures the knowledge of the contact area and of the pressure distribution without estimation of the residual state. The main advantage of this approach is the preservation of the algorithmic complexity, allowing the simulation of very fine meshes capable of capturing particular features of the investigated contacting surface. The newly advanced model is expected to predict the contact specifics of rough surfaces as resulting from various manufacturing processes, thus assisting the design of durable machine elements using elastomers or rubbers.
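
    The simplified treatment of plasticity described above amounts to capping the contact pressure at the material hardness after each solver iteration. The sketch below illustrates only that capping step; the pressure values and hardness are placeholders, and the full semi-analytical viscoelastic solver from the paper is not reproduced.

    ```python
    # Illustrative sketch of the Tabor-style pressure cap used to mimic asperity-level
    # plasticity: pressures from the (visco)elastic contact iteration are clipped to
    # the hardness H, and the capped cells are treated as plastically saturated.
    import numpy as np

    def cap_pressure(pressure, hardness):
        """Clip contact pressures to the hardness; report which cells saturated."""
        saturated = pressure >= hardness
        return np.minimum(pressure, hardness), saturated

    # Hypothetical pressure field (Pa) from one iteration, with an assumed hardness.
    p, sat = cap_pressure(np.array([0.3e9, 0.9e9, 1.5e9, 2.0e9]), hardness=1.2e9)
    print(p, sat)
    ```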

  20. Numerical Investigations of Capabilities and Limits of Photospheric Data Driven Magnetic Flux Emergence

    NASA Astrophysics Data System (ADS)

    Linton, M.; Leake, J. E.; Schuck, P. W.

    2016-12-01

    The magnetic field of the solar atmosphere is the primary driver of solar activity. Understanding the magnetic state of the solar atmosphere is therefore of key importance to predicting solar activity. One promising means of studying the magnetic atmosphere is to dynamically build up and evolve this atmosphere from the time evolution of emerging magnetic field at the photosphere, where it can be measured with current solar vector magnetograms at high temporal and spatial resolution. We report here on a series of numerical experiments investigating the capabilities and limits of magnetohydrodynamical simulations of such a process, where a magnetic corona is dynamically built up and evolved from a time series of synthetic photospheric data. These synthetic data are composed of photospheric slices taken from self consistent convection zone to corona simulations of flux emergence. The driven coronae are then quantitatively compared against the coronae of the original simulations. We investigate and report on the fidelity of these driven simulations, both as a function of the emergence timescale of the magnetic flux, and as a function of the driving cadence of the input data. These investigations will then be used to outline future prospects and challenges for using observed photospheric data to drive such solar atmospheric simulations. This work was supported by the Chief of Naval Research and the NASA Living with a Star and Heliophysics Supporting Research programs.

  1. Airport Noise Prediction Model -- MOD 7

    DOT National Transportation Integrated Search

    1978-07-01

    The MOD 7 Airport Noise Prediction Model is fully operational. The language used is Fortran, and it has been run on several different computer systems. Its capabilities include prediction of noise levels for single parameter changes, for multiple cha...

  2. Evaluating Rapid Models for High-Throughput Exposure Forecasting (SOT)

    EPA Science Inventory

    High throughput exposure screening models can provide quantitative predictions for thousands of chemicals; however these predictions must be systematically evaluated for predictive ability. Without the capability to make quantitative, albeit uncertain, forecasts of exposure, the ...

  3. In silico prediction of pharmaceutical degradation pathways: a benchmarking study.

    PubMed

    Kleinman, Mark H; Baertschi, Steven W; Alsante, Karen M; Reid, Darren L; Mowery, Mark D; Shimanovich, Roman; Foti, Chris; Smith, William K; Reynolds, Dan W; Nefliu, Marcela; Ott, Martin A

    2014-11-03

    Zeneth is a new software application capable of predicting degradation products derived from small molecule active pharmaceutical ingredients. This study was aimed at understanding the current status of Zeneth's predictive capabilities and assessing gaps in predictivity. Using data from 27 small molecule drug substances from five pharmaceutical companies, the evolution of Zeneth predictions through knowledge base development since 2009 was evaluated. The experimentally observed degradation products from forced degradation, accelerated, and long-term stability studies were compared to Zeneth predictions. Steady progress in predictive performance was observed as the knowledge bases grew and were refined. Over the course of the development covered within this evaluation, the ability of Zeneth to predict experimentally observed degradants increased from 31% to 54%. In particular, gaps in predictivity were noted in the areas of epimerizations, N-dealkylation of N-alkylheteroaromatic compounds, photochemical decarboxylations, and electrocyclic reactions. The results of this study show that knowledge base development efforts have increased the ability of Zeneth to predict relevant degradation products and aid pharmaceutical research. This study has also provided valuable information to help guide further improvements to Zeneth and its knowledge base.

  4. Development of advanced structural analysis methodologies for predicting widespread fatigue damage in aircraft structures

    NASA Technical Reports Server (NTRS)

    Harris, Charles E.; Starnes, James H., Jr.; Newman, James C., Jr.

    1995-01-01

    NASA is developing a 'tool box' that includes a number of advanced structural analysis computer codes which, taken together, represent the comprehensive fracture mechanics capability required to predict the onset of widespread fatigue damage. These structural analysis tools have complementary and specialized capabilities ranging from a finite-element-based stress-analysis code for two- and three-dimensional built-up structures with cracks to a fatigue and fracture analysis code that uses stress-intensity factors and material-property data found in 'look-up' tables or from equations. NASA is conducting critical experiments necessary to verify the predictive capabilities of the codes, and these tests represent a first step in the technology-validation and industry-acceptance processes. NASA has established cooperative programs with aircraft manufacturers to facilitate the comprehensive transfer of this technology by making these advanced structural analysis codes available to industry.

  5. Integration of nitrogen dynamics into the Noah-MP land surface model v1.1 for climate and environmental predictions

    DOE PAGES

    Cai, X.; Yang, Z. -L.; Fisher, J. B.; ...

    2016-01-15

    Climate and terrestrial biosphere models consider nitrogen an important factor in limiting plant carbon uptake, while operational environmental models view nitrogen as the leading pollutant causing eutrophication in water bodies. The community Noah land surface model with multi-parameterization options (Noah-MP) is unique in that it is the next-generation land surface model for the Weather Research and Forecasting meteorological model and for the operational weather/climate models in the National Centers for Environmental Prediction. Here in this study, we add a capability to Noah-MP to simulate nitrogen dynamics by coupling the Fixation and Uptake of Nitrogen (FUN) plant model and the Soil and Water Assessment Tool (SWAT) soil nitrogen dynamics. This model development incorporates FUN's state-of-the-art concept of carbon cost theory and SWAT's strength in representing the impacts of agricultural management on the nitrogen cycle. Parameterizations for direct root and mycorrhizal-associated nitrogen uptake, leaf retranslocation, and symbiotic biological nitrogen fixation are employed from FUN, while parameterizations for nitrogen mineralization, nitrification, immobilization, volatilization, atmospheric deposition, and leaching are based on SWAT. The coupled model is then evaluated at the Kellogg Biological Station – a Long Term Ecological Research site within the US Corn Belt. Results show that the model performs well in capturing the major nitrogen state/flux variables (e.g., soil nitrate and nitrate leaching). Furthermore, the addition of nitrogen dynamics improves the modeling of net primary productivity and evapotranspiration. The model improvement is expected to advance the capability of Noah-MP to simultaneously predict weather and water quality in fully coupled Earth system models.

  6. Integration of nitrogen dynamics into the Noah-MP land surface model v1.1 for climate and environmental predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, X.; Yang, Z. -L.; Fisher, J. B.

    Climate and terrestrial biosphere models consider nitrogen an important factor in limiting plant carbon uptake, while operational environmental models view nitrogen as the leading pollutant causing eutrophication in water bodies. The community Noah land surface model with multi-parameterization options (Noah-MP) is unique in that it is the next-generation land surface model for the Weather Research and Forecasting meteorological model and for the operational weather/climate models in the National Centers for Environmental Prediction. Here in this study, we add a capability to Noah-MP to simulate nitrogen dynamics by coupling the Fixation and Uptake of Nitrogen (FUN) plant model and the Soil and Water Assessment Tool (SWAT) soil nitrogen dynamics. This model development incorporates FUN's state-of-the-art concept of carbon cost theory and SWAT's strength in representing the impacts of agricultural management on the nitrogen cycle. Parameterizations for direct root and mycorrhizal-associated nitrogen uptake, leaf retranslocation, and symbiotic biological nitrogen fixation are employed from FUN, while parameterizations for nitrogen mineralization, nitrification, immobilization, volatilization, atmospheric deposition, and leaching are based on SWAT. The coupled model is then evaluated at the Kellogg Biological Station – a Long Term Ecological Research site within the US Corn Belt. Results show that the model performs well in capturing the major nitrogen state/flux variables (e.g., soil nitrate and nitrate leaching). Furthermore, the addition of nitrogen dynamics improves the modeling of net primary productivity and evapotranspiration. The model improvement is expected to advance the capability of Noah-MP to simultaneously predict weather and water quality in fully coupled Earth system models.

  7. A new approach to predict soil temperature under vegetated surfaces.

    PubMed

    Dolschak, Klaus; Gartner, Karl; Berger, Torsten W

    2015-12-01

    In this article, the setup and application of an empirical model based on Newton's law of cooling, capable of predicting daily mean soil temperature (Tsoil) under vegetated surfaces, are described. The only input variable necessary to run the model is a time series of daily mean air temperature. The simulator employs 9 empirical parameters, which were estimated by inverse modeling. The model, which primarily addresses forested sites, incorporates the effect of snow cover and soil freezing on soil temperature. The model was applied to several temperate forest sites, split between Central Europe (Austria) and the United States (Harvard Forest, Massachusetts; Hubbard Brook, New Hampshire), aiming to cover a broad range of site characteristics. The investigated stands differ fundamentally in stand composition, elevation, exposition, annual mean temperature, precipitation regime, and duration of winter snow cover. Finally, to explore the limits of the formulation, the simulator was applied to non-forest sites (Illinois), where soil temperature was recorded under short-cut grass. The model was parameterized specifically to site and measurement depth. After calibration of the model, an evaluation was performed using ~50 % of the available data. In each case, the simulator was capable of delivering a feasible prediction of soil temperature in the validation time interval. To evaluate the practical suitability of the simulator, the minimum number of soil temperature point measurements necessary to yield expedient model performance was determined. In the investigated case, 13-20 point observations, uniformly distributed within an 11-year timeframe, proved sufficient to yield sound model performance (root mean square error <0.9 °C, Nash-Sutcliffe efficiency >0.97). This makes the model suitable for application at sites where the information on soil temperature is discontinuous or scarce.
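
    The core of such a Newton's-law-of-cooling model is a daily relaxation of soil temperature toward air temperature. The sketch below shows only that core update with an assumed rate constant; the published model's nine fitted parameters and its snow-cover and soil-freezing logic are not reproduced here.

    ```python
    # Minimal Newton's-law-of-cooling update for daily mean soil temperature.
    # k (1/day) and the initial soil temperature are assumed values; the published
    # model adds snow-cover and soil-freezing terms that are omitted in this sketch.
    def simulate_soil_temperature(t_air_series, t_soil_init=5.0, k=0.1):
        """Relax soil temperature toward the daily mean air temperature."""
        t_soil = t_soil_init
        series = []
        for t_air in t_air_series:
            t_soil += k * (t_air - t_soil)
            series.append(round(t_soil, 2))
        return series

    # Hypothetical week of daily mean air temperatures (degrees Celsius).
    print(simulate_soil_temperature([2.0, 4.5, 7.0, 6.0, 3.0, -1.0, 0.5]))
    ```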

  8. Two-dimensional time dependent hurricane overwash and erosion modeling at Santa Rosa Island

    USGS Publications Warehouse

    McCall, R.T.; Van Theil de Vries, J. S. M.; Plant, N.G.; Van Dongeren, A. R.; Roelvink, J.A.; Thompson, D.M.; Reniers, A.J.H.M.

    2010-01-01

    A 2DH numerical model, which is capable of computing nearshore circulation and morphodynamics, including dune erosion, breaching and overwash, is used to simulate overwash caused by Hurricane Ivan (2004) on a barrier island. The model is forced using parametric wave and surge time series based on field data and large-scale numerical model results. The model predicted beach face and dune erosion reasonably well, as well as the development of washover fans. Furthermore, the model demonstrated considerable quantitative skill (upwards of 66% of variance explained, maximum bias -0.21 m) in hindcasting the post-storm shape and elevation of the subaerial barrier island when a sheet flow sediment transport limiter was applied. The prediction skill ranged between 0.66 and 0.77 in a series of sensitivity tests in which several hydraulic forcing parameters were varied. The sensitivity studies showed that variations in the incident wave height and wave period affected the entire simulated island morphology, while variations in the surge level gradient between the ocean and the back barrier bay affected the amount of deposition on the back barrier and in the back barrier bay. The model sensitivity to the sheet flow sediment transport limiter, which served as a proxy for unknown factors controlling the resistance to erosion, was significantly greater than the sensitivity to the hydraulic forcing parameters. If no limiter was applied, the simulated morphological response of the barrier island was an order of magnitude greater than the measured morphological response.
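
    The skill and bias figures quoted above are standard morphodynamic verification quantities. The sketch below computes a simple variance-explained skill score and mean bias from paired predicted and observed elevations; the arrays are synthetic placeholders, not data from the Santa Rosa Island surveys.

    ```python
    # Sketch of simple model-skill metrics for predicted vs. observed bed elevations.
    # Synthetic arrays stand in for the pre-/post-storm survey data.
    import numpy as np

    def skill_and_bias(predicted, observed):
        """Fraction of observed variance explained by the prediction, and mean bias."""
        error = predicted - observed
        skill = 1.0 - np.var(error) / np.var(observed)
        return skill, error.mean()

    rng = np.random.default_rng(2)
    observed = rng.normal(1.5, 0.8, size=200)               # hypothetical elevations (m)
    predicted = observed + rng.normal(-0.1, 0.4, size=200)  # hypothetical model output
    print(skill_and_bias(predicted, observed))
    ```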

  9. Large-Scale Comparative Phenotypic and Genomic Analyses Reveal Ecological Preferences of Shewanella Species and Identify Metabolic Pathways Conserved at the Genus Level ▿ †

    PubMed Central

    Rodrigues, Jorge L. M.; Serres, Margrethe H.; Tiedje, James M.

    2011-01-01

    The use of comparative genomics for the study of different microbiological species has increased substantially as sequence technologies become more affordable. However, efforts to fully link a genotype to its phenotype remain limited to the development of one mutant at a time. In this study, we provided a high-throughput alternative to this limiting step by coupling comparative genomics to the use of phenotype arrays for five sequenced Shewanella strains. Positive phenotypes were obtained for 441 nutrients (C, N, P, and S sources), with N-based compounds being the most utilized for all strains. Many genes and pathways predicted by genome analyses were confirmed with the comparative phenotype assay, and three degradation pathways believed to be missing in Shewanella were confirmed as missing. A number of previously unknown gene products were predicted to be parts of pathways or to have a function, expanding the number of gene targets for future genetic analyses. Ecologically, the comparative high-throughput phenotype analysis provided insights into niche specialization among the five different strains. For example, Shewanella amazonensis strain SB2B, isolated from the Amazon River delta, was capable of utilizing 60 C compounds, whereas Shewanella sp. strain W3-18-1, isolated from deep marine sediment, utilized only 25 of them. In spite of the large number of nutrient sources yielding positive results, our study indicated that except for the N sources, they were not sufficiently informative to predict growth phenotypes from increasing evolutionary distances. Our results indicate the importance of phenotypic evaluation for confirming genome predictions. This strategy will accelerate the functional discovery of genes and provide an ecological framework for microbial genome sequencing projects. PMID:21642407

  10. PyPLIF: Python-based Protein-Ligand Interaction Fingerprinting.

    PubMed

    Radifar, Muhammad; Yuniarti, Nunung; Istyastono, Enade Perdana

    2013-01-01

    Structure-based virtual screening (SBVS) methods often rely on the docking score. The docking score is an over-simplification of the actual ligand-target binding, and its capability to model and predict the actual binding is limited. Recently, interaction fingerprinting (IFP) has offered an alternative way to examine protein-ligand interactions: the docking score indicates the approximate affinity, while the IFP shows the interaction specificity. IFP is a method to convert three-dimensional (3D) protein-ligand interactions into one-dimensional (1D) bitstrings. The bitstrings are subsequently employed to compare the protein-ligand interactions predicted by the docking tool against those of the reference ligand. These comparisons produce scores that can be used to enhance the quality of SBVS campaigns. However, some IFP tools are either proprietary or depend on a proprietary library, which limits access to the tools and the development of customized IFP algorithms. Therefore, we have developed PyPLIF, a Python-based open-source tool to analyze IFPs. In this article, we describe PyPLIF and its application to enhance the quality of SBVS aimed at identifying antagonists for the estrogen α receptor (ERα). PyPLIF is freely available at http://code.google.com/p/pyplif.
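
    The bitstring comparison at the heart of IFP tools such as PyPLIF is typically a Tanimoto-style similarity between the interaction fingerprint of the reference ligand and that of each docked pose. The sketch below shows that comparison on toy bitstrings; the fingerprints are invented, not PyPLIF output.

    ```python
    # Sketch of comparing protein-ligand interaction fingerprints (1D bitstrings)
    # with a Tanimoto coefficient. The bitstrings below are invented examples.
    def tanimoto(fp_a, fp_b):
        """Tanimoto similarity between two equal-length '0'/'1' bitstrings."""
        assert len(fp_a) == len(fp_b)
        both_on = sum(a == "1" and b == "1" for a, b in zip(fp_a, fp_b))
        either_on = sum(a == "1" or b == "1" for a, b in zip(fp_a, fp_b))
        return both_on / either_on if either_on else 0.0

    reference_ifp = "1101000110"  # hypothetical fingerprint of the reference ligand
    docked_ifp = "1100000111"     # hypothetical fingerprint of a docked pose
    print(f"IFP Tanimoto similarity: {tanimoto(reference_ifp, docked_ifp):.2f}")
    ```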

  11. Non-invasive monitoring of below ground cassava storage root bulking by ground penetrating radar technology

    NASA Astrophysics Data System (ADS)

    Ruiz Vera, U. M.; Larson, T. H.; Mwakanyamale, K. E.; Grennan, A. K.; Souza, A. P.; Ort, D. R.; Balikian, R. J.

    2017-12-01

    Agriculture needs a new technological revolution to be able to meet food demands, to overcome weather and natural hazard events, and to better monitor crop productivity. Advanced technologies used in other fields have recently been applied in agriculture; for instance, imaging instrumentation has been applied to phenotype above-ground biomass and predict yield. However, the capability to monitor belowground biomass is still limited. Some existing technologies are available, for example ground penetrating radar (GPR), which has been used widely in geology and civil engineering to detect different kinds of formations under the ground without disrupting the soil. GPR technology has also been used to monitor tree roots, but not yet crop roots. One limitation is that GPR cannot discern roots smaller than 2 cm in diameter, but this makes it feasible for application to tuber crops like cassava, since the diameter at harvest is greater than 4 cm. The objective of this research is to test the feasibility of using GPR technology to monitor the growth of cassava roots by testing this technique in the greenhouse and in the field. So far, results from the greenhouse suggest that GPR can detect mature roots of cassava, and these data could be used to predict biomass.

  12. BIOFILTRATION OF VOLATILE POLLUTANTS: Fundamental Mechanisms for Improved Design, Long-term Operation, Prediction, and Implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davison,Brian H.

    2000-12-31

    Biofiltration systems can be used for treatment of volatile organic compounds (VOCs); however, the systems are poorly understood and are normally operated as "black boxes". Common operational problems associated with biofilters include fouling, deactivation, and overgrowth, all of which make them ineffective for continuous, long-term use. The objective of this investigation was to develop generic methods for long-term stable operation, in particular by using selective limitation of supplemental nutrients while maintaining high activity. As part of this effort, we have provided a deeper fundamental understanding of the important biological and transport mechanisms in biodestruction of sparingly soluble VOCs and have extended this approach and mathematical models to additional systems of high priority EM relevance--direct degradation and cometabolic degradation of priority pollutants such as BTEX and chlorinated organics. Innovative aspects of this project included development of a user-friendly two-dimensional predictive model/program for MS Windows 95/98/2000 to elucidate mass transfer and kinetic limitations in these systems, isolation of a unique microorganism capable of using sparingly soluble organic and chloroorganic VOCs as its sole carbon and energy source, and making long-term growth possible by successfully decoupling growth and degradation metabolisms in operating trickle bed bioreactors.

  13. Experimental validation of systematically designed acoustic hyperbolic meta material slab exhibiting negative refraction

    NASA Astrophysics Data System (ADS)

    Christiansen, Rasmus E.; Sigmund, Ole

    2016-09-01

    This Letter reports on the experimental validation of a two-dimensional acoustic hyperbolic metamaterial slab optimized to exhibit negative refractive behavior. The slab was designed using a topology optimization based systematic design method allowing for tailoring the refractive behavior. The experimental results confirm the predicted refractive capability as well as the predicted transmission at an interface. The study simultaneously provides an estimate of the attenuation inside the slab stemming from the boundary layer effects—insight which can be utilized in the further design of the metamaterial slabs. The capability of tailoring the refractive behavior opens possibilities for different applications. For instance, a slab exhibiting zero refraction across a wide angular range is capable of funneling acoustic energy through it, while a material exhibiting the negative refractive behavior across a wide angular range provides lensing and collimating capabilities.

  14. The NASA Severe Thunderstorm Observations and Regional Modeling (NASA STORM) Project

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Gatlin, Patrick N.; Lang, Timothy J.; Srikishen, Jayanthi; Case, Jonathan L.; Molthan, Andrew L.; Zavodsky, Bradley T.; Bailey, Jeffrey; Blakeslee, Richard J.; Jedlovec, Gary J.

    2016-01-01

    The NASA Severe Thunderstorm Observations and Regional Modeling (NASA STORM) project enhanced NASA’s severe weather research capabilities, building upon existing Earth Science expertise at NASA Marshall Space Flight Center (MSFC). During this project, MSFC extended NASA’s ground-based lightning detection capacity to include a readily deployable lightning mapping array (LMA). NASA STORM also enabled NASA’s Short-term Prediction and Research Transition (SPoRT) to add convection-allowing ensemble modeling to its portfolio of regional numerical weather prediction (NWP) capabilities. As a part of NASA STORM, MSFC developed new open-source capabilities for analyzing and displaying weather radar observations integrated from both research and operational networks. These accomplishments enabled by NASA STORM are a step towards enhancing NASA’s capabilities for studying severe weather and position the agency for any future NASA-related severe storm field campaigns.

  15. Toward a US National Air Quality Forecast Capability: Current and Planned Capabilities

    EPA Science Inventory

    As mandated by Congress, NOAA is establishing a US national air quality forecast capability. This capability is being built with EPA, to provide air quality forecast information with enough accuracy and lead-time so that people can take actions to limit harmful effects of poor a...

  16. PGT: A Statistical Approach to Prediction and Mechanism Design

    NASA Astrophysics Data System (ADS)

    Wolpert, David H.; Bono, James W.

    One of the biggest challenges facing behavioral economics is the lack of a single theoretical framework that is capable of directly utilizing all types of behavioral data. One of the biggest challenges of game theory is the lack of a framework for making predictions and designing markets in a manner that is consistent with the axioms of decision theory. An approach in which solution concepts are distribution-valued rather than set-valued (i.e. equilibrium theory) has both capabilities. We call this approach Predictive Game Theory (or PGT). This paper outlines a general Bayesian approach to PGT. It also presents one simple example to illustrate the way in which this approach differs from equilibrium approaches in both prediction and mechanism design settings.

  17. Prediction of Launch Vehicle Ignition Overpressure and Liftoff Acoustics

    NASA Technical Reports Server (NTRS)

    Casiano, Matthew

    2009-01-01

    The LAIOP (Launch Vehicle Ignition Overpressure and Liftoff Acoustic Environments) program predicts the external pressure environment generated during liftoff for a large variety of rocket types. These environments include ignition overpressure, produced by the rapid acceleration of exhaust gases during rocket-engine start transient, and launch acoustics, produced by turbulence in the rocket plume. The ignition overpressure predictions are time-based, and the launch acoustic predictions are frequency-based. Additionally, the software can predict ignition overpressure mitigation, using water-spray injection into the rocket exhaust stream, for a limited number of configurations. The framework developed for these predictions is extensive, though some options require additional relevant data and development time. Once these options are enabled, the already extensively capable code will be further enhanced. The rockets, or launch vehicles, can either be elliptically or cylindrically shaped, and up to eight strap-on structures (boosters or tanks) are allowed. Up to four engines are allowed for the core launch vehicle, which can be of two different types. Also, two different sizes of strap-on structures can be used, and two different types of booster engines are allowed. Both tabular and graphical presentations of the predicted environments at the selected locations can be reviewed by the user. The output includes summaries of rocket-engine operation, ignition overpressure time histories, and one-third octave sound pressure spectra of the predicted launch acoustics. Also, documentation is available to the user to help him or her understand the various aspects of the graphical user interface and the required input parameters.

  18. Predicting Fire Severity and Hydrogeomorphic Effects for Wildland Fire Decision Support

    NASA Astrophysics Data System (ADS)

    Hyde, K.; Woods, S. W.; Calkin, D.; Ryan, K.; Keane, R.

    2007-12-01

    The Wildland Fire Decision Support System (WFDSS) uses the Fire Spread Probability (FSPro) model to predict the spatial extent of fire, and to assess values-at-risk within probable spread zones. This information is used to support Appropriate Management Response (AMR), which involves decision making regarding fire-fighter deployment, fire suppression requirements, and identification of areas where fire may be safely permitted to take its course. Current WFDSS assessments are generally limited to a binary prediction of whether or not a fire will reach a given location and an assessment of the infrastructure which may be damaged or destroyed by fire. However, an emerging challenge is to expand the capabilities of WFDSS so that it also estimates the probable fire severity, and hence the effect on soil, vegetation and on hydrologic and geomorphic processes such as runoff and soil erosion. We present a conceptual framework within which derivatives of predictive fire modelling are used to predict impacts upon vegetation and soil, from which fire severity and probable post-fire watershed response can be inferred, before a fire actually occurs. Fire severity predictions are validated using Burned Area Reflectance Classification imagery. Recent tests indicate that satellite derived BARC images are a simple and effective means to predict post-fire erosion response based on relative vegetation disturbance. A fire severity prediction which reasonably approximates a BARC image may therefore be used to assess post-fire erosion and flood potential before fire reaches an area. This information may provide a new avenue of reliable support for fire management decisions.

  19. Helicopter Rotor Noise Prediction: Background, Current Status, and Future Direction

    NASA Technical Reports Server (NTRS)

    Brentner, Kenneth S.

    1997-01-01

    Helicopter noise prediction is increasingly important. The purpose of this viewgraph presentation is to: 1) Put into perspective the recent progress; 2) Outline current prediction capabilities; 3) Forecast direction of future prediction research; 4) Identify rotorcraft noise prediction needs. The presentation includes an historical perspective, a description of governing equations, and the current status of source noise prediction.

  20. JPL's Role in Advancing Earth System Science to Meet the Challenges of Climate and Environmental Change

    NASA Technical Reports Server (NTRS)

    Evans, Diane

    2012-01-01

    Objective 2.1.1: Improve understanding of and improve the predictive capability for changes in the ozone layer, climate forcing, and air quality associated with changes in atmospheric composition. Objective 2.1.2: Enable improved predictive capability for weather and extreme weather events. Objective 2.1.3: Quantify, understand, and predict changes in Earth's ecosystems and biogeochemical cycles, including the global carbon cycle, land cover, and biodiversity. Objective 2.1.4: Quantify the key reservoirs and fluxes in the global water cycle and assess water cycle change and water quality. Objective 2.1.5: Improve understanding of the roles of the ocean, atmosphere, land and ice in the climate system and improve predictive capability for its future evolution. Objective 2.1.6: Characterize the dynamics of Earth's surface and interior and form the scientific basis for the assessment and mitigation of natural hazards and response to rare and extreme events. Objective 2.1.7: Enable the broad use of Earth system science observations and results in decision-making activities for societal benefits.

  1. Microbial Functional Gene Diversity Predicts Groundwater Contamination and Ecosystem Functioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Zhili; Zhang, Ping; Wu, Linwei

    Contamination from anthropogenic activities has significantly impacted Earth’s biosphere. However, knowledge about how environmental contamination affects the biodiversity of groundwater microbiomes and ecosystem functioning remains very limited. Here, we used a comprehensive functional gene array to analyze groundwater microbiomes from 69 wells at the Oak Ridge Field Research Center (Oak Ridge, TN), representing a wide pH range and uranium, nitrate, and other contaminants. We hypothesized that the functional diversity of groundwater microbiomes would decrease as environmental contamination (e.g., uranium or nitrate) increased or at low or high pH, while some specific populations capable of utilizing or resistant to those contaminants would increase, and thus, such key microbial functional genes and/or populations could be used to predict groundwater contamination and ecosystem functioning. Our results indicated that functional richness/diversity decreased as uranium (but not nitrate) increased in groundwater. In addition, about 5.9% of specific key functional populations targeted by a comprehensive functional gene array (GeoChip 5) increased significantly (P < 0.05) as uranium or nitrate increased, and their changes could be used to successfully predict uranium and nitrate contamination and ecosystem functioning. This study indicates great potential for using microbial functional genes to predict environmental contamination and ecosystem functioning.

  2. Network Location-Aware Service Recommendation with Random Walk in Cyber-Physical Systems.

    PubMed

    Yin, Yuyu; Yu, Fangzheng; Xu, Yueshen; Yu, Lifeng; Mu, Jinglong

    2017-09-08

    Cyber-physical systems (CPS) have received much attention from both academia and industry. An increasing number of functions in CPS are provided in the form of services, which gives rise to an urgent task: how to recommend suitable services from the huge number of services available in CPS. In traditional service recommendation, collaborative filtering (CF) has been studied in academia and used in industry. However, several defects limit the application of CF-based methods in CPS. One is that, in the case of high data sparsity, CF-based methods are likely to generate inaccurate prediction results. In this paper, we show that mining the potential similarity relations among users or services in CPS is helpful for improving prediction accuracy. In addition, most traditional CF-based methods are only capable of using service invocation records and ignore context information, such as network location, which is a typical context in CPS. We propose a novel service recommendation method for CPS, which utilizes network location as context information and contains three prediction models using random walking. We conduct extensive experiments on two real-world datasets, and the results demonstrate the effectiveness of our proposed methods and verify that network location is indeed useful in QoS prediction.
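
    For context on the CF baseline being improved upon, the sketch below shows a plain user-based collaborative-filtering prediction of a missing QoS value from a sparse invocation matrix. It does not implement the paper's network-location-aware random-walk models, and the response-time matrix is invented.

    ```python
    # Sketch of a plain user-based CF prediction for a missing QoS value.
    # This is the baseline discussed in the abstract, not the proposed
    # location-aware method; the response-time matrix below is invented.
    import numpy as np

    qos = np.array([          # rows: users, columns: services; NaN = never invoked
        [0.8, 1.2, np.nan],
        [0.9, 1.1, 2.0],
        [0.7, np.nan, 1.8],
    ])

    def predict(matrix, user, service):
        """Similarity-weighted average of other users' observed values for a service."""
        target = matrix[user]
        weights, values = [], []
        for u in range(matrix.shape[0]):
            if u == user or np.isnan(matrix[u, service]):
                continue
            shared = ~np.isnan(target) & ~np.isnan(matrix[u])
            if shared.sum() < 2:
                continue  # not enough co-invoked services to estimate similarity
            sim = np.corrcoef(target[shared], matrix[u][shared])[0, 1]
            if sim > 0:
                weights.append(sim)
                values.append(matrix[u, service])
        return np.dot(weights, values) / sum(weights) if weights else np.nan

    print(predict(qos, user=0, service=2))
    ```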

  3. Microbial Functional Gene Diversity Predicts Groundwater Contamination and Ecosystem Functioning

    PubMed Central

    Zhang, Ping; Wu, Linwei; Rocha, Andrea M.; Shi, Zhou; Wu, Bo; Qin, Yujia; Wang, Jianjun; Yan, Qingyun; Curtis, Daniel; Ning, Daliang; Van Nostrand, Joy D.; Wu, Liyou; Watson, David B.; Adams, Michael W. W.; Alm, Eric J.; Adams, Paul D.; Arkin, Adam P.

    2018-01-01

    ABSTRACT Contamination from anthropogenic activities has significantly impacted Earth’s biosphere. However, knowledge about how environmental contamination affects the biodiversity of groundwater microbiomes and ecosystem functioning remains very limited. Here, we used a comprehensive functional gene array to analyze groundwater microbiomes from 69 wells at the Oak Ridge Field Research Center (Oak Ridge, TN), representing a wide pH range and uranium, nitrate, and other contaminants. We hypothesized that the functional diversity of groundwater microbiomes would decrease as environmental contamination (e.g., uranium or nitrate) increased or at low or high pH, while some specific populations capable of utilizing or resistant to those contaminants would increase, and thus, such key microbial functional genes and/or populations could be used to predict groundwater contamination and ecosystem functioning. Our results indicated that functional richness/diversity decreased as uranium (but not nitrate) increased in groundwater. In addition, about 5.9% of specific key functional populations targeted by a comprehensive functional gene array (GeoChip 5) increased significantly (P < 0.05) as uranium or nitrate increased, and their changes could be used to successfully predict uranium and nitrate contamination and ecosystem functioning. This study indicates great potential for using microbial functional genes to predict environmental contamination and ecosystem functioning. PMID:29463661

  4. Microbial Functional Gene Diversity Predicts Groundwater Contamination and Ecosystem Functioning

    DOE PAGES

    He, Zhili; Zhang, Ping; Wu, Linwei; ...

    2018-02-20

    Contamination from anthropogenic activities has significantly impacted Earth’s biosphere. However, knowledge about how environmental contamination affects the biodiversity of groundwater microbiomes and ecosystem functioning remains very limited. Here, we used a comprehensive functional gene array to analyze groundwater microbiomes from 69 wells at the Oak Ridge Field Research Center (Oak Ridge, TN), representing a wide pH range and uranium, nitrate, and other contaminants. We hypothesized that the functional diversity of groundwater microbiomes would decrease as environmental contamination (e.g., uranium or nitrate) increased or at low or high pH, while some specific populations capable of utilizing or resistant to those contaminants would increase, and thus, such key microbial functional genes and/or populations could be used to predict groundwater contamination and ecosystem functioning. Our results indicated that functional richness/diversity decreased as uranium (but not nitrate) increased in groundwater. In addition, about 5.9% of specific key functional populations targeted by a comprehensive functional gene array (GeoChip 5) increased significantly (P < 0.05) as uranium or nitrate increased, and their changes could be used to successfully predict uranium and nitrate contamination and ecosystem functioning. This study indicates great potential for using microbial functional genes to predict environmental contamination and ecosystem functioning.

  5. Predicting protein structures with a multiplayer online game.

    PubMed

    Cooper, Seth; Khatib, Firas; Treuille, Adrien; Barbero, Janos; Lee, Jeehyung; Beenen, Michael; Leaver-Fay, Andrew; Baker, David; Popović, Zoran; Players, Foldit

    2010-08-05

    People exert large amounts of problem-solving effort playing computer games. Simple image- and text-recognition tasks have been successfully 'crowd-sourced' through games, but it is not clear if more complex scientific problems can be solved with human-directed computing. Protein structure prediction is one such problem: locating the biologically relevant native conformation of a protein is a formidable computational challenge given the very large size of the search space. Here we describe Foldit, a multiplayer online game that engages non-scientists in solving hard prediction problems. Foldit players interact with protein structures using direct manipulation tools and user-friendly versions of algorithms from the Rosetta structure prediction methodology, while they compete and collaborate to optimize the computed energy. We show that top-ranked Foldit players excel at solving challenging structure refinement problems in which substantial backbone rearrangements are necessary to achieve the burial of hydrophobic residues. Players working collaboratively develop a rich assortment of new strategies and algorithms; unlike computational approaches, they explore not only the conformational space but also the space of possible search strategies. The integration of human visual problem-solving and strategy development capabilities with traditional computational algorithms through interactive multiplayer games is a powerful new approach to solving computationally-limited scientific problems.

  6. On-line self-learning time forward voltage prognosis for lithium-ion batteries using adaptive neuro-fuzzy inference system

    NASA Astrophysics Data System (ADS)

    Fleischer, Christian; Waag, Wladislaw; Bai, Ziou; Sauer, Dirk Uwe

    2013-12-01

    The battery management system (BMS) of a battery-electric road vehicle must ensure optimal operation of the electrochemical storage system to guarantee durability and reliability. In particular, the BMS must provide precise information about the battery's state-of-functionality, i.e. how much dis-/charging power the battery can accept in its current state and condition, while at the same time preventing it from operating outside its safe operating area. These critical limits have to be calculated in a predictive manner and serve as a significant input to the supervising vehicle energy management (VEM). The VEM must provide enough power to the vehicle's drivetrain for certain tasks, especially in critical driving situations. Therefore, this paper describes a new approach for state-of-available-power estimation with respect to lowest/highest cell voltage prediction using an adaptive neuro-fuzzy inference system (ANFIS). The voltage estimated for a given time frame in the future is directly compared with the actual voltage, verifying the effectiveness of the approach with a relative voltage prediction error of less than 1%. Moreover, the real-time operating capability of the proposed algorithm was verified on a battery test bench while running on a real-time system performing voltage prediction.

  7. On improvement to the Shock Propagation Model (SPM) applied to interplanetary shock transit time forecasting

    NASA Astrophysics Data System (ADS)

    Li, H. J.; Wei, F. S.; Feng, X. S.; Xie, Y. Q.

    2008-09-01

    This paper investigates methods to improve the Shock Arrival Time (SAT) predictions of the original Shock Propagation Model (SPM). According to the classical blast wave theory adopted in the SPM, the shock propagation speed is determined by the total energy of the original explosion together with the background solar wind speed. Noting that there exists an intrinsic limit to the transit times computed by the SPM for a specified ambient solar wind, we present a statistical analysis of the forecasting capability of the SPM using this intrinsic property. Two facts about the SPM are found: (1) the error in shock energy estimation is not the only cause of the prediction errors, so the accuracy of the SPM should not be expected to improve drastically with an exact shock energy input; and (2) there are systematic differences in prediction results both for strong shocks propagating into a slow ambient solar wind and for weak shocks propagating into a fast medium. The statistical analyses reveal physical details of shock propagation and thus clearly point out directions for future improvement of the SPM. A simple modification is presented here, showing that there is room for improvement and thus that the original SPM is worthy of further development.
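
    For orientation, the classical point-explosion (Sedov-Taylor) similarity scaling that blast-wave models of this family build on relates shock radius and speed to the explosion energy and the ambient density; this is the textbook form only, not the SPM's full formulation, which also accounts for the background solar wind speed:

    $$ R(t) \simeq \xi_0 \left(\frac{E\,t^{2}}{\rho_0}\right)^{1/5}, \qquad V_{sh}(t) = \frac{dR}{dt} = \frac{2}{5}\,\frac{R(t)}{t}, $$

    where $E$ is the shock energy, $\rho_0$ the ambient density, and $\xi_0$ a dimensionless constant of order unity.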

  8. Incorporation of satellite remote sensing pan-sharpened imagery into digital soil prediction and mapping models to characterize soil property variability in small agricultural fields

    NASA Astrophysics Data System (ADS)

    Xu, Yiming; Smith, Scot E.; Grunwald, Sabine; Abd-Elrahman, Amr; Wani, Suhas P.

    2017-01-01

    Soil prediction models based on spectral indices from some multispectral images are too coarse to characterize the spatial pattern of soil properties in small and heterogeneous agricultural lands. Image pan-sharpening has seldom been utilized in Digital Soil Mapping research before. This research aimed to analyze the effects of pan-sharpened (PAN) remote sensing spectral indices on soil prediction models in smallholder farm settings. This research fused the panchromatic band and multispectral (MS) bands of WorldView-2, GeoEye-1, and Landsat 8 images in a village in Southern India by the Brovey, Gram-Schmidt, and Intensity-Hue-Saturation methods. Random Forest was utilized to develop soil total nitrogen (TN) and soil exchangeable potassium (Kex) prediction models by incorporating multiple spectral indices from the PAN and MS images. Overall, our results showed that PAN remote sensing spectral indices have spectral characteristics similar to those of MS remote sensing spectral indices with respect to soil TN and Kex. No soil prediction model incorporating a specific type of pan-sharpened spectral index always had the strongest prediction capability for soil TN and Kex. The incorporation of pan-sharpened remote sensing spectral data not only increased the spatial resolution of the soil prediction maps, but also enhanced the prediction accuracy of the soil prediction models. Small farms with a limited footprint, fragmented ownership, and diverse crop cycles should benefit greatly from pan-sharpened high spatial resolution imagery for soil property mapping. Our results show that multiple high and medium resolution images can be used to map soil properties, suggesting the possibility of an improvement in the maps' update frequency. Additionally, the results should benefit the large agricultural community through the reduction of routine soil sampling cost and improved prediction accuracy.
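
    A minimal sketch of a Brovey-style pan-sharpening step followed by a Random Forest regression of a soil property on spectral indices. Band order, the NDVI-style index, array sizes, and the regression setup are illustrative assumptions, not the paper's processing chain or data.

    ```python
    # Hedged sketch: Brovey pan-sharpening plus a random-forest soil-property regression.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def brovey_pansharpen(ms, pan, eps=1e-6):
        """ms: (bands, H, W) multispectral resampled to the pan grid; pan: (H, W)."""
        ratio = pan / (ms.sum(axis=0) + eps)
        return ms * ratio                          # each band scaled by the pan ratio

    rng = np.random.default_rng(1)
    ms = rng.uniform(0.0, 1.0, (4, 32, 32))        # e.g. blue, green, red, NIR (synthetic)
    pan = rng.uniform(0.0, 1.0, (32, 32))
    sharp = brovey_pansharpen(ms, pan)

    red, nir = sharp[2], sharp[3]
    ndvi = (nir - red) / (nir + red + 1e-6)        # one example spectral index

    # Regress a synthetic soil property (e.g. total nitrogen) on per-pixel indices.
    X = np.column_stack([ndvi.ravel(), red.ravel(), nir.ravel()])
    y = rng.normal(0.1, 0.02, X.shape[0])          # placeholder soil TN values
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    print("R^2 on the synthetic training data:", round(model.score(X, y), 3))
    ```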

  9. Centaur Standard Shroud (CSS) static limit load structural tests

    NASA Technical Reports Server (NTRS)

    Eastwood, C.

    1975-01-01

    The structural capabilities of the jettisonable metal shroud were tested and the interaction of the shroud with the Centaur stage was evaluated. A flight-configured shroud and the assemblies of the associated Centaur stage were tested for applied axial and shear loads to flight limit values. The tests included various thermal, pressure, and load conditions to verify localized strength capabilities, to evaluate subsystem performance, and to determine the aging effect on insulation system properties. The tests series verified the strength capabilities of the shroud and of all associated flight assembles. Shroud deflections were shown to remain within allowable limits so long as load sharing members were connected between the shroud and the Centaur stage.

  10. Development of JSDF Cyber Warfare Defense Critical Capability

    DTIC Science & Technology

    2010-03-01

    attack identification capability is essential for a nation to defend her vital infrastructures against offensive cyber warfare. Although the necessity of...cyber-attack identification capability is quite clear, Japan's preparation against cyber warfare is quite limited.

  11. Optic Nerve Sheath Diameter: Translating a Terrestrial Focused Technique Into a Clinical Monitoring Tool for Space Flight

    NASA Technical Reports Server (NTRS)

    Mason, Sara S.; Foy, Millennia; Sargsyan, Ashot; Garcia, Kathleen; Wear, Mary L.; Bedi, Deepak; Ernst, Randy; Van Baalen, Mary

    2014-01-01

    Emergency medicine physicians recently adopted the use of ultrasonography to quickly measure optic nerve sheath diameter (ONSD) as a correlate of increased intracranial pressure. The NASA Space and Clinical Operations Division has been using ground and on-orbit ultrasound capabilities since 2009 to consider this anatomical measure as a proxy for intracranial pressure in the microgravity environment. In the terrestrial emergency room population, an ONSD greater than 0.59 cm is considered highly predictive of elevated intracranial pressure. However, this cut-off limit is not applicable to the spaceflight setting since over 50% of US Operating Segment (USOS) astronauts have an ONSD greater than 0.60 cm even before missions. Crew Surgeon clinical decision-making is complicated by the fact that many astronauts have a history of previous spaceflights. Data will be presented characterizing the distribution of baseline ONSD in the astronaut corps, longitudinal trends in-flight, and the predictive power of this measure related to increased intracranial pressure outcomes.

  12. Finite Element Simulation of a Space Shuttle Solid Rocket Booster Aft Skirt Splashdown Using an Arbitrary Lagrangian-Eulerian Approach

    NASA Astrophysics Data System (ADS)

    Melis, Matthew E.

    2003-01-01

    Explicit finite element techniques employing an Arbitrary Lagrangian-Eulerian (ALE) methodology, within the transient dynamic code LS-DYNA, are used to predict splashdown loads on a proposed replacement/upgrade of the hydrazine tanks on the thrust vector control system housed within the aft skirt of a Space Shuttle Solid Rocket Booster. Two preliminary studies are performed prior to the full aft skirt analysis: An analysis of the proposed tank impacting water without supporting aft skirt structure, and an analysis of space capsule water drop tests conducted at NASA's Langley Research Center. Results from the preliminary studies provide confidence that useful predictions can be made by applying the ALE methodology to a detailed analysis of a 26-degree section of the skirt with proposed tank attached. Results for all three studies are presented and compared to limited experimental data. The challenges of using the LS-DYNA ALE capability for this type of analysis are discussed.

  13. Predictive Rotation Profile Control for the DIII-D Tokamak

    NASA Astrophysics Data System (ADS)

    Wehner, W. P.; Schuster, E.; Boyer, M. D.; Walker, M. L.; Humphreys, D. A.

    2017-10-01

    Control-oriented modeling and model-based control of the rotation profile are employed to build a suitable control capability for aiding rotation-related physics studies at DIII-D. To obtain a control-oriented model, a simplified version of the momentum balance equation is combined with empirical representations of the momentum sources. The control approach is rooted in a Model Predictive Control (MPC) framework to regulate the rotation profile while satisfying constraints associated with the desired plasma stored energy and/or βN limit. Simple modifications allow for alternative control objectives, such as maximizing the plasma rotation while maintaining a specified input torque. Because the MPC approach can explicitly incorporate various types of constraints, this approach is well suited to a variety of control objectives, and therefore serves as a valuable tool for experimental physics studies. Closed-loop TRANSP simulations are presented to demonstrate the effectiveness of the control approach. Supported by the US DOE under DE-SC0010661 and DE-FC02-04ER54698.
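
    As a rough sketch of the receding-horizon idea described above (a generic formulation, not the authors' exact controller; $N_p$ is the prediction horizon, $\omega$ the rotation profile, $u$ the torque inputs, $W$ the stored energy, and $Q$, $R$ weighting matrices):

    $$
    \min_{u_k,\ldots,u_{k+N_p-1}} \; \sum_{j=1}^{N_p} \big\| \omega_{k+j} - \omega^{\mathrm{ref}} \big\|_Q^2 \;+\; \sum_{j=0}^{N_p-1} \big\| \Delta u_{k+j} \big\|_R^2
    \quad \text{s.t.} \quad \omega_{k+j+1} = f(\omega_{k+j}, u_{k+j}), \quad W_{k+j} \le W_{\max}, \quad \beta_N(k+j) \le \beta_N^{\mathrm{lim}}.
    $$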

  14. Constitutive Equation with Varying Parameters for Superplastic Flow Behavior

    NASA Astrophysics Data System (ADS)

    Guan, Zhiping; Ren, Mingwen; Jia, Hongjie; Zhao, Po; Ma, Pinkui

    2014-03-01

    In this study, constitutive equations for superplastic materials with extra-large elongation were investigated through mechanical analysis. From the viewpoint of phenomenology, some traditional empirical constitutive relations were first standardized by restricting strain paths and parameter conditions, and the coefficients in these relations were given strict new mechanical definitions. Subsequently, a new, general constitutive equation with varying parameters was theoretically deduced based on the general mechanical equation of state. Superplastic tension test data for a Zn-5%Al alloy at 340 °C under different strain rates, velocities, and loads were employed to build the new constitutive equation and examine its validity. The analysis indicated that the constitutive equation with varying parameters could characterize superplastic flow behavior in practical superplastic forming with high prediction accuracy and without any restriction on strain path or deformation condition, making it of considerable industrial and scientific interest. In contrast, the empirical equations have low prediction capability because of their constant parameters, and poor applicability because they are restricted to particular strain paths or parameter conditions under strict phenomenological assumptions.
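
    For context only (a standard relation from the superplasticity literature, not the varying-parameter equation derived in the paper), the classical constant-parameter empirical law reads

    $$ \sigma = K\,\dot{\varepsilon}^{\,m}, $$

    where $\sigma$ is the flow stress, $\dot{\varepsilon}$ the strain rate, $m$ the strain-rate sensitivity index, and $K$ a material constant; the paper's equation instead lets such coefficients vary with the deformation state.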

  16. Reply by the Authors to C. K. W. Tam

    NASA Technical Reports Server (NTRS)

    Morris, Philip J.; Farassat, F.

    2002-01-01

    The prediction of noise generation and radiation by turbulence has been the subject of continuous research for over fifty years. The essential problem is how to model the noise sources when one's knowledge of the detailed space-time properties of the turbulence is limited. We attempted to provide a comparison of models based on acoustic analogies and recent alternative models. Our goal was to demonstrate that the predictive capabilities of any model are based on the choice of the turbulence property that is modeled as a source of noise. Our general definition of an acoustic analogy is a rearrangement of the equations of motion into the form L(u) = Q, where L is a linear operator that reduces to an acoustic propagation operator outside a region upsilon; u is a variable that reduces to acoustic pressure (or a related linear acoustic variable) outside upsilon; and Q is a source term that can be meaningfully estimated without knowing u and tends to zero outside upsilon.
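
    As a concrete, well-known instance of the L(u) = Q form referred to above, the classical Lighthill acoustic analogy rearranges the exact equations of motion into a wave equation driven by a quadrupole source:

    $$ \frac{\partial^{2} \rho'}{\partial t^{2}} - c_0^{2}\,\nabla^{2} \rho' = \frac{\partial^{2} T_{ij}}{\partial x_i\, \partial x_j}, \qquad T_{ij} = \rho u_i u_j + \left(p' - c_0^{2}\rho'\right)\delta_{ij} - \tau_{ij}, $$

    where $\rho'$ and $p'$ are the density and pressure perturbations, $c_0$ the ambient sound speed, and $T_{ij}$ the Lighthill stress tensor; the choice of which part of the source term is modeled from the turbulence is exactly the modeling decision discussed above.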

  17. Energy in a Planetary Context

    NASA Technical Reports Server (NTRS)

    Hoehler, Tori M.

    2017-01-01

    The potential present day habitability of solar system bodies beyond Earth is limited to subsurface environments, where the availability of energy in biologically useful form is a paramount consideration. Energy availability is commonly quantified in terms of molar Gibbs energy changes for metabolisms of interest, but this can provide an incomplete and even misleading picture. A second aspect of life's requirement for energy - the rate of delivery, or power - strongly influences habitability, biomass abundance, growth rates, and, ultimately, rates of evolution. We are developing an approach to quantify metabolic power, using a cell-scale reactive transport model in which physical and chemical environmental parameters are varied. Simultaneously, we evaluate cell-specific energy flux requirements and their dependence on environmental "extremes". Comparison of metabolic power supply and demand provides a constraint on how biomass abundance varies across a range of environmental parameters, and thereby a prediction of the relative habitability of different environments. We are evaluating the predictive capability of this approach through comparison to observed distributions of microbial abundance in a range of subsurface (predominantly serpentinizing) systems.
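
    As a brief worked illustration of the first quantity mentioned, the molar Gibbs energy change of a candidate metabolism under in-situ conditions is commonly evaluated as

    $$ \Delta G_r = \Delta G_r^{\circ} + R\,T \ln Q_r, $$

    where $\Delta G_r^{\circ}$ is the standard-state Gibbs energy of reaction, $R$ the gas constant, $T$ the temperature, and $Q_r$ the activity quotient; the point of the abstract is that this energy yield must be combined with a delivery rate (power) before habitability can be assessed.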

  18. Extending BPM Environments of Your Choice with Performance Related Decision Support

    NASA Astrophysics Data System (ADS)

    Fritzsche, Mathias; Picht, Michael; Gilani, Wasif; Spence, Ivor; Brown, John; Kilpatrick, Peter

    What-if Simulations have been identified as one solution for business performance related decision support. Such support is especially useful in cases where it can be automatically generated out of Business Process Management (BPM) Environments from the existing business process models and performance parameters monitored from the executed business process instances. Currently, some of the available BPM Environments offer basic-level performance prediction capabilities. However, these functionalities are normally too limited to be generally useful for performance related decision support at business process level. In this paper, an approach is presented which allows the non-intrusive integration of sophisticated tooling for what-if simulations, analytic performance prediction tools, process optimizations or a combination of such solutions into already existing BPM environments. The approach abstracts from process modelling techniques which enable automatic decision support spanning processes across numerous BPM Environments. For instance, this enables end-to-end decision support for composite processes modelled with the Business Process Modelling Notation (BPMN) on top of existing Enterprise Resource Planning (ERP) processes modelled with proprietary languages.

  19. Contact and Impact Dynamic Modeling Capabilities of LS-DYNA for Fluid-Structure Interaction Problems

    DTIC Science & Technology

    2010-12-02

    rigid sphere in a vertical water entry,” Applied Ocean Research, 13(1), pp. 43-48. Monaghan, J.J., 1994. “Simulating free surface flows with SPH...” The kinematic free surface condition was used to determine the intersection between the free surface and the body in the outer flow domain... and the results were compared with analytical and numerical predictions. The predictive capability of ALE and SPH features of LS-DYNA for simulation

  20. Real-time scene and signature generation for ladar and imaging sensors

    NASA Astrophysics Data System (ADS)

    Swierkowski, Leszek; Christie, Chad L.; Antanovskii, Leonid; Gouthas, Efthimios

    2014-05-01

    This paper describes development of two key functionalities within the VIRSuite scene simulation program, broadening its scene generation capabilities and increasing accuracy of thermal signatures. Firstly, a new LADAR scene generation module has been designed. It is capable of simulating range imagery for Geiger mode LADAR, in addition to the already existing functionality for linear mode systems. Furthermore, a new 3D heat diffusion solver has been developed within the VIRSuite signature prediction module. It is capable of calculating the temperature distribution in complex three-dimensional objects for enhanced dynamic prediction of thermal signatures. With these enhancements, VIRSuite is now a robust tool for conducting dynamic simulation for missiles with multi-mode seekers.
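
    For reference (a generic statement of the governing physics, not VIRSuite's specific discretization), a 3D heat diffusion solver of the kind described integrates the transient conduction equation

    $$ \rho c_p\, \frac{\partial T}{\partial t} = \nabla \cdot \left(k\, \nabla T\right) + q, $$

    where $T$ is temperature, $\rho$ density, $c_p$ specific heat, $k$ thermal conductivity, and $q$ a volumetric source term; the resulting temperature field over the object's geometry is what feeds the thermal signature prediction.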

  1. Meta-analysis of gene expression profiles associated with histological classification and survival in 829 ovarian cancer samples.

    PubMed

    Fekete, Tibor; Rásó, Erzsébet; Pete, Imre; Tegze, Bálint; Liko, István; Munkácsy, Gyöngyi; Sipos, Norbert; Rigó, János; Györffy, Balázs

    2012-07-01

    Transcriptomic analysis of global gene expression in ovarian carcinoma can identify dysregulated genes capable of serving as molecular markers for histology subtypes and survival. The aim of our study was to validate previous candidate signatures in an independent setting and to identify single genes capable of serving as biomarkers for ovarian cancer progression. As several datasets are available in the GEO today, we were able to perform a true meta-analysis. First, 829 samples (11 datasets) were downloaded, and the predictive power of 16 previously published gene sets was assessed. Of these, eight were capable of discriminating histology subtypes, and none was capable of predicting survival. To overcome the differences in previous studies, we used the 829 samples to identify new predictors. Then, we collected 64 ovarian cancer samples (median relapse-free survival 24.5 months) and performed TaqMan Real Time Polymerase Chain Reaction (RT-PCR) analysis for the best 40 genes associated with histology subtypes and survival. Over 90% of subtype-associated genes were confirmed. Overall survival was effectively predicted by hormone receptors (PGR and ESR2) and by TSPAN8. Relapse-free survival was predicted by MAPT and SNCG. In summary, we successfully validated several gene sets in a meta-analysis in large datasets of ovarian samples. Additionally, several individual genes identified were validated in a clinical cohort. Copyright © 2011 UICC.

  2. Temporary Network Development Capability in High Velocity Environments: A Dynamic Capability Study of Disaster Relief Organizations

    ERIC Educational Resources Information Center

    O'Brien, William Ross

    2010-01-01

    Organizations involved in crisis relief after a natural disaster face the multifaceted challenge of significantly changing needs of their various stakeholders; limited, ambiguous, and even incorrect information; and highly compressed time limitations. Yet the performance of these organizations in these high velocity environments is critical for the…

  3. NetMHCIIpan-2.0 - Improved pan-specific HLA-DR predictions using a novel concurrent alignment and weight optimization training procedure.

    PubMed

    Nielsen, Morten; Justesen, Sune; Lund, Ole; Lundegaard, Claus; Buus, Søren

    2010-11-13

    Binding of peptides to Major Histocompatibility class II (MHC-II) molecules plays a central role in governing responses of the adaptive immune system. MHC-II molecules sample peptides from the extracellular space, allowing the immune system to detect the presence of foreign microbes from this compartment. Predicting which peptides bind to an MHC-II molecule is therefore of pivotal importance for understanding the immune response and its effect on host-pathogen interactions. The experimental cost associated with characterizing the binding motif of an MHC-II molecule is significant, and large efforts have therefore been placed in developing accurate computer methods capable of predicting this binding event. Prediction of peptide binding to MHC-II is complicated by the open binding cleft of the MHC-II molecule, allowing binding of peptides extending out of the binding groove. Moreover, the genes encoding the MHC molecules are immensely diverse, leading to a large set of different MHC molecules, each potentially binding a unique set of peptides. Characterizing each MHC-II molecule using peptide-screening binding assays is hence not a viable option. Here, we present an MHC-II binding prediction algorithm aiming at dealing with these challenges. The method is a pan-specific version of the earlier published allele-specific NN-align algorithm and does not require any pre-alignment of the input data. This allows the method to benefit also from information from alleles covered by limited binding data. The method is evaluated on a large and diverse set of benchmark data, and is shown to significantly out-perform state-of-the-art MHC-II prediction methods. In particular, the method is found to boost the performance for alleles characterized by limited binding data, where conventional allele-specific methods tend to achieve poor prediction accuracy. The method thus shows great potential for efficiently boosting the accuracy of MHC-II binding prediction, as accurate predictions can be obtained for novel alleles at highly reduced experimental cost. Pan-specific binding predictions can be obtained for all alleles with known protein sequence, and the method can benefit by including training data from alleles for which only a few binders are known. The method and benchmark data are available at http://www.cbs.dtu.dk/services/NetMHCIIpan-2.0.

  4. In silico predictions of gastrointestinal drug absorption in pharmaceutical product development: application of the mechanistic absorption model GI-Sim.

    PubMed

    Sjögren, Erik; Westergren, Jan; Grant, Iain; Hanisch, Gunilla; Lindfors, Lennart; Lennernäs, Hans; Abrahamsson, Bertil; Tannergren, Christer

    2013-07-16

    Oral drug delivery is the predominant administration route for a major part of the pharmaceutical products used worldwide. Further understanding and improvement of gastrointestinal drug absorption predictions is currently a highly prioritized area of research within the pharmaceutical industry. The fraction absorbed (fabs) of an oral dose after administration of a solid dosage form is a key parameter in the estimation of the in vivo performance of an orally administrated drug formulation. This study discloses an evaluation of the predictive performance of the mechanistic physiologically based absorption model GI-Sim. GI-Sim deploys a compartmental gastrointestinal absorption and transit model as well as algorithms describing permeability, dissolution rate, salt effects, partitioning into micelles, particle and micelle drifting in the aqueous boundary layer, particle growth and amorphous or crystalline precipitation. Twelve APIs with reported or expected absorption limitations in humans, due to permeability, dissolution and/or solubility, were investigated. Predictions of the intestinal absorption for different doses and formulations were performed based on physicochemical and biopharmaceutical properties, such as solubility in buffer and simulated intestinal fluid, molecular weight, pK(a), diffusivity and molecule density, measured or estimated human effective permeability and particle size distribution. The performance of GI-Sim was evaluated by comparing predicted plasma concentration-time profiles along with oral pharmacokinetic parameters originating from clinical studies in healthy individuals. The capability of GI-Sim to correctly predict impact of dose and particle size as well as the in vivo performance of nanoformulations was also investigated. The overall predictive performance of GI-Sim was good as >95% of the predicted pharmacokinetic parameters (C(max) and AUC) were within a 2-fold deviation from the clinical observations and the predicted plasma AUC was within one standard deviation of the observed mean plasma AUC in 74% of the simulations. GI-Sim was also able to correctly capture the trends in dose- and particle size dependent absorption for the study drugs with solubility and dissolution limited absorption, respectively. In addition, GI-Sim was also shown to be able to predict the increase in absorption and plasma exposure achieved with nanoformulations. Based on the results, the performance of GI-Sim was shown to be suitable for early risk assessment as well as to guide decision making in pharmaceutical formulation development. Copyright © 2013 Elsevier B.V. All rights reserved.
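
    A minimal sketch of the evaluation criterion quoted above, i.e. the fraction of predicted pharmacokinetic parameters falling within a 2-fold deviation of the clinical observations; the numbers are placeholders, not study data.

    ```python
    # Hedged sketch: fold-deviation check of predicted vs. observed PK parameters.
    import numpy as np

    observed  = np.array([12.0, 3.4, 150.0, 0.8])   # e.g. Cmax or AUC per drug (placeholders)
    predicted = np.array([10.5, 5.9, 160.0, 1.5])

    fold_dev = np.maximum(predicted / observed, observed / predicted)
    within_2fold = fold_dev <= 2.0
    print("fold deviations:", np.round(fold_dev, 2))
    print("fraction within 2-fold:", within_2fold.mean())
    ```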

  5. ROTONET Primer

    NASA Technical Reports Server (NTRS)

    Prichard, Devon S.

    1996-01-01

    This document provides a brief overview of use of the ROTONET rotorcraft system noise prediction capability within the Aircraft Noise Program (ANOPP). Reviews are given on rotorcraft noise, the state-of-the-art of system noise prediction, and methods for using the various ROTONET prediction modules.

  6. Observational breakthroughs lead the way to improved hydrological predictions

    NASA Astrophysics Data System (ADS)

    Lettenmaier, Dennis P.

    2017-04-01

    New data sources are revolutionizing the hydrological sciences. The capabilities of hydrological models have advanced greatly over the last several decades, but until recently model capabilities have outstripped the spatial resolution and accuracy of model forcings (atmospheric variables at the land surface) and the hydrologic state variables (e.g., soil moisture; snow water equivalent) that the models predict. This has begun to change, as shown in two examples here: soil moisture and drought evolution over Africa as predicted by a hydrology model forced with satellite-derived precipitation, and observations of snow water equivalent at very high resolution over a river basin in California's Sierra Nevada.

  7. Independent data validation of an in vitro method for ...

    EPA Pesticide Factsheets

    In vitro bioaccessibility assays (IVBA) estimate arsenic (As) relative bioavailability (RBA) in contaminated soils to improve the accuracy of site-specific human exposure assessments and risk calculations. For an IVBA assay to gain acceptance for use in risk assessment, it must be shown to reliably predict in vivo RBA that is determined in an established animal model. Previous studies correlating soil As IVBA with RBA have been limited by the use of few soil types as the source of As. Furthermore, the predictive value of As IVBA assays has not been validated using an independent set of As-contaminated soils. Therefore, the current study was undertaken to develop a robust linear model to predict As RBA in mice using an IVBA assay and to independently validate the predictive capability of this assay using a unique set of As-contaminated soils. Thirty-six As-contaminated soils varying in soil type, As contaminant source, and As concentration were included in this study, with 27 soils used for initial model development and nine soils used for independent model validation. The initial model reliably predicted As RBA values in the independent data set, with a mean As RBA prediction error of 5.3% (range 2.4 to 8.4%). Following validation, all 36 soils were used for final model development, resulting in a linear model with the equation: RBA = 0.59 * IVBA + 9.8 and R2 of 0.78. The in vivo-in vitro correlation and independent data validation presented here provide
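
    A small sketch applying the reported regression (RBA = 0.59 * IVBA + 9.8, with R^2 = 0.78) to hypothetical IVBA measurements; only the coefficients come from the abstract, the example inputs do not.

    ```python
    # Hedged sketch: apply the reported IVBA -> RBA linear model to example values.
    def predict_rba(ivba_percent: float) -> float:
        """Predict relative bioavailability (%) of soil As from in vitro bioaccessibility (%)."""
        return 0.59 * ivba_percent + 9.8

    for ivba in (20.0, 50.0, 80.0):                  # hypothetical IVBA measurements
        print(f"IVBA {ivba:4.1f}%  ->  predicted RBA {predict_rba(ivba):4.1f}%")
    ```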

  8. Earthquake prediction in seismogenic areas of the Iberian Peninsula based on computational intelligence

    NASA Astrophysics Data System (ADS)

    Morales-Esteban, A.; Martínez-Álvarez, F.; Reyes, J.

    2013-05-01

    A method to predict earthquakes in two of the seismogenic areas of the Iberian Peninsula, based on Artificial Neural Networks (ANNs), is presented in this paper. ANNs have been widely used in many fields, but only very few and very recent studies have been conducted on earthquake prediction. Two kinds of predictions are provided in this study: (a) the probability that an earthquake of magnitude equal to or larger than a preset threshold will occur within the next 7 days; and (b) the probability that an earthquake within a limited magnitude interval will occur during the next 7 days. First, the physical fundamentals related to earthquake occurrence are explained. Second, the mathematical model underlying ANNs is explained and the configuration chosen is justified. Then, the ANNs have been trained in both areas: the Alborán Sea and the Western Azores-Gibraltar fault. Later, the ANNs have been tested in both areas for a period of time immediately subsequent to the training period. Statistical tests are provided showing meaningful results. Finally, the ANNs were compared to other well-known classifiers, showing quantitatively and qualitatively better results. The authors expect that the results obtained will encourage researchers to conduct further research on this topic. Highlights: development of a system capable of predicting earthquakes for the next seven days; ANNs shown to be particularly well suited to earthquake prediction; use of geophysical information modeling the soil behavior as the ANNs' input data; successful analysis of one region with large seismic activity.
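
    A hedged sketch of the type of classifier described (a neural network predicting whether an earthquake above a magnitude threshold occurs within the next 7 days). The features, labels, and network configuration are hypothetical, not the paper's inputs or settings.

    ```python
    # Hedged sketch: ANN classifier for 7-day earthquake-exceedance probability.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)
    X = rng.normal(size=(1000, 5))        # hypothetical seismicity features per time window
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000)) > 1.0   # exceedance label

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    ann = MLPClassifier(hidden_layer_sizes=(15,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
    proba_7day = ann.predict_proba(X_te)[:, 1]    # probability of an event in the next 7 days
    print("test accuracy:", round(ann.score(X_te, y_te), 3))
    ```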

  9. Modeling mechanical restriction differences between car and heavy truck in two-lane cellular automata traffic flow model

    NASA Astrophysics Data System (ADS)

    Li, Xin; Li, Xingang; Xiao, Yao; Jia, Bin

    2016-06-01

    Real traffic is heterogeneous, with cars and trucks. Due to mechanical restrictions, the car and the truck have different limited deceleration capabilities, which are important factors in safe driving. This paper extends the single-lane safety driving (SD) model with limited deceleration capability to a two-lane SD model, in which car-truck heterogeneous traffic is considered. A car has a larger limited deceleration capability, while a heavy truck has a smaller limited deceleration capability as a result of loaded goods. The safety driving conditions therefore differ as the types of the following and the leading vehicles vary. In order to eliminate the well-known plug in heterogeneous two-lane traffic, it is assumed that a heavy truck performs active deceleration when it perceives a forming plug. The lane-changing decisions are also determined by the safety driving conditions. The fundamental diagram, spatiotemporal diagram, and lane-changing frequency were investigated to show the effect of mechanical restrictions on heterogeneous traffic flow. It was shown that there would still be three traffic phases under heterogeneous traffic conditions; the active deceleration of the heavy truck could well eliminate the plug; the lane-changing frequency was low in synchronized flow; the flow and velocity would decrease as the proportion of heavy trucks grows or the limited deceleration capability of heavy trucks drops; and the flow could be improved with lane control measures.
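
    A hedged sketch of a classical single-lane Nagel-Schreckenberg cellular automaton, shown only to make the update-rule style concrete; the paper's two-lane SD model additionally enforces limited, vehicle-dependent deceleration, safety-driving conditions, lane changing, and the trucks' active deceleration, none of which are reproduced here.

    ```python
    # Hedged sketch: classical Nagel-Schreckenberg CA on a periodic single-lane road.
    import numpy as np

    L_CELLS, N_CARS, V_MAX, P_BRAKE, STEPS = 100, 20, 5, 0.3, 50
    rng = np.random.default_rng(3)

    pos = np.sort(rng.choice(L_CELLS, N_CARS, replace=False))   # distinct initial cells
    vel = rng.integers(0, V_MAX + 1, N_CARS)

    for _ in range(STEPS):
        gaps = (np.roll(pos, -1) - pos - 1) % L_CELLS    # empty cells ahead (periodic road)
        vel = np.minimum(vel + 1, V_MAX)                 # 1) accelerate
        vel = np.minimum(vel, gaps)                      # 2) brake to avoid collision
        vel = np.where(rng.random(N_CARS) < P_BRAKE,     # 3) random slowdown
                       np.maximum(vel - 1, 0), vel)
        pos = (pos + vel) % L_CELLS                      # 4) move

    print("mean speed after", STEPS, "steps:", vel.mean())
    ```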

  10. Meta-tips for lab-on-fiber optrodes

    NASA Astrophysics Data System (ADS)

    Principe, M.; Consales, M.; Micco, A.; Crescitelli, A.; Castaldi, G.; Esposito, E.; La Ferrara, V.; Cutolo, A.; Galdi, V.; Cusano, A.

    2016-05-01

    We realize the first optical-fiber “meta-tip”, which integrates a metasurface on the tip of an optical fiber. In our proposed configuration, a Babinet-inverted plasmonic metasurface is fabricated by patterning (via focused ion beam) an array of rectangular aperture nanoantennas in a thin gold film. By spatially modulating the nanoantenna size, we tune their resonances so as to impress abrupt, arbitrary phase variations on the transmitted wavefront. As a proof of principle, we fabricate and characterize several prototypes implementing near-infrared beam steering at various angles. We also explore the limiting case in which surface waves are excited, and the device's capability to work as a refractive-index sensor. Notably, its sensitivity exceeds that of the corresponding gradient-free plasmonic array, thus paving the way to the use of metasurfaces for label-free chemical and biological sensing. Our experimental results, in fairly good agreement with numerical predictions, demonstrate the practical feasibility of the meta-tip concept, and set the stage for the integration of metasurfaces, and their exceptional capabilities to manipulate light, in fiber-optics technological platforms, within the emerging “lab-on-fiber” paradigm.

  11. Prediction of Agglomeration, Fouling, and Corrosion Tendency of Fuels in CFB Co-Combustion

    NASA Astrophysics Data System (ADS)

    Barišć, Vesna; Zabetta, Edgardo Coda; Sarkki, Juha

    Prediction of the agglomeration, fouling, and corrosion tendency of fuels is essential to the design of any CFB boiler. Over the years, tools have been successfully developed at Foster Wheeler to help with such predictions for the most common commercial fuels. However, changes in the fuel market and the ever-growing demand for co-combustion capabilities pose a continuous need for development. This paper presents results from recently upgraded models used at Foster Wheeler to predict the agglomeration, fouling, and corrosion tendency of a variety of fuels and mixtures. The models, the subject of this paper, are semi-empirical computer tools that combine the theoretical basics of agglomeration/fouling/corrosion phenomena with empirical correlations. Correlations are derived from Foster Wheeler's experience in fluidized beds, including nearly 10,000 fuel samples and over 1,000 tests in about 150 CFB units. In these models, fuels are evaluated based on their classification and their chemical and physical properties from standard analyses (proximate, ultimate, fuel ash composition, etc.) alongside Foster Wheeler's own characterization methods. Mixtures are then evaluated taking into account the component fuels. This paper presents the predictive capabilities of the agglomeration/fouling/corrosion probability models for selected fuels and mixtures fired at full scale. The selected fuels include coals and different types of biomass. The models are capable of predicting the behavior of most fuels and mixtures, but also offer possibilities for further improvements.

  12. High-fidelity modeling and impact footprint prediction for vehicle breakup analysis

    NASA Astrophysics Data System (ADS)

    Ling, Lisa

    For decades, vehicle breakup analysis had been performed for space missions that used nuclear heater or power units in order to assess aerospace nuclear safety for potential launch failures leading to inadvertent atmospheric reentry. Such pre-launch risk analysis is imperative to assess possible environmental impacts, obtain launch approval, and for launch contingency planning. In order to accurately perform a vehicle breakup analysis, the analysis tool should include a trajectory propagation algorithm coupled with thermal and structural analyses and influences. Since such a software tool was not available commercially or in the public domain, a basic analysis tool was developed by Dr. Angus McRonald prior to this study. This legacy software consisted of low-fidelity modeling and had the capability to predict vehicle breakup, but did not predict the surface impact point of the nuclear component. Thus the main thrust of this study was to develop and verify the additional dynamics modeling and capabilities for the analysis tool with the objectives to (1) have the capability to predict impact point and footprint, (2) increase the fidelity in the prediction of vehicle breakup, and (3) reduce the effort and time required to complete an analysis. The new functions developed for predicting the impact point and footprint included 3-degrees-of-freedom trajectory propagation, the generation of non-arbitrary entry conditions, sensitivity analysis, and the calculation of impact footprint. The functions to increase the fidelity in the prediction of vehicle breakup included a panel code to calculate the hypersonic aerodynamic coefficients for an arbitrary-shaped body and the modeling of local winds. The function to reduce the effort and time required to complete an analysis included the calculation of node failure criteria. The derivation and development of these new functions are presented in this dissertation, and examples are given to demonstrate the new capabilities and the improvements made, with comparisons between the results obtained from the upgraded analysis tool and the legacy software wherever applicable.

  13. A formulation of multidimensional growth models for the assessment and forecast of technology attributes

    NASA Astrophysics Data System (ADS)

    Danner, Travis W.

    Developing technology systems requires all manner of investment---engineering talent, prototypes, test facilities, and more. Even for simple design problems the investment can be substantial; for complex technology systems, the development costs can be staggering. The profitability of a corporation in a technology-driven industry is crucially dependent on maximizing the effectiveness of research and development investment. Decision-makers charged with allocation of this investment are forced to choose between the further evolution of existing technologies and the pursuit of revolutionary technologies. At risk on the one hand is excessive investment in an evolutionary technology which has only limited availability for further improvement. On the other hand, the pursuit of a revolutionary technology may mean abandoning momentum and the potential for substantial evolutionary improvement resulting from the years of accumulated knowledge. The informed answer to this question, evolutionary or revolutionary, requires knowledge of the expected rate of improvement and the potential a technology offers for further improvement. This research is dedicated to formulating the assessment and forecasting tools necessary to acquire this knowledge. The same physical laws and principles that enable the development and improvement of specific technologies also limit the ultimate capability of those technologies. Researchers have long used this concept as the foundation for modeling technological advancement through extrapolation by analogy to biological growth models. These models are employed to depict technology development as it asymptotically approaches limits established by the fundamental principles on which the technological approach is based. This has proven an effective and accurate approach to modeling and forecasting simple single-attribute technologies. With increased system complexity and the introduction of multiple system objectives, however, the usefulness of this modeling technique begins to diminish. With the introduction of multiple objectives, researchers often abandon technology growth models for scoring models and technology frontiers. While both approaches possess advantages over current growth models for the assessment of multi-objective technologies, each lacks a necessary dimension for comprehensive technology assessment. By collapsing multiple system metrics into a single, non-intuitive technology measure, scoring models provide a succinct framework for multi-objective technology assessment and forecasting. Yet, with no consideration of physical limits, scoring models provide no insight as to the feasibility of a particular combination of system capabilities. They only indicate that a given combination of system capabilities yields a particular score. Conversely, technology frontiers are constructed with the distinct objective of providing insight into the feasibility of system capability combinations. Yet again, upper limits to overall system performance are ignored. Furthermore, the data required to forecast subsequent technology frontiers is often inhibitive. In an attempt to reincorporate the fundamental nature of technology advancement as bound by physical principles, researchers have sought to normalize multi-objective systems whereby the variability of a single system objective is eliminated as a result of changes in the remaining objectives. 
This drastically limits the applicability of the resulting technology model because it is only applicable for a single setting of all other system attributes. Attempts to maintain the interaction between the growth curves of each technical objective of a complex system have thus far been limited to qualitative and subjective consideration. This research proposes the formulation of multidimensional growth models as an approach to simulating the advancement of multi-objective technologies towards their upper limits. Multidimensional growth models were formulated by noticing and exploiting the correlation between technology growth models and technology frontiers. Both are frontiers in actuality. The technology growth curve is a frontier between capability levels of a single attribute and time, while a technology frontier is a frontier between the capability levels of two or more attributes. Multidimensional growth models are formulated by exploiting the mathematical significance of this correlation. The result is a model that can capture both the interaction between multiple system attributes and their expected rates of improvement over time. The fundamental nature of technology development is maintained, and interdependent growth curves are generated for each system metric with minimal data requirements. Being founded on the basic nature of technology advancement, relative to physical limits, the availability for further improvement can be determined for a single metric relative to other system measures of merit. A by-product of this modeling approach is a single n-dimensional technology frontier linking all n system attributes with time. This provides an environment capable of forecasting future system capability in the form of advancing technology frontiers. The ability of a multidimensional growth model to capture the expected improvement of a specific technological approach is dependent on accurately identifying the physical limitations to each pertinent attribute. This research investigates two potential approaches to identifying those physical limits, a physics-based approach and a regression-based approach. The regression-based approach has found limited acceptance among forecasters, although it does show potential for estimating upper limits with a specified degree of uncertainty. Forecasters have long favored physics-based approaches for establishing the upper limit to unidimensional growth models. The task of accurately identifying upper limits has become increasingly difficult with the extension of growth models into multiple dimensions. A lone researcher may be able to identify the physical limitation to a single attribute of a simple system; however, as system complexity and the number of attributes increases, the attention of researchers from multiple fields of study is required. Thus, limit identification is itself an area of research and development requiring some level of investment. Whether estimated by physics or regression-based approaches, predicted limits will always have some degree of uncertainty. This research takes the approach of quantifying the impact of that uncertainty on model forecasts rather than heavily endorsing a single technique to limit identification. In addition to formulating the multidimensional growth model, this research provides a systematic procedure for applying that model to specific technology architectures. 
Researchers and decision-makers are able to investigate the potential for additional improvement within that technology architecture and to estimate the expected cost of each incremental improvement relative to the cost of past improvements. In this manner, multidimensional growth models provide the necessary information to set reasonable program goals for the further evolution of a particular technological approach or to establish the need for revolutionary approaches in light of the constraining limits of conventional approaches.
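
    A single-attribute growth model of the classical S-curve (Pearl/logistic) form referred to throughout is, for example,

    $$ y(t) = \frac{L}{1 + e^{-k\,(t - t_0)}}, $$

    where $y$ is the technology attribute, $L$ the physics-imposed upper limit, $k$ a growth-rate constant, and $t_0$ the inflection time; the multidimensional formulation proposed here couples several such attribute curves through a shared technology frontier instead of treating each attribute in isolation.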

  14. Micromechanics and Piezo Enhancements of HyperSizer

    NASA Technical Reports Server (NTRS)

    Arnold, Steven M.; Bednarcyk, Brett A.; Yarrington, Phillip; Collier, Craig S.

    2006-01-01

    The commercial HyperSizer aerospace-composite-material-structure-sizing software has been enhanced by incorporating capabilities for representing coupled thermal, piezoelectric, and piezomagnetic effects on the levels of plies, laminates, and stiffened panels. This enhancement is based on a formulation similar to that of the pre-existing HyperSizer capability for representing thermal effects. As a result of this enhancement, the electric and/or magnetic response of a material or structure to a mechanical or thermal load, or its mechanical response to an applied electric or magnetic field can be predicted. In another major enhancement, a capability for representing micromechanical effects has been added by establishment of a linkage between HyperSizer and Glenn Research Center's Micromechanics Analysis Code With Generalized Method of Cells (MAC/GMC) computer program, which was described in several prior NASA Tech Briefs articles. The linkage enables HyperSizer to localize to the fiber and matrix level rather than only to the ply level, making it possible to predict local failures and to predict properties of plies from those of the component fiber and matrix materials. Advanced graphical user interfaces and database structures have been developed to support the new HyperSizer micromechanics capabilities.

  15. Multi-timescale power and energy assessment of lithium-ion battery and supercapacitor hybrid system using extended Kalman filter

    NASA Astrophysics Data System (ADS)

    Wang, Yujie; Zhang, Xu; Liu, Chang; Pan, Rui; Chen, Zonghai

    2018-06-01

    The power capability and maximum charge and discharge energy are key indicators for energy management systems, which can help the energy storage devices work in a suitable area and prevent them from over-charging and over-discharging. In this work, a model based power and energy assessment approach is proposed for the lithium-ion battery and supercapacitor hybrid system. The model framework of the lithium-ion battery and supercapacitor hybrid system is developed based on the equivalent circuit model, and the model parameters are identified by regression method. Explicit analyses of the power capability and maximum charge and discharge energy prediction with multiple constraints are elaborated. Subsequently, the extended Kalman filter is employed for on-board power capability and maximum charge and discharge energy prediction to overcome estimation error caused by system disturbance and sensor noise. The charge and discharge power capability, and the maximum charge and discharge energy are quantitatively assessed under both the dynamic stress test and the urban dynamometer driving schedule. The maximum charge and discharge energy prediction of the lithium-ion battery and supercapacitor hybrid system with different time scales are explored and discussed.
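
    As a minimal single-constraint illustration (a simple internal-resistance cell model, not the paper's multi-constraint extended-Kalman-filter scheme), the voltage-limited discharge power capability can be written as

    $$ I_{\max}^{\mathrm{dis}} = \frac{U_{oc}(z) - U_{\min}}{R_0}, \qquad P_{\max}^{\mathrm{dis}} = U_{\min}\, I_{\max}^{\mathrm{dis}}, $$

    where $U_{oc}(z)$ is the open-circuit voltage at state of charge $z$, $U_{\min}$ the lower cut-off voltage, and $R_0$ the ohmic resistance; the full approach intersects such voltage limits with current, state-of-charge, and energy constraints for both the battery and the supercapacitor over the prediction window.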

  16. Beyond the standard two-film theory: Computational fluid dynamics simulations for carbon dioxide capture in a wetted wall column

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chao; Xu, Zhijie; Lai, Canhai

    The standard two-film theory (STFT) is a diffusion-based mechanism that can be used to describe gas mass transfer across liquid film. Fundamental assumptions of the STFT impose serious limitations on its ability to predict mass transfer coefficients. To better understand gas absorption across liquid film in practical situations, a multiphase computational fluid dynamics (CFD) model fully equipped with mass transport and chemistry capabilities has been developed for solvent-based carbon dioxide (CO2) capture to predict the CO2 mass transfer coefficient in a wetted wall column. The hydrodynamics is modeled using a volume of fluid method, and the diffusive and reactive mass transfer between the two phases is modeled by adopting a one-fluid formulation. We demonstrate that the proposed CFD model can naturally account for the influence of many important factors on the overall mass transfer that cannot be quantitatively explained by the STFT, such as the local variation in fluid velocities and properties, flow instabilities, and complex geometries. The CFD model also can predict the local mass transfer coefficient variation along the column height, which the STFT typically does not consider.
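
    For reference, the two-film picture that the CFD model moves beyond treats the gas- and liquid-side films as mass-transfer resistances in series; for reactive CO2 absorption this is often written as

    $$ N_{\mathrm{CO_2}} = K_G\left(p_{\mathrm{CO_2}} - p_{\mathrm{CO_2}}^{*}\right), \qquad \frac{1}{K_G} = \frac{1}{k_g} + \frac{H_{\mathrm{CO_2}}}{E\,k_l}, $$

    where $k_g$ and $k_l$ are the gas- and liquid-film coefficients, $H_{\mathrm{CO_2}}$ the Henry's-law constant, and $E$ the enhancement factor due to chemical reaction; the CFD model instead resolves the local hydrodynamics, transport, and chemistry directly rather than lumping them into such film coefficients.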

  18. Mathematical modeling analysis of intratumoral disposition of anticancer agents and drug delivery systems.

    PubMed

    Popilski, Hen; Stepensky, David

    2015-05-01

    Solid tumors are characterized by complex morphology. Numerous factors relating to the composition of the cells and tumor stroma, vascularization and drainage of fluids affect the local microenvironment within a specific location inside the tumor. As a result, the intratumoral drug/drug delivery system (DDS) disposition following systemic or local administration is non-homogeneous and its complexity reflects the differences in the local microenvironment. Mathematical models can be used to analyze the intratumoral drug/DDS disposition and pharmacological effects and to assist in the choice of optimal anticancer treatment strategies. The mathematical models that have been applied by different research groups to describe the intratumoral disposition of anticancer drugs/DDSs are summarized in this article. The properties of these models and their suitability for prediction of the drug/DDS intratumoral disposition and pharmacological effects are reviewed. Currently available mathematical models appear to neglect some of the major factors that govern the drug/DDS intratumoral disposition, and apparently possess limited prediction capabilities. More sophisticated and detailed mathematical models and their extensive validation are needed for reliable prediction of different treatment scenarios and for optimization of drug treatment in individual cancer patients.

  19. Empirical membrane lifetime model for heavy duty fuel cell systems

    NASA Astrophysics Data System (ADS)

    Macauley, Natalia; Watson, Mark; Lauritzen, Michael; Knights, Shanna; Wang, G. Gary; Kjeang, Erik

    2016-12-01

    Heavy duty fuel cells used in transportation system applications such as transit buses expose the fuel cell membranes to conditions that can lead to lifetime-limiting membrane failure via combined chemical and mechanical degradation. Highly durable membranes and reliable predictive models are therefore needed in order to achieve the ultimate heavy duty fuel cell lifetime target of 25,000 h. In the present work, an empirical membrane lifetime model was developed based on laboratory data from a suite of accelerated membrane durability tests. The model considers the effects of cell voltage, temperature, oxygen concentration, humidity cycling, humidity level, and platinum in the membrane using inverse power law and exponential relationships within the framework of a general log-linear Weibull life-stress statistical distribution. The obtained model is capable of extrapolating the membrane lifetime from accelerated test conditions to use level conditions during field operation. Based on typical conditions for the Whistler, British Columbia fuel cell transit bus fleet, the model predicts a stack lifetime of 17,500 h and a membrane leak initiation time of 9200 h. Validation performed with the aid of a field operated stack confirmed the initial goal of the model to predict membrane lifetime within 20% of the actual operating time.
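
    As an illustration of the model family described (a general log-linear life-stress relationship within a Weibull framework; the covariate set shown is indicative, and no fitted coefficients from the paper are reproduced):

    $$ \ln \eta = \beta_0 + \sum_i \beta_i\, x_i, \qquad x_i \in \left\{ \ln V_{\mathrm{cell}},\; \tfrac{1}{T},\; \ln\,[\mathrm{O_2}],\; \ldots \right\}, $$

    where $\eta$ is the Weibull characteristic life and the covariates encode stressors such as cell voltage, temperature, oxygen concentration, humidity level, and humidity cycling; inverse power law stressors enter through logarithms and Arrhenius-type (exponential) stressors through reciprocal temperature.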

  20. Predicted Performance of a Thrust-Enhanced SR-71 Aircraft with an External Payload

    NASA Technical Reports Server (NTRS)

    Conners, Timothy R.

    1997-01-01

    NASA Dryden Flight Research Center has completed a preliminary performance analysis of the SR-71 aircraft for use as a launch platform for high-speed research vehicles and for carrying captive experimental packages to high altitude and Mach number conditions. Externally mounted research platforms can significantly increase drag, limiting test time and, in extreme cases, prohibiting penetration through the high-drag, transonic flight regime. To provide supplemental SR-71 acceleration, methods have been developed that could increase the thrust of the J58 turbojet engines. These methods include temperature and speed increases and augmentor nitrous oxide injection. The thrust-enhanced engines would allow the SR-71 aircraft to carry higher drag research platforms than it could without enhancement. This paper presents predicted SR-71 performance with and without enhanced engines. A modified climb-dive technique is shown to reduce fuel consumption when flying through the transonic flight regime with a large external payload. Estimates are included of the maximum platform drag profiles with which the aircraft could still complete a high-speed research mission. In this case, enhancement was found to increase the SR-71 payload drag capability by 25 percent. The thrust enhancement techniques and performance prediction methodology are described.

  1. Beyond the standard two-film theory: Computational fluid dynamics simulations for carbon dioxide capture in a wetted wall column

    DOE PAGES

    Wang, Chao; Xu, Zhijie; Lai, Canhai; ...

    2018-03-27

    The standard two-film theory (STFT) is a diffusion-based mechanism that can be used to describe gas mass transfer across liquid film. Fundamental assumptions of the STFT impose serious limitations on its ability to predict mass transfer coefficients. To better understand gas absorption across liquid film in practical situations, a multiphase computational fluid dynamics (CFD) model fully equipped with mass transport and chemistry capabilities has been developed for solvent-based carbon dioxide (CO2) capture to predict the CO2 mass transfer coefficient in a wetted wall column. The hydrodynamics is modeled using a volume of fluid method, and the diffusive and reactive mass transfer between the two phases is modeled by adopting a one-fluid formulation. We demonstrate that the proposed CFD model can naturally account for the influence of many important factors on the overall mass transfer that cannot be quantitatively explained by the STFT, such as the local variation in fluid velocities and properties, flow instabilities, and complex geometries. The CFD model also can predict the local mass transfer coefficient variation along the column height, which the STFT typically does not consider.
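
    For contrast with the CFD approach, the textbook STFT result that the model goes beyond is a resistance-in-series relation for the overall gas-side mass transfer coefficient (notation here is generic and may differ from the paper):

    ```latex
    \frac{1}{K_{G}} \;=\; \frac{1}{k_{g}} \;+\; \frac{H_{\mathrm{CO_2}}}{E\,k_{l}^{0}}
    ```

    where k_g and k_l^0 are the gas- and liquid-film coefficients, H_CO2 is the Henry's law constant, and E is the chemical enhancement factor for the CO2-solvent reaction. Because k_g, k_l^0, and E are treated as uniform, the STFT cannot resolve the local variations along the column that the CFD model captures.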

  2. Resistance gene identification from Larimichthys crocea with machine learning techniques

    NASA Astrophysics Data System (ADS)

    Cai, Yinyin; Liao, Zhijun; Ju, Ying; Liu, Juan; Mao, Yong; Liu, Xiangrong

    2016-12-01

    Research on resistance genes (R-genes) plays a vital role in bioinformatics, as these genes enable organisms to cope with adverse changes in the external environment by forming the corresponding resistance proteins through transcription and translation. Identifying and predicting R-genes of Larimichthys crocea (L. crocea) is therefore meaningful, both for breeding and for the marine environment. Many of L. crocea's immune mechanisms have been explored by biological methods; however, much about them is still unclear. To move beyond this limited understanding of L. crocea's immune mechanisms and to detect new R-genes and R-gene-like genes, this paper proposes a combined prediction method that extracts features from available genomic data and classifies them by machine learning. The effectiveness of the feature extraction and classification methods for identifying potential novel R-genes was evaluated, and different statistical analyses were used to explore the reliability of the prediction method, which can help us further understand the immune mechanisms of L. crocea against pathogens. A webserver called LCRG-Pred is available at http://server.malab.cn/rg_lc/.
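
    A minimal sketch of the kind of feature-extraction-plus-classification pipeline described above, assuming k-mer composition features and a random forest classifier (the actual LCRG-Pred feature set and classifier may differ):

    ```python
    from itertools import product
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def kmer_features(seq, k=3):
        """Normalized k-mer composition of a nucleotide sequence."""
        kmers = ["".join(p) for p in product("ACGT", repeat=k)]
        counts = {km: 0 for km in kmers}
        for i in range(len(seq) - k + 1):
            sub = seq[i:i + k]
            if sub in counts:
                counts[sub] += 1
        total = max(1, len(seq) - k + 1)
        return [counts[km] / total for km in kmers]

    # Toy sequences and labels (1 = R-gene-like, 0 = other); real training data
    # would come from annotated L. crocea genomic sequences.
    seqs = ["ATGGCGTTGACG" * 20, "ATGCCCGGGAAA" * 20,
            "TTGACGATGGCG" * 20, "GGGAAATGCCCA" * 20]
    labels = [1, 0, 1, 0]
    X = [kmer_features(s) for s in seqs]
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    print(cross_val_score(clf, X, labels, cv=2))
    ```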

  3. A Prospective Examination of the Interpersonal-Psychological Theory of Suicidal Behavior Among Psychiatric Adolescent Inpatients

    PubMed Central

    Czyz, Ewa K.; Berona, Johnny; King, Cheryl A.

    2016-01-01

    The challenge of identifying suicide risk in adolescents, and particularly among high-risk subgroups such as adolescent inpatients, calls for further study of models of suicidal behavior that could meaningfully aid in the prediction of risk. This study examined how well the Interpersonal-Psychological Theory of Suicidal Behavior (IPTS)—with its constructs of thwarted belongingness (TB), perceived burdensomeness (PB), and an acquired capability (AC) for lethal self-injury—predicts suicide attempts among adolescents (N = 376) 3 and 12 months after hospitalization. The three-way interaction between PB, TB, and AC, defined as a history of multiple suicide attempts, was not significant. However, there were significant 2-way interaction effects, which varied by sex: girls with low AC and increasing TB, and boys with high AC and increasing PB, were more likely to attempt suicide at 3 months. Only high AC predicted 12-month attempts. Results suggest gender-specific associations between theory components and attempts. The time-limited effects of these associations point to TB and PB being dynamic and modifiable in high-risk populations, whereas the effects of AC are more lasting. The study also fills an important gap in existing research by examining IPTS prospectively. PMID:25263410
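
    The interaction analyses reported above can be sketched schematically as sex-stratified logistic regressions with product terms (variable and file names below are hypothetical; this is not the authors' exact covariate set or estimation procedure):

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data: one row per adolescent with baseline thwarted
    # belongingness (TB), perceived burdensomeness (PB), acquired capability
    # proxy (AC, multiple-attempt history, 0/1), sex, and 3-month attempt status.
    df = pd.read_csv("ipts_followup.csv")

    m_girls = smf.logit("attempt_3mo ~ TB * AC", data=df[df.sex == "F"]).fit()
    m_boys = smf.logit("attempt_3mo ~ PB * AC", data=df[df.sex == "M"]).fit()
    print(m_girls.summary())
    print(m_boys.summary())
    ```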

  5. U.S. Geological Survey Water science strategy--observing, understanding, predicting, and delivering water science to the nation

    USGS Publications Warehouse

    Evenson, Eric J.; Orndorff, Randall C.; Blome, Charles D.; Böhlke, John Karl; Hershberger, Paul K.; Langenheim, V.E.; McCabe, Gregory J.; Morlock, Scott E.; Reeves, Howard W.; Verdin, James P.; Weyers, Holly S.; Wood, Tamara M.

    2013-01-01

    This report expands the Water Science Strategy that began with the USGS Science Strategy, “Facing Tomorrow’s Challenges—U.S. Geological Survey Science in the Decade 2007–2017” (U.S. Geological Survey, 2007). This report looks at the relevant issues facing society and develops a strategy built around observing, understanding, predicting, and delivering water science for the next 5 to 10 years by building new capabilities, tools, and delivery systems to meet the Nation’s water-resource needs. This report begins by presenting the vision of water science for the USGS and the societal issues that are influenced by, and in turn influence, the water resources of our Nation. The essence of the Water Science Strategy is built on the concept of “water availability,” defined as spatial and temporal distribution of water quantity and quality, as related to human and ecosystem needs, as affected by human and natural influences. The report also describes the core capabilities of the USGS in water science—the strengths, partnerships, and science integrity that the USGS has built over its 134-year history. Nine priority actions are presented in the report, which combine and elevate the numerous specific strategic actions listed throughout the report. Priority actions were developed as a means of providing the audience of this report with a list for focused attention, even if resources and time limit the ability of managers to address all of the strategic actions in the report.

  6. Radio-frequency energy quantification in magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Alon, Leeor

    Mapping of radio frequency (RF) energy deposition has been challenging for more than 50 years, especially when scanning patients in the magnetic resonance imaging (MRI) environment. As a result, electromagnetic simulation software is often used for estimating the specific absorption rate (SAR), the rate of RF energy deposition in tissue. The thesis work presents challenges associated with aligning information provided by electromagnetic simulation and MRI experiments. Because of the limitations of simulations, experimental methods for the quantification of SAR were established. A system for quantification of the total RF energy deposition was developed for parallel transmit MRI (a system that uses multiple antennas to excite and image the body). The system is capable of monitoring and predicting channel-by-channel RF energy deposition and whole-body SAR, and of tracking potential hardware failures that occur in the transmit chain and may cause the deposition of excessive energy into patients. Similarly, we demonstrated that local RF power deposition can be mapped and predicted for parallel transmit systems based on a series of MRI temperature mapping acquisitions. As part of this work, we developed tools for optimal reconstruction of temperature maps from MRI acquisitions. The tools developed for temperature mapping paved the way for utilizing MRI as a diagnostic tool for evaluation of RF/microwave emitting device safety. Quantification of the RF energy was demonstrated for both MRI-compatible and non-MRI-compatible devices (such as cell phones), while having the advantage of being noninvasive, of providing millimeter resolution and high accuracy.
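
    The temperature-mapping route to local power deposition mentioned above rests on a standard relation (written here in its simplest form, neglecting conduction and perfusion during a short heating interval):

    ```latex
    \mathrm{SAR} \;\approx\; c_{p}\,\frac{\Delta T}{\Delta t}
    ```

    where c_p is the tissue (or phantom) specific heat capacity and ΔT is the temperature rise measured by MR thermometry over the RF heating interval Δt. The approximation degrades as heat diffusion becomes significant, which is one reason careful reconstruction of the temperature maps matters.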

  7. THE FUTURE OF TOXICOLOGY-PREDICTIVE TOXICOLOGY: AN EXPANDED VIEW OF CHEMICAL TOXICITY

    EPA Science Inventory

    A chemistry approach to predictive toxicology relies on structure−activity relationship (SAR) modeling to predict biological activity from chemical structure. Such approaches have proven capabilities when applied to well-defined toxicity end points or regions of chemical space. T...

  8. Thermal niche estimators and the capability of poor dispersal species to cope with climate change

    PubMed Central

    Sánchez-Fernández, David; Rizzo, Valeria; Cieslak, Alexandra; Faille, Arnaud; Fresneda, Javier; Ribera, Ignacio

    2016-01-01

    For management strategies in the context of global warming, accurate predictions of species response are mandatory. However, to date most predictions are based on niche (bioclimatic) models that usually overlook biotic interactions, behavioral adjustments or adaptive evolution, and assume that species can disperse freely without constraints. The deep subterranean environment minimises these uncertainties, as it is simple, homogeneous and with constant environmental conditions. It is thus an ideal model system to study the effect of global change in species with poor dispersal capabilities. We assess the potential fate of a lineage of troglobitic beetles under global change predictions using different approaches to estimate their thermal niche: bioclimatic models, rates of thermal niche change estimated from a molecular phylogeny, and data from physiological studies. Using bioclimatic models, at most 60% of the species were predicted to have suitable conditions in 2080. Considering the rates of thermal niche change did not improve this prediction. However, physiological data suggest that subterranean species have a broad thermal tolerance, allowing them to withstand temperatures never experienced through their evolutionary history. These results stress the need for experimental approaches to assess the capability of poor dispersal species to cope with temperatures outside those they currently experience. PMID:26983802

  9. Turbulent convection in microchannels

    NASA Astrophysics Data System (ADS)

    Adams, Thomas Mcdaniel

    1998-10-01

    Single-phase forced convection in microchannels is an effective cooling mechanism capable of accommodating the high heat fluxes encountered in fission reactor cores, accelerator targets, microelectronic heat sinks and micro-heat exchangers. Traditional Nusselt-type correlations, however, have generally been obtained using data from channels with hydraulic diameters >2 cm. Application of such relationships to microchannels is therefore questionable. A diameter limit below which traditional correlations are invalid had not been established. The objective of this investigation was to systematically address the effect of small hydraulic diameter on turbulent single-phase forced convection of water. A number of microchannels having hydraulic diameters ranging from 0.76 to 1.13 mm were constructed and tested over a wide range of flow rates and heat fluxes. Experimentally obtained Nusselt numbers were significantly higher than the values predicted by the Gnielinski correlation for large channels, the effect of decreasing diameter being to further increase heat transfer enhancement. A working correlation predicting the heat transfer enhancement for turbulent convection in microchannels was developed. The correlation predicts the lower diameter limit below which traditional correlations are no longer valid to be approximately 1.2 mm. Of further interest was the effect of the desorption of noncondensable gases dissolved in the water on turbulent convection. In large channels noncondensables undergo little desorption and their effect is negligible. The large pressure drops coupled with large temperature increases for high heat fluxes in microchannels, however, lead to a two-phase, two-component flow, thereby enhancing heat transfer coefficients above their liquid-only values. A detailed mathematical model was developed to predict the resulting void fractions and liquid-coolant accelerations due to the desorption of noncondensables in microchannels. Experiments were also performed to compare heat transfer coefficients for fully-degassed water to water saturated with air at test section inlet conditions. The data showed significant heat transfer enhancement for the air-saturated case over the fully-degassed case. The degree of enhancement was greatly under-predicted by current two-phase, two-component heat transfer correlations.
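
    For reference, the large-channel baseline against which the microchannel data were compared is the Gnielinski correlation; the sketch below evaluates it and expresses a placeholder "measured" value as an enhancement ratio. The numbers are illustrative, not data from this thesis.

    ```python
    import math

    def gnielinski_nu(re, pr):
        """Gnielinski correlation for the turbulent Nusselt number in
        conventional channels (roughly 3000 < Re < 5e6, 0.5 < Pr < 2000)."""
        f = (0.79 * math.log(re) - 1.64) ** -2   # Petukhov friction factor
        return ((f / 8.0) * (re - 1000.0) * pr /
                (1.0 + 12.7 * math.sqrt(f / 8.0) * (pr ** (2.0 / 3.0) - 1.0)))

    re, pr = 10_000.0, 5.0
    nu_large = gnielinski_nu(re, pr)
    nu_measured = 1.3 * nu_large          # placeholder "microchannel" value
    print(nu_large, nu_measured / nu_large)
    ```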

  10. The Multipurpose Black Hawk Utility Helicopter: Rotary-wing Versatility Required for U.S. Marine Corps Enhanced Company Operations

    DTIC Science & Technology

    2009-01-01

    Analysis of the roles and capabilities of the AH-1W Super Cobra, CH-53E Super Stallion, MV-22B Osprey, and the UH-1N Huey will identify the capabilities and limitations of each aircraft in support of Enhanced Company Operations (ECO).

  11. Projected Applications of a "Weather in a Box" Computing System at the NASA Short-Term Prediction Research and Transition (SPoRT) Center

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary J.; Molthan, Andrew; Zavodsky, Bradley T.; Case, Jonathan L.; LaFontaine, Frank J.; Srikishen, Jayanthi

    2010-01-01

    The NASA Short-term Prediction Research and Transition Center (SPoRT)'s new "Weather in a Box" resources will provide weather research and forecast modeling capabilities for real-time application. Model output will provide additional forecast guidance and research into the impacts of new NASA satellite data sets and software capabilities. By combining several research tools and satellite products, SPoRT can generate model guidance that is strongly influenced by unique NASA contributions.

  12. In-silico wear prediction for knee replacements--methodology and corroboration.

    PubMed

    Strickland, M A; Taylor, M

    2009-07-22

    The capability to predict in-vivo wear of knee replacements is a valuable pre-clinical analysis tool for implant designers. Traditionally, time-consuming experimental tests provided the principal means of investigating wear. Today, computational models offer an alternative. However, the validity of these models has not been demonstrated across a range of designs and test conditions, and several different formulas are in contention for estimating wear rates, limiting confidence in the predictive power of these in-silico models. This study collates and retrospectively simulates a wide range of experimental wear tests using fast rigid-body computational models with extant wear prediction algorithms, to assess the performance of current in-silico wear prediction tools. The number of tests corroborated gives a broader, more general assessment of the performance of these wear-prediction tools, and provides better estimates of the wear 'constants' used in computational models. High-speed rigid-body modelling allows a range of alternative algorithms to be evaluated. Whilst most cross-shear (CS)-based models perform comparably, the 'A/A+B' wear model appears to offer the best predictive power amongst existing wear algorithms. However, the range and variability of experimental data leaves considerable uncertainty in the results. More experimental data with reduced variability and more detailed reporting of studies will be necessary to corroborate these models with greater confidence. With simulation times reduced to only a few minutes, these models are ideally suited to large-volume 'design of experiment' or probabilistic studies (which are essential if pre-clinical assessment tools are to begin addressing the degree of variation observed clinically and in explanted components).
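
    A schematic of the cross-shear-dependent, Archard-like wear laws being compared (this is a generic form for illustration, not the exact 'A/A+B' algorithm evaluated in the study, and the constants are placeholders):

    ```python
    import numpy as np

    def wear_volume(contact_pressure, sliding_distance, cross_shear, k0, k_cs):
        """Element-wise wear volume with a cross-shear (CS) dependent coefficient:
        wear = k(CS) * p * s, where k grows with the CS ratio.  k0 and k_cs play
        the role of the 'wear constants' estimated from simulator experiments."""
        k = k0 + k_cs * cross_shear
        return k * contact_pressure * sliding_distance

    # Accumulate wear over a gait cycle discretized into 100 steps (toy values).
    p = np.full(100, 4.0e6)          # contact pressure per step [Pa]
    s = np.full(100, 1.0e-3)         # sliding distance per step [m]
    cs = np.linspace(0.0, 0.2, 100)  # cross-shear ratio per step
    print(wear_volume(p, s, cs, k0=1.0e-16, k_cs=5.0e-15).sum())
    ```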

  13. Forests and Soil Erosion across Europe

    NASA Astrophysics Data System (ADS)

    Bathurst, J. C.

    2012-04-01

    Land use and climate change threaten the ability of Europe's forests to provide a vital service in limiting soil erosion, e.g. from forest fires and landslides. However, our ability to define the threat and to propose mitigation measures suffers from two deficiencies concerning the forest/erosion interface: 1) While there have been a considerable number of field studies of the relationship between forest cover and erosion in different parts of Europe, the data sets are scattered among research groups and a range of literature outlets. There is no comprehensive overview of the forest/erosion interface at the European scale, essential for considering regional variations and investigating the effects of future changes in land use and climate. 2) Compared with forest/water studies, we have a poorer quantitative appreciation of forest/erosion interactions. In the forest/water area it is possible to make quantitative statements such as that a 20% change in forest cover across a river catchment is needed for the effect on annual water yield to be measurable or that a forested catchment in upland UK has an annual water yield around 15% lower than an otherwise comparable grassland catchment. Comparable statements are not yet possible for forest/erosion interactions and there are uncertainties in the mathematical representation of forest/erosion interactions which limit our ability to make predictions, for example of the impact of forest loss in a given area. This presentation therefore considers the next step in improving our predictive capability. It proposes the integration of existing research and data to construct the "big picture" across Europe, i.e. erosion rates and sediment yields associated with forest cover and its loss in a range of erosion regimes (e.g. post-forest fire erosion or post-logging landslides). This would provide a basis for generalizations at the European scale. However, such an overview would not form a predictive capability. Therefore it is also necessary to identify a range of predictive methods, from empirical guidelines to computer models, which can be recommended for applications such as extrapolating from the local to the regional scale and for planning mitigation strategies. Such developments could help improve efficiency in the integrated management of forest, soil and water resources, benefit local engineering projects ranging from hazard mitigation plans to road culvert design, contribute to the implementation of the EU Water Framework Directive, form a more objective basis for cost/benefit analysis of proposed management actions and help in putting a value on forest services.

  14. Assessment of Process Capability: the case of Soft Drinks Processing Unit

    NASA Astrophysics Data System (ADS)

    Sri Yogi, Kottala

    2018-03-01

    Process capability studies have a significant impact on investigating process variation, which is important for achieving product quality characteristics. Capability indices measure the inherent variability of a process and are thus used to improve process performance. The main objective of this paper is to understand the capability of the process, relative to specifications, at a soft drinks processing unit whose premier brands are marketed in India. A few selected critical parameters in soft drinks processing have been considered for this study: concentration of gas volume, concentration of Brix, and torque of the crock. Relevant statistical parameters were assessed from a process capability indices perspective: short-term capability and long-term capability. Real-time data from a soft drinks bottling company located in the state of Chhattisgarh, India, were used for the assessment. The analysis suggests reasons for variations in the process, which are validated using ANOVA, and also applies the Taguchi cost function to predict waste in monetary terms; these results can be used by the organization to improve its process parameters. This research work has substantially benefited the organization in understanding the variation of the selected critical parameters and in working toward zero rejection.
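
    The short- and long-term indices referred to above follow the usual definitions; a minimal sketch (with hypothetical specification limits and simulated carbonation data, not the plant's actual figures) is:

    ```python
    import numpy as np

    def capability_indices(samples, lsl, usl):
        """Process capability indices from a sample of measurements.
        Whether the result reads as short-term (Cp/Cpk) or long-term (Pp/Ppk)
        depends on how the standard deviation is estimated (within-subgroup
        versus overall); here the overall sample standard deviation is used."""
        mu, sigma = np.mean(samples), np.std(samples, ddof=1)
        cp = (usl - lsl) / (6.0 * sigma)
        cpk = min(usl - mu, mu - lsl) / (3.0 * sigma)
        return cp, cpk

    # Hypothetical gas volume (carbonation) measurements and spec limits.
    gas_volume = np.random.default_rng(1).normal(loc=3.6, scale=0.05, size=50)
    print(capability_indices(gas_volume, lsl=3.4, usl=3.8))
    ```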

  15. Working memory training improves visual short-term memory capacity.

    PubMed

    Schwarb, Hillary; Nail, Jayde; Schumacher, Eric H

    2016-01-01

    Since antiquity, philosophers, theologians, and scientists have been interested in human memory. However, researchers today are still working to understand its capabilities, boundaries, and architecture. While the storage capabilities of long-term memory are seemingly unlimited (Bahrick, J Exp Psychol 113:1-2, 1984), working memory, or the ability to maintain and manipulate information held in memory, seems to have stringent capacity limits (e.g., Cowan, Behav Brain Sci 24:87-185, 2001). Individual differences, however, do exist and these differences can often predict performance on a wide variety of tasks (cf. Engle What is working-memory capacity? 297-314, 2001). Recently, researchers have promoted the enticing possibility that simple behavioral training can expand the limits of working memory, which may also lead to improvements on other cognitive processes (cf. Morrison and Chein, Psychol Bull Rev 18:46-60 2011). However, initial investigations across a wide variety of cognitive functions have produced mixed results regarding the transferability of training-related improvements. Across two experiments, the present research focuses on the benefit of working memory training on visual short-term memory capacity, a cognitive process that has received little attention in the training literature. Data reveal training-related improvement of global measures of visual short-term memory as well as of measures of the independent sub-processes that contribute to capacity (Awh et al., Psychol Sci 18(7):622-628, 2007). These results suggest that the ability to inhibit irrelevant information within and between trials is enhanced via n-back training, allowing for selective improvement on untrained tasks. Additionally, we highlight a potential limitation of the standard adaptive training procedure and propose a modified design to ensure variability in the training environment.

  16. Implementation of Premixed Equilibrium Chemistry Capability in OVERFLOW

    NASA Technical Reports Server (NTRS)

    Olsen, M. E.; Liu, Y.; Vinokur, M.; Olsen, T.

    2003-01-01

    An implementation of premixed equilibrium chemistry has been completed for the OVERFLOW code, a chimera capable, complex geometry flow code widely used to predict transonic flowfields. The implementation builds on the computational efficiency and geometric generality of the solver.

  17. Implementation of Premixed Equilibrium Chemistry Capability in OVERFLOW

    NASA Technical Reports Server (NTRS)

    Olsen, Mike E.; Liu, Yen; Vinokur, M.; Olsen, Tom

    2004-01-01

    An implementation of premixed equilibrium chemistry has been completed for the OVERFLOW code, a chimera capable, complex geometry flow code widely used to predict transonic flowfields. The implementation builds on the computational efficiency and geometric generality of the solver.

  18. Limitations on the developing preterm brain: impact of periventricular white matter lesions on brain connectivity and cognition.

    PubMed

    Pavlova, Marina A; Krägeloh-Mann, Ingeborg

    2013-04-01

    Brain lesions to the white matter in peritrigonal regions, periventricular leukomalacia, in children who were born prematurely represent an important model for studying limitations on brain development. The lesional pattern is of early origin and bilateral, which constrains the compensatory potential of the brain. We suggest that (i) topography and severity of periventricular lesions may have a long-term predictive value for cognitive and social capabilities in preterm birth survivors; and (ii) periventricular lesions may impact cognitive and social functions by affecting brain connectivity, and thereby, the dissociable neural networks underpinning these functions. A further pathway to explore is the relationship between cerebral palsy and cognitive outcome. Restrictions caused by motor disability may affect active exploration of the surroundings and social participation, which may in turn differentially impinge on cognitive development and social cognition. As an outline for future research, we underscore sex differences, as the sex of a preterm newborn may shape the mechanisms by which the developing brain is affected.

  19. Tackling sampling challenges in biomolecular simulations.

    PubMed

    Barducci, Alessandro; Pfaendtner, Jim; Bonomi, Massimiliano

    2015-01-01

    Molecular dynamics (MD) simulations are a powerful tool to give an atomistic insight into the structure and dynamics of proteins. However, the time scales accessible in standard simulations, which often do not match those in which interesting biological processes occur, limit their predictive capabilities. Many advanced sampling techniques have been proposed over the years to overcome this limitation. This chapter focuses on metadynamics, a method based on the introduction of a time-dependent bias potential to accelerate sampling and recover equilibrium properties of a few descriptors that are able to capture the complexity of a process at a coarse-grained level. The theory of metadynamics and its combination with other popular sampling techniques such as the replica exchange method is briefly presented. Practical applications of these techniques to the study of the Trp-Cage miniprotein folding are also illustrated. The examples contain a guide for performing these calculations with PLUMED, a plugin to perform enhanced sampling simulations in combination with many popular MD codes.
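
    A toy illustration of the bias-deposition idea behind metadynamics, reduced to a one-dimensional collective variable (this stands in for what PLUMED does inside an MD engine; it is not PLUMED code and uses arbitrary parameters):

    ```python
    import numpy as np

    def metad_bias(cv_trajectory, grid, height=1.2, sigma=0.35, pace=500):
        """Accumulate a metadynamics-style bias on a 1D collective-variable grid.
        Every `pace` steps a Gaussian of the given height and width is centred
        on the current CV value; in the standard (non-well-tempered) scheme the
        negative of the converged bias approximates the free-energy profile."""
        bias = np.zeros_like(grid)
        for step, s in enumerate(cv_trajectory):
            if step % pace == 0:
                bias += height * np.exp(-0.5 * ((grid - s) / sigma) ** 2)
        return bias

    # Toy CV time series standing in for, e.g., a torsion angle trajectory.
    traj = np.random.default_rng(0).uniform(-np.pi, np.pi, size=50_000)
    grid = np.linspace(-np.pi, np.pi, 200)
    free_energy_estimate = -metad_bias(traj, grid)
    print(free_energy_estimate.min(), free_energy_estimate.max())
    ```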

  20. Cancer Bioinformatics for Updating Anticancer Drug Developments and Personalized Therapeutics.

    PubMed

    Lu, Da-Yong; Qu, Rong-Xin; Lu, Ting-Ren; Wu, Hong-Ying

    2017-01-01

    Over the last two to three decades, the world has witnessed rapid progress in biomarkers and bioinformatics technologies. Cancer bioinformatics is one of the important omics branches for experimental and clinical studies and applications. Like other biological techniques or systems, bioinformatics techniques will be widely used, but they are presently not omnipotent. Despite great popularity and improvements, cancer bioinformatics has its own limitations and shortcomings at this stage of technical advancement. This article offers a panorama of bioinformatics in cancer research and clinical therapeutic applications, covering possible advantages and limitations relating to cancer therapeutics. A lot of beneficial capabilities and outcomes have been described. A successful new era for cancer bioinformatics awaits if we adhere to scientific studies of cancer bioinformatics in malignant-origin mining, medical verification, and clinical diagnostic applications. Cancer bioinformatics is of great significance for disease diagnosis and therapeutic prediction. Many creative ideas and future perspectives are highlighted.

  1. Refined Zigzag Theory for Homogeneous, Laminated Composite, and Sandwich Plates: A Homogeneous Limit Methodology for Zigzag Function Selection

    NASA Technical Reports Server (NTRS)

    Tessler, Alexander; DiSciuva, Marco; Gherlone, Marco

    2010-01-01

    The Refined Zigzag Theory (RZT) for homogeneous, laminated composite, and sandwich plates is presented from a multi-scale formalism starting with the in-plane displacement field expressed as a superposition of coarse and fine contributions. The coarse kinematic field is that of first-order shear-deformation theory, whereas the fine kinematic field has a piecewise-linear zigzag distribution through the thickness. The condition of limiting homogeneity of transverse-shear properties is proposed and yields four distinct sets of zigzag functions. By examining elastostatic solutions for highly heterogeneous sandwich plates, the best-performing zigzag functions are identified. The RZT predictive capabilities to model homogeneous and highly heterogeneous sandwich plates are critically assessed, demonstrating its superior efficiency, accuracy, and wide range of applicability. The present theory, which is derived from the virtual work principle, is well-suited for developing computationally efficient C0-continuous finite elements, and is thus appropriate for the analysis and design of high-performance load-bearing aerospace structures.
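
    The coarse-plus-fine kinematic assumption described above is usually written (for the x-component, with an analogous y-component; notation here is generic and may differ from the paper) as:

    ```latex
    u_{x}^{(k)}(x,y,z) \;=\; u(x,y) \;+\; z\,\theta_{x}(x,y) \;+\; \phi_{x}^{(k)}(z)\,\psi_{x}(x,y)
    ```

    where u and θ_x are the first-order shear-deformation (coarse) variables, φ_x^{(k)}(z) is the piecewise-linear zigzag function in layer k, and ψ_x is the zigzag amplitude. The choice of φ_x^{(k)} is exactly what the homogeneous-limit condition discussed in the abstract is meant to settle.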

  2. Aerosol Delivery for Amendment Distribution in Contaminated Vadose Zones

    NASA Astrophysics Data System (ADS)

    Hall, R. J.; Murdoch, L.; Riha, B.; Looney, B.

    2011-12-01

    Remediation of contaminated vadose zones is often hindered by an inability to effectively distribute amendments. Many amendment-based approaches have been successful in saturated formations; however, they have not been widely pursued for treating contaminated unsaturated materials due to amendment distribution limitations. Aerosol delivery is a promising new approach for distributing amendments in contaminated vadose zones. Amendments are aerosolized and injected through well screens. During injection the aerosol particles are transported with the gas and deposited on the surfaces of soil grains. Resulting distributions are radially and vertically broad, which could not be achieved by injecting pure liquid-phase solutions. The objectives of this work were A) to characterize transport and deposition behaviors of aerosols; and B) to develop capabilities for predicting results of aerosol injection scenarios. Aerosol transport and deposition processes were investigated by conducting lab-scale injection experiments. These experiments involved injection of aerosols through a 2 m radius, sand-filled wedge. A particle analyzer was used to measure aerosol particle distributions with time, and sand samples were taken for amendment content analysis. Predictive capabilities were obtained by constructing a numerical model capable of simulating aerosol transport and deposition in porous media. Results from tests involving vegetable oil aerosol injection show that liquid contents appropriate for remedial applications could be readily achieved throughout the sand-filled wedge. Lab-scale tests conducted with aqueous aerosols show that liquid accumulation only occurs near the point of injection. Tests were also conducted using 200 g/L salt water as the aerosolized liquid. Liquid accumulations observed during salt water tests were minimal and similar to aqueous aerosol results. However, particles were measured, and salt deposited distal to the point of injection. Differences between aqueous and oil deposition are assumed to occur due to surface interactions and the susceptibility of aqueous aerosols to evaporation. Distal salt accumulation during salt water aerosol tests suggests that solid salt forms as salt water aerosols evaporate. The solid salt aerosols are less likely to deposit, so they travel further than aqueous aerosols. A numerical model was calibrated using results from lab-scale tests. The calibrated model was then used to simulate field-scale aerosol injection. Results from field-scale simulations suggest that effective radii of influence on the scale of 8-10 meters could be achieved in partially saturated sand. The aerosol delivery process appears to be capable of distributing oil amendments over considerable volumes of formation at concentrations appropriate for remediation purposes. Thus far, evaporation has limited the liquid accumulation observed when distributing aqueous aerosols; however, results from salt water experiments suggest that injection of solid phase aerosols can effectively distribute water soluble amendments (electron donor, pH buffer, oxidants, etc.). Utilization of aerosol delivery could considerably expand treatment options for contaminated vadose zones at a wide variety of sites.
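
    For intuition only, deposition of aerosol mass with radial distance from the injection well is sometimes approximated with a first-order filtration law; the sketch below is a toy model with made-up parameters, not the calibrated numerical model described in this work:

    ```python
    import numpy as np

    def deposited_fraction(r, filter_coeff, r_well=0.05):
        """Fraction of injected aerosol mass deposited between the well screen
        (radius r_well) and radius r, assuming first-order filtration along the
        gas flow path so the suspended fraction decays exponentially."""
        return 1.0 - np.exp(-filter_coeff * (r - r_well))

    radii = np.linspace(0.05, 10.0, 200)                    # metres
    print(deposited_fraction(radii[-1], filter_coeff=0.4))  # fraction within 10 m
    ```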

  3. Hope as a Predictor of Interpersonal Suicide Risk

    ERIC Educational Resources Information Center

    Davidson, Collin L.; Wingate, LaRicka R.; Rasmussen, Kathy A.; Slish, Meredith L.

    2009-01-01

    The current study hypothesized that (1) hope would negatively predict burdensomeness, thwarted belongingness, and acquired capability to enact lethal injury; (2) hope would negatively predict suicidal ideation; and (3) the interpersonal suicide risk factors would predict suicidal ideation. Results indicated that hope negatively predicted…

  4. Fan Noise Prediction with Applications to Aircraft System Noise Assessment

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Envia, Edmane; Burley, Casey L.

    2009-01-01

    This paper describes an assessment of current fan noise prediction tools by comparing measured and predicted sideline acoustic levels from a benchmark fan noise wind tunnel test. Specifically, an empirical method and newly developed coupled computational approach are utilized to predict aft fan noise for a benchmark test configuration. Comparisons with sideline noise measurements are performed to assess the relative merits of the two approaches. The study identifies issues entailed in coupling the source and propagation codes, as well as provides insight into the capabilities of the tools in predicting the fan noise source and subsequent propagation and radiation. In contrast to the empirical method, the new coupled computational approach provides the ability to investigate acoustic near-field effects. The potential benefits/costs of these new methods are also compared with the existing capabilities in a current aircraft noise system prediction tool. The knowledge gained in this work provides a basis for improved fan source specification in overall aircraft system noise studies.

  5. Theoretical study of aerodynamic characteristics of wings having vortex flow

    NASA Technical Reports Server (NTRS)

    Reddy, C. S.

    1979-01-01

    The aerodynamic characteristics of slender wings having separation induced vortex flows are investigated by employing three different computer codes--free vortex sheet, quasi vortex lattice, and suction analogy methods. Their capabilities and limitations are examined, and modifications are discussed. Flat wings of different configurations: arrow, delta, and diamond shapes, as well as cambered delta wings, are studied. The effect of notch ratio on the load distributions and the longitudinal characteristics of a family of arrow and diamond wings is explored. The sectional lift coefficients and the accumulated span loadings are determined for an arrow wing and are seen to be unusual in comparison with the attached flow results. The theoretically predicted results are compared with the existing experimental values.

  6. Reaping the space investment. [Shuttle era geosynchronous satellite based technological trends

    NASA Technical Reports Server (NTRS)

    Calio, A. J.

    1979-01-01

    By 1999 operational space systems will be implemented routinely on a worldwide scale in many areas vital to human survival and life quality. Geosynchronous-based monitoring and observation will be extensively used. The Shuttle era will bring in the capability to allow monitoring and identifying pollution sources which fail to stay within required limits. Remotely sensed data over land masses will provide needed facts on renewable and nonrenewable earth resources. New instruments and techniques will have been developed to provide geologists with clues to the declining number of deposits of fuels and minerals. Also, practical methods for predicting earthquakes will have been elaborated by 1999. Communications will see implementation of many of the technological goals of 1978.

  7. A composition joint PDF method for the modeling of spray flames

    NASA Technical Reports Server (NTRS)

    Raju, M. S.

    1995-01-01

    This viewgraph presentation discusses an extension of the probability density function (PDF) method to the modeling of spray flames to evaluate the limitations and capabilities of this method in the modeling of gas-turbine combustor flows. The comparisons show that the general features of the flowfield are correctly predicted by the present solution procedure. The present solution appears to provide a better representation of the temperature field, particularly in the reverse-velocity zone. The overpredictions in the centerline velocity could be attributed to the following reasons: (1) the use of the k-epsilon turbulence model is known to be less precise in highly swirling flows, and (2) the swirl number used here is reported to be estimated rather than measured.

  8. Evaluation of Current Planetary Boundary Layer Retrieval Capabilities from Space

    NASA Technical Reports Server (NTRS)

    Santanello, Joseph A., Jr.; Schaefer, Alexander J.; Blaisdell, John; Yorks, John

    2016-01-01

    The PBL over land remains a significant gap in our water and energy cycle understanding from space. This work combines unique NASA satellite and model products to demonstrate the ability of current sensors (advanced IR sounding and lidar) to retrieve PBL properties and in turn their potential to be used globally to evaluate and improve weather and climate prediction models. While incremental progress has been made in recent AIRS retrieval versions, insufficient vertical resolution remains in terms of detecting PBL properties. Lidar shows promise in terms of detecting vertical gradients (and PBLh) in the lower troposphere, but daytime conditions over land remain a challenge due to noise, and their coverage is limited to approximately 2 weeks or longer return times.

  9. PM2.5 monitoring system based on ZigBee wireless sensor network

    NASA Astrophysics Data System (ADS)

    Lin, Lukai; Li, Xiangshun; Gu, Weiying

    2017-06-01

    In view of the haze problem, and aiming to address the deficiencies of traditional PM2.5 monitoring methods, such as insufficient real-time monitoring, limited transmission distance, high cost, and difficulty of maintenance, an atmospheric PM2.5 monitoring system based on ZigBee technology is designed. The system combines the advantages of ZigBee's low cost, low power consumption, and high reliability with the GPRS/Internet capability for remote data transmission. Furthermore, it adopts TI's Z-Stack protocol stack and selects the CC2530 chip and TI's MSP430 microcontroller as the core, establishing an air pollution monitoring network that is helpful for the early prediction of major air pollution disasters.

  10. Thermal/structural design verification strategies for large space structures

    NASA Technical Reports Server (NTRS)

    Benton, David

    1988-01-01

    Requirements for space structures of increasing size, complexity, and precision have engendered a search for thermal design verification methods that do not impose unreasonable costs, that fit within the capabilities of existing facilities, and that still adequately reduce technical risk. This calls for a combination of analytical and testing methods, based on two approaches. The first is to limit thermal testing to sub-elements of the total system only in a compact configuration (i.e., not fully deployed). The second approach is to use a simplified environment to correlate analytical models with test results. These models can then be used to predict flight performance. In practice, a combination of these approaches is needed to verify the thermal/structural design of future very large space systems.

  11. Theory of wing rock

    NASA Technical Reports Server (NTRS)

    Hsu, C.-H.; Lan, C. E.

    1985-01-01

    Wing rock is one type of lateral-directional instability at high angles of attack. To predict wing rock characteristics and to design airplanes to avoid wing rock, the parameters affecting wing rock characteristics must be known. A new nonlinear aerodynamic model is developed to investigate the main aerodynamic nonlinearities causing wing rock. In the present theory, the Beecham-Titchener asymptotic method is used to derive expressions for the limit-cycle amplitude and frequency of wing rock from nonlinear flight dynamics equations. The resulting expressions are capable of explaining the existence of wing rock for all types of aircraft. Wing rock is developed by negative or weakly positive roll damping, and sustained by nonlinear aerodynamic roll damping. Good agreement between theoretical and experimental results is obtained.
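
    A generic single-degree-of-freedom caricature of this mechanism (not the Beecham-Titchener expressions derived in the paper) is a roll equation with negative linear and positive nonlinear damping; first-order averaging then gives the limit-cycle amplitude:

    ```latex
    \ddot{\phi} + \left(\mu_{1} + \mu_{2}\,\phi^{2}\right)\dot{\phi} + \omega^{2}\phi = 0,
    \qquad
    A_{\mathrm{lc}} \approx 2\sqrt{-\mu_{1}/\mu_{2}} \quad (\mu_{1}<0,\ \mu_{2}>0)
    ```

    Negative μ1 (weak or destabilizing linear roll damping) lets the oscillation grow, while positive μ2 (nonlinear damping) saturates it, which is the qualitative behavior the abstract describes.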

  12. Public health surveillance and infectious disease detection.

    PubMed

    Morse, Stephen S

    2012-03-01

    Emerging infectious diseases, such as HIV/AIDS, SARS, and pandemic influenza, and the anthrax attacks of 2001, have demonstrated that we remain vulnerable to health threats caused by infectious diseases. The importance of strengthening global public health surveillance to provide early warning has been the primary recommendation of expert groups for at least the past 2 decades. However, despite improvements in the past decade, public health surveillance capabilities remain limited and fragmented, with uneven global coverage. Recent initiatives provide hope of addressing this issue, and new technological and conceptual advances could, for the first time, place capability for global surveillance within reach. Such advances include the revised International Health Regulations (IHR 2005) and the use of new data sources and methods to improve global coverage, sensitivity, and timeliness, which show promise for providing capabilities to extend and complement the existing infrastructure. One example is syndromic surveillance, using nontraditional and often automated data sources. Over the past 20 years, other initiatives, including ProMED-mail, GPHIN, and HealthMap, have demonstrated new mechanisms for acquiring surveillance data. In 2009 the U.S. Agency for International Development (USAID) began the Emerging Pandemic Threats (EPT) program, which includes the PREDICT project, to build global capacity for surveillance of novel infections that have pandemic potential (originating in wildlife and at the animal-human interface) and to develop a framework for risk assessment. Improved understanding of factors driving infectious disease emergence and new technological capabilities in modeling, diagnostics and pathogen identification, and communications, such as using the increasing global coverage of cellphones for public health surveillance, can further enhance global surveillance.

  13. A cellular automata model for traffic flow based on kinetics theory, vehicles capabilities and driver reactions

    NASA Astrophysics Data System (ADS)

    Guzmán, H. A.; Lárraga, M. E.; Alvarez-Icaza, L.; Carvajal, J.

    2018-02-01

    In this paper, a reliable cellular automata model is presented, oriented to faithfully reproduce deceleration and acceleration according to realistic reactions of drivers when vehicles with different deceleration capabilities are considered. The model focuses on describing complex traffic phenomena by coding in its rules the basic mechanisms of driver behavior, vehicle capabilities and kinetics, while preserving simplicity. In particular, vehicle kinetics is based on uniformly accelerated motion, rather than on the impulsive accelerated motion used in most existing CA models. Thus, the proposed model calculates in an analytic way three safety-preserving distances to determine the best action a follower vehicle can take under a worst-case scenario. In addition, the prediction analysis guarantees that, under the proper assumptions, collisions between vehicles cannot happen at any future time. Simulation results indicate that all interactions of heterogeneous vehicles (i.e., car-truck, truck-car, car-car and truck-truck) are properly reproduced by the model. In addition, the model overcomes one of the major limitations of CA models for traffic modeling: the inability to perform a smooth approach to slower or stopped vehicles. Moreover, the model is also capable of reproducing most empirical findings, including the backward speed of the downstream front of the traffic jam and the different congested traffic patterns induced by a system with open boundary conditions and an on-ramp. Like most CA models, integer values are used to make the model run faster, which makes the proposed model suitable for real-time traffic simulation of large networks.
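
    A kinematic sketch of the safe-gap reasoning used in such rule sets, based on uniformly decelerated motion with a worst-case braking leader (this is a generic calculation, not the three safety distances defined by the model itself):

    ```python
    def safe_gap(v_follower, v_leader, b_follower, b_leader, reaction_time):
        """Minimum gap so the follower cannot hit the leader: the follower
        travels v*tau during its reaction time and then brakes at its own
        capability b_follower, while the leader brakes at b_leader."""
        stop_follower = (v_follower * reaction_time
                         + v_follower ** 2 / (2.0 * b_follower))
        stop_leader = v_leader ** 2 / (2.0 * b_leader)
        return max(0.0, stop_follower - stop_leader)

    # Car with strong brakes following a truck with weaker brakes (SI units).
    print(safe_gap(v_follower=25.0, v_leader=20.0,
                   b_follower=6.0, b_leader=3.0, reaction_time=1.2))
    ```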

  14. Soviet short-range nuclear forces: flexible response or flexible aggression. Student essay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, T.R.

    1987-03-23

    This essay takes a critical look at Soviet short-range nuclear forces in an effort to identify Soviet capabilities to fight a limited nuclear war with NATO. From an analysis of Soviet military art, weapon-system capabilities and tactics, the author concludes that the Soviets have developed a viable limited-nuclear-attack option. Unless NATO reacts to this option, the limited nuclear attack may become the favored Soviet option and result in the rapid defeat of NATO.

  15. CLAES Product Improvement by use of GSFC Data Assimilation System

    NASA Technical Reports Server (NTRS)

    Kumer, J. B.; Douglass, Anne (Technical Monitor)

    2001-01-01

    Recent developments in chemistry transport models (CTM) and data assimilation systems (DAS) indicate impressive predictive capability for the movement of air parcels and the chemistry that goes on within them. This project was aimed at exploring the use of this capability to achieve improved retrieval of geophysical parameters from remote sensing data. The specific goal was to improve retrieval of the CLAES CH4 data obtained during the active north high latitude dynamics event of 18 to 25 February 1992. The model capabilities would be used (1) in place of climatology to improve the first-guess and a-priori fields, and (2) to provide horizontal gradients for inclusion in the retrieval forward model. The retrieval would be implemented with the first forward DAS prediction. The results would feed back to the DAS, and a second DAS prediction of the first guess, a-priori fields, and gradients would feed into the retrieval. The process would repeat to convergence and then proceed to the next day.

  16. Background: Preflight Screening, In-flight Capabilities, and Postflight Testing

    NASA Technical Reports Server (NTRS)

    Gibson, Charles Robert; Duncan, James

    2009-01-01

    Recommendations for minimal in-flight capabilities: Retinal Imaging - provide in-flight capability for the visual monitoring of ocular health (specifically, imaging of the retina and optic nerve head) with the capability of downlinking video/still images. Tonometry - provide more accurate and reliable in-flight capability for measuring intraocular pressure. Ultrasound - explore capabilities of the current on-board system for monitoring ocular health. We currently have limited in-flight capabilities on board the International Space Station for performing an internal ocular health assessment: visual acuity, direct ophthalmoscope, ultrasound, and tonometry (Tonopen).

  17. The Ultimate Big Data Enterprise Initiative: Defining Functional Capabilities for an International Information System (IIS) for Orbital Space Data (OSD)

    NASA Astrophysics Data System (ADS)

    Raygan, R.

    Global collaboration in support of an International Information System (IIS) for Orbital Space Data (OSD) literally requires a global enterprise. As with many information technology enterprise initiatives attempting to corral the desires of business with the budgets and limitations of technology, Space Situational Awareness (SSA) includes many of the same challenges: 1) Adaptive / Intuitive Dash Board that facilitates User Experience Design for a variety of users. 2) Asset Management of hundreds of thousands of objects moving at thousands of miles per hour hundreds of miles in space. 3) Normalization and integration of diverse data in various languages, possibly hidden or protected from easy access. 4) Expectations of near real-time information availability coupled with predictive analysis to affect decisions before critical points of no return, such as Space Object Conjunction Assessment (CA). 5) Data Ownership, management, taxonomy, and accuracy. 6) Integrated metrics and easily modified algorithms for "what if" analysis. This paper proposes an approach to define the functional capabilities for an IIS for OSD. These functional capabilities not only address previously identified gaps in current systems but incorporate lessons learned from other big data, enterprise, and agile information technology initiatives that correlate to the space domain. Viewing the IIS as the "data service provider" allows adoption of existing information technology processes which strengthen governance and ensure service consumers certain levels of service dependability and accuracy.

  18. The STarT Back Screening Tool and Individual Psychological Measures: Evaluation of Prognostic Capabilities for Low Back Pain Clinical Outcomes in Outpatient Physical Therapy Settings

    PubMed Central

    Bishop, Mark D.; Fritz, Julie M.; Robinson, Michael E.; Asal, Nabih R.; Nisenzon, Anne N.

    2013-01-01

    Background Psychologically informed practice emphasizes routine identification of modifiable psychological risk factors. Objective The purpose of this study was to test the predictive validity of the STarT Back Screening Tool (SBT) in comparison with single-construct psychological measures for 6-month clinical outcomes. Design This was an observational, prospective cohort study. Methods Patients (n=146) receiving physical therapy for low back pain were administered the SBT and a battery of psychological measures (Fear-Avoidance Beliefs Questionnaire physical activity scale and work scale [FABQ-PA and FABQ-W, respectively], Pain Catastrophizing Scale [PCS], 11-item version of the Tampa Scale of Kinesiophobia [TSK-11], and 9-item Patient Health Questionnaire [PHQ-9]) at initial evaluation and 4 weeks later. Treatment was at the physical therapist's discretion. Clinical outcomes consisted of pain intensity and self-reported disability. Prediction of 6-month clinical outcomes was assessed for intake SBT and psychological measure scores using multiple regression models while controlling for other prognostic variables. In addition, the predictive capabilities of intake to 4-week changes in SBT and psychological measure scores for 6-month clinical outcomes were assessed. Results Intake pain intensity scores (β=.39 to .45) and disability scores (β=.47 to .60) were the strongest predictors in all final regression models, explaining 22% and 24% and 43% and 48% of the variance for the respective clinical outcome at 6 months. Neither SBT nor psychological measure scores improved prediction of 6-month pain intensity. The SBT overall scores (β=.22) and SBT psychosocial scores (β=.25) added to the prediction of disability at 6 months. Four-week changes in TSK-11 scores (β=−.18) were predictive of pain intensity at 6 months. Four-week changes in FABQ-PA scores (β=−.21), TSK-11 scores (β=−.20) and SBT overall scores (β=−.18) were predictive of disability at 6 months. Limitations Physical therapy treatment was not standardized or accounted for in the analysis. Conclusions Prediction of clinical outcomes by psychology-based measures was dependent upon the clinical outcome domain of interest. Similar to studies from the primary care setting, initial screening with the SBT provided additional prognostic information for 6-month disability and changes in SBT overall scores may provide important clinical decision-making information for treatment monitoring. PMID:23125279
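
    The hierarchical regression strategy described above can be sketched as follows (column and file names are hypothetical and the model is schematic, not the study's exact covariate set):

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical cohort file: intake disability and pain, intake SBT overall
    # score, and 6-month disability for each patient.
    df = pd.read_csv("sbt_cohort.csv")

    base = smf.ols("disability_6mo ~ disability_intake + pain_intake", data=df).fit()
    full = smf.ols("disability_6mo ~ disability_intake + pain_intake + sbt_overall",
                   data=df).fit()
    print(base.rsquared, full.rsquared)  # added variance explained by the SBT score
    print(full.params["sbt_overall"])    # unstandardized unless inputs are z-scored
    ```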

  19. Predictive Measures of Locomotor Performance on an Unstable Walking Surface

    NASA Technical Reports Server (NTRS)

    Bloomberg, J. J.; Peters, B. T.; Mulavara, A. P.; Caldwell, E. E.; Batson, C. D.; De Dios, Y. E.; Gadd, N. E.; Goel, R.; Wood, S. J.; Cohen, H. S.

    2016-01-01

    Locomotion requires integration of visual, vestibular, and somatosensory information to produce the appropriate motor output to control movement. The degree to which these sensory inputs are weighted and reorganized in discordant sensory environments varies by individual and may be predictive of the ability to adapt to novel environments. The goals of this project are to: 1) develop a set of predictive measures capable of identifying individual differences in sensorimotor adaptability, and 2) use this information to inform the design of training countermeasures designed to enhance the ability of astronauts to adapt to gravitational transitions improving balance and locomotor performance after a Mars landing and enhancing egress capability after a landing on Earth.

  20. Center-TRACON Automation System (CTAS) En Route Trajectory Predictor Requirements and Capabilities

    NASA Technical Reports Server (NTRS)

    Vivona, Robert; Cate, Karen Tung

    2013-01-01

    This requirements framework document is designed to support the capture of requirements and capabilities for state-of-the-art trajectory predictors (TPs). This framework has been developed to assist TP experts in capturing a clear, consistent, and cross-comparable set of requirements and capabilities. The goal is to capture capabilities (types of trajectories that can be built), functional requirements (including inputs and outputs), non-functional requirements (including prediction accuracy and computational performance), approaches for constraint relaxation, and input uncertainties. The sections of this framework are based on the Common Trajectory Predictor structure developed by the FAA/Eurocontrol Cooperative R&D Action Plan 16 Committee on Common Trajectory Prediction. It is assumed that the reader is familiar with the Common TP Structure. This initial draft is intended as a first-cut capture of the En Route TS Capabilities and Requirements. As such, it contains many annotations indicating possible logic errors in the CTAS code or in the description provided. It is intended to work out the details of the annotations with NASA and to update this document at a later time.
