Sample records for analytical prediction methods

  1. Effluent composition prediction of a two-stage anaerobic digestion process: machine learning and stoichiometry techniques.

    PubMed

    Alejo, Luz; Atkinson, John; Guzmán-Fierro, Víctor; Roeckel, Marlene

    2018-05-16

    Computational self-adapting methods (Support Vector Machines, SVM) are compared with an analytical method in effluent composition prediction of a two-stage anaerobic digestion (AD) process. Experimental data for the AD of poultry manure were used. The analytical method considers the protein as the only source of ammonia production in AD after degradation. Total ammonia nitrogen (TAN), total solids (TS), chemical oxygen demand (COD), and total volatile solids (TVS) were measured in the influent and effluent of the process. The TAN concentration in the effluent was predicted, this being the most inhibiting and polluting compound in AD. Despite the limited data available, the SVM-based model outperformed the analytical method for the TAN prediction, achieving a relative average error of 15.2% against 43% for the analytical method. Moreover, SVM showed higher prediction accuracy in comparison with Artificial Neural Networks. This result reveals the future promise of SVM for prediction in non-linear and dynamic AD processes.
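As a rough sketch of the SVM approach this record describes, the following trains a support-vector regressor to predict effluent TAN from influent measurements. All data ranges and hyperparameters below are hypothetical stand-ins for the study's poultry-manure dataset; scikit-learn's `SVR` API is assumed.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Hypothetical influent measurements: TAN, TS, COD, TVS (arbitrary units);
# the real study's data are not reproduced here.
X = rng.uniform([1.0, 20, 30, 15], [4.0, 80, 120, 60], size=(40, 4))
# Synthetic effluent TAN with a mild nonlinearity plus noise.
y = 0.8 * X[:, 0] + 0.002 * X[:, 2] ** 1.1 + rng.normal(0, 0.05, 40)

# RBF-kernel SVR with feature standardization, as is typical for SVM models.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X[:30], y[:30])

pred = model.predict(X[30:])
rel_err = np.mean(np.abs(pred - y[30:]) / y[30:]) * 100
print(f"relative average error: {rel_err:.1f}%")
```

The relative-average-error metric mirrors the one quoted in the abstract; the value obtained on this toy data is of course unrelated to the paper's 15.2%.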

  2. Predictive Analytical Model for Isolator Shock-Train Location in a Mach 2.2 Direct-Connect Supersonic Combustion Tunnel

    NASA Astrophysics Data System (ADS)

    Lingren, Joe; Vanstone, Leon; Hashemi, Kelley; Gogineni, Sivaram; Donbar, Jeffrey; Akella, Maruthi; Clemens, Noel

    2016-11-01

    This study develops an analytical model for predicting the location of the leading shock of a shock-train in the constant-area isolator section of a Mach 2.2 direct-connect scramjet simulation tunnel. The effective geometry of the isolator is assumed to be a weakly converging duct owing to boundary-layer growth. For a given pressure rise across the isolator, quasi-1D relations for isentropic or normal-shock flows can be used to predict the normal shock location in the isolator. The surface pressure distribution through the isolator was measured during experiments, so both the actual and predicted shock locations could be determined. Three methods of finding the shock-train location are examined: one based on the measured pressure rise, one using a non-physics-based control model, and one using the physics-based analytical model. It is shown that the analytical model performs better than the non-physics-based model in all cases. The analytical model is less accurate than the pressure-threshold method but requires significantly less information to compute. In contrast to other methods for predicting shock-train location, this method is relatively accurate and requires as little as a single pressure measurement, which makes it potentially useful for unstart control applications.
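As an illustration of the quasi-1D building block mentioned in this record, the following computes the normal-shock static pressure ratio and inverts it to recover the pre-shock Mach number from a measured pressure rise. This is the textbook relation only; the paper's full model additionally maps the local Mach number through the converging effective area, which is not reproduced here.

```python
import math

GAMMA = 1.4  # ratio of specific heats for air

def normal_shock_pressure_ratio(m1, gamma=GAMMA):
    """Static pressure ratio p2/p1 across a normal shock at upstream Mach m1."""
    return 1.0 + 2.0 * gamma / (gamma + 1.0) * (m1 ** 2 - 1.0)

def shock_mach_from_pressure_ratio(p_ratio, gamma=GAMMA):
    """Invert the normal-shock relation: Mach number just ahead of the shock."""
    return math.sqrt((p_ratio - 1.0) * (gamma + 1.0) / (2.0 * gamma) + 1.0)

# Ideal normal-shock pressure rise at the isolator-entrance Mach of 2.2:
pr = normal_shock_pressure_ratio(2.2)
# Round trip: recover the Mach number from that pressure ratio.
m = shock_mach_from_pressure_ratio(pr)
```

Given a wall-pressure measurement, the inverted relation yields the local Mach number ahead of the leading shock, which the model then locates along the duct via the assumed area distribution.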

  3. Analysis methods for Kevlar shield response to rotor fragments

    NASA Technical Reports Server (NTRS)

    Gerstle, J. H.

    1977-01-01

    Several empirical and analytical approaches to rotor burst shield sizing are compared and principal differences in metal and fabric dynamic behavior are discussed. The application of transient structural response computer programs to predict Kevlar containment limits is described. For preliminary shield sizing, present analytical methods are useful if insufficient test data for empirical modeling are available. To provide other information useful for engineering design, analytical methods require further developments in material characterization, failure criteria, loads definition, and post-impact fragment trajectory prediction.

  4. Analysis of a virtual memory model for maintaining database views

    NASA Technical Reports Server (NTRS)

    Kinsley, Kathryn C.; Hughes, Charles E.

    1992-01-01

    This paper presents an analytical model for predicting the performance of a new support strategy for database views. This strategy, called the virtual method, is compared with traditional methods for supporting views. The analytical model's predictions of improved performance by the virtual method are then validated by comparing these results with those achieved in an experimental implementation.

  5. Literature search of publications concerning the prediction of dynamic inlet flow distortion and related topics

    NASA Technical Reports Server (NTRS)

    Schweikhard, W. G.; Chen, Y. S.

    1983-01-01

    Publications prior to March 1981 were surveyed to determine inlet flow dynamic distortion prediction methods and to catalog experimental and analytical information concerning inlet flow dynamics at the engine-inlet interface of conventional aircraft (excluding V/STOL). The sixty-five publications found are briefly summarized and tabulated according to topic and are cross-referenced according to content and nature of the investigation (e.g., predictive, experimental, analytical, and types of tests). Three appendices include lists of references, authors, and the organizations and agencies conducting the studies. Selected materials from the reports - summaries, introductions, and conclusions - are also included. Few reports were found covering methods for predicting the probable maximum distortion. The three predictive methods found are those of Melick, Jacox, and Motycka. The latter two require extensive high-response pressure measurements at the compressor face, while the Melick technique can function with as few as one or two measurements.

  6. The Development of MST Test Information for the Prediction of Test Performances

    ERIC Educational Resources Information Center

    Park, Ryoungsun; Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G.

    2017-01-01

    The current study proposes novel methods to predict multistage testing (MST) performance without conducting simulations. This method, called MST test information, is based on analytic derivation of standard errors of ability estimates across theta levels. We compared standard errors derived analytically to the simulation results to demonstrate the…

  7. Analytical Algorithms to Quantify the Uncertainty in Remaining Useful Life Prediction

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar; Saxena, Abhinav; Daigle, Matthew; Goebel, Kai

    2013-01-01

    This paper investigates the use of analytical algorithms to quantify the uncertainty in the remaining useful life (RUL) estimate of components used in aerospace applications. The prediction of RUL is affected by several sources of uncertainty, and it is important to systematically quantify their combined effect by computing the uncertainty in the RUL prediction in order to aid risk assessment, risk mitigation, and decision-making. While sampling-based algorithms have been conventionally used for quantifying the uncertainty in RUL, analytical algorithms are computationally cheaper and sometimes better suited for online decision-making. While exact analytical algorithms are available only for certain special cases (e.g., linear models with Gaussian variables), effective approximations can be made using the first-order second-moment method (FOSM), the first-order reliability method (FORM), and the inverse first-order reliability method (Inverse FORM). These methods can be used not only to calculate the entire probability distribution of RUL but also to obtain probability bounds on RUL. This paper explains these three methods in detail and illustrates them using the state-space model of a lithium-ion battery.
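A minimal sketch of the FOSM idea from this abstract: propagate the input means and variances through a first-order Taylor expansion of the RUL function to get an approximate RUL mean and standard deviation. The degradation model below is a made-up linear-drift example, not the paper's lithium-ion battery state-space model.

```python
import numpy as np

def fosm(g, mu, sigma, h=1e-6):
    """First-order second-moment approximation of the mean and standard
    deviation of g(X) for independent inputs with means mu and std devs sigma.
    The gradient is taken numerically by central differences."""
    mu = np.asarray(mu, float)
    sigma = np.asarray(sigma, float)
    g0 = g(mu)  # first-order mean: g evaluated at the input means
    grad = np.empty_like(mu)
    for i in range(mu.size):
        d = np.zeros_like(mu)
        d[i] = h
        grad[i] = (g(mu + d) - g(mu - d)) / (2 * h)
    return g0, float(np.sqrt(np.sum((grad * sigma) ** 2)))

# Toy degradation model (an assumption, not the paper's battery model):
# RUL = (failure_threshold - current_level) / degradation_rate.
def rul(x):
    threshold, level, rate = 1.0, x[0], x[1]
    return (threshold - level) / rate

mean, std = fosm(rul, mu=[0.4, 0.02], sigma=[0.05, 0.002])
```

FOSM gives only the first two moments; FORM and Inverse FORM, as the abstract notes, additionally yield probability levels and bounds, at the cost of an optimization per evaluation.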

  8. Synthesized airfoil data method for prediction of dynamic stall and unsteady airloads

    NASA Technical Reports Server (NTRS)

    Gangwani, S. T.

    1983-01-01

    A detailed analysis of dynamic stall experiments has led to a set of relatively compact analytical expressions, called synthesized unsteady airfoil data, which accurately describe in the time-domain the unsteady aerodynamic characteristics of stalled airfoils. An analytical research program was conducted to expand and improve this synthesized unsteady airfoil data method using additional available sets of unsteady airfoil data. The primary objectives were to reduce these data to synthesized form for use in rotor airload prediction analyses and to generalize the results. Unsteady drag data were synthesized which provided the basis for successful expansion of the formulation to include computation of the unsteady pressure drag of airfoils and rotor blades. Also, an improved prediction model for airfoil flow reattachment was incorporated in the method. Application of this improved unsteady aerodynamics model has resulted in an improved correlation between analytic predictions and measured full scale helicopter blade loads and stress data.

  9. Fluid mechanics of dynamic stall. II - Prediction of full scale characteristics

    NASA Technical Reports Server (NTRS)

    Ericsson, L. E.; Reding, J. P.

    1988-01-01

    Analytical extrapolations are made from experimental subscale dynamics to predict full scale characteristics of dynamic stall. The method proceeds by establishing analytic relationships between dynamic and static aerodynamic characteristics induced by viscous flow effects. The method is then validated by predicting dynamic test results on the basis of corresponding static test data obtained at the same subscale flow conditions, and the effect of Reynolds number on the static aerodynamic characteristics is determined from subscale to full-scale flow conditions.

  10. Towards an Airframe Noise Prediction Methodology: Survey of Current Approaches

    NASA Technical Reports Server (NTRS)

    Farassat, Fereidoun; Casper, Jay H.

    2006-01-01

    In this paper, we present a critical survey of current airframe noise (AFN) prediction methodologies. Four methodologies are recognized: the fully analytic method, CFD combined with the acoustic analogy, the semi-empirical method, and the fully numerical method. It is argued that for the immediate needs of the aircraft industry, the semi-empirical method based on a recent high-quality acoustic database is the best available method. The method based on CFD and the Ffowcs Williams-Hawkings (FW-H) equation with a penetrable data surface (FW-Hpds) has advanced considerably, and much experience has been gained in its use. However, more research is needed in the near future, particularly in the area of turbulence simulation. The fully numerical method will take longer to reach maturity. Based on current trends, it is predicted that this method will eventually develop into the method of choice. Both the turbulence simulation and propagation methods need further development before this method becomes useful. Nonetheless, the authors propose that methods based on a combination of numerical and analytical techniques, e.g., CFD combined with the FW-H equation, should also be pursued. In this effort, current symbolic algebra software will allow more analytical approaches to be incorporated into AFN prediction methods.

  11. Using predictive analytics and big data to optimize pharmaceutical outcomes.

    PubMed

    Hernandez, Inmaculada; Zhang, Yuting

    2017-09-15

    The steps involved, the resources needed, and the challenges associated with applying predictive analytics in healthcare are described, with a review of successful applications of predictive analytics in implementing population health management interventions that target medication-related patient outcomes. In healthcare, the term big data typically refers to large quantities of electronic health record, administrative claims, and clinical trial data as well as data collected from smartphone applications, wearable devices, social media, and personal genomics services; predictive analytics refers to innovative methods of analysis developed to overcome challenges associated with big data, including a variety of statistical techniques ranging from predictive modeling to machine learning to data mining. Predictive analytics using big data have been applied successfully in several areas of medication management, such as in the identification of complex patients or those at highest risk for medication noncompliance or adverse effects. Because predictive analytics can be used in predicting different outcomes, they can provide pharmacists with a better understanding of the risks for specific medication-related problems that each patient faces. This information will enable pharmacists to deliver interventions tailored to patients' needs. In order to take full advantage of these benefits, however, clinicians will have to understand the basics of big data and predictive analytics. Predictive analytics that leverage big data will become an indispensable tool for clinicians in mapping interventions and improving patient outcomes. Copyright © 2017 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  12. Durability predictions of adhesively bonded composite structures using accelerated characterization methods

    NASA Technical Reports Server (NTRS)

    Brinson, H. F.

    1985-01-01

    The utilization of adhesive bonding for composite structures is briefly assessed. The need for a method to determine damage initiation and propagation for such joints is outlined. Methods currently in use to analyze both adhesive joints and fiber-reinforced plastics are mentioned, and it is noted that all of them require as input the mechanical properties of the polymeric adhesive and the composite matrix material. The mechanical properties of polymers are viscoelastic and sensitive to environmental effects. A method to analytically characterize environmentally dependent linear and nonlinear viscoelastic properties is given. The methodology can be used to extrapolate short-term data to long-term design lifetimes; that is, it can be used for long-term durability predictions. Experimental results for neat adhesive resins, polymers used as composite matrices, and unidirectional composite laminates are given. The data are fitted well by the analytical durability methodology. Finally, suggestions are outlined for the development of an analytical methodology for the durability prediction of adhesively bonded composite structures.

  13. Hybrid experimental/analytical models of structural dynamics - Creation and use for predictions

    NASA Technical Reports Server (NTRS)

    Balmes, Etienne

    1993-01-01

    An original complete methodology for the construction of predictive models of damped structural vibrations is introduced. A consistent definition of normal and complex modes is given which leads to an original method to accurately identify non-proportionally damped normal mode models. A new method to create predictive hybrid experimental/analytical models of damped structures is introduced, and the ability of hybrid models to predict the response to system configuration changes is discussed. Finally a critical review of the overall methodology is made by application to the case of the MIT/SERC interferometer testbed.

  14. Net analyte signal standard addition method (NASSAM) as a novel spectrofluorimetric and spectrophotometric technique for simultaneous determination, application to assay of melatonin and pyridoxine

    NASA Astrophysics Data System (ADS)

    Asadpour-Zeynali, Karim; Bastami, Mohammad

    2010-02-01

    In this work a new modification of the standard addition method, called the net analyte signal standard addition method (NASSAM), is presented for simultaneous spectrofluorimetric and spectrophotometric analysis. The proposed method combines the advantages of the standard addition method with those of the net analyte signal concept. The method can be applied to the determination of an analyte in the presence of known interferents. In contrast to the H-point standard addition method, the accuracy of the predictions does not depend on the shape of the analyte and interferent spectra. The method was successfully applied to the simultaneous spectrofluorimetric and spectrophotometric determination of pyridoxine (PY) and melatonin (MT) in synthetic mixtures and in a pharmaceutical formulation.
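The net analyte signal concept underlying NASSAM can be sketched as follows: project measured spectra onto the subspace orthogonal to the known interferent spectra, so that the remaining signal depends only on the analyte. The spectra and concentrations below are synthetic Gaussians under an assumed Beer-Lambert additivity, not the melatonin/pyridoxine data.

```python
import numpy as np

wavelengths = 50
grid = np.arange(wavelengths)

# Hypothetical pure-component spectra (Gaussian bands).
s_analyte = np.exp(-0.5 * ((grid - 20) / 5.0) ** 2)
s_interf = np.exp(-0.5 * ((grid - 32) / 6.0) ** 2)

K = s_interf[:, None]                              # known-interferent matrix
P = np.eye(wavelengths) - K @ np.linalg.pinv(K)    # projector onto NAS space

# Mixture: 0.7 analyte + 1.3 interferent (linear additivity assumed).
r = 0.7 * s_analyte + 1.3 * s_interf

nas_mix = P @ r           # net analyte signal of the mixture
nas_pure = P @ s_analyte  # net analyte signal of the pure analyte
conc = (nas_mix @ nas_pure) / (nas_pure @ nas_pure)  # recovered concentration
```

Because the projector annihilates the interferent contribution exactly, the recovered concentration equals the true analyte level regardless of the interferent amount, which is the property the abstract emphasizes.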

  15. Functionality of empirical model-based predictive analytics for the early detection of hemodynamic instability.

    PubMed

    Summers, Richard L; Pipke, Matt; Wegerich, Stephan; Conkright, Gary; Isom, Kristen C

    2014-01-01

    Background. Monitoring cardiovascular hemodynamics in the modern clinical setting is a major challenge. Increasing amounts of physiologic data must be analyzed and interpreted in the context of the individual patient’s pathology and inherent biologic variability. Certain data-driven analytical methods are currently being explored for smart monitoring of data streams from patients as a first-tier automated detection system for clinical deterioration. As a prelude to human clinical trials, an empirical multivariate machine learning method called Similarity-Based Modeling (“SBM”) was tested in an in silico experiment using data generated with the aid of a detailed computer simulator of human physiology (Quantitative Circulatory Physiology, or “QCP”), which contains complex control systems with realistic integrated feedback loops. Methods. SBM is a kernel-based, multivariate machine learning method that uses monitored clinical information to generate an empirical model of a patient’s physiologic state. This platform allows for the use of predictive analytic techniques to identify early changes in a patient’s condition that are indicative of a state of deterioration or instability. The integrity of the technique was tested through an in silico experiment using QCP in which computer simulations of a slowly evolving cardiac tamponade resulted in a progressive state of cardiovascular decompensation. Simulator outputs for the variables under consideration were generated at a 2-min data rate (0.083 Hz), with the tamponade introduced at a point 420 minutes into the simulation sequence. The ability of the SBM predictive analytics methodology to identify clinical deterioration was compared to the thresholds used by conventional monitoring methods. Results. The SBM modeling method was found to closely track the normal physiologic variation as simulated by QCP. With the slow development of the tamponade, the SBM model is seen to disagree with the simulated biosignals in the early stages of physiologic deterioration, while the variables are still within normal ranges. Thus, the SBM system was found to identify pathophysiologic conditions in a timeframe in which they would not have been detected in a usual clinical monitoring scenario. Conclusion. In this study the functionality of a multivariate machine learning predictive methodology that incorporates commonly monitored clinical information was tested using a computer model of human physiology. SBM and predictive analytics were able to differentiate a state of decompensation while the monitored variables were still within normal clinical ranges. This finding suggests that SBM could provide early identification of clinical deterioration using predictive analytic techniques. Keywords: predictive analytics, hemodynamics, monitoring.
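A toy version of the similarity-based modeling idea described in this record: estimate each new observation from a library of "normal" exemplars via a similarity kernel, and flag growing residuals as deviation from the learned normal state. This is a simplified sketch under an assumed Gaussian-kernel weighting, not the proprietary SBM formulation, and the two-variable "vitals" below are hypothetical.

```python
import numpy as np

def sbm_estimate(D, x, h=1.0):
    """Similarity-based estimate of observation x from exemplar matrix D
    (one exemplar per column), using Gaussian similarity weights."""
    dist = np.linalg.norm(D - x[:, None], axis=0)
    w = np.exp(-(dist / h) ** 2)
    w /= w.sum()
    return D @ w  # weighted combination of exemplars

rng = np.random.default_rng(2)
# Exemplars of "normal" two-variable physiology (hypothetical units).
D = np.vstack([70 + rng.normal(0, 2, 30), 120 + rng.normal(0, 3, 30)])

normal = np.array([70.5, 119.0])
abnormal = np.array([78.0, 108.0])   # drifting, though not at an alarm threshold

# Residual = distance between the observation and its similarity-based estimate.
res_normal = np.linalg.norm(normal - sbm_estimate(D, normal, h=5.0))
res_abnormal = np.linalg.norm(abnormal - sbm_estimate(D, abnormal, h=5.0))
```

The residual of the drifting observation exceeds that of the in-family one even though both values might pass fixed clinical thresholds, which is the early-detection property the study reports.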

  16. Predicting and explaining inflammation in Crohn's disease patients using predictive analytics methods and electronic medical record data.

    PubMed

    Reddy, Bhargava K; Delen, Dursun; Agrawal, Rupesh K

    2018-01-01

    Crohn's disease is among the chronic inflammatory bowel diseases that impact the gastrointestinal tract. Understanding and predicting the severity of inflammation in real-time settings is critical to disease management. Extant literature has primarily focused on studies that are conducted in clinical trial settings to investigate the impact of a drug treatment on the remission status of the disease. This research proposes an analytics methodology where three different types of prediction models are developed to predict and to explain the severity of inflammation in patients diagnosed with Crohn's disease. The results show that machine-learning-based analytic methods such as gradient boosting machines can predict the inflammation severity with a very high accuracy (area under the curve = 92.82%), followed by regularized regression and logistic regression. According to the findings, a combination of baseline laboratory parameters, patient demographic characteristics, and disease location are among the strongest predictors of inflammation severity in Crohn's disease patients.
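A schematic reproduction of the modeling comparison in this abstract, using synthetic data in place of the EMR cohort. scikit-learn is assumed, and the AUC values obtained here are unrelated to the paper's 92.82%.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for EMR features (labs, demographics, disease location).
X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Gradient boosting machine vs. plain logistic regression, scored by AUC
# as in the abstract.
gbm = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

auc_gbm = roc_auc_score(y_te, gbm.predict_proba(X_te)[:, 1])
auc_logit = roc_auc_score(y_te, logit.predict_proba(X_te)[:, 1])
```

Area under the ROC curve is the comparison metric the study uses; on real EMR data, regularized regression would be a third model in this loop.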

  17. Prediction of thermal cycling induced matrix cracking

    NASA Technical Reports Server (NTRS)

    Mcmanus, Hugh L.

    1992-01-01

    Thermal fatigue has been observed to cause matrix cracking in laminated composite materials. A method is presented to predict transverse matrix cracks in composite laminates subjected to cyclic thermal load. Shear lag stress approximations and a simple energy-based fracture criterion are used to predict crack densities as a function of temperature. Prediction of crack densities as a function of thermal cycling is accomplished by assuming that fatigue degrades the material's inherent resistance to cracking. The method is implemented as a computer program. A simple experiment provides data on progressive cracking of a laminate with decreasing temperature. Existing data on thermal fatigue are also used. Correlations of the analytical predictions to the data are very good. A parametric study using the analytical method is presented which provides insight into material behavior under cyclic thermal loads.

  18. Comparison of analytical and predictive methods for water, protein, fat, sugar, and gross energy in marine mammal milk.

    PubMed

    Oftedal, O T; Eisert, R; Barrell, G K

    2014-01-01

    Mammalian milks may differ greatly in composition from cow milk, and these differences may affect the performance of analytical methods. High-fat, high-protein milks with a preponderance of oligosaccharides, such as those produced by many marine mammals, present a particular challenge. We compared the performance of several methods against reference procedures using Weddell seal (Leptonychotes weddellii) milk of highly varied composition (by reference methods: 27-63% water, 24-62% fat, 8-12% crude protein, 0.5-1.8% sugar). A microdrying step preparatory to carbon-hydrogen-nitrogen (CHN) gas analysis slightly underestimated water content and had a higher repeatability relative standard deviation (RSDr) than did reference oven drying at 100°C. Compared with a reference macro-Kjeldahl protein procedure, the CHN (or Dumas) combustion method had a somewhat higher RSDr (1.56 vs. 0.60%) but correlation between methods was high (0.992), means were not different (CHN: 17.2±0.46% dry matter basis; Kjeldahl 17.3±0.49% dry matter basis), there were no significant proportional or constant errors, and predictive performance was high. A carbon stoichiometric procedure based on CHN analysis failed to adequately predict fat (reference: Röse-Gottlieb method) or total sugar (reference: phenol-sulfuric acid method). Gross energy content, calculated from energetic factors and results from reference methods for fat, protein, and total sugar, accurately predicted gross energy as measured by bomb calorimetry. We conclude that the CHN (Dumas) combustion method and calculation of gross energy are acceptable analytical approaches for marine mammal milk, but fat and sugar require separate analysis by appropriate analytic methods and cannot be adequately estimated by carbon stoichiometry. 
Some other alternative methods (low-temperature drying for water determination; the Bradford, Lowry, and biuret methods for protein; the Folch and the Bligh and Dyer methods for fat; and enzymatic and reducing-sugar methods for total sugar) appear likely to produce substantial error in marine mammal milks. It is important that alternative analytical methods be properly validated against a reference method before being used, especially for mammalian milks that differ greatly from cow milk in analyte characteristics and concentrations. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
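The proportional/constant-error check mentioned in this abstract can be sketched as a least-squares comparison of a candidate assay against a reference: a slope near 1 indicates no proportional error, an intercept near 0 no constant error. (Deming regression would be more rigorous when both methods carry error.) The values below are synthetic, not the Weddell seal data.

```python
import numpy as np

def method_comparison(reference, candidate):
    """OLS comparison of a candidate assay against a reference method.
    Returns (intercept, slope, correlation)."""
    slope, intercept = np.polyfit(reference, candidate, 1)
    corr = np.corrcoef(reference, candidate)[0, 1]
    return intercept, slope, corr

rng = np.random.default_rng(3)
# Hypothetical crude-protein values (% dry matter), mimicking a
# Kjeldahl-vs-CHN comparison with random but no systematic error.
kjeldahl = rng.uniform(15, 20, 25)
chn = kjeldahl + rng.normal(0, 0.15, 25)

intercept, slope, corr = method_comparison(kjeldahl, chn)
```

On the paper's protein data this style of analysis gave a high correlation with no significant proportional or constant error, supporting CHN combustion as a substitute for Kjeldahl.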

  19. Verification of spatial and temporal pressure distributions in segmented solid rocket motors

    NASA Technical Reports Server (NTRS)

    Salita, Mark

    1989-01-01

    A wide variety of analytical tools are in use today to predict the history and spatial distributions of pressure in the combustion chambers of solid rocket motors (SRMs). Experimental and analytical methods are presented here that allow the verification of many of these predictions. These methods are applied to the redesigned space shuttle booster (RSRM). Girth strain-gage data is compared to the predictions of various one-dimensional quasisteady analyses in order to verify the axial drop in motor static pressure during ignition transients as well as quasisteady motor operation. The results of previous modeling of radial flows in the bore, slots, and around grain overhangs are supported by approximate analytical and empirical techniques presented here. The predictions of circumferential flows induced by inhibitor asymmetries, nozzle vectoring, and propellant slump are compared to each other and to subscale cold air and water tunnel measurements to ascertain their validity.

  20. Random Forest as a Predictive Analytics Alternative to Regression in Institutional Research

    ERIC Educational Resources Information Center

    He, Lingjun; Levine, Richard A.; Fan, Juanjuan; Beemer, Joshua; Stronach, Jeanne

    2018-01-01

    In institutional research, modern data mining approaches are seldom considered to address predictive analytics problems. The goal of this paper is to highlight the advantages of tree-based machine learning algorithms over classic (logistic) regression methods for data-informed decision making in higher education problems, and stress the success of…
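A compact illustration of this paper's point that tree ensembles capture interactions a linear logit misses. The "institutional" outcome below is a synthetic XOR-style interaction between two predictors, not real student data; scikit-learn is assumed.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Outcome driven purely by an interaction of two predictors: the kind of
# nonlinearity a main-effects logistic regression cannot represent.
X = rng.normal(size=(400, 2))
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(int)   # XOR interaction

rf_acc = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean()
lr_acc = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
```

Cross-validated accuracy for the random forest far exceeds the logistic model here because the trees can split on each predictor conditionally, whereas the logit sees no marginal signal in either one.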

  1. An analytical method for prediction of stability lobes diagram of milling of large-size thin-walled workpiece

    NASA Astrophysics Data System (ADS)

    Yao, Jiming; Lin, Bin; Guo, Yu

    2017-01-01

    Unlike the milling of common thin-walled workpieces, the milling of a large-size thin-walled workpiece is also prone to chatter in the axial direction along the spindle because of the workpiece's low stiffness in that direction. An analytical method for predicting the stability lobes of milling of a large-size thin-walled workpiece is presented in this paper. The method considers not only the frequency response function of the tool point but also that of the workpiece.

  2. An analytical method for designing low noise helicopter transmissions

    NASA Technical Reports Server (NTRS)

    Bossler, R. B., Jr.; Bowes, M. A.; Royal, A. C.

    1978-01-01

    The development and experimental validation of a method for analytically modeling the noise mechanism in the helicopter geared power transmission systems is described. This method can be used within the design process to predict interior noise levels and to investigate the noise reducing potential of alternative transmission design details. Examples are discussed.

  3. SU-C-204-01: A Fast Analytical Approach for Prompt Gamma and PET Predictions in a TPS for Proton Range Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroniger, K; Herzog, M; Landry, G

    2015-06-15

    Purpose: We describe and demonstrate a fast analytical tool for prompt-gamma emission prediction based on filter functions applied to the depth dose profile. We present the implementation in a treatment planning system (TPS) of the same algorithm for positron emitter distributions. Methods: The prediction of the desired observable is based on the convolution of filter functions with the depth dose profile. For both prompt-gammas and positron emitters, the results of Monte Carlo simulations (MC) are compared with those of the analytical tool. For prompt-gamma emission from inelastic proton-induced reactions, homogeneous and inhomogeneous phantoms along with patient data are used as irradiation targets of mono-energetic proton pencil beams. The accuracy of the tool is assessed in terms of the shape of the analytically calculated depth profiles and their absolute yields, compared to MC. For the positron emitters, the method is implemented in a research RayStation TPS and compared to MC predictions. Digital phantoms and patient data are used, and positron emitter spatial density distributions are analyzed. Results: Calculated prompt-gamma profiles agree with MC within 3% in terms of absolute yield and reproduce the correct shape. Based on an arbitrary reference material and by means of 6 filter functions (one per chemical element), profiles in any other material composed of those elements can be predicted. The TPS-implemented algorithm is accurate enough to enable, via the analytically calculated positron emitter profiles, detection of range differences between the TPS and MC with errors of the order of 1–2 mm. Conclusion: The proposed analytical method predicts prompt-gamma and positron emitter profiles which generally agree with the distributions obtained by a full MC. The implementation of the tool in a TPS shows that reliable profiles can be obtained directly from the dose calculated by the TPS, without the need for a full MC simulation.
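The core of the analytical tool, convolving element-wise filter functions with the depth-dose profile, can be sketched as below. The Bragg-like dose curve and the single kernel are invented placeholders; in the actual method one filter function per chemical element is fitted against Monte Carlo data.

```python
import numpy as np

# Depth-dose profile of a mono-energetic pencil beam (hypothetical shape:
# a crude Bragg curve, not a real Monte Carlo result).
z = np.linspace(0, 60, 301)                 # depth, mm (0.2 mm spacing)
dose = 0.3 + 0.7 * np.exp(-0.5 * ((z - 50) / 2.0) ** 2)
dose[z > 52] = 0.0                          # distal falloff

# A single made-up filter kernel standing in for one element's response;
# normalized so the convolution preserves overall scale.
k = np.linspace(-10, 10, 101)
kernel = np.exp(-0.5 * (k / 3.0) ** 2)
kernel /= kernel.sum()

# Predicted prompt-gamma emission profile = kernel convolved with depth dose.
pg = np.convolve(dose, kernel, mode="same")
```

In the full method, per-element profiles obtained this way are weighted by the material composition and summed, which is how profiles in arbitrary materials are predicted from one reference material.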

  4. Reports of the AAAI 2009 Spring Symposia: Technosocial Predictive Analytics.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanfilippo, Antonio P.

    2009-10-01

    The Technosocial Predictive Analytics AAAI symposium was held at Stanford University, Stanford, CA, March 23-25, 2009. The goal of this symposium was to explore new methods for anticipatory analytical thinking that provide decision advantage through the integration of human and physical models. Special attention was also placed on how to leverage supporting disciplines to (a) facilitate the achievement of knowledge inputs, (b) improve the user experience, and (c) foster social intelligence through collaborative/competitive work.

  5. Progressive damage, fracture predictions and post mortem correlations for fiber composites

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Lewis Research Center is involved in the development of computational mechanics methods for predicting the structural behavior and response of composite structures. In conjunction with the analytical methods development, experimental programs including post failure examination are conducted to study various factors affecting composite fracture such as laminate thickness effects, ply configuration, and notch sensitivity. Results indicate that the analytical capabilities incorporated in the CODSTRAN computer code are effective in predicting the progressive damage and fracture of composite structures. In addition, the results being generated are establishing a data base which will aid in the characterization of composite fracture.

  6. Statistical Learning Theory for High Dimensional Prediction: Application to Criterion-Keyed Scale Development

    PubMed Central

    Chapman, Benjamin P.; Weiss, Alexander; Duberstein, Paul

    2016-01-01

    Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in “big data” problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different than maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how three common SLT algorithms–Supervised Principal Components, Regularization, and Boosting—can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods, and as primary analytic tools in discovery phase research. We conclude that despite their differences from the classic null-hypothesis testing approach—or perhaps because of them–SLT methods may hold value as a statistically rigorous approach to exploratory regression. PMID:27454257
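A minimal sketch of the EPE-minimizing workflow this article describes, using L1-regularized logistic regression with cross-validated penalty selection as the "criterion-keyed scale construction" step: cross-validation picks the complexity that minimizes expected prediction error rather than in-sample fit. The item pool is synthetic; no personality or mortality data are used, and scikit-learn is assumed.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV

# Hypothetical "item pool": many candidate items, few truly predictive,
# echoing the high-dimensional scale-construction setting.
X, y = make_classification(n_samples=300, n_features=100, n_informative=8,
                           random_state=0)

# L1-penalized logistic regression; 5-fold cross-validation selects the
# penalty strength, trading model complexity against overfitting.
clf = LogisticRegressionCV(Cs=10, cv=5, penalty="l1", solver="liblinear",
                           random_state=0).fit(X, y)

# Items with nonzero coefficients are the ones "keyed" onto the scale.
n_kept = int(np.sum(clf.coef_ != 0))
```

The sparsity induced by the L1 penalty is what turns the regression into a scale: retained items and their weights define the score, while the cross-validated penalty guards against the overfitting the article warns about.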

  7. Viscoelastic behavior and lifetime (durability) predictions. [for laminated fiber reinforced plastics]

    NASA Technical Reports Server (NTRS)

    Brinson, R. F.

    1985-01-01

    A method for lifetime or durability predictions for laminated fiber reinforced plastics is given. The procedure is similar to but not the same as the well known time-temperature-superposition principle for polymers. The method is better described as an analytical adaptation of time-stress-superposition methods. The analytical constitutive modeling is based upon a nonlinear viscoelastic constitutive model developed by Schapery. Time dependent failure models are discussed and are related to the constitutive models. Finally, results of an incremental lamination analysis using the constitutive and failure model are compared to experimental results. Favorable agreement between theory and experiment is obtained using data from creep tests of about two months' duration.

  8. Beyond Engagement Analytics: Which Online Mixed-Data Factors Predict Student Learning Outcomes?

    ERIC Educational Resources Information Center

    Strang, Kenneth David

    2017-01-01

    This mixed-method study focuses on online learning analytics, a research area of importance. Several important student attributes and their online activities are examined to identify what seems to work best to predict higher grades. The purpose is to explore the relationships between student grade and key learning engagement factors using a large…

  9. Correlation of ground tests and analyses of a dynamically scaled Space Station model configuration

    NASA Technical Reports Server (NTRS)

    Javeed, Mehzad; Edighoffer, Harold H.; Mcgowan, Paul E.

    1993-01-01

    Verification of analytical models through correlation with ground test results of a complex space truss structure is demonstrated. A multi-component, dynamically scaled space station model configuration is the focus structure for this work. Previously established test/analysis correlation procedures are used to develop improved component analytical models. Integrated system analytical models, consisting of updated component analytical models, are compared with modal test results to establish the accuracy of system-level dynamic predictions. Design sensitivity model updating methods are shown to be effective for providing improved component analytical models. Also, the effects of component model accuracy and interface modeling fidelity on the accuracy of integrated model predictions are examined.

  10. Design and analysis of tubular permanent magnet linear generator for small-scale wave energy converter

    NASA Astrophysics Data System (ADS)

    Kim, Jeong-Man; Koo, Min-Mo; Jeong, Jae-Hoon; Hong, Keyyong; Cho, Il-Hyoung; Choi, Jang-Young

    2017-05-01

    This paper reports the design and analysis of a tubular permanent magnet linear generator (TPMLG) for a small-scale wave-energy converter. The analytical field computation is performed by applying a magnetic vector potential and a 2-D analytical model to determine design parameters. Based on analytical solutions, parametric analysis is performed to meet the design specifications of a wave-energy converter (WEC). Then, 2-D finite element analysis (FEA) is employed to validate the analytical method. Finally, the experimental result confirms the predictions of the analytical and FEA methods under regular and irregular wave conditions.

  11. PAUSE: Predictive Analytics Using SPARQL-Endpoints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sukumar, Sreenivas R; Ainsworth, Keela; Bond, Nathaniel

    2014-07-11

    This invention relates to the medical industry and more specifically to methods of predicting risks. With the impetus towards personalized and evidence-based medicine, the need for a framework to analyze/interpret quantitative measurements (blood work, toxicology, etc.) with qualitative descriptions (specialist reports after reading images, bio-medical knowledgebase, etc.) to predict diagnostic risks is fast emerging. We describe a software solution that leverages hardware for scalable in-memory analytics and applies next-generation semantic query tools on medical data.

  12. Statistical learning theory for high dimensional prediction: Application to criterion-keyed scale development.

    PubMed

    Chapman, Benjamin P; Weiss, Alexander; Duberstein, Paul R

    2016-12-01

    Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in "big data" problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different than maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how 3 common SLT algorithms (supervised principal components, regularization, and boosting) can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods, and as primary analytic tools in discovery phase research. We conclude that despite their differences from the classic null-hypothesis testing approach, or perhaps because of them, SLT methods may hold value as a statistically rigorous approach to exploratory regression. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  13. Hybrid perturbation methods based on statistical time series models

    NASA Astrophysics Data System (ADS)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies: in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered; moreover, mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the dynamics missing from the previously integrated approximation. This combination improves the precision of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators, each formed by combining one of three different orders of approximation of an analytical theory with a statistical time series model, and analyse their capability to capture the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, a first-order analytical theory and a second-order analytical theory, whereas the prediction technique is the same in all three cases, namely an additive Holt-Winters method.
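
The additive Holt-Winters method named above can be sketched compactly. The snippet below is a minimal illustration, not the authors' implementation: the series stands in for the residual between an analytical propagator and the true dynamics, built from an invented linear drift plus a periodic term with a 12-sample period.

```python
import math

def holt_winters_additive(x, m, alpha=0.3, beta=0.05, gamma=0.2, horizon=1):
    """Additive Holt-Winters: level + trend + seasonal component of period m.
    Returns forecasts for `horizon` steps beyond the series."""
    # Initialise level, trend and seasonal indices from the first two seasons.
    level = sum(x[:m]) / m
    trend = (sum(x[m:2 * m]) - sum(x[:m])) / m ** 2
    season = [x[i] - level for i in range(m)]
    for t in range(m, len(x)):
        last_level = level
        s = season[t % m]
        level = alpha * (x[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[t % m] = gamma * (x[t] - level) + (1 - gamma) * s
    return [level + (h + 1) * trend + season[(len(x) + h) % m]
            for h in range(horizon)]

# Toy residual: linear drift plus an unmodelled periodic term (period 12).
series = [0.01 * t + 0.5 * math.sin(2 * math.pi * t / 12) for t in range(120)]
pred = holt_winters_additive(series, m=12, horizon=12)
truth = [0.01 * t + 0.5 * math.sin(2 * math.pi * t / 12) for t in range(120, 132)]
err = max(abs(p - q) for p, q in zip(pred, truth))
```

In the hybrid scheme, these forecasts would be added back onto the analytical propagator's output at each epoch.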

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Simonetto, Andrea

    This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function is computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function but does not require the computation of its inverse. Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the algorithms.
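
The prediction-correction idea can be illustrated on a toy unconstrained problem. This is a simplified sketch, not the paper's algorithm: the cost is a quadratic with a drifting minimizer, the Hessian is the identity (so the solve the paper avoids is trivial here), and a correction-only gradient scheme is run alongside for comparison.

```python
import numpy as np

def b(t):                      # time-varying target (the optimal trajectory)
    return np.array([1.0 + 0.5 * t, -2.0 + 0.3 * t])

def grad(x, t):                # gradient of f(x, t) = 0.5 * ||x - b(t)||^2
    return x - b(t)

dt, steps, step_size = 0.1, 100, 0.5
H = np.eye(2)                  # Hessian of f (constant here)

x_pc = np.zeros(2)             # prediction-correction iterate
x_c = np.zeros(2)              # correction-only iterate
err_pc, err_c = [], []
for k in range(steps):
    t = k * dt
    # Prediction: extrapolate along the optimal trajectory,
    # x+ = x - H^{-1} (d/dt grad f) dt, where d/dt grad f = -db/dt.
    dbdt = (b(t + dt) - b(t)) / dt
    x_pc = x_pc + np.linalg.solve(H, dbdt) * dt
    # Correction: one gradient step at the new time, for both schemes.
    x_pc = x_pc - step_size * grad(x_pc, t + dt)
    x_c = x_c - step_size * grad(x_c, t + dt)
    err_pc.append(np.linalg.norm(x_pc - b(t + dt)))
    err_c.append(np.linalg.norm(x_c - b(t + dt)))

final_err_pc, final_err_c = err_pc[-1], err_c[-1]
```

The correction-only iterate settles at a nonzero tracking error proportional to the drift per step, while the predicted iterate drives the error toward zero, which is the qualitative behaviour the paper's tracking-error bounds formalize.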

  15. Sustained prediction ability of net analyte preprocessing methods using reduced calibration sets. Theoretical and experimental study involving the spectrophotometric analysis of multicomponent mixtures.

    PubMed

    Goicoechea, H C; Olivieri, A C

    2001-07-01

    A newly developed multivariate method involving net analyte preprocessing (NAP) was tested using central composite calibration designs of progressively decreasing size regarding the multivariate simultaneous spectrophotometric determination of three active components (phenylephrine, diphenhydramine and naphazoline) and one excipient (methylparaben) in nasal solutions. Its performance was evaluated and compared with that of partial least-squares (PLS-1). Minimisation of the calibration predicted error sum of squares (PRESS) as a function of a moving spectral window helped to select appropriate working spectral ranges for both methods. The comparison of NAP and PLS results was carried out using two tests: (1) the elliptical joint confidence region for the slope and intercept of a predicted versus actual concentrations plot for a large validation set of samples and (2) the D-optimality criterion concerning the information content of the calibration data matrix. Extensive simulations and experimental validation showed that, unlike PLS, the NAP method is able to furnish highly satisfactory results when the calibration set is reduced from a full four-component central composite to a fractional central composite, as expected from the modelling requirements of net analyte based methods.
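
The moving-spectral-window PRESS selection used above can be sketched as follows. This is a hypothetical stand-in: it uses leave-one-out PRESS for an ordinary least squares calibration (via the hat-matrix shortcut) rather than the NAP/PLS models of the paper, on synthetic spectra with one Gaussian analyte band.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_channels = 40, 60
conc = rng.uniform(0.0, 1.0, n_samples)            # analyte concentrations
# Pure-component spectrum: a Gaussian band centred on channel 25.
channels = np.arange(n_channels)
band = np.exp(-0.5 * ((channels - 25) / 3.0) ** 2)
spectra = np.outer(conc, band) + rng.normal(scale=0.02,
                                            size=(n_samples, n_channels))

def loo_press(X, y):
    """Leave-one-out PRESS for ordinary least squares via the hat matrix."""
    Xd = np.column_stack([np.ones(len(y)), X])
    H = Xd @ np.linalg.pinv(Xd)
    resid = y - H @ y
    h = np.diag(H)
    return float(np.sum((resid / (1.0 - h)) ** 2))

# Slide a fixed-width window across the spectrum; keep the window with
# the smallest PRESS as the working spectral range.
width = 11
press = {start: loo_press(spectra[:, start:start + width], conc)
         for start in range(0, n_channels - width + 1)}
best_start = min(press, key=press.get)
```

As expected, the selected window sits on the analyte band, while signal-free windows (pure noise) give a much larger PRESS.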

  16. An automated ranking platform for machine learning regression models for meat spoilage prediction using multi-spectral imaging and metabolic profiling.

    PubMed

    Estelles-Lopez, Lucia; Ropodi, Athina; Pavlidis, Dimitris; Fotopoulou, Jenny; Gkousari, Christina; Peyrodie, Audrey; Panagou, Efstathios; Nychas, George-John; Mohareb, Fady

    2017-09-01

    Over the past decade, analytical approaches based on vibrational spectroscopy, hyperspectral/multispectral imaging and biomimetic sensors have gained popularity as rapid and efficient methods for assessing food quality, safety and authentication, and as a sensible alternative to the expensive and time-consuming conventional microbiological techniques. Due to the multi-dimensional nature of the data generated from such analyses, the output needs to be coupled with a suitable statistical approach or machine-learning algorithm before the results can be interpreted. Choosing the optimum pattern recognition or machine learning approach for a given analytical platform is often challenging and involves a comparative analysis between various algorithms in order to achieve the best possible prediction accuracy. In this work, "MeatReg", a web-based application, is presented, able to automate the procedure of identifying the best machine learning method for comparing data from several analytical techniques, to predict the counts of microorganisms responsible for meat spoilage regardless of the packaging system applied. In particular, up to seven regression methods were applied: ordinary least squares regression, stepwise linear regression, partial least squares regression, principal component regression, support vector regression, random forest and k-nearest neighbours. "MeatReg" was tested with minced beef samples stored under aerobic and modified atmosphere packaging and analysed with electronic nose, HPLC, FT-IR, GC-MS and multispectral imaging instruments. Populations of total viable count, lactic acid bacteria, pseudomonads, Enterobacteriaceae and B. thermosphacta were predicted. As a result, recommendations of which analytical platforms are suitable to predict each type of bacteria and which machine learning methods to use in each case were obtained. The developed system is accessible via the link: www.sorfml.com. Copyright © 2017 Elsevier Ltd. All rights reserved.
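
The core of such a ranking platform is a loop that scores every candidate regressor by cross-validated error and sorts the results. The sketch below is a minimal, hypothetical version with three simple regressors implemented from scratch (ordinary least squares, k-nearest neighbours, and a mean-predictor baseline) on invented data standing in for spectral features versus log bacterial counts.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 120, 6
X = rng.normal(size=(n, p))
y = 2.0 + X[:, 0] - 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.2, size=n)

def ols(train_X, train_y, test_X):
    Xd = np.column_stack([np.ones(len(train_y)), train_X])
    b, *_ = np.linalg.lstsq(Xd, train_y, rcond=None)
    return np.column_stack([np.ones(len(test_X)), test_X]) @ b

def knn(train_X, train_y, test_X, k=5):
    d = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return train_y[idx].mean(axis=1)

def mean_baseline(train_X, train_y, test_X):
    return np.full(len(test_X), train_y.mean())

def cv_rmse(model, X, y, k=5):
    # K-fold cross-validated root-mean-square error.
    folds = np.array_split(np.arange(len(y)), k)
    errs = []
    for f in folds:
        tr = np.setdiff1d(np.arange(len(y)), f)
        pred = model(X[tr], y[tr], X[f])
        errs.append(np.mean((y[f] - pred) ** 2))
    return float(np.sqrt(np.mean(errs)))

models = {"OLS": ols, "kNN": knn, "mean": mean_baseline}
ranking = sorted(models, key=lambda name: cv_rmse(models[name], X, y))
```

A production system like the one described would swap in the seven regressors listed in the abstract and repeat the loop per analytical platform and per organism.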

  17. IBM Watson Analytics: Automating Visualization, Descriptive, and Predictive Statistics

    PubMed Central

    2016-01-01

    Background We live in an era of explosive data generation that will continue to grow and involve all industries. One of the results of this explosion is the need for newer and more efficient data analytics procedures. Traditionally, data analytics required a substantial background in statistics and computer science. In 2015, International Business Machines Corporation (IBM) released the IBM Watson Analytics (IBMWA) software that delivered advanced statistical procedures based on the Statistical Package for the Social Sciences (SPSS). The latest entry of Watson Analytics into the field of analytical software products provides users with enhanced functions that are not available in many existing programs. For example, Watson Analytics automatically analyzes datasets, examines data quality, and determines the optimal statistical approach. Users can request exploratory, predictive, and visual analytics. Using natural language processing (NLP), users are able to submit additional questions for analyses in a quick response format. This analytical package is available free to academic institutions (faculty and students) that plan to use the tools for noncommercial purposes. Objective To report the features of IBMWA and discuss how this software subjectively and objectively compares to other data mining programs. Methods The salient features of the IBMWA program were examined and compared with other common analytical platforms, using validated health datasets. Results Using a validated dataset, IBMWA delivered similar predictions compared with several commercial and open source data mining software applications. The visual analytics generated by IBMWA were similar to results from programs such as Microsoft Excel and Tableau Software. In addition, assistance with data preprocessing and data exploration was an inherent component of the IBMWA application. 
Sensitivity and specificity were not included in the IBMWA predictive analytics results, nor were odds ratios, confidence intervals, or a confusion matrix. Conclusions IBMWA is a new alternative for data analytics software that automates descriptive, predictive, and visual analytics. This program is very user-friendly but requires data preprocessing, statistical conceptual understanding, and domain expertise. PMID:27729304

  18. Analytic Formulation and Numerical Implementation of an Acoustic Pressure Gradient Prediction

    NASA Technical Reports Server (NTRS)

    Lee, Seongkyu; Brentner, Kenneth S.; Farassat, F.; Morris, Philip J.

    2008-01-01

    Two new analytical formulations of the acoustic pressure gradient have been developed and implemented in the PSU-WOPWOP rotor noise prediction code. The pressure gradient is a key quantity in acoustic scattering problems, where it is needed to impose the boundary condition. The first formulation is derived from the gradient of the Ffowcs Williams-Hawkings (FW-H) equation. This formulation has a form involving the observer time differentiation outside the integrals. In the second formulation, the time differentiation is taken inside the integrals analytically. This formulation avoids the numerical time differentiation with respect to the observer time, which is computationally more efficient. The acoustic pressure gradient predicted by these new formulations is validated through comparison with available exact solutions for stationary and moving monopole sources. The agreement between the predictions and exact solutions is excellent. The formulations are applied to the rotor noise problems for two model rotors. A purely numerical approach is compared with the analytical formulations. The agreement between the analytical formulations and the numerical method is excellent for both stationary and moving observer cases.
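
The stationary-monopole validation mentioned above is easy to reproduce in miniature. The sketch below (with invented source strength and frequency) differentiates the monopole pressure field analytically and checks the result against a central finite difference, the same kind of analytic-versus-numerical comparison the paper performs.

```python
import numpy as np

c = 340.0                                # speed of sound, m/s
A, omega = 1.0e-3, 2 * np.pi * 500.0     # source strength q(t) = A sin(wt)

def pressure(r, t):
    # p(r, t) = q'(t - r/c) / (4 pi r) for a stationary monopole.
    tau = t - r / c
    return A * omega * np.cos(omega * tau) / (4 * np.pi * r)

def dpdr_analytic(r, t):
    # d/dr [q'(tau) / (4 pi r)] = -q''(tau)/(4 pi r c) - q'(tau)/(4 pi r^2)
    tau = t - r / c
    qd = A * omega * np.cos(omega * tau)
    qdd = -A * omega ** 2 * np.sin(omega * tau)
    return -qdd / (4 * np.pi * r * c) - qd / (4 * np.pi * r ** 2)

r, t, h = 2.0, 1.0e-2, 1.0e-5
dpdr_numeric = (pressure(r + h, t) - pressure(r - h, t)) / (2 * h)
rel_err = abs(dpdr_numeric - dpdr_analytic(r, t)) / abs(dpdr_analytic(r, t))
```

The first term of the analytic gradient (the far-field, retarded-time contribution) dominates at large r, which is why taking the time derivative inside the integral pays off in the second formulation.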

  19. An improved method for predicting the lightning performance of high and extra-high-voltage substation shielding

    NASA Astrophysics Data System (ADS)

    Vinh, T.

    1980-08-01

    There is a need for better and more effective lightning protection for transmission and switching substations. In the past, a number of empirical methods were utilized to design systems to protect substations and transmission lines from direct lightning strokes. The need exists for convenient analytical lightning models adequate for engineering usage. In this study, analytical lightning models were developed along with a method for improved analysis of the physical properties of lightning through their use. This method of analysis is based upon the most recent statistical field data. The result is an improved method for predicting the occurrence of shielding failure and for designing more effective protection for high and extra-high-voltage substations from direct strokes.

  20. Uncertainty in the Bayesian meta-analysis of normally distributed surrogate endpoints

    PubMed Central

    Thompson, John R; Spata, Enti; Abrams, Keith R

    2015-01-01

    We investigate the effect of the choice of parameterisation of meta-analytic models and related uncertainty on the validation of surrogate endpoints. Different meta-analytical approaches take into account different levels of uncertainty which may impact on the accuracy of the predictions of treatment effect on the target outcome from the treatment effect on a surrogate endpoint obtained from these models. A range of Bayesian as well as frequentist meta-analytical methods are implemented using illustrative examples in relapsing–remitting multiple sclerosis, where the treatment effect on disability worsening is the primary outcome of interest in healthcare evaluation, while the effect on relapse rate is considered as a potential surrogate to the effect on disability progression, and in gastric cancer, where the disease-free survival has been shown to be a good surrogate endpoint to the overall survival. Sensitivity analysis was carried out to assess the impact of distributional assumptions on the predictions. Also, sensitivity to modelling assumptions and performance of the models were investigated by simulation. Although different methods can predict mean true outcome almost equally well, inclusion of uncertainty around all relevant parameters of the model may lead to less certain and hence more conservative predictions. When investigating endpoints as candidate surrogate outcomes, a careful choice of the meta-analytical approach has to be made. Models underestimating the uncertainty of available evidence may lead to overoptimistic predictions which can then have an effect on decisions made based on such predictions. PMID:26271918

  1. Uncertainty in the Bayesian meta-analysis of normally distributed surrogate endpoints.

    PubMed

    Bujkiewicz, Sylwia; Thompson, John R; Spata, Enti; Abrams, Keith R

    2017-10-01

    We investigate the effect of the choice of parameterisation of meta-analytic models and related uncertainty on the validation of surrogate endpoints. Different meta-analytical approaches take into account different levels of uncertainty which may impact on the accuracy of the predictions of treatment effect on the target outcome from the treatment effect on a surrogate endpoint obtained from these models. A range of Bayesian as well as frequentist meta-analytical methods are implemented using illustrative examples in relapsing-remitting multiple sclerosis, where the treatment effect on disability worsening is the primary outcome of interest in healthcare evaluation, while the effect on relapse rate is considered as a potential surrogate to the effect on disability progression, and in gastric cancer, where the disease-free survival has been shown to be a good surrogate endpoint to the overall survival. Sensitivity analysis was carried out to assess the impact of distributional assumptions on the predictions. Also, sensitivity to modelling assumptions and performance of the models were investigated by simulation. Although different methods can predict mean true outcome almost equally well, inclusion of uncertainty around all relevant parameters of the model may lead to less certain and hence more conservative predictions. When investigating endpoints as candidate surrogate outcomes, a careful choice of the meta-analytical approach has to be made. Models underestimating the uncertainty of available evidence may lead to overoptimistic predictions which can then have an effect on decisions made based on such predictions.
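
The central point of this abstract, that ignoring parameter uncertainty gives overoptimistic predictions, can be shown with a minimal frequentist sketch (the paper's Bayesian models propagate more uncertainty sources than this). Trial-level effects, their standard errors, and the new trial's surrogate effect below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials = 12
# Hypothetical trial-level effects: surrogate (e.g. relapse rate) vs
# target (e.g. disability progression), with within-trial standard errors.
true_slope, true_icpt = 0.8, 0.05
surr = rng.normal(0.0, 0.5, n_trials)
se = rng.uniform(0.05, 0.15, n_trials)
target = true_icpt + true_slope * surr + rng.normal(0.0, se)

# Weighted least squares across trials (weights = 1 / se^2).
w = 1.0 / se ** 2
Xd = np.column_stack([np.ones(n_trials), surr])
W = np.diag(w)
XtWX = Xd.T @ W @ Xd
coef = np.linalg.solve(XtWX, Xd.T @ W @ target)   # [intercept, slope]
cov = np.linalg.inv(XtWX)                         # parameter covariance

# Predict the target effect for a new trial from its surrogate effect.
x_new = np.array([1.0, -0.4])
pred_mean = float(x_new @ coef)
var_param = float(x_new @ cov @ x_new)            # parameter uncertainty
se_new = 0.1                                      # new trial's sampling SE
# Ignoring parameter uncertainty understates the prediction SD:
sd_naive = se_new
sd_full = float(np.sqrt(se_new ** 2 + var_param))
```

`sd_full` is always larger than `sd_naive`; a model that reports only `sd_naive` would produce the overoptimistic intervals the abstract warns about.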

  2. Multi-analytical Approaches Informing the Risk of Sepsis

    NASA Astrophysics Data System (ADS)

    Gwadry-Sridhar, Femida; Lewden, Benoit; Mequanint, Selam; Bauer, Michael

    Sepsis is a significant cause of mortality and morbidity and is often associated with increased hospital resource utilization, prolonged intensive care unit (ICU) and hospital stay. The economic burden associated with sepsis is huge. With advances in medicine, there are now aggressive goal oriented treatments that can be used to help these patients. If we were able to predict which patients may be at risk for sepsis we could start treatment early and potentially reduce the risk of mortality and morbidity. Analytic methods currently used in clinical research to determine the risk of a patient developing sepsis may be further enhanced by multi-modal analytic methods that together provide greater precision. Researchers commonly use univariate and multivariate regressions to develop predictive models. We hypothesized that such models could be enhanced by combining multiple analytic methods to provide greater insight. In this paper, we analyze data about patients with and without sepsis using a decision tree approach and a cluster analysis approach. A comparison with a regression approach shows strong similarity among variables identified, though not an exact match. We compare the variables identified by the different approaches and draw conclusions about the respective predictive capabilities, while considering their clinical significance.
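
The multi-method comparison described above can be miniaturised: fit two different model families to the same data and compare which variables each one selects. The sketch below is purely illustrative (synthetic data, invented risk-factor names), pairing a from-scratch logistic regression with a one-split decision stump in place of a full decision tree.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400
# Hypothetical risk factors: x0 informative (e.g. lactate), x1 pure noise.
X = rng.normal(size=(n, 2))
p_sepsis = 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - 0.5)))
y = (rng.uniform(size=n) < p_sepsis).astype(float)

# Logistic regression by plain batch gradient descent.
w = np.zeros(3)
Xd = np.column_stack([np.ones(n), X])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xd @ w))
    w -= 0.1 * Xd.T @ (p - y) / n
top_regression_var = int(np.argmax(np.abs(w[1:])))

# Decision stump: best single-variable split by weighted Gini impurity.
def gini_split(x, y, thr):
    left, right = y[x <= thr], y[x > thr]
    def gini(g):
        if len(g) == 0:
            return 0.0
        q = g.mean()
        return 2 * q * (1 - q)
    return (len(left) * gini(left) + len(right) * gini(right)) / len(y)

candidates = ((j, thr) for j in range(2)
              for thr in np.quantile(X[:, j], np.linspace(0.1, 0.9, 17)))
best_var, best_thr = min(candidates,
                         key=lambda jt: gini_split(X[:, jt[0]], y, jt[1]))
top_tree_var = best_var
```

When the two families agree on the top variable, as they do here, that convergence is the kind of cross-method corroboration the paper argues for.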

  3. Shape sensitivity analysis of flutter response of a laminated wing

    NASA Technical Reports Server (NTRS)

    Bergen, Fred D.; Kapania, Rakesh K.

    1988-01-01

    A method is presented for calculating the shape sensitivity of a wing aeroelastic response with respect to changes in geometric shape. Yates' modified strip method is used in conjunction with Giles' equivalent plate analysis to predict the flutter speed, frequency, and reduced frequency of the wing. Three methods are used to calculate the sensitivity of the eigenvalue. The first method is purely a finite difference calculation of the eigenvalue derivative directly from the solution of the flutter problem corresponding to the two different values of the shape parameters. The second method uses an analytic expression for the eigenvalue sensitivities of a general complex matrix, where the derivatives of the aerodynamic, mass, and stiffness matrices are computed using a finite difference approximation. The third method also uses an analytic expression for the eigenvalue sensitivities, but the aerodynamic matrix is computed analytically. All three methods are found to be in good agreement with each other. The sensitivities of the eigenvalues were used to predict the flutter speed, frequency, and reduced frequency. These approximations were found to be in good agreement with those obtained using a complete reanalysis.
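
The second and third methods above rest on the standard analytic expression for the sensitivity of an eigenvalue of a general matrix, dlam = u.T (dA/dp) v / (u.T v), with v and u the right and left eigenvectors. The sketch below checks that expression against a finite difference on a random matrix; the matrices are arbitrary stand-ins, not the flutter system of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5
A = rng.normal(size=(n, n))
dA = rng.normal(size=(n, n))      # dA/dp: sensitivity of A to a parameter p

# Right eigenpair for the largest-magnitude eigenvalue.
vals, vecs = np.linalg.eig(A)
k = int(np.argmax(np.abs(vals)))
lam, v = vals[k], vecs[:, k]
# Matching left eigenvector: eigenvector of A.T for the same eigenvalue.
lvals, lvecs = np.linalg.eig(A.T)
u = lvecs[:, int(np.argmin(np.abs(lvals - lam)))]

# Analytic first-order sensitivity of the eigenvalue.
dlam_analytic = (u @ dA @ v) / (u @ v)

# Finite-difference check, matching the perturbed eigenvalue by proximity.
h = 1.0e-6
vals_p = np.linalg.eigvals(A + h * dA)
lam_p = vals_p[int(np.argmin(np.abs(vals_p - lam)))]
dlam_fd = (lam_p - lam) / h
rel_err = abs(dlam_fd - dlam_analytic) / abs(dlam_analytic)
```

In the paper's second method, dA itself comes from finite differences of the aerodynamic, mass and stiffness matrices; in the third it is computed analytically, but the eigenvalue formula is the same.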

  4. New method for probabilistic traffic demand predictions for en route sectors based on uncertain predictions of individual flight events.

    DOT National Transportation Integrated Search

    2011-06-14

    This paper presents a novel analytical approach to and techniques for translating characteristics of uncertainty in predicting sector entry times and times in sector for individual flights into characteristics of uncertainty in predicting one-minute ...

  5. Technosocial Predictive Analytics in Support of Naturalistic Decision Making

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanfilippo, Antonio P.; Cowell, Andrew J.; Malone, Elizabeth L.

    2009-06-23

    A main challenge we face in fostering sustainable growth is to anticipate outcomes through predictive and proactive analysis across domains as diverse as energy, security, the environment, health and finance in order to maximize opportunities, influence outcomes and counter adversities. The goal of this paper is to present new methods for anticipatory analytical thinking which address this challenge through the development of a multi-perspective approach to predictive modeling as the core of a creative decision making process. This approach is uniquely multidisciplinary in that it strives to create decision advantage through the integration of human and physical models, and leverages knowledge management and visual analytics to support creative thinking by facilitating the achievement of interoperable knowledge inputs and enhancing the user’s cognitive access. We describe a prototype system which implements this approach and exemplify its functionality with reference to a use case in which predictive modeling is paired with analytic gaming to support collaborative decision-making in the domain of agricultural land management.

  6. Predicting playing frequencies for clarinets: A comparison between numerical simulations and simplified analytical formulas.

    PubMed

    Coyle, Whitney L; Guillemain, Philippe; Kergomard, Jean; Dalmont, Jean-Pierre

    2015-11-01

    When designing a wind instrument such as a clarinet, it can be useful to be able to predict the playing frequencies. This paper presents an analytical method to deduce these playing frequencies using the input impedance curve. Specifically there are two control parameters that have a significant influence on the playing frequency, the blowing pressure and reed opening. Four effects are known to alter the playing frequency and are examined separately: the flow rate due to the reed motion, the reed dynamics, the inharmonicity of the resonator, and the temperature gradient within the clarinet. The resulting playing frequencies for the first register of a particular professional level clarinet are found using the analytical formulas presented in this paper. The analytical predictions are then compared to numerically simulated results to validate the prediction accuracy. The main conclusion is that in general the playing frequency decreases above the oscillation threshold because of inharmonicity, then increases above the beating reed regime threshold because of the decrease of the flow rate effect.

  7. Analytical Quality by Design Approach in RP-HPLC Method Development for the Assay of Etofenamate in Dosage Forms

    PubMed Central

    Peraman, R.; Bhadraya, K.; Reddy, Y. Padmanabha; Reddy, C. Surayaprakash; Lokesh, T.

    2015-01-01

    By considering the current regulatory requirement for an analytical method development, a reversed phase high performance liquid chromatographic method for routine analysis of etofenamate in dosage form has been optimized using analytical quality by design approach. Unlike routine approach, the present study was initiated with understanding of quality target product profile, analytical target profile and risk assessment for method variables that affect the method response. A liquid chromatography system equipped with a C18 column (250×4.6 mm, 5 μ), a binary pump and photodiode array detector were used in this work. The experiments were conducted based on plan by central composite design, which could save time, reagents and other resources. Sigma Tech software was used to plan and analyse the experimental observations and to obtain a quadratic process model. The process model was used to obtain a predictive solution for retention time. The retention times predicted from the contour diagrams were verified experimentally and agreed with the actual data. The optimized method used a flow rate of 1.2 ml/min and a mobile phase of methanol and 0.2% triethylamine in water (85:15, % v/v), with the pH adjusted to 6.5. The method was validated and verified for targeted method performances, robustness and system suitability during method transfer. PMID:26997704

  8. Comparison of Several Methods of Predicting the Pressure Loss at Altitude Across a Baffled Aircraft-Engine Cylinder

    NASA Technical Reports Server (NTRS)

    Neustein, Joseph; Schafer, Louis J., Jr.

    1946-01-01

    Several methods of predicting the compressible-flow pressure loss across a baffled aircraft-engine cylinder were analytically related and were experimentally investigated on a typical air-cooled aircraft-engine cylinder. Tests with and without heat transfer covered a wide range of cooling-air flows and simulated altitudes from sea level to 40,000 feet. Both the analysis and the test results showed that the method based on the density determined by the static pressure and the stagnation temperature at the baffle exit gave results comparable with those obtained from methods derived by one-dimensional-flow theory. The method based on a characteristic Mach number, although related analytically to one-dimensional-flow theory, was found impractical in the present tests because of the difficulty encountered in defining the proper characteristic state of the cooling air. Accurate predictions of altitude pressure loss can apparently be made by these methods, provided that they are based on the results of sea-level tests with heat transfer.
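
The recommended method above evaluates the cooling-air density from the static pressure and stagnation temperature at the baffle exit and scales the sea-level pressure loss by the density ratio. A minimal sketch follows; the effective flow area, mass flow and altitude conditions are invented for illustration, not taken from the report.

```python
# Density at the baffle exit from static pressure and stagnation temperature:
#   rho = p_static / (R * T_stag)
# Baffle pressure loss (incompressible-style), for a fixed mass flow mdot:
#   dp ~ mdot^2 / (2 * rho * (KA)^2), with KA a hypothetical effective area.

R = 287.05          # gas constant for air, J/(kg K)

def exit_density(p_static, T_stag):
    return p_static / (R * T_stag)

def pressure_loss(mdot, rho, KA=0.01):
    return mdot ** 2 / (2.0 * rho * KA ** 2)

# Illustrative sea-level vs ~40,000 ft conditions at the baffle exit.
rho_sl = exit_density(101325.0, 320.0)
rho_alt = exit_density(18750.0, 280.0)
mdot = 0.5          # kg/s, held fixed between the two cases
dp_sl = pressure_loss(mdot, rho_sl)
dp_alt = pressure_loss(mdot, rho_alt)
ratio = dp_alt / dp_sl   # equals rho_sl / rho_alt for fixed mass flow
```

The several-fold growth of the loss at altitude, driven purely by the density ratio, is why sea-level test data with heat transfer can be extrapolated once the exit-state density is defined consistently.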

  9. An analytical approach to obtaining JWL parameters from cylinder tests

    NASA Astrophysics Data System (ADS)

    Sutton, B. D.; Ferguson, J. W.; Hodgson, A. N.

    2017-01-01

    An analytical method for determining parameters for the JWL Equation of State from cylinder test data is described. This method is applied to four datasets obtained from two 20.3 mm diameter EDC37 cylinder tests. The calculated pressure-relative volume (p-Vr) curves agree with those produced by hydro-code modelling. The average calculated Chapman-Jouguet (CJ) pressure is 38.6 GPa, compared to the model value of 38.3 GPa; the CJ relative volume is 0.729 for both. The analytical pressure-relative volume curves produced agree with the one used in the model out to the commonly reported expansion of 7 relative volumes, as do the predicted energies generated by integrating under the p-Vr curve. The calculated energy is within 1.6% of that predicted by the model.
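
The p-Vr curve and the energy under it can be evaluated directly from the JWL form. The sketch below uses commonly quoted TNT-like parameters purely for illustration (the EDC37 parameters are not reproduced here), and evaluates the energy-dependent JWL form at a fixed detonation energy for simplicity, whereas on the true isentrope the energy decreases with expansion.

```python
import math

# JWL equation of state, p(V, E), with V the relative volume v/v0.
# Illustrative TNT-like parameters, in GPa:
A, B = 371.2, 3.231
R1, R2, omega = 4.15, 0.95, 0.30
E0 = 7.0                     # detonation energy per unit volume, GPa

def jwl_pressure(V):
    return (A * (1 - omega / (R1 * V)) * math.exp(-R1 * V)
            + B * (1 - omega / (R2 * V)) * math.exp(-R2 * V)
            + omega * E0 / V)

# Energy released between V = 1 and the commonly reported expansion of
# 7 relative volumes: the area under the p-Vr curve, by trapezoid rule.
N = 2000
Vs = [1.0 + 6.0 * i / N for i in range(N + 1)]
ps = [jwl_pressure(V) for V in Vs]
energy = sum(0.5 * (ps[i] + ps[i + 1]) * (Vs[i + 1] - Vs[i])
             for i in range(N))
```

With these illustrative parameters, evaluating near a TNT-like CJ relative volume of about 0.74 gives a pressure of roughly 20 GPa, the same order as the CJ values quoted in the abstract for EDC37.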

  10. Application of laser Raman spectroscopy in concentration measurements of multiple analytes in human body fluids

    NASA Astrophysics Data System (ADS)

    Qu, Jianan Y.; Suria, David; Wilson, Brian C.

    1998-05-01

    The primary goal of these studies was to demonstrate that NIR Raman spectroscopy is feasible as a rapid and reagentless analytic method for clinical diagnostics. Raman spectra were collected on human serum and urine samples using a 785 nm excitation laser and a single-stage holographic spectrometer. A partial least squares method was used to predict the analyte concentrations of interest. The actual concentrations were determined by standard clinical chemistry assays. The prediction accuracy of total protein, albumin, triglyceride and glucose in human sera ranged from 1.5 to 5 percent, which is acceptable for clinical diagnostics. The concentration measurements of acetaminophen, ethanol and codeine in human urine have demonstrated the potential of NIR Raman technology in the screening of therapeutic drugs and substances of abuse.
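
The partial least squares calibration used here can be sketched with the classic NIPALS algorithm. The data below are synthetic (a hypothetical analyte plus two interferents with random pure-component spectra), so this illustrates the PLS machinery rather than the actual serum measurements.

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """PLS1 via the NIPALS algorithm; returns regression coefficients
    for centred X and y."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    Xr, yr = X.copy(), y.copy()
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xr.T @ yr                 # weight vector
        w /= np.linalg.norm(w)
        t = Xr @ w                    # score vector
        tt = t @ t
        p = Xr.T @ t / tt             # X loading
        qk = yr @ t / tt              # y loading
        Xr = Xr - np.outer(t, p)      # deflate
        yr = yr - qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)

rng = np.random.default_rng(6)
n, p = 60, 30
# Synthetic spectra: analyte of interest plus two interferents.
conc = rng.uniform(size=(n, 3))
pure = rng.normal(size=(3, p))
X = conc @ pure + rng.normal(scale=0.01, size=(n, p))
y = conc[:, 0]                        # analyte concentration

B = pls1_fit(X, y, n_components=3)
pred = (X - X.mean(axis=0)) @ B + y.mean()
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

Three latent components suffice here because the synthetic mixture has exactly three varying constituents; real serum spectra typically require the component count to be chosen by cross-validation.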

  11. Predictive Big Data Analytics: A Study of Parkinson's Disease Using Large, Complex, Heterogeneous, Incongruent, Multi-Source and Incomplete Observations.

    PubMed

    Dinov, Ivo D; Heavner, Ben; Tang, Ming; Glusman, Gustavo; Chard, Kyle; Darcy, Mike; Madduri, Ravi; Pa, Judy; Spino, Cathie; Kesselman, Carl; Foster, Ian; Deutsch, Eric W; Price, Nathan D; Van Horn, John D; Ames, Joseph; Clark, Kristi; Hood, Leroy; Hampstead, Benjamin M; Dauer, William; Toga, Arthur W

    2016-01-01

A unique archive of Big Data on Parkinson's Disease is collected, managed and disseminated by the Parkinson's Progression Markers Initiative (PPMI). The integration of such complex and heterogeneous Big Data from multiple sources offers unparalleled opportunities to study the early stages of prevalent neurodegenerative processes, track their progression and quickly identify the efficacies of alternative treatments. Many previous human and animal studies have examined the relationship of Parkinson's disease (PD) risk to trauma, genetics, environment, co-morbidities, or life style. The defining characteristics of Big Data (large size, incongruency, incompleteness, complexity, multiplicity of scales, and heterogeneity of information-generating sources) all pose challenges to the classical techniques for data management, processing, visualization and interpretation. We propose, implement, test and validate complementary model-based and model-free approaches for PD classification and prediction. To explore PD risk using Big Data methodology, we jointly processed complex PPMI imaging, genetics, clinical and demographic data. Collective representation of the multi-source data facilitates the aggregation and harmonization of complex data elements. This enables joint modeling of the complete data, leading to the development of Big Data analytics, predictive synthesis, and statistical validation. Using heterogeneous PPMI data, we developed a comprehensive protocol for end-to-end data characterization, manipulation, processing, cleaning, analysis and validation. Specifically, we (i) introduce methods for rebalancing imbalanced cohorts, (ii) utilize a wide spectrum of classification methods to generate consistent and powerful phenotypic predictions, and (iii) generate reproducible machine-learning based classification that enables the reporting of model parameters and diagnostic forecasting based on new data.
We evaluated several complementary model-based predictive approaches, which failed to generate accurate and reliable diagnostic predictions. However, the results of several machine-learning based classification methods indicated significant power to predict Parkinson's disease in the PPMI subjects (consistent accuracy, sensitivity, and specificity exceeding 96%, confirmed using statistical n-fold cross-validation). Clinical (e.g., Unified Parkinson's Disease Rating Scale (UPDRS) scores), demographic (e.g., age), genetics (e.g., rs34637584, chr12), and derived neuroimaging biomarker (e.g., cerebellum shape index) data all contributed to the predictive analytics and diagnostic forecasting. Model-free Big Data machine learning-based classification methods (e.g., adaptive boosting, support vector machines) can outperform model-based techniques in terms of predictive precision and reliability (e.g., forecasting patient diagnosis). We observed that statistical rebalancing of cohort sizes yields better discrimination of group differences, specifically for predictive analytics based on heterogeneous and incomplete PPMI data. UPDRS scores play a critical role in predicting diagnosis, which is expected based on the clinical definition of Parkinson's disease. Even without longitudinal UPDRS data, however, the accuracy of model-free machine learning based classification is over 80%. The methods, software and protocols developed here are openly shared and can be employed to study other neurodegenerative disorders (e.g., Alzheimer's, Huntington's, amyotrophic lateral sclerosis), as well as for other predictive Big Data analytics applications.
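    Two steps of the protocol, cohort rebalancing and classification, can be illustrated on synthetic data. The oversampling and nearest-centroid classifier below are generic stand-ins for the study's methods (and the data are simulated, not PPMI data).

```python
import numpy as np

# (i) Rebalance an imbalanced cohort by oversampling the minority group;
# (ii) classify held-out points with a simple nearest-centroid rule.
rng = np.random.default_rng(1)
cases = rng.normal([2.0, 2.0], 1.0, (20, 2))      # minority cohort
controls = rng.normal([0.0, 0.0], 1.0, (100, 2))  # majority cohort

# (i) Oversample cases with replacement to match the control-cohort size.
idx = rng.integers(0, len(cases), len(controls))
cases_balanced = cases[idx]

# (ii) Nearest-centroid classification.
mu_case = cases_balanced.mean(axis=0)
mu_ctrl = controls.mean(axis=0)

def classify(x):
    # 1 = case, 0 = control, by distance to the nearer cohort centroid
    return int(np.linalg.norm(x - mu_case) < np.linalg.norm(x - mu_ctrl))

test_cases = rng.normal([2.0, 2.0], 1.0, (50, 2))
test_ctrls = rng.normal([0.0, 0.0], 1.0, (50, 2))
acc = (np.mean([classify(x) for x in test_cases]) +
       np.mean([1 - classify(x) for x in test_ctrls])) / 2
```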

  12. Accommodating subject and instrument variations in spectroscopic determinations

    DOEpatents

    Haas, Michael J [Albuquerque, NM; Rowe, Robert K [Corrales, NM; Thomas, Edward V [Albuquerque, NM

    2006-08-29

A method and apparatus for measuring a biological attribute, such as the concentration of an analyte, particularly a blood analyte in tissue, such as glucose. The method utilizes spectrographic techniques in conjunction with an improved instrument-tailored or subject-tailored calibration model. In a calibration phase, calibration model data are modified to reduce or eliminate instrument-specific attributes, resulting in a calibration data set that models intra-instrument or intra-subject variation. In a prediction phase, the prediction process is tailored for each target instrument or subject separately, using a minimal number of spectral measurements from that instrument or subject.
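    One simple way to realize the idea of removing instrument-specific attributes is to center each instrument's calibration spectra on its own mean spectrum, leaving only intra-instrument variation; the patent's actual procedure may differ, so this is only a sketch of the concept on simulated spectra.

```python
import numpy as np

# Simulated calibration spectra from several instruments, each with its own
# fixed spectral offset ("instrument signature"), plus sample-to-sample noise.
rng = np.random.default_rng(2)
n_instruments, n_samples, n_channels = 3, 25, 50

signatures = rng.normal(0, 1.0, (n_instruments, n_channels))
spectra = (signatures[:, None, :] +                       # instrument-specific
           rng.normal(0, 0.1, (n_instruments, n_samples, n_channels)))

# Remove instrument-specific attributes: center within each instrument.
centered = spectra - spectra.mean(axis=1, keepdims=True)

# After centering, each instrument's mean spectrum is (numerically) zero,
# so the remaining variation is intra-instrument only.
residual_offsets = centered.mean(axis=1)
```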

  13. Buckling Testing and Analysis of Space Shuttle Solid Rocket Motor Cylinders

    NASA Technical Reports Server (NTRS)

    Weidner, Thomas J.; Larsen, David V.; McCool, Alex (Technical Monitor)

    2002-01-01

    A series of full-scale buckling tests were performed on the space shuttle Reusable Solid Rocket Motor (RSRM) cylinders. The tests were performed to determine the buckling capability of the cylinders and to provide data for analytical comparison. A nonlinear ANSYS Finite Element Analysis (FEA) model was used to represent and evaluate the testing. Analytical results demonstrated excellent correlation to test results, predicting the failure load within 5%. The analytical value was on the conservative side, predicting a lower failure load than was applied to the test. The resulting study and analysis indicated the important parameters for FEA to accurately predict buckling failure. The resulting method was subsequently used to establish the pre-launch buckling capability of the space shuttle system.

  14. Review of Thawing Time Prediction Models Depending on Process Conditions and Product Characteristics

    PubMed Central

    Kluza, Franciszek; Spiess, Walter E. L.; Kozłowicz, Katarzyna

    2016-01-01

Summary Determining thawing times of frozen foods is a challenging problem because the thermophysical properties of the product change during thawing. A number of calculation models and solutions have been developed. The proposed solutions range from relatively simple analytical equations based on a number of assumptions to a group of empirical approaches that sometimes require complex calculations. In this paper analytical, empirical and graphical models are presented and critically reviewed. The solution conditions, limitations and possible applications of the models are discussed. The graphical and semi-graphical models are derived from numerical methods. Using numerical methods is not always practical, as the calculations are time-consuming and the specialized software and equipment can be expensive. For these reasons, the application of analytical-empirical models is more useful for engineering purposes. It is demonstrated that there is no simple, accurate and generally applicable analytical method for thawing time prediction. Consequently, simplified methods are needed for estimating the thawing time of agricultural and food products. The review reveals the need for further improvement of the existing solutions, or development of new ones, to enable accurate determination of thawing time within a wide range of practical heat transfer conditions during processing. PMID:27904387
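    The simplest analytical equations the review discusses are Plank-type estimates. The sketch below evaluates one such estimate for a slab; the property values are illustrative (roughly lean-meat-like), not taken from the paper.

```python
# Plank-type analytical thawing-time estimate for an infinite slab.
# All parameter values are illustrative assumptions, not the review's data.
def plank_thawing_time(d, rho=1050.0, dH=250e3, T_air=15.0, T_f=-1.0,
                       h=20.0, k=0.5, P=0.5, R=0.125):
    """Thawing time (s) of a slab of thickness d (m).

    t = rho*dH/(T_air - T_f) * (P*d/h + R*d**2/k)
    rho: density (kg/m^3), dH: latent heat (J/kg), h: surface heat-transfer
    coefficient (W/m^2 K), k: conductivity of the thawed layer (W/m K);
    P = 1/2, R = 1/8 are the slab shape factors.
    """
    return rho * dH / (T_air - T_f) * (P * d / h + R * d ** 2 / k)

t_2cm = plank_thawing_time(0.02)   # ~a few hours for a 2 cm slab
t_4cm = plank_thawing_time(0.04)   # doubling thickness more than doubles time
```

    Note the quadratic term in thickness: thawing time grows faster than linearly with slab thickness, which is why simple per-unit-mass rules of thumb fail for thick products.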

  15. Fast semi-analytical method for precise prediction of ion energy distribution functions and sheath electric field in multi-frequency capacitively coupled plasmas

    NASA Astrophysics Data System (ADS)

    Chen, Wencong; Zhang, Xi; Diao, Dongfeng

    2018-05-01

We propose a fast semi-analytical method to predict ion energy distribution functions and sheath electric field in multi-frequency capacitively coupled plasmas, which are difficult to measure in commercial plasma reactors. In the intermediate frequency regime, the ion density within the sheath is strongly modulated by the low-frequency sheath electric field, making the time-independent ion density assumption employed in conventional models invalid. Our results are in good agreement with experimental measurements and computer simulations. The application of this method will facilitate the understanding of ion–material interaction mechanisms and the development of new-generation plasma etching devices.
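    A cartoon of why RF-modulated sheaths yield bimodal ion energy distributions: in the low-frequency limit, an ion crossing a thin sheath at a random RF phase gains energy proportional to the instantaneous sheath voltage, and sampling a sinusoid concentrates probability near the voltage extremes. This is only a toy model, not the paper's semi-analytical method (which tracks the time-modulated ion density).

```python
import numpy as np

# Low-frequency-limit toy IEDF: ion energy = e*V(t) at a random RF phase.
rng = np.random.default_rng(3)
V_dc, V_rf = 100.0, 50.0                        # volts, illustrative values
phases = rng.uniform(0, 2 * np.pi, 200_000)
energies = V_dc + V_rf * np.sin(phases)         # eV per unit charge

counts, edges = np.histogram(energies, bins=40,
                             range=(V_dc - V_rf, V_dc + V_rf))
# The histogram is bimodal: the outermost bins (near V_dc +/- V_rf) collect
# far more ions than the central bin, the classic two-peak IEDF shape.
```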

  16. An augmented classical least squares method for quantitative Raman spectral analysis against component information loss.

    PubMed

    Zhou, Yan; Cao, Hui

    2013-01-01

We propose an augmented classical least squares (ACLS) calibration method for quantitative Raman spectral analysis against component information loss. The Raman spectral signals with low analyte concentration correlations were selected and used as the substitutes for unknown quantitative component information during the CLS calibration procedure. The number of selected signals was determined by using the leave-one-out root-mean-square error of cross-validation (RMSECV) curve. An ACLS model was built based on the augmented concentration matrix and the reference spectral signal matrix. The proposed method was compared with partial least squares (PLS) and principal component regression (PCR) using a data set recorded from an experiment on analyte concentration determination by Raman spectroscopy. A 2-fold cross-validation with Venetian blinds strategy was exploited to evaluate the predictive power of the proposed method. One-way analysis of variance (ANOVA) was used to assess the difference in predictive power between the proposed method and existing methods. Results indicated that the proposed method is effective at increasing the robust predictive power of the traditional CLS model against component information loss, and its predictive power is comparable to that of PLS or PCR.
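    The CLS framework that the paper augments can be sketched in a few lines: spectra are modeled as concentrations times pure-component spectra, calibration estimates the pure spectra, and prediction inverts that relationship. The augmentation step (substituting selected spectral signals for a missing component's concentrations) is not reproduced here; this is only the baseline CLS workflow on synthetic data.

```python
import numpy as np

# Baseline classical least squares (CLS) calibration and prediction.
# Model: A = C @ K  (spectra = concentrations x pure-component spectra).
rng = np.random.default_rng(4)
n_samples, n_channels = 30, 60
K_true = rng.random((2, n_channels))                  # pure-component spectra
C = rng.uniform(0.5, 5.0, (n_samples, 2))             # known concentrations
A = C @ K_true + rng.normal(0, 0.005, (n_samples, n_channels))

# Calibration: estimate the pure-component spectra K from (C, A).
K_hat = np.linalg.pinv(C) @ A

# Prediction: estimate concentrations of a new sample from its spectrum.
c_true = np.array([2.0, 1.0])
a_new = c_true @ K_true + rng.normal(0, 0.005, n_channels)
c_pred = a_new @ np.linalg.pinv(K_hat)
```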

  17. Predictive Big Data Analytics: A Study of Parkinson’s Disease Using Large, Complex, Heterogeneous, Incongruent, Multi-Source and Incomplete Observations

    PubMed Central

    Dinov, Ivo D.; Heavner, Ben; Tang, Ming; Glusman, Gustavo; Chard, Kyle; Darcy, Mike; Madduri, Ravi; Pa, Judy; Spino, Cathie; Kesselman, Carl; Foster, Ian; Deutsch, Eric W.; Price, Nathan D.; Van Horn, John D.; Ames, Joseph; Clark, Kristi; Hood, Leroy; Hampstead, Benjamin M.; Dauer, William; Toga, Arthur W.

    2016-01-01

    Background A unique archive of Big Data on Parkinson’s Disease is collected, managed and disseminated by the Parkinson’s Progression Markers Initiative (PPMI). The integration of such complex and heterogeneous Big Data from multiple sources offers unparalleled opportunities to study the early stages of prevalent neurodegenerative processes, track their progression and quickly identify the efficacies of alternative treatments. Many previous human and animal studies have examined the relationship of Parkinson’s disease (PD) risk to trauma, genetics, environment, co-morbidities, or life style. The defining characteristics of Big Data–large size, incongruency, incompleteness, complexity, multiplicity of scales, and heterogeneity of information-generating sources–all pose challenges to the classical techniques for data management, processing, visualization and interpretation. We propose, implement, test and validate complementary model-based and model-free approaches for PD classification and prediction. To explore PD risk using Big Data methodology, we jointly processed complex PPMI imaging, genetics, clinical and demographic data. Methods and Findings Collective representation of the multi-source data facilitates the aggregation and harmonization of complex data elements. This enables joint modeling of the complete data, leading to the development of Big Data analytics, predictive synthesis, and statistical validation. Using heterogeneous PPMI data, we developed a comprehensive protocol for end-to-end data characterization, manipulation, processing, cleaning, analysis and validation. Specifically, we (i) introduce methods for rebalancing imbalanced cohorts, (ii) utilize a wide spectrum of classification methods to generate consistent and powerful phenotypic predictions, and (iii) generate reproducible machine-learning based classification that enables the reporting of model parameters and diagnostic forecasting based on new data. 
We evaluated several complementary model-based predictive approaches, which failed to generate accurate and reliable diagnostic predictions. However, the results of several machine-learning based classification methods indicated significant power to predict Parkinson’s disease in the PPMI subjects (consistent accuracy, sensitivity, and specificity exceeding 96%, confirmed using statistical n-fold cross-validation). Clinical (e.g., Unified Parkinson's Disease Rating Scale (UPDRS) scores), demographic (e.g., age), genetics (e.g., rs34637584, chr12), and derived neuroimaging biomarker (e.g., cerebellum shape index) data all contributed to the predictive analytics and diagnostic forecasting. Conclusions Model-free Big Data machine learning-based classification methods (e.g., adaptive boosting, support vector machines) can outperform model-based techniques in terms of predictive precision and reliability (e.g., forecasting patient diagnosis). We observed that statistical rebalancing of cohort sizes yields better discrimination of group differences, specifically for predictive analytics based on heterogeneous and incomplete PPMI data. UPDRS scores play a critical role in predicting diagnosis, which is expected based on the clinical definition of Parkinson’s disease. Even without longitudinal UPDRS data, however, the accuracy of model-free machine learning based classification is over 80%. The methods, software and protocols developed here are openly shared and can be employed to study other neurodegenerative disorders (e.g., Alzheimer’s, Huntington’s, amyotrophic lateral sclerosis), as well as for other predictive Big Data analytics applications. PMID:27494614

  18. 3D-MICE: integration of cross-sectional and longitudinal imputation for multi-analyte longitudinal clinical data.

    PubMed

    Luo, Yuan; Szolovits, Peter; Dighe, Anand S; Baron, Jason M

    2018-06-01

    A key challenge in clinical data mining is that most clinical datasets contain missing data. Since many commonly used machine learning algorithms require complete datasets (no missing data), clinical analytic approaches often entail an imputation procedure to "fill in" missing data. However, although most clinical datasets contain a temporal component, most commonly used imputation methods do not adequately accommodate longitudinal time-based data. We sought to develop a new imputation algorithm, 3-dimensional multiple imputation with chained equations (3D-MICE), that can perform accurate imputation of missing clinical time series data. We extracted clinical laboratory test results for 13 commonly measured analytes (clinical laboratory tests). We imputed missing test results for the 13 analytes using 3 imputation methods: multiple imputation with chained equations (MICE), Gaussian process (GP), and 3D-MICE. 3D-MICE utilizes both MICE and GP imputation to integrate cross-sectional and longitudinal information. To evaluate imputation method performance, we randomly masked selected test results and imputed these masked results alongside results missing from our original data. We compared predicted results to measured results for masked data points. 3D-MICE performed significantly better than MICE and GP-based imputation in a composite of all 13 analytes, predicting missing results with a normalized root-mean-square error of 0.342, compared to 0.373 for MICE alone and 0.358 for GP alone. 3D-MICE offers a novel and practical approach to imputing clinical laboratory time series data. 3D-MICE may provide an additional tool for use as a foundation in clinical predictive analytics and intelligent clinical decision support.
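    The core idea of integrating cross-sectional and longitudinal information can be shown on a deliberately simple toy: impute a masked lab value once by interpolating the same analyte over time, once from a correlated analyte at the same time point, and combine the two. 3D-MICE combines full MICE and Gaussian-process estimates; this is only the simplest possible analogue on synthetic data.

```python
import numpy as np

# Toy combination of longitudinal and cross-sectional imputation estimates.
t = np.arange(10, dtype=float)
analyte = 5.0 + 0.3 * t                 # true time series of one analyte
partner = 2.0 * analyte + 1.0           # perfectly correlated second analyte

missing = 4                             # index of the masked measurement
true_val = analyte[missing]

# Longitudinal estimate: interpolate the same analyte between time neighbors.
long_est = (analyte[missing - 1] + analyte[missing + 1]) / 2

# Cross-sectional estimate: invert the partner-analyte relationship.
cross_est = (partner[missing] - 1.0) / 2.0

combined = (long_est + cross_est) / 2   # integrate both information sources
```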

  19. Forecasting hotspots using predictive visual analytics approach

    DOEpatents

    Maciejewski, Ross; Hafen, Ryan; Rudolph, Stephen; Cleveland, William; Ebert, David

    2014-12-30

A method for forecasting hotspots is provided. The method may include the steps of receiving input data at an input of the computational device, generating a temporal prediction based on the input data, generating a geospatial prediction based on the input data, and generating output data based on the temporal and geospatial predictions. The output data may be configured to display at least one user interface at an output of the computational device.
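    The two ingredients the patent combines can be sketched generically: a temporal prediction (here a plain moving-average forecast of event counts) and a geospatial prediction (here a Gaussian kernel density surface locating the hotspot). Both are generic stand-ins, not the patented algorithms.

```python
import numpy as np

rng = np.random.default_rng(5)

# Temporal prediction: forecast the next count as the mean of the last 3.
counts = np.array([4, 6, 5, 7, 8, 9], dtype=float)
forecast = counts[-3:].mean()

# Geospatial prediction: kernel density on a grid from past event locations.
events = rng.normal([3.0, 3.0], 0.5, (200, 2))        # clustered near (3, 3)
xs, ys = np.meshgrid(np.linspace(0, 6, 61), np.linspace(0, 6, 61))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
d2 = ((grid[:, None, :] - events[None, :, :]) ** 2).sum(-1)
density = np.exp(-d2 / (2 * 0.5 ** 2)).sum(axis=1)
hotspot = grid[np.argmax(density)]                    # predicted hotspot center
```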

  20. Personality, Cognitive Style, Motivation, and Aptitude Predict Systematic Trends in Analytic Forecasting Behavior.

    PubMed

    Poore, Joshua C; Forlines, Clifton L; Miller, Sarah M; Regan, John R; Irvine, John M

    2014-12-01

The decision sciences are increasingly challenged to advance methods for modeling analysts, accounting for both analytic strengths and weaknesses, to improve inferences taken from increasingly large and complex sources of data. We examine whether psychometric measures (personality, cognitive style, motivated cognition) predict analytic performance and whether psychometric measures are competitive with aptitude measures (i.e., SAT scores) as analyst sample selection criteria. A heterogeneous, national sample of 927 participants completed an extensive battery of psychometric measures and aptitude tests and was asked 129 geopolitical forecasting questions over the course of 1 year. Factor analysis reveals four dimensions among psychometric measures; dimensions characterized by differently motivated "top-down" cognitive styles predicted distinctive patterns in aptitude and forecasting behavior. These dimensions were not better predictors of forecasting accuracy than aptitude measures. However, multiple regression and mediation analysis reveals that these dimensions influenced forecasting accuracy primarily through bias in forecasting confidence. We also found that these facets were competitive with aptitude tests as forecast sampling criteria designed to mitigate biases in forecasting confidence while maximizing accuracy. These findings inform the understanding of individual difference dimensions at the intersection of personality and analytic aptitude and demonstrate that they wield predictive power in applied, analytic domains.

  1. Personality, Cognitive Style, Motivation, and Aptitude Predict Systematic Trends in Analytic Forecasting Behavior

    PubMed Central

Poore, Joshua C.; Forlines, Clifton L.; Miller, Sarah M.; Regan, John R.; Irvine, John M.

    2014-01-01

The decision sciences are increasingly challenged to advance methods for modeling analysts, accounting for both analytic strengths and weaknesses, to improve inferences taken from increasingly large and complex sources of data. We examine whether psychometric measures—personality, cognitive style, motivated cognition—predict analytic performance and whether psychometric measures are competitive with aptitude measures (i.e., SAT scores) as analyst sample selection criteria. A heterogeneous, national sample of 927 participants completed an extensive battery of psychometric measures and aptitude tests and was asked 129 geopolitical forecasting questions over the course of 1 year. Factor analysis reveals four dimensions among psychometric measures; dimensions characterized by differently motivated “top-down” cognitive styles predicted distinctive patterns in aptitude and forecasting behavior. These dimensions were not better predictors of forecasting accuracy than aptitude measures. However, multiple regression and mediation analysis reveals that these dimensions influenced forecasting accuracy primarily through bias in forecasting confidence. We also found that these facets were competitive with aptitude tests as forecast sampling criteria designed to mitigate biases in forecasting confidence while maximizing accuracy. These findings inform the understanding of individual difference dimensions at the intersection of personality and analytic aptitude and demonstrate that they wield predictive power in applied, analytic domains. PMID:25983670

  2. An implementation of an aeroacoustic prediction model for broadband noise from a vertical axis wind turbine using a CFD informed methodology

    NASA Astrophysics Data System (ADS)

    Botha, J. D. M.; Shahroki, A.; Rice, H.

    2017-12-01

This paper presents an enhanced method for predicting aerodynamically generated broadband noise produced by a Vertical Axis Wind Turbine (VAWT). The method improves on existing work for VAWT noise prediction and incorporates recently developed airfoil noise prediction models. Inflow-turbulence and airfoil self-noise mechanisms are both considered. Airfoil noise predictions depend on aerodynamic input data, and time-dependent Computational Fluid Dynamics (CFD) calculations are carried out to obtain the aerodynamic solution. Analytical flow methods are also benchmarked against the CFD-informed noise prediction results to quantify the errors in the former approach. Comparisons with experimental noise measurements for an existing turbine are encouraging. A parameter study shows the sensitivity of overall noise levels to changes in inflow velocity and inflow turbulence. The noise sources are characterised, and the location and mechanism of the primary sources are determined; inflow-turbulence noise is seen to be the dominant source. The use of CFD calculations improves the accuracy of the noise predictions compared with the analytical flow solution, and shows that, for inflow-turbulence noise sources, blade-generated turbulence dominates the atmospheric inflow turbulence.
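    Overall levels in predictions like these come from energy-summing the individual source contributions (inflow-turbulence noise, self-noise components) in decibels. A minimal sketch of that summation, with illustrative levels:

```python
import numpy as np

# Energy sum of sound pressure levels: L = 10*log10(sum 10^(Li/10)).
def combine_spl(levels_db):
    """Combine incoherent source levels (dB) into an overall level (dB)."""
    levels_db = np.asarray(levels_db, dtype=float)
    return 10.0 * np.log10(np.sum(10.0 ** (levels_db / 10.0)))

total = combine_spl([70.0, 70.0])     # two equal sources -> +3 dB
dominant = combine_spl([80.0, 60.0])  # a 20 dB stronger source dominates
```

    This is why a dominant mechanism (here, inflow-turbulence noise) controls the overall level: a source 20 dB below the strongest adds well under 0.1 dB.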

  3. Comparison of theoretically predicted lateral-directional aerodynamic characteristics with full-scale wind tunnel data on the ATLIT airplane

    NASA Technical Reports Server (NTRS)

    Griswold, M.; Roskam, J.

    1980-01-01

    An analytical method is presented for predicting lateral-directional aerodynamic characteristics of light twin engine propeller-driven airplanes. This method is applied to the Advanced Technology Light Twin Engine airplane. The calculated characteristics are correlated against full-scale wind tunnel data. The method predicts the sideslip derivatives fairly well, although angle of attack variations are not well predicted. Spoiler performance was predicted somewhat high but was still reasonable. The rudder derivatives were not well predicted, in particular the effect of angle of attack. The predicted dynamic derivatives could not be correlated due to lack of experimental data.

  4. Evaluation of analytical procedures for prediction of turbulent boundary layers on a porous wall

    NASA Technical Reports Server (NTRS)

    Towne, C. E.

    1974-01-01

An analytical study has been made to determine how well current boundary layer prediction techniques work when there is mass transfer normal to the wall. The data considered in this investigation were for two-dimensional, incompressible, turbulent boundary layers with suction and blowing. Some of the bleed data were taken in an adverse pressure gradient. An integral prediction method was used with three different porous wall skin friction relations, in addition to a solid-surface relation for the suction cases. A numerical prediction method was also used. Comparisons were made between theoretical and experimental skin friction coefficients, displacement and momentum thicknesses, and velocity profiles. The integral method with one of the porous wall skin friction laws gave very good agreement with the data for most of the cases considered. The use of the solid-surface skin friction law caused the integral method to overpredict the effectiveness of the bleed. The numerical technique also worked well for most of the cases.
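    A classic porous-wall skin friction correction of the kind such integral methods employ is the Kays-type blowing correction, in which blowing reduces skin friction and suction increases it. This is a standard textbook relation used here for illustration, not necessarily one of the three relations evaluated in the report.

```python
import numpy as np

# Kays-type transpiration correction to turbulent skin friction:
#   Cf/Cf0 = ln(1+B)/B,  B = blowing parameter (>0 blowing, <0 suction).
def cf_ratio(B):
    """Ratio of skin friction with transpiration to the solid-wall value."""
    B = np.asarray(B, dtype=float)
    return np.where(np.abs(B) < 1e-12, 1.0, np.log1p(B) / B)

blowing = cf_ratio(1.0)    # ~0.69: blowing cuts skin friction
suction = cf_ratio(-0.5)   # ~1.39: suction increases it
```

    The limit B -> 0 recovers the solid-wall value, which is consistent with the report's observation that using the solid-surface law under bleed overpredicts its effectiveness.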

  5. Survey of NASA research on crash dynamics

    NASA Technical Reports Server (NTRS)

    Thomson, R. G.; Carden, H. D.; Hayduk, R. J.

    1984-01-01

    Ten years of structural crash dynamics research activities conducted on general aviation aircraft by the National Aeronautics and Space Administration (NASA) are described. Thirty-two full-scale crash tests were performed at Langley Research Center, and pertinent data on airframe and seat behavior were obtained. Concurrent with the experimental program, analytical methods were developed to help predict structural behavior during impact. The effects of flight parameters at impact on cabin deceleration pulses at the seat/occupant interface, experimental and analytical correlation of data on load-limiting subfloor and seat configurations, airplane section test results for computer modeling validation, and data from emergency-locator-transmitter (ELT) investigations to determine probable cause of false alarms and nonactivations are assessed. Computer programs which provide designers with analytical methods for predicting accelerations, velocities, and displacements of collapsing structures are also discussed.

  6. Prediction of turning stability using receptance coupling

    NASA Astrophysics Data System (ADS)

    Jasiewicz, Marcin; Powałka, Bartosz

    2018-01-01

This paper addresses the prediction of machining stability for the dynamic "lathe - workpiece" system using the receptance coupling method. Dynamic properties of the lathe components (the spindle and the tailstock) are assumed to be constant and can be determined experimentally from the results of an impact test. Hence, the variable element of the "machine tool - holder - workpiece" system is the machined part, which can easily be modelled analytically. The receptance coupling method enables a synthesis of the experimental (spindle, tailstock) and analytical (machined part) models, so impact testing of the entire system becomes unnecessary. The paper presents the methodology of synthesizing the analytical and experimental models, the evaluation of the stability lobes, and an experimental validation procedure involving both the determination of the dynamic properties of the system and cutting tests. Finally, the experimental verification results are presented and discussed.
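    The coupling step can be sketched with the simplest possible case: two substructure receptances (single-DOF oscillators standing in for the measured machine side and the analytically modelled workpiece) joined rigidly at a point, where the dynamic stiffnesses add. The parameter values are illustrative, not from the paper.

```python
import numpy as np

# Rigid point coupling of two receptances: H = (H_A^-1 + H_B^-1)^-1.
w = np.linspace(1.0, 400.0, 2000)            # frequency axis (rad/s)

def sdof_receptance(w, k, m, c):
    """Receptance H(w) = 1 / (k - m*w^2 + i*c*w) of a 1-DOF oscillator."""
    return 1.0 / (k - m * w ** 2 + 1j * c * w)

H_a = sdof_receptance(w, k=1.0e6, m=10.0, c=50.0)   # "machine" substructure
H_b = sdof_receptance(w, k=4.0e5, m=2.0, c=20.0)    # "workpiece" substructure
H_coupled = 1.0 / (1.0 / H_a + 1.0 / H_b)

# Static check: at low frequency the coupled flexibility tends to
# 1/(k_a + k_b), i.e. the two stiffnesses act in parallel.
static = np.abs(H_coupled[0])
```

    In the paper's setting, H_a would come from impact tests on the spindle/tailstock and H_b from the analytical beam model of the workpiece, so only the varying part needs remodelling between jobs.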

  7. Tire Changes, Fresh Air, and Yellow Flags: Challenges in Predictive Analytics for Professional Racing.

    PubMed

    Tulabandhula, Theja; Rudin, Cynthia

    2014-06-01

    Our goal is to design a prediction and decision system for real-time use during a professional car race. In designing a knowledge discovery process for racing, we faced several challenges that were overcome only when domain knowledge of racing was carefully infused within statistical modeling techniques. In this article, we describe how we leveraged expert knowledge of the domain to produce a real-time decision system for tire changes within a race. Our forecasts have the potential to impact how racing teams can optimize strategy by making tire-change decisions to benefit their rank position. Our work significantly expands previous research on sports analytics, as it is the only work on analytical methods for within-race prediction and decision making for professional car racing.

  8. Verification of an Analytical Method for Measuring Crystal Nucleation Rates in Glasses from DTA Data

    NASA Technical Reports Server (NTRS)

    Ranasinghe, K. S.; Wei, P. F.; Kelton, K. F.; Ray, C. S.; Day, D. E.

    2004-01-01

A recently proposed analytical differential thermal analysis (DTA) method for estimating the nucleation rates in glasses has been evaluated by comparing experimental data with numerically computed nucleation rates for a model lithium disilicate glass. The time and temperature dependent nucleation rates were predicted using the model and compared with those values from an analysis of numerically calculated DTA curves. The validity of the numerical approach was demonstrated earlier by a comparison with experimental data. The excellent agreement between the nucleation rates from the model calculations and from the computer generated DTA data demonstrates the validity of the proposed analytical DTA method.

  9. An Economical Semi-Analytical Orbit Theory for Retarded Satellite Motion About an Oblate Planet

    NASA Technical Reports Server (NTRS)

    Gordon, R. A.

    1980-01-01

Brouwer's and Brouwer-Lyddane's use of the von Zeipel-Delaunay method is employed to develop an efficient analytical orbit theory suitable for microcomputers. A simple, pseudo-phenomenological algorithm is introduced which synthesizes the modeling of drag effects accurately and economically, and the method lends itself to efficient computer implementation. Simulated trajectory data are employed to illustrate the theory's ability to accurately accommodate oblateness and drag effects for ground-based or onboard microcomputer orbital prediction. Real tracking data are used to demonstrate that the theory's orbit determination and orbit prediction capabilities adapt favorably to, and are comparable with, results obtained using complex definitive Cowell method solutions on satellites experiencing significant drag effects.

  10. Circular Functions Based Comprehensive Analysis of Plastic Creep Deformations in the Fiber Reinforced Composites

    NASA Astrophysics Data System (ADS)

    Monfared, Vahid

    2016-12-01

An analytically based model is presented for the analysis of plastic deformations in reinforced materials using circular (trigonometric) functions. The analytical method is proposed to predict the creep behavior of fibrous composites based on basic and constitutive equations under a tensile axial stress. The novelty of the work is the prediction of several important behaviors of the creeping matrix, and in the present model this prediction is simpler than with the available methods. Principal creep strain rate behavior is especially important when designing fibrous composites for creep, and analyzing it in reinforced materials is necessary for failure, fracture, and fatigue studies in the creep of short fiber composites. Shuttles, spaceships, turbine blades and discs, and nozzle guide vanes are commonly subjected to creep effects. Predicting creep behavior is also significant for designing optoelectronic and photonic advanced composites with optical fibers. As a result, uniform behavior with a constant gradient is seen in the principal creep strain rate, and creep rupture may occur at the fiber end. Finally, good agreement is found between the obtained analytical and FEM results.
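    The constitutive starting point for matrix creep models of this kind is steady-state power-law (Norton-Arrhenius) creep, in which strain rate rises strongly with stress and temperature. The constants below are illustrative, not the paper's.

```python
import numpy as np

# Norton-Arrhenius steady-state creep: eps_dot = A * sigma^n * exp(-Q/(R*T)).
def creep_rate(sigma, T, A=1e-10, n=5.0, Q=300e3, R=8.314):
    """Steady-state creep strain rate for stress sigma (MPa-scale units
    folded into A) at absolute temperature T (K). Illustrative constants."""
    return A * sigma ** n * np.exp(-Q / (R * T))

rate_low = creep_rate(sigma=50.0, T=900.0)
rate_hi_stress = creep_rate(sigma=100.0, T=900.0)   # 2x stress -> 2^n = 32x rate
rate_hi_temp = creep_rate(sigma=50.0, T=1000.0)     # hotter -> faster creep
```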

  11. A method of predicting flow rates required to achieve anti-icing performance with a porous leading edge ice protection system

    NASA Technical Reports Server (NTRS)

    Kohlman, D. L.; Albright, A. E.

    1983-01-01

    An analytical method was developed for predicting minimum flow rates required to provide anti-ice protection with a porous leading edge fluid ice protection system. The predicted flow rates compare with an average error of less than 10 percent to six experimentally determined flow rates from tests in the NASA Icing Research Tunnel on a general aviation wing section.
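    The physical driver behind such a flow-rate prediction is the rate at which the leading edge catches supercooled water, which scales with airspeed, liquid water content, collection efficiency, and frontal catch area; the anti-icing fluid must at least keep that catch diluted below its freezing point. The back-of-envelope sketch below is a generic catch-rate estimate with illustrative numbers, not the report's method.

```python
# Impinging water mass flux on a leading edge (kg/s). Illustrative only.
def water_catch_rate(V, LWC, span, height, E):
    """V: airspeed (m/s); LWC: liquid water content (kg/m^3);
    span, height: frontal catch-area dimensions (m); E: collection efficiency."""
    return E * LWC * V * span * height

catch = water_catch_rate(V=70.0, LWC=0.5e-3, span=1.8, height=0.05, E=0.4)
catch_fast = water_catch_rate(V=140.0, LWC=0.5e-3, span=1.8, height=0.05, E=0.4)
```

    The linear scaling with airspeed and liquid water content is the reason minimum fluid flow rates must be specified per icing condition rather than as a single number.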

  12. Big Data Analytics for a Smart Green Infrastructure Strategy

    NASA Astrophysics Data System (ADS)

    Barrile, Vincenzo; Bonfa, Stefano; Bilotta, Giuliana

    2017-08-01

As is well known, Big Data is a term for data sets so large or complex that traditional data processing applications are not sufficient to process them. The term "Big Data" often refers to the use of predictive analytics, user behavior analytics, or other advanced data analytics methods that extract value from data, and rarely to a particular size of data set. This is especially true for the huge amount of Earth Observation data that the satellites constantly orbiting the Earth transmit daily.

  13. Assessment of analytical techniques for predicting solid propellant exhaust plumes and plume impingement environments

    NASA Technical Reports Server (NTRS)

    Tevepaugh, J. A.; Smith, S. D.; Penny, M. M.

    1977-01-01

    An analysis of experimental nozzle, exhaust plume, and exhaust plume impingement data is presented. The data were obtained for subscale solid propellant motors with propellant Al loadings of 2, 10 and 15% exhausting to simulated altitudes of 50,000, 100,000 and 112,000 ft. Analytical predictions were made using a fully coupled two-phase method of characteristics numerical solution and a technique for defining thermal and pressure environments experienced by bodies immersed in two-phase exhaust plumes.

  14. Prediction of light aircraft interior noise

    NASA Technical Reports Server (NTRS)

    Howlett, J. T.; Morales, D. A.

    1976-01-01

    At the present time, predictions of aircraft interior noise depend heavily on empirical correction factors derived from previous flight measurements. However, to design for acceptable interior noise levels and to optimize acoustic treatments, analytical techniques which do not depend on empirical data are needed. This paper describes a computerized interior noise prediction method for light aircraft. An existing analytical program (developed for commercial jets by Cockburn and Jolly in 1968) forms the basis of some modal analysis work which is described. The accuracy of this modal analysis technique for predicting low-frequency coupled acoustic-structural natural frequencies is discussed along with trends indicating the effects of varying parameters such as fuselage length and diameter, structural stiffness, and interior acoustic absorption.

  15. Model uncertainty of various settlement estimation methods in shallow tunnels excavation; case study: Qom subway tunnel

    NASA Astrophysics Data System (ADS)

    Khademian, Amir; Abdollahipour, Hamed; Bagherpour, Raheb; Faramarzi, Lohrasb

    2017-10-01

    In addition to numerous planning and execution challenges, underground excavation in urban areas is always accompanied by destructive effects, especially at the ground surface; ground settlement is the most important of these effects, and different empirical, analytical, and numerical methods exist for estimating it. Since geotechnical models carry considerable model uncertainty, this study characterized the model uncertainty of settlement estimation models through a systematic comparison between model predictions and past performance data derived from instrumentation. To do so, the surface settlement induced by excavation of the Qom subway tunnel was estimated via empirical (Peck), analytical (Loganathan and Poulos) and numerical (FDM) methods; the resulting maximum settlement values were 1.86, 2.02 and 1.52 cm, respectively. Comparing these predictions with the actual instrumentation data quantified the uncertainty of each model. The numerical model, with a relative error of 3.8%, best matched reality, while the analytical method, with a relative error of 27.8%, showed the highest model uncertainty.
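
    The model-uncertainty comparison above boils down to percent relative error against instrumentation data. A minimal sketch follows; the measured settlement value used below is hypothetical, since the abstract reports only the resulting error percentages, not the instrumented value itself.

```python
def relative_error_pct(predicted_cm, measured_cm):
    """Percent relative error of a model prediction against a measured value."""
    return abs(predicted_cm - measured_cm) / measured_cm * 100.0

# Hypothetical measured settlement, for illustration only:
measured = 1.60
for name, pred in [("Peck (empirical)", 1.86),
                   ("Loganathan-Poulos (analytical)", 2.02),
                   ("FDM (numerical)", 1.52)]:
    print(f"{name}: {relative_error_pct(pred, measured):.1f}% relative error")
```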

  16. Spectral multivariate calibration without laboratory prepared or determined reference analyte values.

    PubMed

    Ottaway, Josh; Farrell, Jeremy A; Kalivas, John H

    2013-02-05

    An essential part of calibration is establishing the analyte calibration reference samples. These samples must characterize the sample matrix and measurement conditions (chemical, physical, instrumental, and environmental) of any sample to be predicted. Calibration usually requires measuring spectra for numerous reference samples in addition to determining the corresponding analyte reference values. Both tasks are typically time-consuming and costly. This paper reports on a method named pure component Tikhonov regularization (PCTR) that does not require laboratory prepared or determined reference values. Instead, an analyte pure component spectrum is used in conjunction with nonanalyte spectra for calibration. Nonanalyte spectra can be from different sources including pure component interference samples, blanks, and constant analyte samples. The approach is also applicable to calibration maintenance when the analyte pure component spectrum is measured in one set of conditions and nonanalyte spectra are measured in new conditions. The PCTR method balances the trade-offs between calibration model shrinkage and the degree of orthogonality to the nonanalyte content (model direction) in order to obtain accurate predictions. Using visible and near-infrared (NIR) spectral data sets, the PCTR results are comparable to those obtained using ridge regression (RR) with reference calibration sets. The flexibility of PCTR also allows including reference samples if such samples are available.
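
    The core trade-off described here (shrinkage versus orthogonality to nonanalyte spectra) can be sketched as a stacked Tikhonov least-squares problem: fit a regression vector that responds with 1 to the analyte pure component spectrum and near 0 to nonanalyte spectra. This is a generic illustration of the idea, not the authors' exact PCTR algorithm, and the six-point "spectra" below are synthetic.

```python
import numpy as np

def pctr_like(pure, nonanalyte, lam=1e-4, eta=1.0):
    """Regression vector b such that pure @ b ~= 1 (unit analyte response)
    while nonanalyte @ b ~= 0, with Tikhonov shrinkage weight lam."""
    m = pure.size
    A = np.vstack([pure[None, :],              # analyte pure component row
                   np.sqrt(eta) * nonanalyte,  # nonanalyte rows, target 0
                   np.sqrt(lam) * np.eye(m)])  # shrinkage rows
    y = np.concatenate([[1.0], np.zeros(nonanalyte.shape[0]), np.zeros(m)])
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    return b

# Synthetic 6-wavelength spectra: one analyte, two interferents
pure = np.array([1.0, 0.8, 0.4, 0.1, 0.05, 0.0])
interferents = np.array([[0.0, 0.1, 0.3, 0.6, 1.0, 0.4],
                         [0.2, 0.0, 0.0, 0.1, 0.3, 0.9]])
b = pctr_like(pure, interferents)
```

    With a small shrinkage weight, the fitted vector responds to the analyte while remaining nearly orthogonal to the interferent spectra.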

  17. Retention prediction and separation optimization under multilinear gradient elution in liquid chromatography with Microsoft Excel macros.

    PubMed

    Fasoula, S; Zisi, Ch; Gika, H; Pappa-Louisi, A; Nikitas, P

    2015-05-22

    A package of Excel VBA macros has been developed for modeling multilinear gradient retention data obtained in single or double gradient elution mode by changing organic modifier(s) content and/or eluent pH. For this purpose, ten chromatographic models were used and four methods were adopted for their application. The methods were based on (a) the analytical expression of the retention time, provided that this expression is available; (b) the retention times estimated using the Nikitas-Pappa approach; (c) the stepwise approximation; and (d) a simple numerical approximation involving the trapezoid rule for integration of the fundamental equation for gradient elution. For all these methods, Excel VBA macros have been written and implemented using two different platforms: a fitting platform and an optimization platform. The fitting platform calculates not only the adjustable parameters of the chromatographic models, but also the significance of these parameters, and furthermore predicts the analyte elution times. The optimization platform determines the gradient conditions that lead to the optimum separation of a mixture of analytes by using the Solver evolutionary mode, provided that proper constraints are set in order to obtain the optimum gradient profile in the minimum gradient time. The performance of the two platforms was tested using experimental and artificial data. It was found that using the proposed spreadsheets, fitting, prediction, and optimization can be performed easily and effectively under all conditions. Overall, the best performance is exhibited by the analytical and Nikitas-Pappa methods, although the former cannot be used under all circumstances. Copyright © 2015 Elsevier B.V. All rights reserved.
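
    Method (d), trapezoidal integration of the fundamental gradient-elution equation, can be sketched in a few lines. This simplified version uses the linear-solvent-strength model ln k = ln k0 - S*phi with a linear gradient phi(t), and ignores dwell volume; all numerical values are illustrative, not taken from the paper.

```python
import math

def retention_time(ln_k0, S, phi_of_t, t0, dt=1e-3, t_max=1e4):
    """Solve  integral_0^tg dt / (t0 * k(phi(t))) = 1  by the trapezoid rule;
    the analyte then elutes at t_R = tg + t0 (column dead time)."""
    k = lambda t: math.exp(ln_k0 - S * phi_of_t(t))
    f_prev, area, t = 1.0 / (t0 * k(0.0)), 0.0, 0.0
    while t < t_max:
        f_new = 1.0 / (t0 * k(t + dt))
        seg = 0.5 * (f_prev + f_new) * dt
        if area + seg >= 1.0:                 # integral crosses 1 in this step
            return t + (1.0 - area) / seg * dt + t0
        area, t, f_prev = area + seg, t + dt, f_new
    return t + t0

t0 = 1.0
iso = retention_time(math.log(100.0), 10.0, lambda t: 0.20, t0)            # isocratic
grad = retention_time(math.log(100.0), 10.0, lambda t: 0.20 + 0.02 * t, t0)  # gradient
```

    The isocratic case gives an easy check: with constant k, the integral reaches 1 at t = t0*k, so t_R = t0*(1 + k), and a steepening gradient always elutes the analyte earlier.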

  18. Three-Dimensional Dynamic Deformation Measurements Using Stereoscopic Imaging and Digital Speckle Photography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prentice, H. J.; Proud, W. G.

    2006-07-28

    A technique has been developed to determine experimentally the three-dimensional displacement field on the rear surface of a dynamically deforming plate. The technique combines speckle analysis with stereoscopy, using a modified angular-lens method: this incorporates split-frame photography and a simple method by which the effective lens separation can be adjusted and calibrated in situ. Whilst several analytical models exist to predict deformation in extended or semi-infinite targets, the non-trivial nature of the wave interactions complicates the generation and development of analytical models for targets of finite depth. By interrogating specimens experimentally to acquire three-dimensional strain data points, both analytical and numerical model predictions can be verified more rigorously. The technique is applied to the quasi-static deformation of a rubber sheet and dynamically to mild steel sheets of various thicknesses.

  19. Simultaneous determination of three herbicides by differential pulse voltammetry and chemometrics.

    PubMed

    Ni, Yongnian; Wang, Lin; Kokot, Serge

    2011-01-01

    A novel differential pulse voltammetry (DPV) method was developed for the simultaneous determination of pendimethalin, dinoseb and sodium 5-nitroguaiacolate (5NG) with the aid of chemometrics. The voltammograms of these three compounds overlapped significantly, so chemometrics methods were applied to facilitate the simultaneous determination of the three analytes. These included classical least squares (CLS), principal component regression (PCR), partial least squares (PLS) and radial basis function-artificial neural networks (RBF-ANN). A separately prepared verification data set was used to confirm the calibrations, which were built from the original and first derivative data matrices of the voltammograms. On the basis of relative prediction errors and recoveries of the analytes, the RBF-ANN and DPLS (D: first derivative spectra) models performed best and are particularly recommended for application. The DPLS calibration model was applied satisfactorily for the prediction of the three analytes in market vegetables and lake water samples.

  20. 3D analysis of eddy current loss in the permanent magnet coupling.

    PubMed

    Zhu, Zina; Meng, Zhuo

    2016-07-01

    This paper first presents a 3D analytical model for analyzing the radial air-gap magnetic field between the inner and outer magnetic rotors of permanent magnet couplings using the Amperian current model. Based on the air-gap field analysis, the eddy current loss in the isolation cover is predicted according to Maxwell's equations. A 3D finite element analysis model is constructed to analyze the magnetic field spatial distributions and vector eddy currents, and the simulation results are analyzed and compared with the analytical method. Finally, the eddy current losses of two types of practical magnet couplings are measured experimentally for comparison with the theoretical results. It is concluded that the 3D analytical method for eddy current loss in the magnet coupling is viable and could be used for eddy current loss prediction of magnet couplings.

  1. Analytical and experimental studies on detection of longitudinal, L and inverted T cracks in isotropic and bi-material beams based on changes in natural frequencies

    NASA Astrophysics Data System (ADS)

    Ravi, J. T.; Nidhan, S.; Muthu, N.; Maiti, S. K.

    2018-02-01

    An analytical method for determining the dimensions of a longitudinal crack in monolithic beams, based on frequency measurements, has been extended to model L and inverted T cracks. Such cracks, including longitudinal cracks, arise in beams made of layered isotropic or composite materials. A new formulation for modelling cracks in bi-material beams is presented. Longitudinal crack segment sizes for L and inverted T cracks, varying from 2.7% to 13.6% of the length of Euler-Bernoulli beams, are considered. Both forward and inverse problems have been examined. In the forward problems, the analytical results are compared with finite element (FE) solutions. In the inverse problems, the accuracy of prediction of crack dimensions is verified using FE results as input for virtual testing. The analytical results show good agreement with the actual crack dimensions. Further, experimental studies have been done to verify the accuracy of the analytical method for prediction of the dimensions of the three types of crack in isotropic and bi-material beams. The results show that the proposed formulation is reliable and can be employed for crack detection in slender beam-like structures in practice.

  2. An Extrapolation of a Radical Equation More Accurately Predicts Shelf Life of Frozen Biological Matrices.

    PubMed

    De Vore, Karl W; Fatahi, Nadia M; Sass, John E

    2016-08-01

    Arrhenius modeling of analyte recovery at increased temperatures to predict long-term colder storage stability of biological raw materials, reagents, calibrators, and controls is standard practice in the diagnostics industry. Predicting subzero temperature stability using the same practice is frequently criticized but nevertheless heavily relied upon. We compared the ability to predict analyte recovery during frozen storage using 3 separate strategies: traditional accelerated studies with Arrhenius modeling, and extrapolation of recovery at 20% of shelf life using either ordinary least squares or a radical equation y = B1x^0.5 + B0. Computer simulations were performed to establish equivalence of statistical power to discern the expected changes during frozen storage or accelerated stress. This was followed by actual predictive and follow-up confirmatory testing of 12 chemistry and immunoassay analytes. Linear extrapolations tended to be the most conservative in the predicted percent recovery, reducing customer and patient risk. However, the majority of analytes followed a rate of change that slowed over time, which was best fit by a radical equation of the form y = B1x^0.5 + B0. Other evidence strongly suggested that the slowing of the rate was not due to higher-order kinetics, but to changes in the matrix during storage. Predicting shelf life of frozen products through extrapolation of early initial real-time storage analyte recovery should be considered the most accurate method. Although in this study the time required for a prediction was longer than a typical accelerated testing protocol, there are fewer potential sources of error, reduced costs, and a lower expenditure of resources. © 2016 American Association for Clinical Chemistry.
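
    The radical-equation extrapolation reduces to ordinary linear regression after substituting u = sqrt(x). A minimal sketch on noiseless synthetic recovery data (percent recovery versus months of frozen storage; the values are illustrative, not the paper's):

```python
import math

def fit_radical(xs, ys):
    """Least-squares fit of y = B1*sqrt(x) + B0 via the substitution u = sqrt(x)."""
    us = [math.sqrt(x) for x in xs]
    n = len(xs)
    su, sy = sum(us), sum(ys)
    suu = sum(u * u for u in us)
    suy = sum(u * y for u, y in zip(us, ys))
    b1 = (n * suy - su * sy) / (n * suu - su * su)
    b0 = (sy - b1 * su) / n
    return b1, b0

months = [1, 2, 4, 6, 9, 12]
recovery = [100.0 - 1.5 * math.sqrt(m) for m in months]  # synthetic: B1=-1.5, B0=100
b1, b0 = fit_radical(months, recovery)
pred_36 = b1 * math.sqrt(36) + b0  # extrapolated recovery at 36 months
```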

  3. Improvement of analytical dynamic models using modal test data

    NASA Technical Reports Server (NTRS)

    Berman, A.; Wei, F. S.; Rao, K. V.

    1980-01-01

    A method developed to determine maximum changes in analytical mass and stiffness matrices to make them consistent with a set of measured normal modes and natural frequencies is presented. The corrected model will be an improved base for studies of physical changes, boundary condition changes, and for prediction of forced responses. The method features efficient procedures not requiring solutions of the eigenvalue problem, and the ability to have more degrees of freedom than the test data. In addition, modal displacements are obtained for all analytical degrees of freedom, and the frequency dependence of the coordinate transformations is properly treated.
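
    The flavor of such a direct matrix update can be illustrated with the classic Berman-style mass-matrix correction: given measured mode shapes Phi that fail mass orthonormality against the analytical mass matrix Ma, a closed-form update restores Phi^T M Phi = I without solving an eigenvalue problem. This is a sketch with small synthetic matrices; the paper's full procedure, including the stiffness update, is more involved.

```python
import numpy as np

def berman_mass_update(Ma, Phi):
    """Direct update M = Ma + Ma*Phi*ma^-1*(I - ma)*ma^-1*Phi^T*Ma,
    where ma = Phi^T Ma Phi; afterwards Phi^T M Phi = I exactly."""
    ma = Phi.T @ Ma @ Phi
    G = Ma @ Phi @ np.linalg.inv(ma)
    return Ma + G @ (np.eye(ma.shape[0]) - ma) @ G.T

Ma = np.diag([2.0, 3.0, 4.0])        # analytical mass matrix (3 DOF)
Phi = np.array([[1.0, 0.0],
                [0.1, 1.0],
                [0.0, 0.2]])         # two "measured" mode shapes
M = berman_mass_update(Ma, Phi)
```

    Note that Phi^T G = I by construction, so Phi^T M Phi = ma + (I - ma) = I, and the update preserves symmetry.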

  4. Predictive Analytics for Identification of Patients at Risk for QT Interval Prolongation - A Systematic Review.

    PubMed

    Tomaselli Muensterman, Elena; Tisdale, James E

    2018-06-08

    Prolongation of the heart rate-corrected QT (QTc) interval increases the risk for torsades de pointes (TdP), a potentially fatal arrhythmia. The likelihood of TdP is higher in patients with risk factors, which include female sex, older age, heart failure with reduced ejection fraction, hypokalemia, hypomagnesemia, concomitant administration of ≥ 2 QTc interval-prolonging medications, among others. Assessment and quantification of risk factors may facilitate prediction of patients at highest risk for developing QTc interval prolongation and TdP. Investigators have utilized the field of predictive analytics, which generates predictions using techniques including data mining, modeling, machine learning, and others, to develop methods of risk quantification and prediction of QTc interval prolongation. Predictive analytics have also been incorporated into clinical decision support (CDS) tools to alert clinicians regarding patients at increased risk of developing QTc interval prolongation. The objectives of this paper are to assess the effectiveness of predictive analytics for identification of patients at risk of drug-induced QTc interval prolongation, and to discuss the efficacy of incorporation of predictive analytics into CDS tools in clinical practice. A systematic review of English language articles (human subjects only) was performed, yielding 57 articles, with an additional 4 articles identified from other sources; a total of 10 articles were included in this review. Risk scores for QTc interval prolongation have been developed in various patient populations including those in cardiac intensive care units (ICUs) and in broader populations of hospitalized or health system patients. One group developed a risk score that includes information regarding genetic polymorphisms; this score significantly predicted TdP. 
    Development of QTc interval prolongation risk prediction models and incorporation of these models into CDS tools reduces the risk of QTc interval prolongation in cardiac ICUs and identifies health-system patients at increased risk for mortality. The impact of these QTc interval prolongation predictive analytics on overall patient safety outcomes, such as TdP and sudden cardiac death relative to the cost of development and implementation, requires further study. This article is protected by copyright. All rights reserved.

  5. Solution of magnetic field and eddy current problem induced by rotating magnetic poles (abstract)

    NASA Astrophysics Data System (ADS)

    Liu, Z. J.; Low, T. S.

    1996-04-01

    The magnetic field and eddy current problems induced by rotating permanent magnet poles occur in electromagnetic dampers, magnetic couplings, and many other devices. Whereas numerical techniques, for example finite element methods, can be exploited to study various features of these problems, such as heat generation and drag torque development, the analytical solution is always of interest to designers since it offers insight into the interdependence of the parameters involved and provides an efficient design tool. Previous work showed that the solution of the eddy current problem due to linearly moving magnet poles can give a satisfactory approximation for the eddy current problem due to rotating fields. However, in many practical cases, especially when the number of magnet poles is small, there is a significant flux-focusing effect due to the geometry, and the above approximation can therefore lead to marked errors in the theoretical predictions of device performance. Bernot et al. recently described an analytical solution in a polar coordinate system where the radial field is excited by a time-varying source. This article discusses an analytical solution of the magnetic field and eddy current problems induced by moving magnet poles in radial field machines. The theoretical predictions obtained from this method are compared with results from finite element calculations, and the validity of the method is also checked by comparing the theoretical predictions with measurements from a test machine. It is shown that the introduced solution leads to a significant improvement in air gap field prediction compared with the analytical solution that models eddy current problems induced by linearly moving magnet poles.

  6. Computer modeling of a two-junction, monolithic cascade solar cell

    NASA Technical Reports Server (NTRS)

    Lamorte, M. F.; Abbott, D.

    1979-01-01

    The theory and design criteria for monolithic, two-junction cascade solar cells are described. The departure from the conventional solar cell analytical method and the reasons for using the integral form of the continuity equations are briefly discussed. The results of design optimization are presented. The energy conversion efficiency that is predicted for the optimized structure is greater than 30% at 300 K, AMO and one sun. The analytical method predicts device performance characteristics as a function of temperature. The range is restricted to 300 to 600 K. While the analysis is capable of determining most of the physical processes occurring in each of the individual layers, only the more significant device performance characteristics are presented.

  7. Big Data and Analytics in Healthcare.

    PubMed

    Tan, S S-L; Gao, G; Koch, S

    2015-01-01

    This editorial is part of the Focus Theme of Methods of Information in Medicine on "Big Data and Analytics in Healthcare". The amount of data being generated in the healthcare industry is growing at a rapid rate. This has generated immense interest in leveraging the availability of healthcare data (and "big data") to improve health outcomes and reduce costs. However, the nature of healthcare data, and especially big data, presents unique challenges in processing and analyzing big data in healthcare. This Focus Theme aims to disseminate some novel approaches to address these challenges. More specifically, approaches ranging from efficient methods of processing large clinical data to predictive models that could generate better predictions from healthcare data are presented.

  8. Analytical methods to predict liquid congealing in ram air heat exchangers during cold operation

    NASA Astrophysics Data System (ADS)

    Coleman, Kenneth; Kosson, Robert

    1989-07-01

    Ram air heat exchangers used to cool liquids such as lube oils or ethylene glycol/water solutions can be subject to congealing in very cold ambient conditions, resulting in a loss of cooling capability. Two-dimensional, transient analytical models have been developed to explore this phenomenon with both continuous and staggered fin cores. Staggered fin predictions are compared to flight test data from the E-2C Allison T56 engine lube oil system during winter conditions. For simpler calculations, a viscosity ratio correction was introduced and found to provide reasonable cold-ambient performance predictions for the staggered fin core using a one-dimensional approach.

  9. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform

    PubMed Central

    Poucke, Sven Van; Zhang, Zhongheng; Schmitz, Martin; Vukicevic, Milan; Laenen, Margot Vander; Celi, Leo Anthony; Deyne, Cathy De

    2016-01-01

    With the accumulation of large amounts of health related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized (PPPM) Medicine, ultimately affecting both cost and quality of care. However, high-dimensionality and high-complexity of the data involved, prevents data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting edge predictive methods and data manipulation require substantial programming skills, limiting its direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments, suited to be applied by the medical community. Moreover, we review code free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner’s Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL process (Extract, Transform, Load) was initiated by retrieving data from the MIMIC-II tables of interest. As use case, correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for automatic building, parameter optimization and evaluation of various predictive models, under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research. PMID:26731286

  10. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform.

    PubMed

    Van Poucke, Sven; Zhang, Zhongheng; Schmitz, Martin; Vukicevic, Milan; Laenen, Margot Vander; Celi, Leo Anthony; De Deyne, Cathy

    2016-01-01

    With the accumulation of large amounts of health related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized (PPPM) Medicine, ultimately affecting both cost and quality of care. However, high-dimensionality and high-complexity of the data involved, prevents data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting edge predictive methods and data manipulation require substantial programming skills, limiting its direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments, suited to be applied by the medical community. Moreover, we review code free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner's Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL process (Extract, Transform, Load) was initiated by retrieving data from the MIMIC-II tables of interest. As use case, correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for automatic building, parameter optimization and evaluation of various predictive models, under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research.

  11. Learning and cognitive styles in web-based learning: theory, evidence, and application.

    PubMed

    Cook, David A

    2005-03-01

    Cognitive and learning styles (CLS) have long been investigated as a basis to adapt instruction and enhance learning. Web-based learning (WBL) can reach large, heterogeneous audiences, and adaptation to CLS may increase its effectiveness. Adaptation is only useful if some learners (with a defined trait) do better with one method and other learners (with a complementary trait) do better with another method (aptitude-treatment interaction). A comprehensive search of health professions education literature found 12 articles on CLS in computer-assisted learning and WBL. Because so few reports were found, research from non-medical education was also included. Among all the reports, four CLS predominated. Each CLS construct was used to predict relationships between CLS and WBL. Evidence was then reviewed to support or refute these predictions. The wholist-analytic construct shows consistent aptitude-treatment interactions consonant with predictions (wholists need structure, a broad-before-deep approach, and social interaction, while analytics need less structure and a deep-before-broad approach). Limited evidence for the active-reflective construct suggests aptitude-treatment interaction, with active learners doing better with interactive learning and reflective learners doing better with methods to promote reflection. As predicted, no consistent interaction between the concrete-abstract construct and computer format was found, but one study suggests that there is interaction with instructional method. Contrary to predictions, no interaction was found for the verbal-imager construct. Teachers developing WBL activities should consider assessing and adapting to accommodate learners defined by the wholist-analytic and active-reflective constructs. Other adaptations should be considered experimental. Further WBL research could clarify the feasibility and effectiveness of assessing and adapting to CLS.

  12. A research program to reduce interior noise in general aviation airplanes. [test methods and results

    NASA Technical Reports Server (NTRS)

    Roskam, J.; Muirhead, V. U.; Smith, H. W.; Peschier, T. D.; Durenberger, D.; Vandam, K.; Shu, T. C.

    1977-01-01

    Analytical and semi-empirical methods for determining the transmission of sound through isolated panels and predicting panel transmission loss are described. Test results presented include the influence of plate stiffness and mass and the effects of pressurization and vibration damping materials on sound transmission characteristics. Measured and predicted results are presented in tables and graphs.

  13. An analytical framework to assist decision makers in the use of forest ecosystem model predictions

    USGS Publications Warehouse

    Larocque, Guy R.; Bhatti, Jagtar S.; Ascough, J.C.; Liu, J.; Luckai, N.; Mailly, D.; Archambault, L.; Gordon, Andrew M.

    2011-01-01

    The predictions from most forest ecosystem models originate from deterministic simulations. However, few evaluation exercises for model outputs are performed by either model developers or users. This issue has important consequences for decision makers using these models to develop natural resource management policies, as they cannot evaluate the extent to which predictions stemming from the simulation of alternative management scenarios may result in significant environmental or economic differences. Various numerical methods, such as sensitivity/uncertainty analyses, or bootstrap methods, may be used to evaluate models and the errors associated with their outputs. However, the application of each of these methods carries unique challenges which decision makers do not necessarily understand; guidance is required when interpreting the output generated from each model. This paper proposes a decision flow chart in the form of an analytical framework to help decision makers apply, in an orderly fashion, different steps involved in examining the model outputs. The analytical framework is discussed with regard to the definition of problems and objectives and includes the following topics: model selection, identification of alternatives, modelling tasks and selecting alternatives for developing policy or implementing management scenarios. Its application is illustrated using an on-going exercise in developing silvicultural guidelines for a forest management enterprise in Ontario, Canada.

  14. Prediction of jump phenomena in rotationally-coupled maneuvers of aircraft, including nonlinear aerodynamic effects

    NASA Technical Reports Server (NTRS)

    Young, J. W.; Schy, A. A.; Johnson, K. G.

    1977-01-01

    An analytical method has been developed for predicting critical control inputs for which nonlinear rotational coupling may cause sudden jumps in aircraft response. The analysis includes the effect of aerodynamics which are nonlinear in angle of attack. The method involves the simultaneous solution of two polynomials in roll rate, whose coefficients are functions of angle of attack and the control inputs. Results obtained using this procedure are compared with calculated time histories to verify the validity of the method for predicting jump-like instabilities.
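
    The paper's core numerical step, simultaneously solving two polynomials in roll rate, can be sketched generically: find the real roots of one polynomial and keep those that also (nearly) annihilate the second. The coefficients below are arbitrary illustrations, not aerodynamic data.

```python
import numpy as np

def common_real_roots(c1, c2, tol=1e-8):
    """Real values p with poly(c1, p) = 0 and poly(c2, p) = 0 (coeffs high->low)."""
    roots = np.roots(c1)
    real = roots[np.abs(roots.imag) < tol].real
    return sorted(p for p in real if abs(np.polyval(c2, p)) < tol)

# (p - 1)(p + 2) = p^2 + p - 2   and   (p - 1)(p - 3) = p^2 - 4p + 3
shared = common_real_roots([1.0, 1.0, -2.0], [1.0, -4.0, 3.0])
```

    In the paper the coefficients of both polynomials are functions of angle of attack and the control inputs, so a sweep over control settings with a check like this locates the critical inputs where jump-like instabilities appear.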

  15. Chemical Sensor Array Response Modeling Using Quantitative Structure-Activity Relationships Technique

    NASA Astrophysics Data System (ADS)

    Shevade, Abhijit V.; Ryan, Margaret A.; Homer, Margie L.; Zhou, Hanying; Manfreda, Allison M.; Lara, Liana M.; Yen, Shiao-Pin S.; Jewell, April D.; Manatt, Kenneth S.; Kisor, Adam K.

    We have developed a Quantitative Structure-Activity Relationships (QSAR) based approach to correlate the response of chemical sensors in an array with molecular descriptors. A novel molecular descriptor set has been developed; this set combines descriptors of sensing film-analyte interactions, representing sensor response, with a basic analyte descriptor set commonly used in QSAR studies. The descriptors are obtained using a combination of molecular modeling tools and empirical and semi-empirical Quantitative Structure-Property Relationships (QSPR) methods. The sensors under investigation are polymer-carbon sensing films which have been exposed to analyte vapors at parts-per-million (ppm) concentrations; response is measured as change in film resistance. Statistically validated QSAR models have been developed using Genetic Function Approximation (GFA) for a sensor array for a given training data set. The applicability of the sensor response models has been tested by using them to predict the sensor activities for test analytes not considered in the training set for the model development. The validated QSAR sensor response models show good predictive ability. The QSAR approach is a promising computational tool for sensing materials evaluation and selection. It can also be used to predict the response of an existing sensing film to new target analytes.
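
    The general shape of such a QSAR response model, a linear map from molecular descriptors to sensor response validated on analytes held out of training, can be sketched with ordinary least squares. GFA additionally performs genetic selection of basis functions, which is omitted here, and all descriptor values below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic descriptors, e.g. (molar volume, polarity index, H-bond count), scaled to [0, 1]
X_train = rng.uniform(0.0, 1.0, size=(12, 3))
true_w = np.array([2.0, -1.0, 0.5])      # hidden structure-activity weights
y_train = X_train @ true_w + 0.3         # sensor dR/R responses with offset 0.3

# Fit a linear QSAR model with an intercept column
A = np.hstack([X_train, np.ones((12, 1))])
w, *_ = np.linalg.lstsq(A, y_train, rcond=None)

# Predict the response for a "new" analyte not in the training set
x_new = np.array([0.4, 0.2, 0.9])
pred = np.append(x_new, 1.0) @ w
```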

  16. Amie Sluiter | NREL

    Science.gov Websites

    … biomass analysis methods and is primary author on 11 Laboratory Analytical Procedures, which are … spectroscopic analysis methods. These methods allow analysts to predict the composition of feedstock and process … Patent No. 6,737,258 (2002). Featured Publications: "Improved methods for the determination of drying …"

  17. Analytical method for predicting the pressure distribution about a nacelle at transonic speeds

    NASA Technical Reports Server (NTRS)

    Keith, J. S.; Ferguson, D. R.; Merkle, C. L.; Heck, P. H.; Lahti, D. J.

    1973-01-01

    The formulation and development of a computer analysis for the calculation of streamlines and pressure distributions around two-dimensional (planar and axisymmetric) isolated nacelles at transonic speeds are described. The computerized flow field analysis is designed to predict the transonic flow around long and short high-bypass-ratio fan duct nacelles with inlet flows and with exhaust flows having appropriate aerothermodynamic properties. The flow field boundaries are located as far upstream and downstream as necessary to obtain minimum disturbances at the boundary. The far-field lateral flow field boundary is analytically defined to exactly represent free-flight conditions or solid wind tunnel wall effects. The inviscid solution technique is based on a Streamtube Curvature Analysis. The computer program utilizes an automatic grid refinement procedure and solves the flow field equations with a matrix relaxation technique. The boundary layer displacement effects and the onset of turbulent separation are included, based on the compressible turbulent boundary layer solution method of Stratford and Beavers and on the turbulent separation prediction method of Stratford.

  18. Prediction of In-hospital Mortality in Emergency Department Patients With Sepsis: A Local Big Data–Driven, Machine Learning Approach

    PubMed Central

    Taylor, R. Andrew; Pare, Joseph R.; Venkatesh, Arjun K.; Mowafi, Hani; Melnick, Edward R.; Fleischman, William; Hall, M. Kennedy

    2018-01-01

    Objectives Predictive analytics in emergency care has mostly been limited to the use of clinical decision rules (CDRs) in the form of simple heuristics and scoring systems. In the development of CDRs, limitations in analytic methods and concerns with usability have generally constrained models to a preselected small set of variables judged to be clinically relevant and to rules that are easily calculated. Furthermore, CDRs frequently suffer from questions of generalizability, take years to develop, and lack the ability to be updated as new information becomes available. Newer analytic and machine learning techniques capable of harnessing the large number of variables that are already available through electronic health records (EHRs) may better predict patient outcomes and facilitate automation and deployment within clinical decision support systems. In this proof-of-concept study, a local, big data–driven, machine learning approach is compared to existing CDRs and traditional analytic methods using the prediction of sepsis in-hospital mortality as the use case. Methods This was a retrospective study of adult ED visits admitted to the hospital meeting criteria for sepsis from October 2013 to October 2014. Sepsis was defined as meeting criteria for systemic inflammatory response syndrome with an infectious admitting diagnosis in the ED. ED visits were randomly partitioned into an 80%/20% split for training and validation. A random forest model (machine learning approach) was constructed using over 500 clinical variables from data available within the EHRs of four hospitals to predict in-hospital mortality. The machine learning prediction model was then compared to a classification and regression tree (CART) model, logistic regression model, and previously developed prediction tools on the validation data set using area under the receiver operating characteristic curve (AUC) and chi-square statistics. 
Results There were 5,278 visits among 4,676 unique patients who met criteria for sepsis. Of the 4,222 patients in the training group, 210 (5.0%) died during hospitalization, and of the 1,056 patients in the validation group, 50 (4.7%) died during hospitalization. The AUCs with 95% confidence intervals (CIs) for the different models were as follows: random forest model, 0.86 (95% CI = 0.82 to 0.90); CART model, 0.69 (95% CI = 0.62 to 0.77); logistic regression model, 0.76 (95% CI = 0.69 to 0.82); CURB-65, 0.73 (95% CI = 0.67 to 0.80); MEDS, 0.71 (95% CI = 0.63 to 0.77); and mREMS, 0.72 (95% CI = 0.65 to 0.79). The random forest model AUC was statistically different from all other models (p ≤ 0.003 for all comparisons). Conclusions In this proof-of-concept study, a local big data–driven, machine learning approach outperformed existing CDRs as well as traditional analytic techniques for predicting in-hospital mortality of ED patients with sepsis. Future research should prospectively evaluate the effectiveness of this approach and whether it translates into improved clinical outcomes for high-risk sepsis patients. The methods developed serve as an example of a new model for predictive analytics in emergency care that can be automated, applied to other clinical outcomes of interest, and deployed in EHRs to enable locally relevant clinical predictions. PMID:26679719
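The models above are compared by AUC; as a minimal illustration of the metric itself (toy scores, not the study's data), the rank-sum (Mann-Whitney) identity gives:

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity,
    with average ranks assigned to tied scores."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        for k in range(i, j + 1):          # average rank over the tie group
            ranks[order[k]] = (i + j) / 2.0 + 1.0
        i = j + 1
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    rank_sum = sum(r for r, y in zip(ranks, labels) if y == 1)
    return (rank_sum - n_pos * (n_pos + 1) / 2.0) / (n_pos * n_neg)

print(auc([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1]))  # 0.75
```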

  19. Evaluation of strength and failure of brittle rock containing initial cracks under lithospheric conditions

    NASA Astrophysics Data System (ADS)

    Li, Xiaozhao; Qi, Chengzhi; Shao, Zhushan; Ma, Chao

    2018-02-01

    Natural brittle rock contains numerous randomly distributed microcracks. Crack initiation, growth, and coalescence play a predominant role in evaluating the strength and failure of brittle rocks. A new analytical method is proposed to predict the strength and failure of brittle rocks containing initial microcracks. The formulation of this method is based on an improved wing crack model and a suggested micro-macro relation. In the improved wing crack model, the crack angle is introduced explicitly as a variable, and an analytical stress-crack relation accounting for the crack angle effect is obtained. Coupling the proposed stress-crack relation with the suggested micro-macro relation, which links crack growth to axial strain, yields a stress-strain constitutive relation that predicts rock strength and failure. Considering different initial microcrack sizes, friction coefficients, and confining pressures, the effects of crack angle on the tensile wedge force acting on the initial crack interface are studied, and the effects of crack angle on the stress-strain constitutive relation are analyzed. The strength and crack initiation stress under different crack angles are discussed, and the most unfavorable angle for triggering crack initiation and rock failure is found. The analytical results agree with published results, supporting the validity of the proposed analytical method.

  20. A Meta-Analytic Review of Components Associated with Parent Training Program Effectiveness

    ERIC Educational Resources Information Center

    Kaminski, Jennifer Wyatt; Valle, Linda Anne; Filene, Jill H.; Boyle, Cynthia L.

    2008-01-01

    This component analysis used meta-analytic techniques to synthesize the results of 77 published evaluations of parent training programs (i.e., programs that included the active acquisition of parenting skills) to enhance behavior and adjustment in children aged 0-7. Characteristics of program content and delivery method were used to predict effect…

  1. Constructing and predicting solitary pattern solutions for nonlinear time-fractional dispersive partial differential equations

    NASA Astrophysics Data System (ADS)

    Arqub, Omar Abu; El-Ajou, Ahmad; Momani, Shaher

    2015-07-01

    Building fractional mathematical models for specific phenomena and developing numerical or analytical solutions for them are crucial issues in mathematics, physics, and engineering. In this work, a new analytical technique for constructing and predicting solitary pattern solutions of time-fractional dispersive partial differential equations is proposed, based on the generalized Taylor series formula and the residual error function. The new approach provides solutions in the form of a rapidly convergent series with easily computable components using symbolic computation software. For evaluation and validation, the proposed technique was applied to three different models and compared with some well-known methods. The resulting simulations clearly demonstrate the power and accuracy of the proposed technique, both in preserving the substructure of the constructed solutions and in predicting solitary pattern solutions of time-fractional dispersive partial differential equations.

  2. Effect of vibration on retention characteristics of screen acquisition systems. [for surface tension propellant acquisition

    NASA Technical Reports Server (NTRS)

    Tegart, J. R.; Aydelott, J. C.

    1978-01-01

    The design of surface tension propellant acquisition systems using fine-mesh screen must take into account all factors that influence the liquid pressure differentials within the system. One of those factors is spacecraft vibration. Analytical models to predict the effects of vibration have been developed. A test program to verify the analytical models and to allow a comparative evaluation of the parameters influencing the response to vibration was performed. Screen specimens were tested under conditions simulating the operation of an acquisition system, considering the effects of such parameters as screen orientation and configuration, screen support method, screen mesh, liquid flow and liquid properties. An analytical model, based on empirical coefficients, was most successful in predicting the effects of vibration.

  3. Quantum decay model with exact explicit analytical solution

    NASA Astrophysics Data System (ADS)

    Marchewka, Avi; Granot, Er'El

    2009-01-01

    A simple decay model is introduced. The model comprises a point potential well that experiences an abrupt change. Due to the temporal variation, the initial quantum state can either escape from the well or remain localized as a new bound state. The model allows for an exact analytical solution while retaining the essential features of a decay process. The results show that the decay is never exponential, contrary to the classical prediction. Moreover, at short times the decay follows a fractional power law, which differs from the predictions of quantum perturbation methods. At long times the decay exhibits oscillations with an envelope that decays algebraically. This is thus a model in which the final state can be either continuous or localized, and which has an exact analytical solution.

  4. Investigation of prediction methods for the loads and stresses of Apollo type spacecraft parachutes. Volume 1: Loads

    NASA Technical Reports Server (NTRS)

    Mickey, F. E.; Mcewan, A. J.; Ewing, E. G.; Huyler, W. C., Jr.; Khajeh-Nouri, B.

    1970-01-01

    An analysis was conducted with the objective of upgrading and improving the loads, stress, and performance prediction methods for Apollo spacecraft parachutes. The subjects considered were: (1) methods for a new theoretical approach to the parachute opening process, (2) new experimental-analytical techniques to improve the measurement of pressures, stresses, and strains in inflight parachutes, and (3) a numerical method for analyzing the dynamical behavior of rapidly loaded pilot chute risers.

  5. Coupled rotor/fuselage dynamic analysis of the AH-1G helicopter and correlation with flight vibrations data

    NASA Technical Reports Server (NTRS)

    Corrigan, J. C.; Cronkhite, J. D.; Dompka, R. V.; Perry, K. S.; Rogers, J. P.; Sadler, S. G.

    1989-01-01

    Under a research program designated Design Analysis Methods for VIBrationS (DAMVIBS), existing analytical methods are used for calculating coupled rotor-fuselage vibrations of the AH-1G helicopter for correlation with flight test data from an AH-1G Operational Load Survey (OLS) test program. The analytical representation of the fuselage structure is based on a NASTRAN finite element model (FEM), which has been developed, extensively documented, and correlated with ground vibration tests. One procedure that was used for predicting coupled rotor-fuselage vibrations using the advanced Rotorcraft Flight Simulation Program C81 and NASTRAN is summarized. Detailed descriptions of the analytical formulation of the rotor dynamics equations, the fuselage dynamic equations, the coupling between the rotor and fuselage, and the solutions to the total system of equations in C81 are included. Analytical predictions of hub shears for main rotor harmonics 2p, 4p, and 6p generated by C81 are used in conjunction with 2p OLS measured control loads and a 2p lateral tail rotor gearbox force, representing downwash impingement on the vertical fin, to excite the NASTRAN model. NASTRAN is then used to correlate with measured OLS flight test vibrations. Blade load comparisons predicted by C81 showed good agreement. In general, the fuselage vibration correlations show good agreement between analysis and test in vibration response through 15 to 20 Hz.

  6. Space vehicle engine and heat shield environment review. Volume 1: Engineering analysis

    NASA Technical Reports Server (NTRS)

    Mcanelly, W. B.; Young, C. T. K.

    1973-01-01

    Methods for predicting the base heating characteristics of a multiple rocket engine installation are discussed. The environmental data are applied to the design of an adequate protection system for the engine components. The methods for predicting the base region thermal environment are categorized as: (1) scale model testing, (2) extrapolation of previous and related flight test results, and (3) semiempirical analytical techniques.

  7. Extensions to decision curve analysis, a novel method for evaluating diagnostic tests, prediction models and molecular markers

    PubMed Central

    Vickers, Andrew J; Cronin, Angel M; Elkin, Elena B; Gonen, Mithat

    2008-01-01

    Background Decision curve analysis is a novel method for evaluating diagnostic tests, prediction models and molecular markers. It combines the mathematical simplicity of accuracy measures, such as sensitivity and specificity, with the clinical applicability of decision analytic approaches. Most critically, decision curve analysis can be applied directly to a data set, and does not require the sort of external data on costs, benefits and preferences typically required by traditional decision analytic techniques. Methods In this paper we present several extensions to decision curve analysis including correction for overfit, confidence intervals, application to censored data (including competing risk) and calculation of decision curves directly from predicted probabilities. All of these extensions are based on straightforward methods that have previously been described in the literature for application to analogous statistical techniques. Results Simulation studies showed that repeated 10-fold crossvalidation provided the best method for correcting a decision curve for overfit. The method for applying decision curves to censored data had little bias and coverage was excellent; for competing risk, decision curves were appropriately affected by the incidence of the competing risk and the association between the competing risk and the predictor of interest. Calculation of decision curves directly from predicted probabilities led to a smoothing of the decision curve. Conclusion Decision curve analysis can be easily extended to many of the applications common to performance measures for prediction models. Software to implement decision curve analysis is provided. PMID:19036144
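The central quantity in decision curve analysis is the net benefit of a model at a given threshold probability; a minimal sketch (the formula is standard; the data below are toy values, not the paper's):

```python
def net_benefit(probs, outcomes, pt):
    """Net benefit of a prediction model at threshold probability pt:
    NB = TP/N - (FP/N) * pt / (1 - pt)."""
    n = len(outcomes)
    tp = sum(1 for p, y in zip(probs, outcomes) if p >= pt and y == 1)
    fp = sum(1 for p, y in zip(probs, outcomes) if p >= pt and y == 0)
    return tp / n - fp / n * pt / (1.0 - pt)

# Toy predicted probabilities and observed outcomes
probs = [0.1, 0.3, 0.6, 0.9]
outcomes = [0, 0, 1, 1]
print(net_benefit(probs, outcomes, 0.2))  # 0.4375
```

Plotting net benefit against a range of thresholds yields the decision curve itself.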

  8. Analytic Formulation and Numerical Implementation of an Acoustic Pressure Gradient Prediction

    NASA Technical Reports Server (NTRS)

    Lee, Seongkyu; Brentner, Kenneth S.; Farassat, Fereidoun

    2007-01-01

    The scattering of rotor noise is an area that has received little attention over the years, yet the limited work that has been done has shown that both the directivity and intensity of the acoustic field may be significantly modified by the presence of scattering bodies. One of the inputs needed to compute the scattered acoustic field is the acoustic pressure gradient on a scattering surface. Two new analytical formulations of the acoustic pressure gradient have been developed and implemented in the PSU-WOPWOP rotor noise prediction code. These formulations are presented in this paper. The first formulation is derived by taking the gradient of Farassat's retarded-time Formulation 1A. Although this formulation is relatively simple, it requires numerical time differentiation of the acoustic integrals. In the second formulation, the time differentiation is taken inside the integrals analytically. The acoustic pressure gradient predicted by these new formulations is validated through comparison with the acoustic pressure gradient determined by a purely numerical approach for two model rotors. The agreement between the analytical formulations and the numerical method is excellent for both stationary and moving observer cases.

  9. Mechanics of the tapered interference fit in dental implants.

    PubMed

    Bozkaya, Dinçer; Müftü, Sinan

    2003-11-01

    In evaluating the long-term success of a dental implant, the reliability and stability of the implant-abutment interface play a major role. Tapered interference fits provide a reliable connection method between the abutment and the implant. In this work, the mechanics of the tapered interference fit were analyzed using a closed-form formula and the finite element (FE) method. An analytical solution used to predict the contact pressure in a straight interference fit was modified to predict the contact pressure in the tapered implant-abutment interface. Elastic-plastic FE analysis was used to simulate the implant and abutment material behavior. The validity and applicability of the analytical solution were investigated by comparison with the FE model over a range of problem parameters. It was shown that the analytical solution can be used to determine the pull-out force and loosening torque with 5-10% error. A detailed analysis of the stress distribution due to the tapered interference fit in a commercially available abutment-implant system was carried out. This analysis shows that plastic deformation in the implant limits the increase in pull-out force that would otherwise be predicted at higher interference values.
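The straight-interference baseline that the tapered solution modifies follows thick-walled cylinder (Lamé) theory; a sketch with illustrative, not implant-specific, numbers, assuming a solid inner member and identical materials inside and out:

```python
import math

# Straight (non-tapered) interference-fit contact pressure from Lame
# thick-walled cylinder theory. All parameter values are illustrative.
E = 110e9        # Young's modulus, Pa (titanium alloy, approximate)
delta = 10e-6    # radial interference, m
b = 2e-3         # contact radius, m
c = 4e-3         # outer radius of the implant wall, m
mu = 0.3         # assumed friction coefficient
L = 2e-3         # engagement length, m

# Solid inner member, same material for both parts: Poisson terms cancel.
p = E * delta * (c**2 - b**2) / (2.0 * b * c**2)   # contact pressure, Pa
F = mu * p * 2.0 * math.pi * b * L                 # axial pull-out force, N
print(round(p / 1e6, 1), round(F, 1))  # ~206.2 MPa, ~1555.1 N
```

This purely elastic estimate grows linearly with interference; the paper's point is that plastic deformation caps the pull-out force that such a formula would otherwise predict.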

  10. The influence of retrieval practice on metacognition: The contribution of analytic and non-analytic processes.

    PubMed

    Miller, Tyler M; Geraci, Lisa

    2016-05-01

    People may change their memory predictions after retrieval practice using naïve theories of memory and/or subjective experience (analytic and non-analytic processes, respectively). The current studies disentangled the contributions of each process. In one condition, learners studied paired associates, made a memory prediction, completed a short run of retrieval practice, and made a second prediction. In another condition, judges read about a yoked learner's retrieval practice performance but did not participate in retrieval practice and therefore could not use non-analytic processes for the second prediction. In Study 1, learners reduced their predictions following moderately difficult retrieval practice, whereas judges increased their predictions. In Study 2, learners made lower adjusted predictions than judges following both easy and difficult retrieval practice. In Study 3, judge-like participants used analytic processes to report adjusted predictions. Overall, the results suggest that non-analytic processes play a key role in participants' reduction of their predictions after retrieval practice. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Prediction of sound fields in acoustical cavities using the boundary element method. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Kipp, C. R.; Bernhard, R. J.

    1985-01-01

    A method was developed to predict sound fields in acoustical cavities. The method is based on the indirect boundary element method. An isoparametric quadratic boundary element is incorporated. Pressure, velocity and/or impedance boundary conditions may be applied to a cavity by using this method. The capability to include acoustic point sources within the cavity is implemented. The method is applied to the prediction of sound fields in spherical and rectangular cavities. All three boundary condition types are verified. Cases with a point source within the cavity domain are also studied. Numerically determined cavity pressure distributions and responses are presented. The numerical results correlate well with available analytical results.
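The analytical results available for comparison in such cavities include the closed-form eigenfrequencies of a rigid-walled rectangular cavity; a minimal sketch (assuming rigid walls and a sound speed of 343 m/s):

```python
import math

def mode_freq(nx, ny, nz, Lx, Ly, Lz, c=343.0):
    """Eigenfrequency (Hz) of a rigid-walled rectangular cavity:
    f = (c/2) * sqrt((nx/Lx)^2 + (ny/Ly)^2 + (nz/Lz)^2)."""
    return (c / 2.0) * math.sqrt((nx / Lx) ** 2 + (ny / Ly) ** 2 + (nz / Lz) ** 2)

# First few modes of a 1 m cube (illustrative dimensions)
print(mode_freq(1, 0, 0, 1.0, 1.0, 1.0))  # 171.5 Hz (first axial mode)
print(mode_freq(1, 1, 0, 1.0, 1.0, 1.0))  # ~242.5 Hz (first tangential mode)
```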

  12. Analytical and Experimental Evaluation of the Heat Transfer Distribution over the Surfaces of Turbine Vanes

    NASA Technical Reports Server (NTRS)

    Hylton, L. D.; Mihelc, M. S.; Turner, E. R.; Nealy, D. A.; York, R. E.

    1983-01-01

    Three airfoil data sets were selected for use in evaluating currently available analytical models for predicting airfoil surface heat transfer distributions in a 2-D flow field. Two additional airfoils, representative of highly loaded, low solidity airfoils currently being designed, were selected for cascade testing at simulated engine conditions. Some 2-D analytical methods were examined and a version of the STAN5 boundary layer code was chosen for modification. The final form of the method utilized a time dependent, transonic inviscid cascade code coupled to a modified version of the STAN5 boundary layer code featuring zero order turbulence modeling. The boundary layer code is structured to accommodate a full spectrum of empirical correlations addressing the coupled influences of pressure gradient, airfoil curvature, and free-stream turbulence on airfoil surface heat transfer distribution and boundary layer transitional behavior. Comparison of predictions made with the model to the data base indicates a significant improvement in predictive capability.

  13. Prediction of Time Response of Electrowetting

    NASA Astrophysics Data System (ADS)

    Lee, Seung Jun; Hong, Jiwoo; Kang, Kwan Hyoung

    2009-11-01

    It is very important to predict the time response of electrowetting-based devices such as liquid lenses, reflective displays, and optical switches. We investigated the time response of electrowetting using analytical and numerical methods to identify characteristic scales and a scaling law for the switching time. To this end, the spreading process of a sessile droplet was analyzed using the domain perturbation method. First, we considered the case of weakly viscous fluids. The analytical result for the spreading process was compared with experimental results and showed very good agreement in overall time response. It was shown that the overall dynamics is governed by the P2 shape mode. We derived characteristic scales combining the droplet volume, density, and surface tension, and the overall dynamic process was scaled well by them. A scaling law was derived from the analytical solution and verified experimentally. We also suggest a scaling law for highly viscous liquids, based on the results of a numerical analysis of the electrowetting-actuated spreading process.
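A characteristic time built from density, size, and surface tension can be sketched as below; note the inertial-capillary grouping shown here is a standard one, and the paper's exact scale (built from droplet volume) may differ by a constant factor:

```python
import math

# Inertial-capillary time scale tau = sqrt(rho * R^3 / sigma): a standard
# grouping of density, droplet size, and surface tension.
rho = 1000.0    # water density, kg/m^3
R = 1e-3        # droplet radius, m (illustrative)
sigma = 0.072   # surface tension of water, N/m

tau = math.sqrt(rho * R**3 / sigma)
print(tau)  # ~3.7 ms for a millimetric water droplet
```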

  14. Analytical and experimental evaluation of the heat transfer distribution over the surfaces of turbine vanes

    NASA Astrophysics Data System (ADS)

    Hylton, L. D.; Mihelc, M. S.; Turner, E. R.; Nealy, D. A.; York, R. E.

    1983-05-01

    Three airfoil data sets were selected for use in evaluating currently available analytical models for predicting airfoil surface heat transfer distributions in a 2-D flow field. Two additional airfoils, representative of highly loaded, low solidity airfoils currently being designed, were selected for cascade testing at simulated engine conditions. Some 2-D analytical methods were examined and a version of the STAN5 boundary layer code was chosen for modification. The final form of the method utilized a time dependent, transonic inviscid cascade code coupled to a modified version of the STAN5 boundary layer code featuring zero order turbulence modeling. The boundary layer code is structured to accommodate a full spectrum of empirical correlations addressing the coupled influences of pressure gradient, airfoil curvature, and free-stream turbulence on airfoil surface heat transfer distribution and boundary layer transitional behavior. Comparison of pedictions made with the model to the data base indicates a significant improvement in predictive capability.

  15. Structure Shapes Dynamics and Directionality in Diverse Brain Networks: Mathematical Principles and Empirical Confirmation in Three Species

    NASA Astrophysics Data System (ADS)

    Moon, Joon-Young; Kim, Junhyeok; Ko, Tae-Wook; Kim, Minkyung; Iturria-Medina, Yasser; Choi, Jee-Hyun; Lee, Joseph; Mashour, George A.; Lee, Uncheol

    2017-04-01

    Identifying how spatially distributed information becomes integrated in the brain is essential to understanding higher cognitive functions. Previous computational and empirical studies suggest a significant influence of brain network structure on brain network function. However, there have been few analytical approaches to explain the role of network structure in shaping regional activities and directionality patterns. In this study, analytical methods are applied to a coupled oscillator model implemented in inhomogeneous networks. We first derive a mathematical principle that explains the emergence of directionality from the underlying brain network structure. We then apply the analytical methods to the anatomical brain networks of human, macaque, and mouse, successfully predicting simulation and empirical electroencephalographic data. The results demonstrate that the global directionality patterns in resting state brain networks can be predicted solely by their unique network structures. This study forms a foundation for a more comprehensive understanding of how neural information is directed and integrated in complex brain networks.
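A generic sketch of the coupled oscillator model class used here, a Kuramoto phase-oscillator network, illustrates the simulation side; the all-to-all coupling and toy parameters below are simplifications, not the paper's inhomogeneous anatomical networks or its analytic derivation:

```python
import numpy as np

# Kuramoto model: dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
rng = np.random.default_rng(1)
N, K, dt, steps = 8, 5.0, 0.05, 2000
omega = np.linspace(-0.5, 0.5, N)          # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)       # random initial phases

for _ in range(steps):                     # forward-Euler integration
    coupling = (K / N) * np.sum(np.sin(theta[None, :] - theta[:, None]), axis=1)
    theta = theta + dt * (omega + coupling)

r = abs(np.mean(np.exp(1j * theta)))       # Kuramoto order parameter in [0, 1]
print(r > 0.9)  # strong coupling synchronizes the population
```

On an inhomogeneous network, the same model develops systematic phase relations between high- and low-degree nodes, which is the structural effect on directionality the paper derives analytically.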

  16. [Local Regression Algorithm Based on Net Analyte Signal and Its Application in Near Infrared Spectral Analysis].

    PubMed

    Zhang, Hong-guang; Lu, Jian-gang

    2016-02-01

    To overcome the problems of significant differences among samples and nonlinearity between sample properties and spectra in spectral quantitative analysis, a local regression algorithm is proposed in this paper. In this algorithm, the net analyte signal (NAS) method was first used to obtain the net analyte signal of the calibration samples and the unknown samples; the Euclidean distance between the net analyte signal of each unknown sample and those of the calibration samples was then calculated and used as a similarity index. According to this similarity index, a local calibration set was individually selected for each unknown sample. Finally, a local PLS regression model was built on each local calibration set for each unknown sample. The proposed method was applied to a set of near-infrared spectra of meat samples. The results demonstrate that the prediction precision and model complexity of the proposed method are superior to those of the global PLS regression method and a conventional local regression algorithm based on spectral Euclidean distance.
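The select-then-fit idea can be sketched as follows, with plain least squares standing in for PLS and raw distance standing in for the NAS-based similarity index (both simplifications for brevity, on synthetic 1-D data):

```python
import numpy as np

# Local regression: for each query, select the k most similar calibration
# samples and fit a model on that neighborhood only.
X = np.linspace(0.0, 1.0, 21)     # calibration "spectra" (1-D for illustration)
y = X ** 2                        # property values with a nonlinear trend

def local_predict(x0, k=5):
    idx = np.argsort(np.abs(X - x0))[:k]           # k most similar samples
    A = np.column_stack([X[idx], np.ones(k)])      # local linear model
    coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    return float(coef[0] * x0 + coef[1])

print(local_predict(0.5))  # ~0.255: close to the true value 0.25
```

A single global linear model would fit the quadratic trend poorly; the local fits track it because each neighborhood is nearly linear.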

  17. SVM-Based System for Prediction of Epileptic Seizures from iEEG Signal

    PubMed Central

    Cherkassky, Vladimir; Lee, Jieun; Veber, Brandon; Patterson, Edward E.; Brinkmann, Benjamin H.; Worrell, Gregory A.

    2017-01-01

    Objective This paper describes a data-analytic modeling approach for prediction of epileptic seizures from intracranial electroencephalogram (iEEG) recording of brain activity. Even though it is widely accepted that statistical characteristics of iEEG signal change prior to seizures, robust seizure prediction remains a challenging problem due to subject-specific nature of data-analytic modeling. Methods Our work emphasizes understanding of clinical considerations important for iEEG-based seizure prediction, and proper translation of these clinical considerations into data-analytic modeling assumptions. Several design choices during pre-processing and post-processing are considered and investigated for their effect on seizure prediction accuracy. Results Our empirical results show that the proposed SVM-based seizure prediction system can achieve robust prediction of preictal and interictal iEEG segments from dogs with epilepsy. The sensitivity is about 90–100%, and the false-positive rate is about 0–0.3 times per day. The results also suggest good prediction is subject-specific (dog or human), in agreement with earlier studies. Conclusion Good prediction performance is possible only if the training data contain sufficiently many seizure episodes, i.e., at least 5–7 seizures. Significance The proposed system uses subject-specific modeling and unbalanced training data. This system also utilizes three different time scales during training and testing stages. PMID:27362758

  18. An analytical method to predict efficiency of aircraft gearboxes

    NASA Technical Reports Server (NTRS)

    Anderson, N. E.; Loewenthal, S. H.; Black, J. D.

    1984-01-01

    A spur gear efficiency prediction method previously developed by the authors was extended to include power loss of planetary gearsets. A friction coefficient model was developed for MIL-L-7808 oil based on disc machine data. This combined with the recent capability of predicting losses in spur gears of nonstandard proportions allows the calculation of power loss for complete aircraft gearboxes that utilize spur gears. The method was applied to the T56/501 turboprop gearbox and compared with measured test data. Bearing losses were calculated with large scale computer programs. Breakdowns of the gearbox losses point out areas for possible improvement.

  19. Geographic and temporal validity of prediction models: Different approaches were useful to examine model performance

    PubMed Central

    Austin, Peter C.; van Klaveren, David; Vergouwe, Yvonne; Nieboer, Daan; Lee, Douglas S.; Steyerberg, Ewout W.

    2017-01-01

    Objective Validation of clinical prediction models traditionally refers to the assessment of model performance in new patients. We studied different approaches to geographic and temporal validation in the setting of multicenter data from two time periods. Study Design and Setting We illustrated different analytic methods for validation using a sample of 14,857 patients hospitalized with heart failure at 90 hospitals in two distinct time periods. Bootstrap resampling was used to assess internal validity. Meta-analytic methods were used to assess geographic transportability. Each hospital was used once as a validation sample, with the remaining hospitals used for model derivation. Hospital-specific estimates of discrimination (c-statistic) and calibration (calibration intercepts and slopes) were pooled using random effects meta-analysis methods. I2 statistics and prediction interval width quantified geographic transportability. Temporal transportability was assessed using patients from the earlier period for model derivation and patients from the later period for model validation. Results Estimates of reproducibility, pooled hospital-specific performance, and temporal transportability were on average very similar, with c-statistics of 0.75. Between-hospital variation was moderate according to I2 statistics and prediction intervals for c-statistics. Conclusion This study illustrates how performance of prediction models can be assessed in settings with multicenter data at different time periods. PMID:27262237
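The meta-analytic pooling step can be sketched with DerSimonian-Laird random-effects weights; the hospital-specific estimates and variances below are toy numbers, not the study's data:

```python
def pool_random_effects(estimates, variances):
    """DerSimonian-Laird random-effects pooling of per-hospital performance
    estimates (e.g., c-statistics), returning the pooled estimate and I^2."""
    k = len(estimates)
    w = [1.0 / v for v in variances]
    ybar = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, estimates))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)            # between-hospital variance
    i2 = max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    ws = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(ws, estimates)) / sum(ws)
    return pooled, i2

# Toy hospital-specific c-statistics and variances
pooled, i2 = pool_random_effects([0.70, 0.75, 0.80], [0.01, 0.01, 0.01])
print(round(pooled, 3), round(i2, 3))  # 0.75 0.0
```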

  20. Determination of boiling point of petrochemicals by gas chromatography-mass spectrometry and multivariate regression analysis of structural activity relationship.

    PubMed

    Fakayode, Sayo O; Mitchell, Breanna S; Pollard, David A

    2014-08-01

    Accurate understanding of analyte boiling points (BP) is of critical importance in gas chromatographic (GC) separation and crude oil refinery operation in petrochemical industries. This study reported the first combined use of GC separation and partial least squares (PLS1) multivariate regression analysis of petrochemical structural activity relationships (SAR) for accurate BP determination of two commercially available (D3710 and MA VHP) calibration gas mix samples. The results of the BP determination using PLS1 multivariate regression were further compared with the results of the traditional simulated distillation method of BP determination. The developed PLS1 regression correctly predicted the analytes' BP in the D3710 and MA VHP calibration gas mix samples, with root-mean-square percent relative errors (RMS%RE) of 6.4% and 10.8%, respectively. In contrast, the overall RMS%RE values of 32.9% and 40.4% obtained for BP determination in D3710 and MA VHP using the traditional simulated distillation method were approximately four times larger than the corresponding RMS%RE of BP prediction using multivariate regression analysis (MRA), demonstrating the better predictive ability of MRA. The reported method is rapid, robust, and promising, and can potentially be used routinely for fast analysis, pattern recognition, and analyte BP determination in petrochemical industries. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Extensions to decision curve analysis, a novel method for evaluating diagnostic tests, prediction models and molecular markers.

    PubMed

    Vickers, Andrew J; Cronin, Angel M; Elkin, Elena B; Gonen, Mithat

    2008-11-26

    Decision curve analysis is a novel method for evaluating diagnostic tests, prediction models and molecular markers. It combines the mathematical simplicity of accuracy measures, such as sensitivity and specificity, with the clinical applicability of decision analytic approaches. Most critically, decision curve analysis can be applied directly to a data set, and does not require the sort of external data on costs, benefits and preferences typically required by traditional decision analytic techniques. In this paper we present several extensions to decision curve analysis including correction for overfit, confidence intervals, application to censored data (including competing risk) and calculation of decision curves directly from predicted probabilities. All of these extensions are based on straightforward methods that have previously been described in the literature for application to analogous statistical techniques. Simulation studies showed that repeated 10-fold cross-validation provided the best method for correcting a decision curve for overfit. The method for applying decision curves to censored data had little bias and coverage was excellent; for competing risk, decision curves were appropriately affected by the incidence of the competing risk and the association between the competing risk and the predictor of interest. Calculation of decision curves directly from predicted probabilities led to a smoothing of the decision curve. Decision curve analysis can be easily extended to many of the applications common to performance measures for prediction models. Software to implement decision curve analysis is provided.
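The core net-benefit computation behind a decision curve can be sketched directly from predicted probabilities. The simulated risks below assume a well-calibrated hypothetical model and are not data from the paper:

```python
import numpy as np

def net_benefit(y_true, p_hat, threshold):
    """Net benefit of treating patients whose predicted risk exceeds threshold."""
    n = len(y_true)
    treat = p_hat >= threshold
    tp = np.sum(treat & (y_true == 1))   # true positives among the treated
    fp = np.sum(treat & (y_true == 0))   # false positives among the treated
    odds = threshold / (1.0 - threshold)
    return tp / n - (fp / n) * odds

rng = np.random.default_rng(1)
p_hat = rng.uniform(size=1000)
y_true = (rng.uniform(size=1000) < p_hat).astype(int)  # calibrated toy model

thresholds = np.linspace(0.05, 0.5, 10)
curve = [net_benefit(y_true, p_hat, t) for t in thresholds]

# "Treat all" reference line: net benefit if every patient is treated.
prevalence = y_true.mean()
treat_all = [prevalence - (1 - prevalence) * t / (1 - t) for t in thresholds]
```

Plotting `curve` against `thresholds`, alongside the treat-all and treat-none (zero) lines, reproduces the basic decision curve that the paper's extensions build on.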

  2. Bridging the Gap between Human Judgment and Automated Reasoning in Predictive Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanfilippo, Antonio P.; Riensche, Roderick M.; Unwin, Stephen D.

    2010-06-07

    Events occur daily that impact the health, security and sustainable growth of our society. If we are to address the challenges that emerge from these events, anticipatory reasoning has to become an everyday activity. Strong advances have been made in using integrated modeling for analysis and decision making. However, a wider impact of predictive analytics is currently hindered by the lack of systematic methods for integrating predictive inferences from computer models with human judgment. In this paper, we present a predictive analytics approach that supports anticipatory analysis and decision-making through a concerted reasoning effort that interleaves human judgment and automated inferences. We describe a systematic methodology for integrating modeling algorithms within a serious gaming environment in which role-playing by human agents provides updates to model nodes and the ensuing model outcomes in turn influence the behavior of the human players. The approach ensures a strong functional partnership between human players and computer models while maintaining a high degree of independence and greatly facilitating the connection between model and game structures.

  3. Computational Prediction of the Global Functional Genomic Landscape: Applications, Methods and Challenges

    PubMed Central

    Zhou, Weiqiang; Sherwood, Ben; Ji, Hongkai

    2017-01-01

    Technological advances have led to an explosive growth of high-throughput functional genomic data. Exploiting the correlation among different data types, it is possible to predict one functional genomic data type from other data types. Prediction tools are valuable in understanding the relationship among different functional genomic signals. They also provide a cost-efficient solution to inferring the unknown functional genomic profiles when experimental data are unavailable due to resource or technological constraints. The predicted data may be used for generating hypotheses, prioritizing targets, interpreting disease variants, facilitating data integration, quality control, and many other purposes. This article reviews various applications of prediction methods in functional genomics, discusses analytical challenges, and highlights some common and effective strategies used to develop prediction methods for functional genomic data. PMID:28076869

  4. Correlation of finite element free vibration predictions using random vibration test data. M.S. Thesis - Cleveland State Univ.

    NASA Technical Reports Server (NTRS)

    Chambers, Jeffrey A.

    1994-01-01

    Finite element analysis is regularly used during the engineering cycle of mechanical systems to predict the response to static, thermal, and dynamic loads. The finite element model (FEM) used to represent the system is often correlated with physical test results to determine the validity of the analytical results provided. Results from dynamic testing provide one means for performing this correlation. One of the most common methods of measuring accuracy is classical modal testing, whereby vibratory mode shapes are compared to mode shapes provided by finite element analysis. The degree of correlation between the test and analytical mode shapes can be shown mathematically using the cross orthogonality check. A great deal of time and effort can be exhausted in generating the set of test-acquired mode shapes needed for the cross orthogonality check. In most situations, response data from vibration tests are digitally processed to generate the mode shapes from a combination of modal parameters, forcing functions, and recorded response data. An alternate method is proposed in which the same correlation of analytical and test-acquired mode shapes can be achieved without conducting the modal survey. Instead, a procedure is detailed in which a minimum of test information, specifically the acceleration response data from a random vibration test, is used to generate a set of equivalent local accelerations to be applied to the reduced analytical model at discrete points corresponding to the test measurement locations. The static solution of the analytical model then produces a set of deformations that, once normalized, can be used to represent the test-acquired mode shapes in the cross orthogonality relation. The method proposed has been shown to provide accurate results for both a simple analytical model and a complex space flight structure.
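The cross orthogonality check mentioned above reduces to a matrix product between test and analytical mode shapes weighted by the mass matrix. A toy 3-DOF sketch, with matrices invented for illustration:

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical 3-DOF system: diagonal mass matrix and stiffness matrix.
M = np.diag([2.0, 1.0, 3.0])
K = np.array([[ 4.0, -2.0,  0.0],
              [-2.0,  6.0, -4.0],
              [ 0.0, -4.0,  8.0]])

# Generalized eigenproblem K*phi = lambda*M*phi; eigh returns mass-normalized
# eigenvectors (phi^T M phi = I).
w2, phi_fem = eigh(K, M)

# "Test-acquired" shapes: FEM shapes perturbed by small measurement noise.
rng = np.random.default_rng(2)
phi_test = phi_fem + 0.01 * rng.normal(size=phi_fem.shape)

# Cross orthogonality matrix: near-identity diagonal (|entries| ~ 1) with
# small off-diagonal terms indicates good test/analysis correlation.
xor = phi_test.T @ M @ phi_fem
```

Typical acceptance criteria in modal correlation require diagonal terms near 1 and off-diagonal terms below roughly 0.1, though the thresholds vary by program.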

  5. Modeling and Analysis of Structural Dynamics for a One-Tenth Scale Model NGST Sunshield

    NASA Technical Reports Server (NTRS)

    Johnston, John; Lienard, Sebastien; Brodeur, Steve (Technical Monitor)

    2001-01-01

    New modeling and analysis techniques have been developed for predicting the dynamic behavior of the Next Generation Space Telescope (NGST) sunshield. The sunshield consists of multiple layers of pretensioned, thin-film membranes supported by deployable booms. Modeling the structural dynamic behavior of the sunshield is a challenging aspect of the problem due to the effects of membrane wrinkling. A finite element model of the sunshield was developed using an approximate engineering approach, the cable network method, to account for membrane wrinkling effects. Ground testing of a one-tenth scale model of the NGST sunshield was carried out to provide data for validating the analytical model. A series of analyses were performed to predict the behavior of the sunshield under the ground test conditions. Modal analyses were performed to predict the frequencies and mode shapes of the test article, and transient response analyses were completed to simulate impulse excitation tests. Comparison was made between analytical predictions and test measurements for the dynamic behavior of the sunshield. In general, the results show good agreement, with the analytical model correctly predicting the approximate frequencies and mode shapes for the significant structural modes.

  6. The general 2-D moments via integral transform method for acoustic radiation and scattering

    NASA Astrophysics Data System (ADS)

    Smith, Jerry R.; Mirotznik, Mark S.

    2004-05-01

    The moments via integral transform method (MITM) is a technique to analytically reduce the 2-D method of moments (MoM) impedance double integrals into single integrals. By using a special integral representation of the Green's function, the impedance integral can be analytically simplified to a single integral in terms of transformed shape and weight functions. The reduced expression requires fewer computations and reduces the fill times of the MoM impedance matrix. Furthermore, the resulting integral is analytic for nearly arbitrary shape and weight function sets. The MITM technique is developed for mixed boundary conditions, and predictions with basic shape and weight function sets are presented. Comparisons of accuracy and speed between MITM and brute force are presented. [Work sponsored by ONR and NSWCCD ILIR Board.]

  7. System identification of analytical models of damped structures

    NASA Technical Reports Server (NTRS)

    Fuh, J.-S.; Chen, S.-Y.; Berman, A.

    1984-01-01

    A procedure is presented for identifying a linear, nonproportionally damped system. The system damping is assumed to be representable by a real symmetric matrix. Analytical mass, stiffness and damping matrices which constitute an approximate representation of the system are assumed to be available. Given also are an incomplete set of measured natural frequencies, damping ratios and complex mode shapes of the structure, normally obtained from test data. A method is developed to find the smallest changes in the analytical model such that the improved model exactly predicts the measured modal parameters. The present method uses the orthogonality relationship to improve the mass and damping matrices, and the dynamic equation to find the improved stiffness matrix.
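For a nonproportionally damped system, the modal parameters referenced above come from a complex eigenproblem. A minimal state-space sketch, with toy matrices that are not from the paper:

```python
import numpy as np

# Hypothetical 2-DOF system with nonproportional damping.
M = np.diag([1.0, 2.0])
K = np.array([[20.0, -10.0], [-10.0, 30.0]])
C = np.array([[0.4, -0.1], [-0.1, 0.8]])   # not proportional to M or K

# First-order (state-space) form: eigenvalues come in complex-conjugate pairs.
n = M.shape[0]
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
lam, vec = np.linalg.eig(A)

freqs = np.abs(lam) / (2 * np.pi)   # natural frequencies (Hz)
zeta = -lam.real / np.abs(lam)      # modal damping ratios
```

With nonproportional damping the mode shapes (upper half of each column of `vec`) are genuinely complex, which is why the identification procedure above works with complex modal data rather than real normal modes.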

  8. On the nature of the fragment environment created by the range destruction or random failure of solid rocket motor casings

    NASA Technical Reports Server (NTRS)

    Eck, M.; Mukunda, M.

    1988-01-01

    Predictions are given of fragment velocities and azimuths resulting from Space Transportation System Solid Rocket Motor range destruct or random failure occurring at any time during the 120 seconds of Solid Rocket Motor burn. Results obtained using the analytical methods described showed good agreement between predictions and observations for two specific events. It was shown that these methods have good potential for use in predicting the fragmentation process of a number of generically similar casing systems. It was concluded that coupled Eulerian-Lagrangian calculational methods of the type described here provide a powerful tool for predicting Solid Rocket Motor response.

  9. Analytical estimation on divergence and flutter vibrations of symmetrical three-phase induction stator via field-synchronous coordinates

    NASA Astrophysics Data System (ADS)

    Xia, Ying; Wang, Shiyu; Sun, Wenjia; Xiu, Jie

    2017-01-01

    The electromagnetically induced parametric vibration of the symmetrical three-phase induction stator is examined. While it can be analyzed by approximate analytical or numerical methods, a more accurate and simpler analytical method is desirable. This work proposes a new method based on field-synchronous coordinates. A mechanical-electromagnetic coupling model is developed under this frame such that a time-invariant governing equation with a gyroscopic term can be developed. With general vibration theory, the eigenvalue problem is formulated; the transition curves between the stable and unstable regions, and the response, are all determined as closed-form expressions of basic mechanical-electromagnetic parameters. The dependence of the instability behaviors on these parameters is demonstrated. The results imply that divergence and flutter instabilities can occur even for symmetrical motors with balanced, constant-amplitude, sinusoidal voltage. To verify the analytical predictions, this work also builds a time-variant model of the same system under the conventional inertial frame. Floquet theory is employed to predict the parametric instability, and numerical integration is used to obtain the parametric response. The parametric instability and response both compare well with those under the field-synchronous coordinates. The proposed field-synchronous coordinates allow a quick estimation of the electromagnetically induced vibration. The convenience offered by body-fixed coordinates is discussed across various fields.
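The Floquet-based instability check described above can be sketched on the Mathieu equation, a standard stand-in for parametrically excited systems; the equation and parameter values are illustrative, not the paper's motor model:

```python
import numpy as np
from scipy.integrate import solve_ivp

def monodromy(delta, eps, period=np.pi):
    """Monodromy matrix of the Mathieu equation x'' + (delta + eps*cos(2t)) x = 0."""
    def rhs(t, y):
        x, xdot = y
        return [xdot, -(delta + eps * np.cos(2 * t)) * x]
    cols = []
    # Propagate the two canonical initial conditions over one coefficient period.
    for y0 in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
        sol = solve_ivp(rhs, (0.0, period), y0, rtol=1e-10, atol=1e-12)
        cols.append(sol.y[:, -1])
    return np.column_stack(cols)

def unstable(delta, eps):
    """Floquet criterion: any multiplier with |mu| > 1 means parametric instability."""
    mu = np.linalg.eigvals(monodromy(delta, eps))
    return bool(np.max(np.abs(mu)) > 1.0 + 1e-6)
```

Sweeping `delta` and `eps` with `unstable` traces out the classic instability tongues, the analogue of the transition curves the paper derives in closed form.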

  10. Healthcare predictive analytics: An overview with a focus on Saudi Arabia.

    PubMed

    Alharthi, Hana

    2018-03-08

    Despite a newfound wealth of data and information, the healthcare sector is lacking in actionable knowledge. This is largely because healthcare data, though plentiful, tends to be inherently complex and fragmented. Health data analytics, with an emphasis on predictive analytics, is emerging as a transformative tool that can enable more proactive and preventative treatment options. This review considers the ways in which predictive analytics has been applied in the for-profit business sector to generate well-timed and accurate predictions of key outcomes, with a focus on key features that may be applicable to healthcare-specific applications. Published medical research presenting assessments of predictive analytics technology in medical applications is reviewed, with particular emphasis on how hospitals have integrated predictive analytics into their day-to-day healthcare services to improve quality of care. This review also highlights the numerous challenges of implementing predictive analytics in healthcare settings and concludes with a discussion of current efforts to implement healthcare data analytics in a developing country, Saudi Arabia. Copyright © 2018 The Author. Published by Elsevier Ltd. All rights reserved.

  11. Results of an integrated structure/control law design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1989-01-01

    A design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, which predicts changes in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations, is discussed. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if a parameter were to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.
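The idea of predicting a new optimal control law from parameter sensitivities can be sketched with a toy LQR problem; here the sensitivity is obtained by central differences rather than the paper's analytical expressions, and the system matrices are invented:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(a):
    """Optimal state-feedback gain for a scalar-parameterized toy system."""
    A = np.array([[0.0, 1.0], [-a, -0.5]])   # stiffness-like parameter a
    B = np.array([[0.0], [1.0]])
    Q = np.eye(2)
    R = np.array([[1.0]])
    P = solve_continuous_are(A, B, Q, R)     # solve the Riccati equation
    return np.linalg.solve(R, B.T @ P)       # K = R^-1 B^T P

# Sensitivity of the gain to the parameter, by central difference.
a0, da = 2.0, 1e-6
dK_da = (lqr_gain(a0 + da) - lqr_gain(a0 - da)) / (2 * da)

# First-order prediction of the gain at a = 2.1 vs. the recomputed optimum.
K_pred = lqr_gain(a0) + dK_da * 0.1
K_true = lqr_gain(a0 + 0.1)
```

The close match between `K_pred` and `K_true` mirrors the paper's validation strategy of recomputing the optimal law at perturbed parameter values.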

  12. Predicting Student Success using Analytics in Course Learning Management Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olama, Mohammed M; Thakur, Gautam; McNair, Wade

    Educational data analytics is an emerging discipline, concerned with developing methods for exploring the unique types of data that come from the educational context. For example, predicting college student performance is crucial for both the student and educational institutions. It can support timely intervention to prevent students from failing a course, increasing efficacy of advising functions, and improving course completion rate. In this paper, we present the efforts carried out at Oak Ridge National Laboratory (ORNL) toward conducting predictive analytics to academic data collected from 2009 through 2013 and available in one of the most commonly used learning management systems, called Moodle. First, we have identified the data features useful for predicting student outcomes such as students' scores in homework assignments, quizzes, exams, in addition to their activities in discussion forums and their total GPA at the same term they enrolled in the course. Then, Logistic Regression and Neural Network predictive models are used to identify students as early as possible that are in danger of failing the course they are currently enrolled in. These models compute the likelihood of any given student failing (or passing) the current course. Numerical results are presented to evaluate and compare the performance of the developed models and their predictive accuracy.

  13. Predicting student success using analytics in course learning management systems

    NASA Astrophysics Data System (ADS)

    Olama, Mohammed M.; Thakur, Gautam; McNair, Allen W.; Sukumar, Sreenivas R.

    2014-05-01

    Educational data analytics is an emerging discipline, concerned with developing methods for exploring the unique types of data that come from the educational context. For example, predicting college student performance is crucial for both the student and educational institutions. It can support timely intervention to prevent students from failing a course, increasing efficacy of advising functions, and improving course completion rate. In this paper, we present the efforts carried out at Oak Ridge National Laboratory (ORNL) toward conducting predictive analytics to academic data collected from 2009 through 2013 and available in one of the most commonly used learning management systems, called Moodle. First, we have identified the data features useful for predicting student outcomes such as students' scores in homework assignments, quizzes, exams, in addition to their activities in discussion forums and their total GPA at the same term they enrolled in the course. Then, Logistic Regression and Neural Network predictive models are used to identify students as early as possible that are in danger of failing the course they are currently enrolled in. These models compute the likelihood of any given student failing (or passing) the current course. Numerical results are presented to evaluate and compare the performance of the developed models and their predictive accuracy.
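A minimal sketch of the kind of logistic regression failure-risk model described in these two records, on synthetic LMS-style features; all feature names, coefficients, and data below are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 400

# Hypothetical LMS features: quiz average, homework average, forum posts, GPA.
quiz = rng.uniform(0, 100, n)
hw = rng.uniform(0, 100, n)
posts = rng.poisson(5, n)
gpa = rng.uniform(1.0, 4.0, n)

# Synthetic failure mechanism: low quiz/homework scores raise failure risk.
logit = 4.0 - 0.05 * quiz - 0.04 * hw + 0.1 * (2.5 - gpa)
fail = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([quiz, hw, posts, gpa])
model = LogisticRegression(max_iter=1000).fit(X, fail)
risk = model.predict_proba(X)[:, 1]        # probability of failing the course
auc = roc_auc_score(fail, risk)
```

In practice such a model would be trained on earlier terms and scored early in the current term, so that students with high `risk` can be flagged for intervention.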

  14. Prediction of response of aircraft panels subjected to acoustic and thermal loads

    NASA Technical Reports Server (NTRS)

    Mei, Chuh

    1992-01-01

    The primary effort of this research project has been focused on the development of analytical methods for the prediction of the random response of structural panels subjected to combined, intense acoustic and thermal loads. The accomplishments of various acoustic fatigue research activities are described first, followed by publications and theses. Topics covered include: transverse shear deformation; finite element models of vibrating composite laminates; large deflection vibration modeling; finite element analysis of thermal buckling; and prediction of three-dimensional duct noise using the boundary element method.

  15. Prediction of pressure and flow transients in a gaseous bipropellant reaction control rocket engine

    NASA Technical Reports Server (NTRS)

    Markowsky, J. J.; Mcmanus, H. N., Jr.

    1974-01-01

    An analytic model is developed to predict pressure and flow transients in a gaseous hydrogen-oxygen reaction control rocket engine feed system. The one-dimensional equations of momentum and continuity are reduced by the method of characteristics from partial differential equations to a set of total-derivative equations which describe the state properties along the feedline. System components (e.g., valves, manifolds, and injectors) are represented by pseudo steady-state relations at discrete junctions in the system. Solutions were effected by a FORTRAN IV program on an IBM 360/65. The results indicate the relative effect of manifold volume, combustion lag time, feedline pressure fluctuations, propellant temperature, and feedline length on the chamber pressure transient. The analytical combustion model is verified by good correlation between predicted and observed chamber pressure transients. The developed model enables a rocket designer to vary the design parameters analytically to obtain stable combustion for a particular mode of operation prescribed by mission objectives.
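The method-of-characteristics reduction described above can be sketched for a frictionless 1-D line: along dx/dt = ±a the compatibility relations dp ± ρa dv = 0 hold, giving explicit node updates. The geometry, gas properties, and boundary conditions below are illustrative assumptions, not the paper's feed system:

```python
import numpy as np

# Hypothetical feedline: acoustic speed a, density rho, friction neglected.
a, rho = 400.0, 1.2          # m/s, kg/m^3 (illustrative gaseous propellant)
N, L = 21, 2.0               # grid nodes, line length (m)
dx = L / (N - 1)
dt = dx / a                  # characteristic (Courant) time step

p = np.full(N, 1.0e5)        # initial uniform pressure, Pa
v = np.zeros(N)              # initial velocity, m/s

for _ in range(200):
    p_new, v_new = p.copy(), v.copy()
    # Interior nodes: combine the C+ relation (from the left neighbor A)
    # and the C- relation (from the right neighbor B): dp +/- rho*a*dv = 0.
    p_new[1:-1] = 0.5 * (p[:-2] + p[2:]) + 0.5 * rho * a * (v[:-2] - v[2:])
    v_new[1:-1] = 0.5 * (v[:-2] + v[2:]) + (p[:-2] - p[2:]) / (2 * rho * a)
    # Boundaries: constant supply pressure upstream, closed valve downstream.
    p_new[0] = 1.1e5                                   # supply step (valve opened)
    v_new[0] = v[1] + (p_new[0] - p[1]) / (rho * a)    # C- relation at inlet
    v_new[-1] = 0.0                                    # closed end
    p_new[-1] = p[-2] + rho * a * (v[-2] - v_new[-1])  # C+ relation at outlet
    p, v = p_new, v_new
```

The pressure step launched at the inlet reflects off the closed end with doubled amplitude and rings back and forth, the basic mechanism behind the feedline pressure fluctuations the abstract discusses; component junctions would replace the simple boundary relations here.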

  16. An Other Perspective on Personality: Meta-Analytic Integration of Observers' Accuracy and Predictive Validity

    ERIC Educational Resources Information Center

    Connelly, Brian S.; Ones, Deniz S.

    2010-01-01

    The bulk of personality research has been built from self-report measures of personality. However, collecting personality ratings from other-raters, such as family, friends, and even strangers, is a dramatically underutilized method that allows better explanation and prediction of personality's role in many domains of psychology. Drawing…

  17. On-line solid-phase microextraction of triclosan, bisphenol A, chlorophenols, and selected pharmaceuticals in environmental water samples by high-performance liquid chromatography-ultraviolet detection.

    PubMed

    Kim, Dalho; Han, Jungho; Choi, Yongwook

    2013-01-01

    A method using on-line solid-phase microextraction (SPME) on a carbowax-templated fiber followed by liquid chromatography (LC) with ultraviolet (UV) detection was developed for the determination of triclosan (TCS) in environmental water samples. Along with triclosan, other selected phenolic compounds, bisphenol A, and acidic pharmaceuticals were studied. Previous SPME/LC and stir-bar sorptive extraction/LC-UV methods for polar analytes lacked sensitivity. In this study, the calculated octanol-water distribution coefficient (log D) values of the target analytes at different pH values were used to estimate the polarity of the analytes. The lack of sensitivity observed in earlier studies is identified as a lack of desorption caused by strong polar-polar interactions between analyte and solid phase. Calculated log D values were useful for understanding and predicting the interaction between analyte and solid phase. Under the optimized conditions, the method detection limits of the selected analytes using the on-line SPME-LC-UV method ranged from 5 to 33 ng L(-1), except for the very polar 3-chlorophenol and 2,4-dichlorophenol, which were obscured in wastewater samples by an interfering substance. This level of detection represents a remarkable improvement over conventional existing methods. The on-line SPME-LC-UV method, which did not require derivatization of analytes, was applied to the determination of TCS, phenolic compounds, and acidic pharmaceuticals in tap water, river water, and municipal wastewater samples.
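The log D estimate used above to gauge analyte polarity follows from the Henderson-Hasselbalch relation for a monoprotic acid; the log P and pKa values below are illustrative stand-ins for triclosan, not the paper's data:

```python
import math

def log_d_acid(log_p, pka, ph):
    """Octanol-water distribution coefficient of a monoprotic acid at a given pH.

    log D = log P - log10(1 + 10^(pH - pKa)): the ionized fraction stays in
    the aqueous phase, lowering the effective distribution coefficient.
    """
    return log_p - math.log10(1.0 + 10.0 ** (ph - pka))

# Illustrative values of the same order as literature figures for triclosan.
log_p, pka = 4.8, 7.9
d_low_ph = log_d_acid(log_p, pka, 3.0)    # mostly neutral -> log D ~ log P
d_high_ph = log_d_acid(log_p, pka, 11.0)  # mostly ionized -> log D drops
```

This pH dependence is exactly why the study evaluated log D across pH values: an analyte that looks extractable at low pH can become too polar to desorb efficiently once ionized.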

  18. Learning Analytics at Low Cost: At-Risk Student Prediction with Clicker Data and Systematic Proactive Interventions

    ERIC Educational Resources Information Center

    Choi, Samuel P. M.; Lam, S. S.; Li, Kam Cheong; Wong, Billy T. M.

    2018-01-01

    While learning analytics (LA) practices have been shown to be practical and effective, most of them require a huge amount of data and effort. This paper reports a case study which demonstrates the feasibility of practising LA at a low cost for instructors to identify at-risk students in an undergraduate business quantitative methods course.…

  19. A Superior Kirchhoff Method for Aeroacoustic Noise Prediction: The Ffowcs Williams-Hawkings Equation

    NASA Technical Reports Server (NTRS)

    Brentner, Kenneth S.

    1997-01-01

    The prediction of aeroacoustic noise is important; all new aircraft must meet noise certification requirements. Local noise standards can be even more stringent. The NASA noise reduction goal is to reduce perceived noise levels by a factor of two in 10 years. The objective of this viewgraph presentation is to demonstrate the superiority of the FW-H approach over the Kirchhoff method for aeroacoustics, both analytically and numerically.

  20. Prediction of Process-Induced Distortions in L-Shaped Composite Profiles Using Path-Dependent Constitutive Law

    NASA Astrophysics Data System (ADS)

    Ding, Anxin; Li, Shuxin; Wang, Jihui; Ni, Aiqing; Sun, Liangliang; Chang, Lei

    2016-10-01

    In this paper, the corner spring-in angles of AS4/8552 L-shaped composite profiles with different thicknesses are predicted using a path-dependent constitutive law that accounts for the variation of material properties due to phase change during curing. The prediction accuracy mainly depends on the properties in the rubbery and glassy states obtained by homogenization methods rather than experimental measurements. Both analytical and finite element (FE) homogenization methods are applied to predict the overall properties of the AS4/8552 composite. The effect of fiber volume fraction on the properties is investigated for both the rubbery and glassy states using both methods, and the predicted results are compared with experimental measurements for the glassy state. Good agreement is achieved between the predicted results and available experimental data, showing the reliability of the homogenization methods. Furthermore, the corner spring-in angles of the L-shaped composite profiles are measured experimentally, validating the path-dependent constitutive law as well as the properties predicted by the FE homogenization method.

  1. Personalized dynamic prediction of death according to tumour progression and high-dimensional genetic factors: Meta-analysis with a joint model.

    PubMed

    Emura, Takeshi; Nakatochi, Masahiro; Matsui, Shigeyuki; Michimae, Hirofumi; Rondeau, Virginie

    2017-01-01

    Developing a personalized risk prediction model of death is fundamental for improving patient care and touches on the realm of personalized medicine. The increasing availability of genomic information and large-scale meta-analytic data sets for clinicians has motivated the extension of traditional survival prediction based on the Cox proportional hazards model. The aim of our paper is to develop a personalized risk prediction formula for death according to genetic factors and dynamic tumour progression status based on meta-analytic data. To this end, we extend the existing joint frailty-copula model to a model allowing for high-dimensional genetic factors. In addition, we propose a dynamic prediction formula to predict death given tumour progression events possibly occurring after treatment or surgery. For clinical use, we implement the computation software of the prediction formula in the joint.Cox R package. We also develop a tool to validate the performance of the prediction formula by assessing the prediction error. We illustrate the method with the meta-analysis of individual patient data on ovarian cancer patients.

  2. Quantitative structure-retention relationships applied to development of liquid chromatography gradient-elution method for the separation of sartans.

    PubMed

    Golubović, Jelena; Protić, Ana; Otašević, Biljana; Zečević, Mira

    2016-04-01

    QSRRs (quantitative structure-retention relationships) are mathematically derived relationships between the chromatographic parameters determined for a representative series of analytes in given separation systems and the molecular descriptors accounting for the structural differences among the investigated analytes. An artificial neural network (ANN) is a data analysis technique that sets out to emulate the way the human brain works. The aim of the present work was to optimize the separation of six angiotensin receptor antagonists, the so-called sartans: losartan, valsartan, irbesartan, telmisartan, candesartan cilexetil, and eprosartan, in a gradient-elution HPLC method. For this purpose, an ANN was used as a mathematical tool for establishing a QSRR model based on molecular descriptors of the sartans and varied instrumental conditions. The optimized model can be further used for prediction of an external congener of the sartans and for analysis of the influence of analyte structure, represented through molecular descriptors, on retention behaviour. The molecular descriptors included in modelling were electrostatic, geometrical, and quantum-chemical descriptors: Connolly solvent-excluded volume, non-1,4 van der Waals energy, octanol/water distribution coefficient, polarizability, number of proton-donor sites, and number of proton-acceptor sites. The varied instrumental conditions were gradient time, buffer pH, and buffer molarity. The high prediction ability of the optimized network enabled complete separation of the analytes within a run time of 15.5 min under the following conditions: gradient time of 12.5 min, buffer pH of 3.95, and buffer molarity of 25 mM. The applied methodology showed the potential to predict the retention behaviour of an external analyte with properties within the training space. Connolly solvent-excluded volume, polarizability, and number of proton-acceptor sites appeared to be the most influential parameters for the retention behaviour of the sartans. Copyright © 2015 Elsevier B.V. All rights reserved.
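A minimal QSRR-style sketch using scikit-learn's `MLPRegressor` as the ANN, with synthetic descriptors and instrumental factors; all variables and the retention surface below are invented, not the paper's data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

# Hypothetical design matrix: 3 molecular descriptors + 3 instrumental factors
# (gradient time, buffer pH, buffer molarity), all scaled to [0, 1].
X = rng.uniform(size=(120, 6))

# Synthetic retention-time surface, nonlinear in the conditions (illustrative).
rt = (5 + 8 * X[:, 0] + 4 * X[:, 1] * X[:, 4]
      + 3 * np.sin(np.pi * X[:, 3]) + rng.normal(scale=0.2, size=120))

ann = MLPRegressor(hidden_layer_sizes=(32,), solver="lbfgs",
                   max_iter=2000, random_state=0)
ann.fit(X[:100], rt[:100])          # train on 100 analyte/condition rows
r2 = ann.score(X[100:], rt[100:])   # hold out 20 rows for validation
```

Once validated, such a surrogate can be queried over a grid of instrumental conditions to pick the gradient time, pH, and molarity that best separate the analytes, mirroring the paper's optimization step.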

  3. Prediction of In-hospital Mortality in Emergency Department Patients With Sepsis: A Local Big Data-Driven, Machine Learning Approach.

    PubMed

    Taylor, R Andrew; Pare, Joseph R; Venkatesh, Arjun K; Mowafi, Hani; Melnick, Edward R; Fleischman, William; Hall, M Kennedy

    2016-03-01

    Predictive analytics in emergency care has mostly been limited to the use of clinical decision rules (CDRs) in the form of simple heuristics and scoring systems. In the development of CDRs, limitations in analytic methods and concerns with usability have generally constrained models to a preselected small set of variables judged to be clinically relevant and to rules that are easily calculated. Furthermore, CDRs frequently suffer from questions of generalizability, take years to develop, and lack the ability to be updated as new information becomes available. Newer analytic and machine learning techniques capable of harnessing the large number of variables that are already available through electronic health records (EHRs) may better predict patient outcomes and facilitate automation and deployment within clinical decision support systems. In this proof-of-concept study, a local, big data-driven, machine learning approach is compared to existing CDRs and traditional analytic methods using the prediction of sepsis in-hospital mortality as the use case. This was a retrospective study of adult ED visits admitted to the hospital meeting criteria for sepsis from October 2013 to October 2014. Sepsis was defined as meeting criteria for systemic inflammatory response syndrome with an infectious admitting diagnosis in the ED. ED visits were randomly partitioned into an 80%/20% split for training and validation. A random forest model (machine learning approach) was constructed using over 500 clinical variables from data available within the EHRs of four hospitals to predict in-hospital mortality. The machine learning prediction model was then compared to a classification and regression tree (CART) model, logistic regression model, and previously developed prediction tools on the validation data set using area under the receiver operating characteristic curve (AUC) and chi-square statistics. There were 5,278 visits among 4,676 unique patients who met criteria for sepsis. 
Of the 4,222 patients in the training group, 210 (5.0%) died during hospitalization, and of the 1,056 patients in the validation group, 50 (4.7%) died during hospitalization. The AUCs with 95% confidence intervals (CIs) for the different models were as follows: random forest model, 0.86 (95% CI = 0.82 to 0.90); CART model, 0.69 (95% CI = 0.62 to 0.77); logistic regression model, 0.76 (95% CI = 0.69 to 0.82); CURB-65, 0.73 (95% CI = 0.67 to 0.80); MEDS, 0.71 (95% CI = 0.63 to 0.77); and mREMS, 0.72 (95% CI = 0.65 to 0.79). The random forest model AUC was statistically different from all other models (p ≤ 0.003 for all comparisons). In this proof-of-concept study, a local big data-driven, machine learning approach outperformed existing CDRs as well as traditional analytic techniques for predicting in-hospital mortality of ED patients with sepsis. Future research should prospectively evaluate the effectiveness of this approach and whether it translates into improved clinical outcomes for high-risk sepsis patients. The methods developed serve as an example of a new model for predictive analytics in emergency care that can be automated, applied to other clinical outcomes of interest, and deployed in EHRs to enable locally relevant clinical predictions. © 2015 by the Society for Academic Emergency Medicine.

  4. Graphical Method for Determining Projectile Trajectory

    ERIC Educational Resources Information Center

    Moore, J. C.; Baker, J. C.; Franzel, L.; McMahon, D.; Songer, D.

    2010-01-01

    We present a nontrigonometric graphical method for predicting the trajectory of a projectile when the angle and initial velocity are known. Students enrolled in a general education conceptual physics course typically have weak backgrounds in trigonometry, making inaccessible the standard analytical calculation of projectile range. Furthermore,…

  5. Turbofan forced mixer lobe flow modeling. 1: Experimental and analytical assessment

    NASA Technical Reports Server (NTRS)

    Barber, T.; Paterson, R. W.; Skebe, S. A.

    1988-01-01

    A joint analytical and experimental investigation of three-dimensional flowfield development within the lobe region of turbofan forced mixer nozzles is described. The objective was to develop a method for predicting the lobe exit flowfield. In the analytical approach, a linearized inviscid aerodynamical theory was used for representing the axial and secondary flows within the three-dimensional convoluted mixer lobes, and a three-dimensional boundary layer analysis was applied thereafter to account for viscous effects. The experimental phase of the program employed three planar mixer lobe models having different waveform shapes and lobe heights, for which detailed measurements were made of the three-dimensional velocity field and total pressure field at the lobe exit plane. Velocity data were obtained using Laser Doppler Velocimetry (LDV), and total pressure probing and hot-wire anemometry were employed to define exit plane total pressure and boundary layer development. Comparison of data and analysis was performed to assess analytical model prediction accuracy. As a result of this study, a planar mixer geometry analysis was developed. A principal conclusion is that the global mixer lobe flowfield is inviscid and can be predicted from an inviscid analysis and a Kutta condition.

  6. Neoclassical toroidal viscosity calculations in tokamaks using a δf Monte Carlo simulation and their verifications.

    PubMed

    Satake, S; Park, J-K; Sugama, H; Kanno, R

    2011-07-29

    Neoclassical toroidal viscosities (NTVs) in tokamaks are investigated using a δf Monte Carlo simulation and are successfully verified against a combined analytic theory over a wide range of collisionality. A Monte Carlo simulation is required in the study of NTV because the complexities of particle guiding-center orbits and collisions cannot be fully captured by analytic theories alone. The results yield the details of the complex dependence of NTV on particle precessions and collisions, which the combined analytic theory predicts only roughly. Both numerical and analytic methods can be utilized and extended on the basis of these successful verifications.

  7. Explicit Analytical Solution of a Pendulum with Periodically Varying Length

    ERIC Educational Resources Information Center

    Yang, Tianzhi; Fang, Bo; Li, Song; Huang, Wenhu

    2010-01-01

    A pendulum with periodically varying length is an interesting physical system. It has been studied by some researchers using traditional perturbation methods (for example, the averaging method). But due to the limitation of the conventional perturbation methods, the solutions are not valid for long-term prediction of the pendulum. In this paper,…

  8. Analytical methods for the development of Reynolds stress closures in turbulence

    NASA Technical Reports Server (NTRS)

    Speziale, Charles G.

    1990-01-01

    Analytical methods for the development of Reynolds stress models in turbulence are reviewed in detail. Zero, one and two equation models are discussed along with second-order closures. A strong case is made for the superior predictive capabilities of second-order closure models in comparison to the simpler models. The central points are illustrated by examples from both homogeneous and inhomogeneous turbulence. A discussion of the author's views concerning the progress made in Reynolds stress modeling is also provided along with a brief history of the subject.

  9. Eddy current loss analysis of open-slot fault-tolerant permanent-magnet machines based on conformal mapping method

    NASA Astrophysics Data System (ADS)

    Ji, Jinghua; Luo, Jianhua; Lei, Qian; Bian, Fangfang

    2017-05-01

    This paper proposes an analytical method, based on the conformal mapping (CM) method, for accurate evaluation of the magnetic field and eddy current (EC) loss in fault-tolerant permanent-magnet (FTPM) machines. The modulation function applied in the CM method transforms the open-slot structure into a fully closed-slot structure, whose air-gap flux density is straightforward to calculate analytically. With the help of the Matlab Schwarz-Christoffel (SC) Toolbox, both the magnetic flux density and the EC density of the FTPM machine are obtained accurately. Finally, a time-stepped transient finite-element method (FEM) is used to verify the theoretical analysis, showing that the proposed method predicts the magnetic flux density and EC loss precisely.

  10. Calculation of Thermally-Induced Displacements in Spherically Domed Ion Engine Grids

    NASA Technical Reports Server (NTRS)

    Soulas, George C.

    2006-01-01

    An analytical method for predicting the thermally-induced normal and tangential displacements of spherically domed ion optics grids under an axisymmetric thermal loading is presented. A fixed edge support that could be thermally expanded is used for this analysis. Equations for the displacements both normal and tangential to the surface of the spherical shell are derived. A simplified equation for the displacement at the center of the spherical dome is also derived. The effects of plate perforation on displacements and stresses are determined by modeling the perforated plate as an equivalent solid plate with modified, or effective, material properties. Analytical model results are compared to the results from a finite element model. For the solid shell, comparisons showed that the analytical model produces results that closely match the finite element model results. The simplified equation for the normal displacement of the spherical dome center is also found to accurately predict this displacement. For the perforated shells, the analytical solution and simplified equation produce accurate results for materials with low thermal expansion coefficients.

  11. Calibrant-Free Analyte Quantitation via a Variable Velocity Flow Cell.

    PubMed

    Beck, Jason G; Skuratovsky, Aleksander; Granger, Michael C; Porter, Marc D

    2017-01-17

    In this paper, we describe a novel method for analyte quantitation that does not rely on calibrants, internal standards, or calibration curves but, rather, leverages the relationship between disparate and predictable surface-directed analyte flux to an array of sensing addresses and a measured resultant signal. To reduce this concept to practice, we fabricated two flow cells such that the mean linear fluid velocity, U, was varied systematically over an array of electrodes positioned along the flow axis. This resulted in a predictable variation of the address-directed flux of a redox analyte, ferrocenedimethanol (FDM). The resultant limiting currents, measured at a series of these electrodes and accurately described by a convective-diffusive transport model, provided a means to calculate an "unknown" concentration without the use of calibrants, internal standards, or a calibration curve. Furthermore, the experiment and concentration calculation take only minutes to perform. Deviation in calculated FDM concentrations from true values was minimized to less than 0.5% when empirically derived values of U were employed.
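As a toy illustration of the calibrant-free principle (not the paper's actual convective-diffusive model), suppose the transport model predicts a limiting current i = k·C·U^(1/3) at each electrode, a Levich-type velocity dependence assumed here only for the sketch; the constant k, the velocities, and the "unknown" concentration below are all hypothetical. A least-squares fit of measured currents against the model's per-unit-concentration flux term then yields the concentration directly:

```python
# Sketch: recover an "unknown" concentration from currents at several flow
# velocities using only a transport model -- no calibrants or calibration curve.
# The i = k*C*U**(1/3) form and all numbers here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
k = 3.2e-4        # model-derived proportionality constant (hypothetical units)
C_true = 50.0     # the "unknown" concentration to be recovered (synthetic)

U = np.array([0.5, 1.0, 2.0, 4.0, 8.0])     # mean linear velocities at the electrodes
flux_term = k * U ** (1.0 / 3.0)            # model-predicted current per unit concentration
i_meas = C_true * flux_term * (1.0 + 0.01 * rng.normal(size=U.size))  # 1% noise

# Least-squares estimate of C from the model alone
C_est = float(flux_term @ i_meas / (flux_term @ flux_term))
print(f"estimated concentration = {C_est:.2f} (true value {C_true})")
```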

  12. Empirical Prediction of Aircraft Landing Gear Noise

    NASA Technical Reports Server (NTRS)

    Golub, Robert A. (Technical Monitor); Guo, Yue-Ping

    2005-01-01

    This report documents a semi-empirical/semi-analytical method for landing gear noise prediction. The method is based on scaling laws of the theory of aerodynamic noise generation and correlation of these scaling laws with current available test data. The former gives the method a sound theoretical foundation and the latter quantitatively determines the relations between the parameters of the landing gear assembly and the far field noise, enabling practical predictions of aircraft landing gear noise, both for parametric trends and for absolute noise levels. The prediction model is validated by wind tunnel test data for an isolated Boeing 737 landing gear and by flight data for the Boeing 777 airplane. In both cases, the predictions agree well with data, both in parametric trends and in absolute noise levels.

  13. Development of Aeroservoelastic Analytical Models and Gust Load Alleviation Control Laws of a SensorCraft Wind-Tunnel Model Using Measured Data

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Vartio, Eric; Shimko, Anthony; Kvaternik, Raymond G.; Eure, Kenneth W.; Scott,Robert C.

    2007-01-01

    Aeroservoelastic (ASE) analytical models of a SensorCraft wind-tunnel model are generated using measured data. The data were acquired during the ASE wind-tunnel test of the HiLDA (High Lift-to-Drag Active) Wing model, tested in the NASA Langley Transonic Dynamics Tunnel (TDT) in late 2004. Two time-domain system identification techniques are applied to the development of the ASE analytical models: the impulse response (IR) method and the Generalized Predictive Control (GPC) method. Using measured control surface inputs (frequency sweeps) and associated sensor responses, the IR method is used to extract corresponding input/output impulse response pairs. These impulse responses are then transformed into state-space models for use in ASE analyses. Similarly, the GPC method transforms measured random control surface inputs and associated sensor responses into an AutoRegressive with eXogenous input (ARX) model. The ARX model is then used to develop the gust load alleviation (GLA) control law. For the IR method, comparisons of measured and simulated responses are presented to investigate the accuracy of the ASE analytical models developed. For the GPC method, comparisons of simulated open-loop and closed-loop (GLA) time histories are presented.
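A minimal numpy sketch of identifying an ARX model from input/output data by linear least squares, the kind of model the GPC method builds from measured control-surface inputs and sensor responses. The model orders, coefficients, and data here are synthetic, not from the wind-tunnel test, and this is the generic ARX least-squares step rather than the GPC formulation itself:

```python
# Sketch: fit an ARX(2,1) model y[t] = a1*y[t-1] + a2*y[t-2] + b1*u[t-1] + e[t]
# to synthetic input/output data by ordinary least squares.
import numpy as np

rng = np.random.default_rng(2)
N = 500
u = rng.normal(size=N)               # random excitation ("control surface input")
a1, a2, b1 = 0.6, -0.2, 0.8          # true ARX coefficients (synthetic system)
y = np.zeros(N)
for t in range(2, N):
    y[t] = a1 * y[t - 1] + a2 * y[t - 2] + b1 * u[t - 1] + 0.01 * rng.normal()

# Stack the regressors [y[t-1], y[t-2], u[t-1]] and solve for the coefficients
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print("estimated [a1, a2, b1] =", np.round(theta, 3))
```

The estimated coefficient vector can then be rewritten in state-space form for control design, which is the role the ARX model plays in the GLA control law above.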

  14. Development of Aeroservoelastic Analytical Models and Gust Load Alleviation Control Laws of a SensorCraft Wind-Tunnel Model Using Measured Data

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Shimko, Anthony; Kvaternik, Raymond G.; Eure, Kenneth W.; Scott, Robert C.

    2006-01-01

    Aeroservoelastic (ASE) analytical models of a SensorCraft wind-tunnel model are generated using measured data. The data were acquired during the ASE wind-tunnel test of the HiLDA (High Lift-to-Drag Active) Wing model, tested in the NASA Langley Transonic Dynamics Tunnel (TDT) in late 2004. Two time-domain system identification techniques are applied to the development of the ASE analytical models: the impulse response (IR) method and the Generalized Predictive Control (GPC) method. Using measured control surface inputs (frequency sweeps) and associated sensor responses, the IR method is used to extract corresponding input/output impulse response pairs. These impulse responses are then transformed into state-space models for use in ASE analyses. Similarly, the GPC method transforms measured random control surface inputs and associated sensor responses into an AutoRegressive with eXogenous input (ARX) model. The ARX model is then used to develop the gust load alleviation (GLA) control law. For the IR method, comparisons of measured and simulated responses are presented to investigate the accuracy of the ASE analytical models developed. For the GPC method, comparisons of simulated open-loop and closed-loop (GLA) time histories are presented.

  15. A three-step approach for the derivation and validation of high-performing predictive models using an operational dataset: congestive heart failure readmission case study.

    PubMed

    AbdelRahman, Samir E; Zhang, Mingyuan; Bray, Bruce E; Kawamoto, Kensaku

    2014-05-27

    The aim of this study was to propose an analytical approach to develop high-performing predictive models for congestive heart failure (CHF) readmission using an operational dataset with incomplete records and changing data over time. Our analytical approach involves three steps: pre-processing, systematic model development, and risk factor analysis. For pre-processing, variables that were absent in >50% of records were removed. Moreover, the dataset was divided into a validation dataset and derivation datasets, which were separated into three temporal subsets based on changes to the data over time. For systematic model development, using the different temporal datasets and the remaining explanatory variables, the models were developed by combining the use of various (i) statistical analyses to explore the relationships between the validation and the derivation datasets; (ii) adjustment methods for handling missing values; (iii) classifiers; (iv) feature selection methods; and (v) discretization methods. We then selected the best derivation dataset and the models with the highest predictive performance. For risk factor analysis, factors in the highest-performing predictive models were analyzed and ranked using (i) statistical analyses of the best derivation dataset, (ii) feature rankers, and (iii) a newly developed algorithm to categorize risk factors as being strong, regular, or weak. The analysis dataset consisted of 2,787 CHF hospitalizations at University of Utah Health Care from January 2003 to June 2013. In this study, we used the complete-case analysis and mean-based imputation adjustment methods; the wrapper subset feature selection method; and four ranking strategies based on information gain, gain ratio, symmetrical uncertainty, and wrapper subset feature evaluators. 
The best-performing models resulted from the use of a complete-case analysis derivation dataset combined with the Class-Attribute Contingency Coefficient discretization method and a voting classifier which averaged the results of multinomial logistic regression and voting feature intervals classifiers. Of 42 final model risk factors, discharge disposition, discretized age, and indicators of anemia were the most significant. This model achieved a c-statistic of 86.8%. The proposed three-step analytical approach enhanced predictive model performance for CHF readmissions. It could potentially be leveraged to improve predictive model performance in other areas of clinical medicine.
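The final ensemble step, averaging the probabilities of two classifiers, can be sketched with a soft-voting ensemble. Note the substitution: the study pairs multinomial logistic regression with a voting feature intervals (VFI) classifier, which scikit-learn does not provide, so a decision tree stands in here; the data are synthetic.

```python
# Sketch: soft-voting ensemble that averages predicted probabilities of two
# classifiers, scored by c-statistic (AUC). A decision tree stands in for the
# study's voting feature intervals classifier; data are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=1500, n_features=20, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

ens = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("tree", DecisionTreeClassifier(max_depth=4, random_state=0))],
    voting="soft",            # average predicted probabilities across members
).fit(X_tr, y_tr)

auc = roc_auc_score(y_va, ens.predict_proba(X_va)[:, 1])
print(f"validation c-statistic = {auc:.3f}")
```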

  16. The study on the near infrared spectrum technology of sauce component analysis

    NASA Astrophysics Data System (ADS)

    Li, Shangyu; Zhang, Jun; Chen, Xingdan; Liang, Jingqiu; Wang, Ce

    2006-01-01

    The first author, Shangyu Li, works in product quality supervision and inspection. In soy sauce manufacturing, quality control of intermediate and final products requires measuring many components, such as total nitrogen, saltless soluble solids, nitrogen of amino acids, and total acid. Wet-chemistry analytical methods demand considerable labor and time for these analyses. To address this problem, we used near infrared (NIR) spectroscopy to measure the chemical composition of soy sauce. A set of soy sauce samples was collected and analyzed by wet-chemistry methods. The samples were scanned by two kinds of spectrometer: a Fourier transform near infrared (FT-NIR) spectrometer and a filter-type NIR analyzer. The NIR spectra were calibrated against the wet-chemistry values by partial least squares regression and stepwise multiple linear regression. The contents of saltless soluble solids, total nitrogen, total acid, and nitrogen of amino acids were predicted by cross validation, and the results were compared with the wet-chemistry methods. The correlation coefficient and root-mean-square error of prediction (RMSEP) in the better prediction run were 0.961 and 0.206 for total nitrogen, 0.913 and 1.215 for saltless soluble solids, 0.855 and 0.199 for nitrogen of amino acids, and 0.966 and 0.231 for total acid, respectively. These results demonstrate that NIR spectroscopy is promising for fast and reliable determination of the major components of soy sauce.

  17. Study on bending behaviour of nickel–titanium rotary endodontic instruments by analytical and numerical analyses

    PubMed Central

    Tsao, C C; Liou, J U; Wen, P H; Peng, C C; Liu, T S

    2013-01-01

    Aim To develop analytical models and analyse the stress distribution and flexibility of nickel–titanium (NiTi) instruments subject to bending forces. Methodology The analytical method was used to analyse the behaviour of NiTi instruments under bending forces. Two NiTi instruments (RaCe and Mani NRT) with different cross-sections and geometries were considered. Analytical results were derived using Euler–Bernoulli nonlinear differential equations that took into account the screw pitch variation of these NiTi instruments. In addition, nonlinear deformation analyses based on the analytical model and on nonlinear finite element analysis were carried out, and numerical results were obtained using the finite element method. Results According to the analytical results, the maximum curvature of the instrument occurs near the instrument tip. Results of the finite element analysis revealed that the position of maximum von Mises stress was near the instrument tip. Therefore, the proposed analytical model can be used to predict the position of maximum curvature in the instrument, where fracture may occur. Finally, the results of the analytical and numerical models were compatible. Conclusion The proposed analytical model was validated by numerical results in analysing the bending deformation of NiTi instruments and is useful in the design and analysis of instruments and in studying their flexibility. Compared with the finite element method, the analytical model deals conveniently and effectively with the bending behaviour of rotary NiTi endodontic instruments. PMID:23173762

  18. Brief summary of the evolution of high-temperature creep-fatigue life prediction models for crack initiation

    NASA Technical Reports Server (NTRS)

    Halford, Gary R.

    1993-01-01

    The evolution of high-temperature, creep-fatigue, life-prediction methods used for cyclic crack initiation is traced from inception in the late 1940's. The methods reviewed are material models as opposed to structural life prediction models. Material life models are used by both structural durability analysts and by material scientists. The latter use micromechanistic models as guidance to improve a material's crack initiation resistance. Nearly one hundred approaches and their variations have been proposed to date. This proliferation poses a problem in deciding which method is most appropriate for a given application. Approaches were identified as being combinations of thirteen different classifications. This review is intended to aid both developers and users of high-temperature fatigue life prediction methods by providing a background from which choices can be made. The need for high-temperature, fatigue-life prediction methods followed immediately on the heels of the development of large, costly, high-technology industrial and aerospace equipment immediately following the second world war. Major advances were made in the design and manufacture of high-temperature, high-pressure boilers and steam turbines, nuclear reactors, high-temperature forming dies, high-performance poppet valves, aeronautical gas turbine engines, reusable rocket engines, etc. These advances could no longer be accomplished simply by trial and error using the 'build-em and bust-em' approach. Development lead times were too great and costs too prohibitive to retain such an approach. Analytic assessments of anticipated performance, cost, and durability were introduced to cut costs and shorten lead times. The analytic tools were quite primitive at first and out of necessity evolved in parallel with hardware development. After forty years more descriptive, more accurate, and more efficient analytic tools are being developed. 
These include thermal-structural finite element and boundary element analyses, advanced constitutive stress-strain-temperature-time relations, and creep-fatigue-environmental models for crack initiation and propagation. The high-temperature durability methods that have evolved for calculating high-temperature fatigue crack initiation lives of structural engineering materials are addressed. Only a few of the methods were refined to the point of being directly useable in design. Recently, two of the methods were transcribed into computer software for use with personal computers.

  19. Brief summary of the evolution of high-temperature creep-fatigue life prediction models for crack initiation

    NASA Astrophysics Data System (ADS)

    Halford, Gary R.

    1993-10-01

    The evolution of high-temperature, creep-fatigue, life-prediction methods used for cyclic crack initiation is traced from inception in the late 1940's. The methods reviewed are material models as opposed to structural life prediction models. Material life models are used by both structural durability analysts and by material scientists. The latter use micromechanistic models as guidance to improve a material's crack initiation resistance. Nearly one hundred approaches and their variations have been proposed to date. This proliferation poses a problem in deciding which method is most appropriate for a given application. Approaches were identified as being combinations of thirteen different classifications. This review is intended to aid both developers and users of high-temperature fatigue life prediction methods by providing a background from which choices can be made. The need for high-temperature, fatigue-life prediction methods followed immediately on the heels of the development of large, costly, high-technology industrial and aerospace equipment immediately following the second world war. Major advances were made in the design and manufacture of high-temperature, high-pressure boilers and steam turbines, nuclear reactors, high-temperature forming dies, high-performance poppet valves, aeronautical gas turbine engines, reusable rocket engines, etc. These advances could no longer be accomplished simply by trial and error using the 'build-em and bust-em' approach. Development lead times were too great and costs too prohibitive to retain such an approach. Analytic assessments of anticipated performance, cost, and durability were introduced to cut costs and shorten lead times. The analytic tools were quite primitive at first and out of necessity evolved in parallel with hardware development. After forty years more descriptive, more accurate, and more efficient analytic tools are being developed. 
These include thermal-structural finite element and boundary element analyses, advanced constitutive stress-strain-temperature-time relations, and creep-fatigue-environmental models for crack initiation and propagation. The high-temperature durability methods that have evolved for calculating high-temperature fatigue crack initiation lives of structural engineering materials are addressed. Only a few of the methods were refined to the point of being directly useable in design.

  20. TH-A-19A-06: Site-Specific Comparison of Analytical and Monte Carlo Based Dose Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schuemann, J; Grassberger, C; Paganetti, H

    2014-06-15

    Purpose: To investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict dose distributions and to verify currently used uncertainty margins in proton therapy. Methods: Dose distributions predicted by an analytical pencil-beam algorithm were compared with Monte Carlo simulations (MCS) using TOPAS. 79 complete patient treatment plans were investigated for 7 disease sites (liver, prostate, breast, medulloblastoma spine and whole brain, lung, and head and neck). A total of 508 individual passively scattered treatment fields were analyzed for field-specific properties. Comparisons based on target coverage indices (EUD, D95, D90 and D50) were performed. Range differences were estimated for the distal position of the 90% dose level (R90) and the 50% dose level (R50). Two-dimensional distal dose surfaces were calculated, and the root mean square differences (RMSD), average range difference (ARD), and average distal dose degradation (ADD), the distance between the distal positions of the 80% and 20% dose levels (R80-R20), were analyzed. Results: We found target coverage indices calculated by TOPAS to generally be around 1–2% lower than predicted by the analytical algorithm. Differences in R90 predicted by TOPAS and the planning system can be larger than currently applied range margins in proton therapy for small regions distal to the target volume. We estimate new site-specific range margins (R90) for analytical dose calculations considering total range uncertainties and uncertainties from dose calculation alone based on the RMSD. Our results demonstrate that a reduction of currently used uncertainty margins is feasible for liver, prostate and whole brain fields even without introducing MC dose calculations. Conclusion: Analytical dose calculation algorithms predict dose distributions within clinical limits for more homogeneous patient sites (liver, prostate, whole brain). 
However, we recommend treatment plan verification using Monte Carlo simulations for patients with complex geometries.
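For illustration, the R90 metric (the distal depth at which a depth-dose curve falls to 90% of its maximum) and the range difference between two curves can be computed as below. The depth-dose profiles are crude synthetic stand-ins, not clinical dose distributions:

```python
# Sketch: R90 range metric and the range difference between an "analytical"
# and a "Monte Carlo" depth-dose curve. Both curves are synthetic toys.
import numpy as np

z = np.linspace(0, 20, 2001)                       # depth in water, cm

def depth_dose(z, r):
    """Toy depth-dose: flat plateau out to range r, then linear distal falloff."""
    return np.where(z < r, 1.0, np.clip(1.0 - (z - r) / 0.6, 0.0, None))

def r90(z, d):
    """Distal depth where the dose last reaches 90% of its maximum (R90)."""
    return z[np.where(d >= 0.9 * d.max())[0][-1]]

d_analytic = depth_dose(z, 15.0)                   # "analytical" curve (synthetic)
d_mc = depth_dose(z, 14.8)                         # "Monte Carlo" curve, 2 mm shorter
delta_r90 = r90(z, d_analytic) - r90(z, d_mc)
print(f"range difference at R90 = {delta_r90 * 10:.1f} mm")
```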

  1. Brownian systems with spatially inhomogeneous activity

    NASA Astrophysics Data System (ADS)

    Sharma, A.; Brader, J. M.

    2017-09-01

    We generalize the Green-Kubo approach, previously applied to bulk systems of spherically symmetric active particles [J. Chem. Phys. 145, 161101 (2016), 10.1063/1.4966153], to include spatially inhomogeneous activity. The method is applied to predict the spatial dependence of the average orientation per particle and the density. The average orientation is given by an integral over the self part of the Van Hove function, and a simple Gaussian approximation to this quantity yields an accurate analytical expression. Taking this analytical result as input, a dynamic density functional theory reproduces the spatial dependence of the density in good agreement with simulation data. All theoretical predictions are validated using Brownian dynamics simulations.

  2. Big Data and Predictive Analytics: Applications in the Care of Children.

    PubMed

    Suresh, Srinivasan

    2016-04-01

    Emerging changes in the United States' healthcare delivery model have led to renewed interest in data-driven methods for managing quality of care. Analytics (data plus information) plays a key role in predictive risk assessment, clinical decision support, and various patient throughput measures. This article reviews the application of a pediatric risk score, which is integrated into our hospital's electronic medical record and provides an early warning sign for clinical deterioration. Dashboards, which are part of disease management systems, are a vital tool in peer benchmarking and can help reduce unnecessary variations in care. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Large deformation of uniaxially loaded slender microbeams on the basis of modified couple stress theory: Analytical solution and Galerkin-based method

    NASA Astrophysics Data System (ADS)

    Kiani, Keivan

    2017-09-01

    Large deformation regime of micro-scale slender beam-like structures subjected to axially pointed loads is of high interest to nanotechnologists and the applied mechanics community. Herein, size-dependent nonlinear governing equations are derived by employing modified couple stress theory. Under various boundary conditions, analytical relations between axially applied loads and deformations are presented. Additionally, a novel Galerkin-based assumed mode method (AMM) is established to solve the highly nonlinear equations. In some particular cases, the results predicted by the analytical approach are also checked against those of the AMM, and reasonably good agreement is reported. Subsequently, the key role of the material length scale in the load-deformation behavior of microbeams is discussed, and the deficiencies of classical elasticity theory in predicting such a crucial mechanical behavior are explained in some detail. The influences of slenderness ratio and thickness of the microbeam on the obtained results are also examined. The present work could be considered a pivotal step toward better understanding the postbuckling behavior of nano-/micro-electro-mechanical systems consisting of microbeams.

  4. QSPR studies on the photoinduced-fluorescence behaviour of pharmaceuticals and pesticides.

    PubMed

    López-Malo, D; Bueso-Bordils, J I; Duart, M J; Alemán-López, P A; Martín-Algarra, R V; Antón-Fos, G M; Lahuerta-Zamora, L; Martínez-Calatayud, J

    2017-07-01

    Fluorimetric analysis is still a growing line of research in the determination of a wide range of organic compounds, including pharmaceuticals and pesticides. This makes necessary the development of new strategies aimed at improving the performance of fluorescence determinations as well as the sensitivity and, especially, the selectivity of newly developed analytical methods. This paper presents applications of a useful and growing tool for fostering and improving research in the analytical field. Experimental screening, molecular connectivity and discriminant analysis are applied to organic compounds to predict their fluorescent behaviour after photodegradation by UV irradiation in a continuous-flow manifold (multicommutation flow assembly). The screening was based on online fluorimetric measurement and comprised pre-selected compounds with different molecular structures (pharmaceuticals and some pesticides with known 'native' fluorescent behaviour) to study changes in their fluorescent behaviour after UV irradiation. Theoretical predictions agree with the results of the experimental screening and could be used to develop selective analytical methods, as well as helping to reduce the need for expensive, time-consuming, trial-and-error screening procedures.
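A hedged sketch of the discriminant-analysis step: a linear discriminant trained on numeric molecular descriptors to separate fluorescent from non-fluorescent compounds. The study uses molecular connectivity indices; the descriptor matrix and labels below are synthetic stand-ins.

```python
# Sketch: linear discriminant analysis classifying "fluorescent" vs.
# "non-fluorescent" compounds from numeric descriptors. Data are synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)
n, p = 200, 6                                  # "compounds" x "connectivity indices"
X = rng.normal(size=(n, p))
# Synthetic rule: class depends on a linear combination of two descriptors
y = (X[:, 0] + 0.8 * X[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(int)

lda = LinearDiscriminantAnalysis().fit(X, y)
acc = lda.score(X, y)                          # accuracy of the fitted discriminant
print(f"discriminant accuracy = {acc:.2f}")
```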

  5. Analytical and Experimental Vibration Analysis of a Faulty Gear System.

    DTIC Science & Technology

    1994-10-01

    The Wigner-Ville Distribution (WVD) was used to give a comprehensive comparison of the predicted and ... experimental results. The WVD method applied to the experimental results was also compared to other fault detection techniques to verify the WVD's ability to ... of the damaged test gear and the predicted vibration from the model with simulated gear tooth pitting damage. Results also verified that the WVD method can successfully detect and locate gear tooth wear and pitting damage.

  6. Accelerated characterization of graphite/epoxy composites

    NASA Technical Reports Server (NTRS)

    Griffith, W. I.; Morris, D. H.; Brinson, H. F.

    1980-01-01

    A method to predict the long term compliance of unidirectional off-axis laminates from short term laboratory tests is presented. The method uses an orthotropic transformation equation and the time-stress-temperature superposition principle. Short term tests are used to construct master curves for two off-axis unidirectional laminates with fiber angles of 10 and 90 degrees. Analytical predictions of long term compliance for 30 and 60 degrees laminates are made. Comparisons with experimental data are also given.

  7. Rotor/Wing Interactions in Hover

    NASA Technical Reports Server (NTRS)

    Young, Larry A.; Derby, Michael R.

    2002-01-01

    Hover predictions of tiltrotor aircraft are hampered by the lack of accurate and computationally efficient models for rotor/wing interactional aerodynamics. This paper summarizes the development of an approximate, potential flow solution for the rotor-on-rotor and wing-on-rotor interactions. This analysis is based on actuator disk and vortex theory and the method of images. The analysis is applicable for out-of-ground-effect predictions. The analysis is particularly suited for aircraft preliminary design studies. Flow field predictions from this simple analytical model are validated against experimental data from previous studies. The paper concludes with an analytical assessment of the influence of rotor-on-rotor and wing-on-rotor interactions. This assessment examines the effect of rotor-to-wing offset distance, wing sweep, wing span, and flaperon incidence angle on tiltrotor inflow and performance.

  8. A method for calculating strut and splitter plate noise in exit ducts: Theory and verification

    NASA Technical Reports Server (NTRS)

    Fink, M. R.

    1978-01-01

    Portions of a four-year analytical and experimental investigation of noise radiation from engine internal components in turbulent flow are summarized. Spectra measured for such airfoils over a range of chord, thickness ratio, flow velocity, and turbulence level were compared with predictions made by an available rigorous thin-airfoil analytical method. This analysis included the effects of flow compressibility and source noncompactness, and generally good agreement was obtained. This noise calculation method for isolated airfoils in turbulent flow was combined with a method for calculating the transmission of sound through a subsonic exit duct and with an empirical far-field directivity shape. These three elements were checked separately and were individually shown to give close agreement with data. The combination provides a method for predicting engine internally generated, aft-radiated noise from radial struts, stators, and annular splitter rings. Calculated sound power spectra, directivity, and acoustic pressure spectra were compared with the best available data, which were for noise caused by a fan exit duct annular splitter ring, larger-chord stator blades, and turbine exit struts.

  9. Optimization of classification and regression analysis of four monoclonal antibodies from Raman spectra using collaborative machine learning approach.

    PubMed

    Le, Laetitia Minh Maï; Kégl, Balázs; Gramfort, Alexandre; Marini, Camille; Nguyen, David; Cherti, Mehdi; Tfaili, Sana; Tfayli, Ali; Baillet-Guffroy, Arlette; Prognon, Patrice; Chaminade, Pierre; Caudron, Eric

    2018-07-01

    The use of monoclonal antibodies (mAbs) constitutes one of the most important strategies for treating patients suffering from cancers such as hematological malignancies and solid tumors. These antibodies are prescribed by the physician and prepared by hospital pharmacists. Analytical control enables the quality of the preparations to be ensured. The aim of this study was to explore the development of a rapid analytical method for quality control. The method used four mAbs (Infliximab, Bevacizumab, Rituximab and Ramucirumab) at various concentrations and was based on recording Raman data and coupling them to traditional chemometric and machine learning approaches for data analysis. Compared to a conventional linear approach, prediction errors were reduced with a data-driven approach using statistical machine learning methods, in which preprocessing and predictive models are jointly optimized. An additional original aspect of the work involved submitting the problem to a collaborative data challenge platform called Rapid Analytics and Model Prototyping (RAMP), which allowed the use of solutions from about 300 data scientists in collaborative work. Using machine learning, the prediction of the four mAbs samples was considerably improved. The best predictive model showed a combined error of 2.4% versus 14.6% for the linear approach. The concentration and classification errors were 5.8% and 0.7%, respectively; only three of the 429 spectra in the test set were misclassified. This large improvement obtained with machine learning techniques was uniform across all molecules but maximal for Bevacizumab, with an 88.3% reduction in combined error (2.1% versus 17.9%). Copyright © 2018 Elsevier B.V. All rights reserved.
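As a hedged illustration of the kind of approach the abstract describes (preprocessing and a predictive model optimized jointly), the sketch below fits a scaling + PCA + ridge pipeline to synthetic spectra. The data, peak shape, and hyperparameter grid are hypothetical and are not taken from the study:

```python
# Minimal sketch of a jointly optimized preprocessing + regression pipeline
# for predicting analyte concentration from spectra. Synthetic Gaussian-peak
# data stand in for Raman measurements; the study's models are not reproduced.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 200, 300
conc = rng.uniform(1.0, 25.0, n_samples)            # mg/mL, hypothetical
peak = np.exp(-0.5 * ((np.arange(n_wavenumbers) - 150) / 10) ** 2)
X = conc[:, None] * peak + rng.normal(0, 0.5, (n_samples, n_wavenumbers))

pipe = Pipeline([("scale", StandardScaler()),
                 ("pca", PCA(n_components=10)),
                 ("reg", Ridge())])
# preprocessing and model hyperparameters are searched together
search = GridSearchCV(pipe, {"pca__n_components": [5, 10, 20],
                             "reg__alpha": [0.1, 1.0, 10.0]}, cv=5)
search.fit(X, conc)
pred = search.predict(X)
rel_err = float(np.mean(np.abs(pred - conc) / conc))
print(f"mean relative error: {rel_err:.3f}")
```

The same pipeline-plus-grid-search pattern extends to classification by swapping the final estimator.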

  10. Analysis of variation matrix array by bilinear least squares-residual bilinearization (BLLS-RBL) for resolving and quantifying of foodstuff dyes in a candy sample.

    PubMed

    Asadpour-Zeynali, Karim; Maryam Sajjadi, S; Taherzadeh, Fatemeh; Rahmanian, Reza

    2014-04-05

    The bilinear least squares (BLLS) method is one of the most suitable algorithms for second-order calibration. The original BLLS method is not applicable to second-order pH-spectral data when an analyte has more than one spectroscopically active species. Bilinear least squares-residual bilinearization (BLLS-RBL) was developed to achieve the second-order advantage for the analysis of complex mixtures. Although the modified method is useful, the pure profiles cannot be obtained; only linear combinations are. Moreover, when predicting the analyte in an unknown sample, the original RBL algorithm may diverge instead of converging to the desired analyte concentrations, so the Gauss-Newton RBL algorithm must be used, which is not as simple as the original protocol. Also, the analyte concentration can be predicted on the basis of each of the equilibrating species of the component of interest, and these predictions are not exactly the same. The aim of the present work is to tackle the non-uniqueness problem in the second-order calibration of monoprotic acid mixtures and the divergence of RBL. Each pH-absorbance matrix was pretreated by subtracting the first spectrum from the other spectra in the data set to produce a full-rank array called the variation matrix. The variation matrices were then analyzed uniquely by the original BLLS-RBL, which is more parsimonious than its modified counterpart. The proposed method was applied to simulated data as well as to the analysis of real data. Sunset yellow and Carmosine, as monoprotic acids, were determined in a candy sample in the presence of unknown interference by this method. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. Predictive analytics and child protection: constraints and opportunities.

    PubMed

    Russell, Jesse

    2015-08-01

    This paper considers how predictive analytics might inform, assist, and improve decision making in child protection. Predictive analytics represents recent increases in data quantity and data diversity, along with advances in computing technology. While the use of data and statistical modeling is not new to child protection decision making, its use in child protection is experiencing growth, and efforts to leverage predictive analytics for better decision-making in child protection are increasing. Past experiences, constraints and opportunities are reviewed. For predictive analytics to make the most impact on child protection practice and outcomes, it must embrace established criteria of validity, equity, reliability, and usefulness. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Prediction-Correction Algorithms for Time-Varying Constrained Optimization

    DOE PAGES

    Simonetto, Andrea; Dall'Anese, Emiliano

    2017-07-26

    This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
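The prediction-correction idea can be sketched on a toy unconstrained problem. This is an illustration of the general scheme, not the authors' constrained first-order algorithm: a linear extrapolation predicts where the moving minimizer will be, and a gradient step then corrects:

```python
# Toy prediction-correction tracking of a time-varying minimizer.
# The target x*(t) = [cos t, sin t] drifts along a circle; extrapolating from
# the last two iterates (prediction) before taking a gradient step (correction)
# tracks it better than the gradient correction step alone.
import numpy as np

def grad(x, t):
    # gradient of f(x; t) = 0.5 * ||x - x*(t)||^2
    return x - np.array([np.cos(t), np.sin(t)])

h, alpha, steps = 0.1, 0.5, 200
xc = np.zeros(2)                      # correction-only iterate
xp_prev = xp = np.zeros(2)            # prediction-correction iterates
err_c = err_p = 0.0
for k in range(1, steps + 1):
    t = k * h
    # correction only: one gradient step at the new time
    xc = xc - alpha * grad(xc, t)
    # prediction: linear extrapolation from the last two iterates
    pred = xp + (xp - xp_prev)
    xp_prev, xp = xp, pred - alpha * grad(pred, t)
    target = np.array([np.cos(t), np.sin(t)])
    if k > steps // 2:                # accumulate steady-state tracking error
        err_c += np.linalg.norm(xc - target)
        err_p += np.linalg.norm(xp - target)
print(f"correction-only: {err_c:.3f}, prediction-correction: {err_p:.3f}")
```

For a target drifting linearly in time this extrapolation-based predictor has zero steady-state lag, whereas correction alone lags by an amount proportional to the drift speed.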

  13. A new experimental method for the accelerated characterization of composite materials

    NASA Technical Reports Server (NTRS)

    Brinson, H. F.; Morris, D. H.; Yeow, Y. T.

    1978-01-01

    A method which permits the prediction of long-term properties of graphite/epoxy laminates on the basis of short-term (15 min) laboratory tests is described. Demonstration of delayed viscoelastic fracture in one laminate configuration, and data on the time and temperature response of a matrix-dominated unidirectional laminate contributed to a characterization of the viscoelastic process in the graphite/epoxy composites. Master curves from short-term tests of certain laminate configurations can be employed to generate long-term master curves. In addition, analytical predictions from short-term results can be used to predict long-term (25-hour) laminate properties.

  14. Net analyte signal standard addition method for simultaneous determination of sulphadiazine and trimethoprim in bovine milk and veterinary medicines.

    PubMed

    Hajian, Reza; Mousavi, Esmat; Shams, Nafiseh

    2013-06-01

    The net analyte signal standard addition method was used for the simultaneous spectrophotometric determination of sulphadiazine and trimethoprim in bovine milk and veterinary medicines. The method combines the advantages of the standard addition method with the net analyte signal concept, which enables the extraction of information concerning a certain analyte from spectra of multi-component mixtures. The method has several advantages: it uses the full spectrum, it does not require separate calibration and prediction steps, and only a few measurements are required for the determination. Cloud point extraction, based on the phenomenon of solubilisation, was used for the extraction of sulphadiazine and trimethoprim from bovine milk; it relies on the induction of micellar organised media using Triton X-100 as the extraction solvent. At the optimum conditions, the norm of the NAS vectors increased linearly with concentration in the range of 1.0-150.0 μmolL(-1) for both sulphadiazine and trimethoprim. The limits of detection (LOD) for sulphadiazine and trimethoprim were 0.86 and 0.92 μmolL(-1), respectively. Copyright © 2012 Elsevier Ltd. All rights reserved.
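The net analyte signal concept lends itself to a short numerical sketch: projecting a mixture spectrum onto the orthogonal complement of the interferent subspace leaves a residual whose norm grows linearly with the analyte's concentration, consistent with the linear NAS norms reported above. The spectra here are synthetic Gaussians, not the paper's data:

```python
# Sketch of the net analyte signal (NAS): the part of a mixture spectrum
# orthogonal to the interferent space scales linearly with analyte level.
import numpy as np

wl = np.linspace(0, 1, 100)
s_analyte = np.exp(-0.5 * ((wl - 0.3) / 0.05) ** 2)   # pure analyte spectrum
s_interf = np.exp(-0.5 * ((wl - 0.6) / 0.08) ** 2)    # pure interferent spectrum

# projector onto the orthogonal complement of the interferent space
S = s_interf[:, None]
P_orth = np.eye(len(wl)) - S @ np.linalg.pinv(S)

norms = []
for c in [1.0, 2.0, 3.0]:                             # analyte concentrations
    mixture = c * s_analyte + 0.7 * s_interf
    nas = P_orth @ mixture                            # net analyte signal
    norms.append(np.linalg.norm(nas))

ratios = [n / norms[0] for n in norms]
print(ratios)   # ≈ [1.0, 2.0, 3.0]: the NAS norm is linear in concentration
```

The interferent contribution is annihilated exactly by the projector, which is why the norm ratios track the concentration ratios.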

  15. Thermodynamics of reformulated automotive fuels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zudkevitch, D.; Murthy, A.K.S.; Gmehling, J.

    1995-06-01

    Two methods for predicting the Reid vapor pressure (Rvp) and initial vapor emissions of reformulated gasoline blends that contain one or more oxygenated compounds show excellent agreement with experimental data. In the first method, method A, D-86 distillation data for gasoline blends are used to predict Rvp from a simulation of the mini dry vapor pressure equivalent (Dvpe) experiment. The other method, method B, relies on analytical information (PIANO analyses) for the base gasoline and uses classical thermodynamics to simulate the same Rvp equivalent (Rvpe) mini experiment. Method B also predicts the composition and other properties of the fuel's initial vapor emission. Method B, although complex, is more useful in that it can predict properties of blends without a D-86 distillation. An important aspect of method B is its capability to predict the composition of initial vapor emissions from gasoline blends; it thus offers a powerful tool to planners of gasoline blending. Method B uses theoretically sound formulas and rigorous thermodynamic routines, together with data and correlations of physical properties that are in the public domain. Results indicate that predictions made with both methods agree very well with experimental values of Dvpe. The computer simulation methods were programmed and tested.

  16. Quantitative prediction of solute strengthening in aluminium alloys.

    PubMed

    Leyson, Gerard Paul M; Curtin, William A; Hector, Louis G; Woodward, Christopher F

    2010-09-01

    Despite significant advances in computational materials science, a quantitative, parameter-free prediction of the mechanical properties of alloys has been difficult to achieve from first principles. Here, we present a new analytic theory that, with input from first-principles calculations, is able to predict the strengthening of aluminium by substitutional solute atoms. Solute-dislocation interaction energies in and around the dislocation core are first calculated using density functional theory and a flexible-boundary-condition method. An analytic model for the strength, or stress to move a dislocation, owing to the random field of solutes, is then presented. The theory, which has no adjustable parameters and is extendable to other metallic alloys, predicts both the energy barriers to dislocation motion and the zero-temperature flow stress, allowing for predictions of finite-temperature flow stresses. Quantitative comparisons with experimental flow stresses at temperature T=78 K are made for Al-X alloys (X=Mg, Si, Cu, Cr) and good agreement is obtained.

  17. Prediction of retention times in comprehensive two-dimensional gas chromatography using thermodynamic models.

    PubMed

    McGinitie, Teague M; Harynuk, James J

    2012-09-14

    A method was developed to accurately predict both the primary and secondary retention times for a series of alkanes, ketones and alcohols in a flow-modulated GC×GC system. This was accomplished through the use of a three-parameter thermodynamic model where ΔH, ΔS, and ΔC(p) for an analyte's interaction with the stationary phases in both dimensions are known. Coupling this thermodynamic model with a time summation calculation it was possible to accurately predict both (1)t(r) and (2)t(r) for all analytes. The model was able to predict retention times regardless of the temperature ramp used, with an average error of only 0.64% for (1)t(r) and an average error of only 2.22% for (2)t(r). The model shows promise for the accurate prediction of retention times in GC×GC for a wide range of compounds and is able to utilize data collected from 1D experiments. Copyright © 2012 Elsevier B.V. All rights reserved.
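A minimal sketch of such a three-parameter thermodynamic model follows. All parameter values, the phase ratio, and the constant hold-up time are illustrative assumptions, not the paper's fitted values: temperature-dependent ΔH and ΔS give the retention factor, and the elution time under a ramp follows from integrating the migration rate:

```python
# Toy three-parameter (dH, dS, dCp) thermodynamic retention model: the
# distribution constant sets the retention factor k(T), and the retention
# time under a temperature ramp comes from integrating the fraction of the
# column traversed per unit time, 1 / (t_M * (1 + k)).
import numpy as np

R = 8.314            # J/(mol K)
beta = 250.0         # phase ratio (assumed)
t_M = 60.0           # hold-up time, s (assumed constant over the ramp)

def k_factor(T, dH, dS, dCp, T_ref=373.15):
    dH_T = dH + dCp * (T - T_ref)                 # J/mol
    dS_T = dS + dCp * np.log(T / T_ref)           # J/(mol K)
    return np.exp(-(dH_T - T * dS_T) / (R * T)) / beta

def retention_time(dH, dS, dCp, T0=313.15, ramp=0.1, dt=0.01):
    t, x = 0.0, 0.0                               # time, column fraction
    while x < 1.0:
        T = T0 + ramp * t
        x += dt / (t_M * (1.0 + k_factor(T, dH, dS, dCp)))
        t += dt
    return t

t_r = retention_time(dH=-40e3, dS=-60.0, dCp=-50.0)
print(f"predicted retention time: {t_r:.1f} s")
```

The same integration works for any programmed ramp, which is why a single set of thermodynamic parameters can predict retention times across different temperature programs.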

  18. TU-F-17A-03: An Analytical Respiratory Perturbation Model for Lung Motion Prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, G; Yuan, A; Wei, J

    2014-06-15

    Purpose: Breathing irregularity is common, causing unreliable prediction of tumor motion for correlation-based surrogates. Both tidal volume (TV) and breathing pattern (BP=ΔVthorax/TV, where TV=ΔVthorax+ΔVabdomen) affect lung motion in the anterior-posterior and superior-inferior directions. We developed a novel respiratory motion perturbation (RMP) model in analytical form to account for changes in TV and BP in motion prediction from simulation to treatment. Methods: The RMP model is an analytical function of patient-specific anatomic and physiologic parameters. It contains a base-motion trajectory d(x,y,z) derived from a 4-dimensional computed tomography (4DCT) at simulation and a perturbation term Δd(ΔTV,ΔBP) accounting for deviation at treatment from simulation. The perturbation is dependent on tumor-specific location and patient-specific anatomy. Eleven patients with simulation and treatment 4DCT images were used to assess the RMP method in motion prediction from 4DCT1 to 4DCT2, and vice versa. For each patient, ten motion trajectories of corresponding points in the lower lobes were measured in both 4DCTs: one served as the base-motion trajectory and the other as the ground truth for comparison. In total, 220 motion trajectory predictions were assessed. The motion discrepancy between the two 4DCTs for each patient served as a control. An established 5D motion model was used for comparison. Results: The average absolute error of the RMP model prediction in the superior-inferior direction is 1.6±1.8 mm, similar to the 1.7±1.6 mm of the 5D model (p=0.98). Some uncertainty is associated with the limited spatial resolution (2.5 mm slice thickness) and temporal resolution (10 phases). The non-corrected motion discrepancy between the two 4DCTs is 2.6±2.7 mm, with a maximum of ±20 mm, so correction is necessary (p=0.01). Conclusion: The analytical motion model predicts lung motion with accuracy similar to the 5D model. The analytical model is based on physical relationships, requires no training, and is therefore potentially more resilient to breathing irregularities. On-going investigation introduces airflow into the RMP model for improvement. This research is in part supported by NIH (U54CA137788/132378). AY would like to thank the MSKCC summer medical student research program supported by the National Cancer Institute and hosted by the Department of Medical Physics at MSKCC.

  19. Advanced quantitative magnetic nondestructive evaluation methods - Theory and experiment

    NASA Technical Reports Server (NTRS)

    Barton, J. R.; Kusenberger, F. N.; Beissner, R. E.; Matzkanin, G. A.

    1979-01-01

    The paper reviews the scale of fatigue crack phenomena in relation to the size-detection capabilities of nondestructive evaluation methods. An assessment of several features of fatigue in relation to the inspection of ball and roller bearings suggested the use of magnetic methods; magnetic domain phenomena, including the interaction of domains and inclusions and the influence of stress and magnetic field on domains, are discussed. Experimental results indicate that simplified calculations can be used to predict many of their features, although the predictions of analytic models based on finite-element computer analysis do not agree with respect to certain features. Experimental analyses of rod-type fatigue specimens, which relate magnetic measurements to crack opening displacement, crack volume, and crack depth, should provide improved methods for crack characterization in support of fracture mechanics and life prediction.

  20. Handbook of Analytical Methods for Textile Composites

    NASA Technical Reports Server (NTRS)

    Cox, Brian N.; Flanagan, Gerry

    1997-01-01

    The purpose of this handbook is to introduce models and computer codes for predicting the properties of textile composites. The handbook includes several models for predicting the stress-strain response all the way to ultimate failure; methods for assessing work of fracture and notch sensitivity; and design rules for avoiding certain critical mechanisms of failure, such as delamination, by proper textile design. The following textiles received some treatment: 2D woven, braided, and knitted/stitched laminates and 3D interlock weaves, and braids.

  1. Accelerated characterization of graphite/epoxy composites

    NASA Technical Reports Server (NTRS)

    Griffith, W. I.; Morris, D. H.; Brinson, H. F.

    1980-01-01

    A method to predict the long-term compliance of unidirectional off-axis laminates from short-term laboratory tests is presented. The method uses an orthotropic transformation equation and the time-stress-temperature superposition principle. Short-term tests are used to construct master curves for two off-axis unidirectional laminates with fiber angles of 10 deg and 90 deg. In addition, analytical predictions of long-term compliance for 30 deg and 60 deg laminates are made. Comparisons with experimental data are also given.
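The master-curve construction described above can be sketched numerically: short-term compliance curves measured at several temperatures collapse onto one long-term master curve after horizontal shifts in log time. The power-law compliance and the shift factors below are illustrative assumptions, not the paper's measured values:

```python
# Sketch of time-temperature superposition: curves from a short test window
# at several temperatures are shifted in log time by an assumed factor a_T
# and collapse onto a single master curve spanning many decades.
import numpy as np

D0, n = 1.0, 0.2                       # power-law compliance parameters (assumed)
shift = {25: 0.0, 50: 2.0, 75: 4.0}    # log10 a_T relative to 25 C (assumed)

t_short = np.logspace(0, 3, 50)        # short-term test window, s
master_t, master_D = [], []
for T, log_aT in shift.items():
    # a curve measured at temperature T behaves like reduced time t * a_T
    D = D0 * (t_short * 10.0 ** log_aT) ** n
    master_t.append(t_short * 10.0 ** log_aT)   # shift onto reference timescale
    master_D.append(D)

master_t = np.concatenate(master_t)
master_D = np.concatenate(master_D)
span_decades = np.log10(master_t.max() / master_t.min())
# the shifted data should obey one power law: log D = log D0 + n log t
resid = np.log10(master_D) - (np.log10(D0) + n * np.log10(master_t))
print(f"master curve spans {span_decades:.0f} decades, "
      f"max residual {np.abs(resid).max():.2e}")
```

In practice the shift factors are found by sliding measured curves until they overlap, rather than assumed as here; the collapse then extends short-term data to long-term predictions.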

  2. Analytical Methods of Decoupling the Automotive Engine Torque Roll Axis

    NASA Astrophysics Data System (ADS)

    JEONG, TAESEOK; SINGH, RAJENDRA

    2000-06-01

    This paper analytically examines the multi-dimensional mounting schemes of an automotive engine-gearbox system when excited by oscillating torques. In particular, the issue of torque roll axis decoupling is analyzed in significant detail since it is poorly understood. New dynamic decoupling axioms are presented and compared with the conventional elastic axis mounting and focalization methods. A linear time-invariant system is assumed, along with proportional damping. Only rigid-body modes of the powertrain are considered and the chassis elements are assumed to be rigid. Several simplified physical systems are considered and new closed-form solutions for symmetric and asymmetric engine-mounting systems are developed. These clearly explain the design concepts for the 4-point mounting scheme. Our analytical solutions match the existing design formulations that are only applicable to symmetric geometries. Spectra for all six rigid-body motions are predicted using the alternate decoupling methods and the closed-form solutions are verified. Also, our method is validated by comparing modal solutions with prior experimental and analytical studies. Parametric design studies are carried out to illustrate the methodology. Chief contributions of this research include the development of new or refined analytical models and closed-form solutions along with improved design strategies for torque roll axis decoupling.

  3. Application of Dynamic Analysis in Semi-Analytical Finite Element Method.

    PubMed

    Liu, Pengfei; Xing, Qinyan; Wang, Dawei; Oeser, Markus

    2017-08-30

    Analysis of dynamic responses is important for the design, maintenance and rehabilitation of asphalt pavement. In order to evaluate the dynamic responses of asphalt pavement under moving loads, a specific computational program, SAFEM, was developed based on a semi-analytical finite element method. This method is three-dimensional but only requires a two-dimensional FE discretization, by incorporating Fourier series in the third dimension. In this paper, the algorithm for applying dynamic analysis in SAFEM is introduced in detail. Asphalt pavement models under moving loads were built in SAFEM and in the commercial finite element software ABAQUS to verify the accuracy and efficiency of SAFEM. The verification shows that the computational accuracy of SAFEM is sufficiently high and that its computational time is much shorter than that of ABAQUS. Moreover, experimental verification was carried out, and the predictions derived from SAFEM are consistent with the measurements. Therefore, SAFEM can reliably predict the dynamic response of asphalt pavement under moving loads, which is beneficial to road administrations in assessing pavement condition.

  4. Comparison of the acetyl bromide spectrophotometric method with other analytical lignin methods for determining lignin concentration in forage samples.

    PubMed

    Fukushima, Romualdo S; Hatfield, Ronald D

    2004-06-16

    Present analytical methods to quantify lignin in herbaceous plants are not totally satisfactory. A spectrophotometric method, acetyl bromide soluble lignin (ABSL), has been employed to determine lignin concentration in a range of plant materials. In this work, lignin extracted with acidic dioxane was used to develop standard curves and to calculate the derived linear regression equation (slope equals absorptivity value or extinction coefficient) for determining the lignin concentration of respective cell wall samples. This procedure yielded lignin values that were different from those obtained with Klason lignin, acid detergent acid insoluble lignin, or permanganate lignin procedures. Correlations with in vitro dry matter or cell wall digestibility of samples were highest with data from the spectrophotometric technique. The ABSL method employing as standard lignin extracted with acidic dioxane has the potential to be employed as an analytical method to determine lignin concentration in a range of forage materials. It may be useful in developing a quick and easy method to predict in vitro digestibility on the basis of the total lignin content of a sample.
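The calibration step of an ABSL-type method reduces to a Beer-Lambert linear regression. The sketch below uses invented standard concentrations and absorbances (not the paper's data) to derive an extinction coefficient and apply it to a sample:

```python
# Sketch of an ABSL-style calibration: absorbances of lignin standards define
# a line through the origin whose slope is the extinction coefficient, which
# then converts a sample absorbance into a lignin concentration.
# All numbers here are illustrative, not measured values.
import numpy as np

conc = np.array([0.1, 0.2, 0.4, 0.8])            # g/L lignin standards (assumed)
absorbance = np.array([0.05, 0.11, 0.21, 0.42])  # corresponding A (assumed)

# least-squares slope through the origin: epsilon = sum(A*c) / sum(c^2)
epsilon = float(absorbance @ conc) / float(conc @ conc)

A_sample = 0.30                                  # hypothetical sample absorbance
c_sample = A_sample / epsilon
print(f"extinction coefficient {epsilon:.3f} L/g, "
      f"estimated lignin {c_sample:.2f} g/L")
```

The abstract's point is that this slope depends on the lignin used as the standard, which is why standards extracted with acidic dioxane gave different values from Klason or permanganate lignin.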

  5. Advances in spectroscopic methods for quantifying soil carbon

    USGS Publications Warehouse

    Reeves, James B.; McCarty, Gregory W.; Calderon, Francisco; Hively, W. Dean

    2012-01-01

    The current gold standard for soil carbon (C) determination is elemental C analysis using dry combustion. However, this method requires expensive consumables, is limited by the number of samples that can be processed (~100/d), and is restricted to the determination of total carbon. With increased interest in soil C sequestration, faster methods of analysis are needed, and there is growing interest in methods based on diffuse reflectance spectroscopy in the visible, near-infrared or mid-infrared spectral ranges. These spectral methods can decrease analytical requirements and speed sample processing, be applied to large landscape areas using remote sensing imagery, and be used to predict multiple analytes simultaneously. However, the methods require localized calibrations to establish the relationship between spectral data and reference analytical data, and also have additional, specific problems. For example, remote sensing is capable of scanning entire watersheds for soil carbon content but is limited to the surface layer of tilled soils and may require difficult and extensive field sampling to obtain proper localized calibration reference values. The objective of this chapter is to discuss the present state of spectroscopic methods for determination of soil carbon.

  6. Blade loss transient dynamics analysis, volume 2. Task 2: Theoretical and analytical development. Task 3: Experimental verification

    NASA Technical Reports Server (NTRS)

    Gallardo, V. C.; Storace, A. S.; Gaffney, E. F.; Bach, L. J.; Stallone, M. J.

    1981-01-01

    The component element method was used to develop a transient dynamic analysis computer program that is essentially based on modal synthesis combined with a central finite-difference numerical integration scheme. The methodology leads to a modular, building-block technique that is amenable to computer programming. To verify the analytical method, the turbine engine transient response analysis (TETRA) was applied to two blade-out test vehicles that had been previously instrumented and tested. Comparison of the time-dependent test data with those predicted by TETRA led to recommendations for refinement or extension of the analytical method to improve its accuracy and overcome its shortcomings. The development of the working equations, their discretization, the numerical solution scheme, the modular concept of engine modelling, the program's logical structure, and some illustrative results are discussed. The blade-loss test vehicles (rig and full engine), the type of measured data, and the engine structural model are described.

  7. Calculated and measured fields in superferric wiggler magnets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blum, E.B.; Solomon, L.

    1995-02-01

    Although Klaus Halbach is widely known and appreciated as the originator of the computer program POISSON for electromagnetic field calculation, Klaus has always believed that analytical methods can give much more insight into the performance of a magnet than numerical simulation. Analytical approximations readily show how the different aspects of a magnet's design, such as pole dimensions, current, and coil configuration, contribute to the performance. These methods yield accuracies of better than 10%. Analytical methods should therefore be used when conceptualizing a magnet design; computer analysis can then be used for refinement. A simple model is presented for the peak on-axis field of an electromagnetic wiggler with iron poles and superconducting coils. The model is applied to the radiator section of the superconducting wiggler for the BNL Harmonic Generation Free Electron Laser. The predictions of the model are compared to the measured field and the results from POISSON.

  8. Residual Stress Reversal in Highly Strained Shot Peened Structural Elements. Degree awarded by Florida Univ.

    NASA Technical Reports Server (NTRS)

    Mitchell, William S.; Throckmorton, David (Technical Monitor)

    2002-01-01

    The purpose of this research was to further the understanding of a crack initiation problem in a highly strained pressure containment housing. Finite Element Analysis methods were used to model the behavior of shot peened materials undergoing plastic deformation. Analytical results are in agreement with laboratory tensile tests that simulated the actual housing load conditions. These results further validate the original investigation finding that the shot peened residual stress had reversed, changing from compressive to tensile, and demonstrate that analytical finite element methods can be used to predict this behavior.

  9. A new mathematical solution for predicting char activation reactions

    USGS Publications Warehouse

    Rafsanjani, H.H.; Jamshidi, E.; Rostam-Abadi, M.

    2002-01-01

    The differential conservation equations that describe typical gas-solid reactions, such as activation of coal chars, yield a set of coupled second-order partial differential equations. The solution of these coupled equations by exact analytical methods is impossible. In addition, an approximate or exact solution only provides predictions for either reaction- or diffusion-controlling cases. A new mathematical solution, the quantize method (QM), was applied to predict the gasification rates of coal char when both chemical reaction and diffusion through the porous char are present. Carbon conversion rates predicted by the QM were in closer agreement with the experimental data than those predicted by the random pore model and the simple particle model. © 2002 Elsevier Science Ltd. All rights reserved.
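For context, the random pore model used as a comparison baseline can be integrated in a few lines. The rate form dX/dt = k(1 − X)√(1 − ψ ln(1 − X)) is the standard reaction-controlled expression, the parameter values are illustrative, and the quantize method itself is not reproduced here:

```python
# Reaction-controlled random pore model for char conversion X(t), integrated
# with a simple explicit Euler step. k and psi are illustrative parameters.
import numpy as np

def random_pore_conversion(k=0.01, psi=3.0, dt=0.1, t_end=300.0):
    X, out = 0.0, []
    for _ in range(int(t_end / dt)):
        # note: ln(1-X) <= 0, so the sqrt argument stays >= 1
        rate = k * (1.0 - X) * np.sqrt(1.0 - psi * np.log(1.0 - X))
        X = min(X + rate * dt, 1.0 - 1e-12)   # cap to keep log(1-X) defined
        out.append(X)
    return np.array(out)

X = random_pore_conversion()
print(f"conversion after 300 s: {X[-1]:.3f}")
```

Larger ψ (more pore-surface growth) accelerates mid-conversion rates, which is the structural effect the model is meant to capture.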

  10. A density functional theory study of the correlation between analyte basicity, ZnPc adsorption strength, and sensor response.

    PubMed

    Tran, N L; Bohrer, F I; Trogler, W C; Kummel, A C

    2009-05-28

Density functional theory (DFT) simulations were used to determine the binding strength of 12 electron-donating analytes to the zinc metal center of a zinc phthalocyanine molecule (ZnPc monomer). The analyte binding strengths were compared to the analytes' enthalpies of complex formation with boron trifluoride (BF3), which is a direct measure of their electron donating ability or Lewis basicity. With the exception of the most basic analyte investigated, the ZnPc binding energies were found to correlate linearly with analyte basicities. Based on natural population analysis calculations, analyte complexation to the Zn metal of the ZnPc monomer resulted in limited charge transfer from the analyte to the ZnPc molecule, which increased with analyte-ZnPc binding energy. The experimental analyte sensitivities from chemiresistor ZnPc sensor data were proportional to an exponential of the binding energies from DFT calculations, consistent with sensitivity being proportional to analyte coverage and binding strength. The good correlation observed suggests DFT is a reliable method for the prediction of chemiresistor metallophthalocyanine binding strengths and response sensitivities.
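The reported exponential relation between sensor sensitivity and binding energy can be sketched as an ordinary least-squares fit of ln(sensitivity) against binding energy. The (energy, sensitivity) pairs below are hypothetical illustration values, not data from the study:

```python
# Fit ln(S) = a*E + b, i.e. S ≈ exp(b) * exp(a*E), with plain least squares.
# The energies (eV) and sensitivities (arbitrary units) are invented.

import math

def linear_fit(xs, ys):
    """Return slope a and intercept b of the least-squares line y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

energies = [0.30, 0.45, 0.60, 0.75, 0.90]        # hypothetical binding energies
sensitivities = [1.2, 2.9, 7.4, 18.0, 44.0]      # hypothetical sensor responses

a, b = linear_fit(energies, [math.log(s) for s in sensitivities])
print(f"S ≈ {math.exp(b):.2f} * exp({a:.2f} * E)")
```

A positive slope `a` reproduces the qualitative trend the abstract describes: stronger binding, exponentially larger sensor response.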

  11. Symmetric airfoil geometry effects on leading edge noise.

    PubMed

    Gill, James; Zhang, X; Joseph, P

    2013-10-01

Computational aeroacoustic methods are applied to the modeling of noise due to interactions between gusts and the leading edge of real symmetric airfoils. Single-frequency harmonic gusts are made to interact with various airfoil geometries at zero angle of attack. The effects of airfoil thickness and leading edge radius on noise are investigated systematically and independently for the first time, at higher frequencies than previously used in computational methods. Increases in both leading edge radius and thickness are found to reduce the predicted noise. This noise reduction effect becomes greater with increasing frequency and Mach number. The dominant noise reduction mechanism for airfoils with real geometry is found to be related to the leading edge stagnation region. It is shown that accurate leading edge noise predictions can be made when assuming an inviscid meanflow, but that it is not valid to assume a uniform meanflow. Analytic flat plate predictions are found to over-predict the noise due to a NACA 0002 airfoil by up to 3 dB at high frequencies. The accuracy of analytic flat plate solutions can be expected to decrease with increasing airfoil thickness, leading edge radius, gust frequency, and Mach number.

  12. GI-POP: a combinational annotation and genomic island prediction pipeline for ongoing microbial genome projects.

    PubMed

    Lee, Chi-Ching; Chen, Yi-Ping Phoebe; Yao, Tzu-Jung; Ma, Cheng-Yu; Lo, Wei-Cheng; Lyu, Ping-Chiang; Tang, Chuan Yi

    2013-04-10

Sequencing of microbial genomes is important because microbes carry genes associated with antibiotic resistance and pathogenicity. However, even with the help of new assembly software, finishing a whole genome is a time-consuming task. In most bacteria, pathogenicity or antibiotic-resistance genes are carried on genomic islands. Therefore, a quick genomic island (GI) prediction method is useful for genomes that are still being sequenced. In this work, we built a Web server called GI-POP (http://gipop.life.nthu.edu.tw) which integrates a sequence assembling tool, a functional annotation pipeline, and a high-performance GI prediction module based on a support vector machine (SVM) method called genomic island genomic profile scanning (GI-GPS). The draft genomes of ongoing genome projects, in contigs or scaffolds, can be submitted to our Web server, which returns functional annotation and highly probable GI predictions. GI-POP is a comprehensive annotation Web server designed for ongoing genome project analysis. Researchers can perform annotation and obtain pre-analytic information, including possible GIs, coding/non-coding sequences, and functional analysis, from their draft genomes. This pre-analytic system can provide useful information for finishing a genome sequencing project. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Predicting CH4 adsorption capacity of microporous carbon using N2 isotherm and a new analytical model

    USGS Publications Warehouse

    Sun, Jielun; Chen, S.; Rostam-Abadi, M.; Rood, M.J.

    1998-01-01

A new analytical pore size distribution (PSD) model was developed to predict the CH4 adsorption (storage) capacity of microporous adsorbent carbon. The model is based on a 3-D adsorption isotherm equation derived from statistical mechanical principles. Least-squares error minimization is used to solve for the PSD without any pre-assumed distribution function. In comparison with several well-accepted analytical methods from the literature, this 3-D model offers a relatively realistic PSD description for select reference materials, including activated carbon fibers. N2 and CH4 adsorption data were correlated using the 3-D model for commercial carbons BPL and AX-21. Predicted CH4 adsorption isotherms, based on N2 adsorption at 77 K, were in reasonable agreement with the experimental CH4 isotherms. Modeling results indicate that not all pores contribute the same fraction Vm/Vs to CH4 storage, owing to different adsorbed CH4 densities. Pores near 8-9 Å show a higher Vm/Vs on an equivalent volume basis than do larger pores.

  14. An overview of selected NASP aeroelastic studies at the NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Spain, Charles V.; Soistmann, David L.; Parker, Ellen C.; Gibbons, Michael D.; Gilbert, Michael G.

    1990-01-01

Following an initial discussion of the NASP flight environment, the results of recent aeroelastic testing of NASP-type highly swept delta-wing models in Langley's Transonic Dynamics Tunnel (TDT) are summarized. Subsonic and transonic flutter characteristics of a variety of these models are described, and several analytical codes used to predict flutter of these models are evaluated. These codes generally provide good, but conservative, predictions of subsonic and transonic flutter. Also, test results are presented on a nonlinear transonic phenomenon known as aileron buzz which occurred in the wind tunnel on highly swept delta wings with full-span ailerons. An analytical procedure which assesses the effects of hypersonic heating on aeroelastic instabilities (aerothermoelasticity) is also described. This procedure accurately predicted flutter of a heated aluminum wing for which experimental data exist. Results are presented on the application of this method to calculate the flutter characteristics of a finite-element model of a generic NASP configuration. Finally, it is demonstrated analytically that active controls can be employed to improve the aeroelastic stability and ride quality of a generic NASP vehicle flying at hypersonic speeds.

  15. Analytic prediction of unconfined boundary layer flashback limits in premixed hydrogen-air flames

    NASA Astrophysics Data System (ADS)

    Hoferichter, Vera; Hirsch, Christoph; Sattelmayer, Thomas

    2017-05-01

    Flame flashback is a major challenge in premixed combustion. Hence, the prediction of the minimum flow velocity to prevent boundary layer flashback is of high technical interest. This paper presents an analytic approach to predicting boundary layer flashback limits for channel and tube burners. The model reflects the experimentally observed flashback mechanism and consists of a local and global analysis. Based on the local analysis, the flow velocity at flashback initiation is obtained depending on flame angle and local turbulent burning velocity. The local turbulent burning velocity is calculated in accordance with a predictive model for boundary layer flashback limits of duct-confined flames presented by the authors in an earlier publication. This ensures consistency of both models. The flame angle of the stable flame near flashback conditions can be obtained by various methods. In this study, an approach based on global mass conservation is applied and is validated using Mie-scattering images from a channel burner test rig at ambient conditions. The predicted flashback limits are compared to experimental results and to literature data from preheated tube burner experiments. Finally, a method for including the effect of burner exit temperature is demonstrated and used to explain the discrepancies in flashback limits obtained from different burner configurations reported in the literature.

  16. MetaKTSP: a meta-analytic top scoring pair method for robust cross-study validation of omics prediction analysis.

    PubMed

    Kim, SungHwan; Lin, Chien-Wei; Tseng, George C

    2016-07-01

Supervised machine learning is widely applied to transcriptomic data to predict disease diagnosis, prognosis or survival. Robust and interpretable classifiers with high accuracy are usually favored for their clinical and translational potential. The top scoring pair (TSP) algorithm is an example that applies a simple rank-based algorithm to identify rank-altered gene pairs for classifier construction. Although many classification methods perform well in cross-validation of a single expression profile, the performance usually drops greatly in cross-study validation (i.e. the prediction model is established in the training study and applied to an independent test study) for all machine learning methods, including TSP. The failure of cross-study validation has largely diminished the potential translational and clinical values of the models. The purpose of this article is to develop a meta-analytic top scoring pair (MetaKTSP) framework that combines multiple transcriptomic studies and generates a robust prediction model applicable to independent test studies. We proposed two frameworks, by averaging TSP scores or by combining P-values from individual studies, to select the top gene pairs for model construction. We applied the proposed methods in simulated data sets and three large-scale real applications in breast cancer, idiopathic pulmonary fibrosis and pan-cancer methylation. The result showed superior performance of cross-study validation accuracy and biomarker selection for the new meta-analytic framework. In conclusion, combining multiple omics data sets in the public domain increases robustness and accuracy of the classification model that will ultimately improve disease understanding and clinical treatment decisions to benefit patients. An R package MetaKTSP is available online (http://tsenglab.biostat.pitt.edu/software.htm).
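The rank-based TSP score, and the "average TSP scores across studies" variant described above, can be sketched as follows. The two toy studies and expression values are invented for illustration:

```python
# Top-scoring-pair (TSP) score for a gene pair, plus a meta-analytic average
# over several studies. All data below are toy values, not from the paper.

def tsp_score(expr, labels, i, j):
    """|P(gene_i < gene_j | class 0) - P(gene_i < gene_j | class 1)|.

    expr: list of samples, each a list of gene expression values.
    labels: 0/1 class label per sample.
    """
    counts = {0: 0, 1: 0}
    totals = {0: 0, 1: 0}
    for sample, y in zip(expr, labels):
        totals[y] += 1
        if sample[i] < sample[j]:
            counts[y] += 1
    return abs(counts[0] / totals[0] - counts[1] / totals[1])

def meta_tsp_score(studies, i, j):
    """Average the pair's TSP score over multiple (expr, labels) studies."""
    scores = [tsp_score(expr, labels, i, j) for expr, labels in studies]
    return sum(scores) / len(scores)

# Two toy studies, three genes per sample; genes (0, 1) flip rank with class.
study_a = ([[1.0, 2.0, 0.5], [1.1, 2.2, 0.4], [3.0, 1.0, 0.6], [2.9, 0.8, 0.7]],
           [0, 0, 1, 1])
study_b = ([[0.9, 1.8, 0.3], [2.5, 1.2, 0.2]], [0, 1])
print(meta_tsp_score([study_a, study_b], 0, 1))  # 1.0: perfectly rank-flipped pair
```

Because the score depends only on within-sample ranks, it survives the cross-study normalization differences that the abstract identifies as the main obstacle to cross-study validation.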

  17. Prioritizing pesticide compounds for analytical methods development

    USGS Publications Warehouse

    Norman, Julia E.; Kuivila, Kathryn; Nowell, Lisa H.

    2012-01-01

The U.S. Geological Survey (USGS) has a periodic need to re-evaluate pesticide compounds in terms of priorities for inclusion in monitoring and studies and, thus, must also assess the current analytical capabilities for pesticide detection. To meet this need, a strategy has been developed to prioritize pesticides and degradates for analytical methods development. Screening procedures were developed to separately prioritize pesticide compounds in water and sediment. The procedures evaluate pesticide compounds in existing USGS analytical methods for water and sediment and compounds for which recent agricultural-use information was available. Measured occurrence (detection frequency and concentrations) in water and sediment, predicted concentrations in water and predicted likelihood of occurrence in sediment, potential toxicity to aquatic life or humans, and priorities of other agencies or organizations, regulatory or otherwise, were considered. Several existing strategies for prioritizing chemicals for various purposes were reviewed, including those that identify and prioritize persistent, bioaccumulative, and toxic compounds, and those that determine candidates for future regulation of drinking-water contaminants. The systematic procedures developed and used in this study rely on concepts common to many previously established strategies. The evaluation of pesticide compounds resulted in the classification of compounds into three groups: Tier 1 for high priority compounds, Tier 2 for moderate priority compounds, and Tier 3 for low priority compounds. For water, a total of 247 pesticide compounds were classified as Tier 1 and, thus, are high priority for inclusion in analytical methods for monitoring and studies. Of these, about three-quarters are included in some USGS analytical method; however, many of these compounds are included in research methods that are expensive and for which there are few data on environmental samples. 
The remaining quarter of Tier 1 compounds are high priority as new analytes. The objective for analytical methods development is to design an integrated analytical strategy that includes as many of the Tier 1 pesticide compounds as possible in a relatively few, cost-effective methods. More than 60 percent of the Tier 1 compounds are high priority because they are anticipated to be present at concentrations approaching levels that could be of concern to human health or aquatic life in surface water or groundwater. An additional 17 percent of Tier 1 compounds were frequently detected in monitoring studies, but either were not measured at levels potentially relevant to humans or aquatic organisms, or do not have benchmarks available with which to compare concentrations. The remaining 21 percent are pesticide degradates that were included because their parent pesticides were in Tier 1. Tier 1 pesticide compounds for water span all major pesticide use groups and a diverse range of chemical classes, with herbicides and their degradates composing half of compounds. Many of the high priority pesticide compounds also are in several national regulatory programs for water, including those that are regulated in drinking water by the U.S. Environmental Protection Agency under the Safe Drinking Water Act and those that are on the latest Contaminant Candidate List. For sediment, a total of 175 pesticide compounds were classified as Tier 1 and, thus, are high priority for inclusion in analytical methods available for monitoring and studies. More than 60 percent of these compounds are included in some USGS analytical method; however, some are spread across several research methods that are expensive to perform, and monitoring data are not extensive for many compounds. The remaining Tier 1 compounds for sediment are high priority as new analytes. 
The objective for analytical methods development for sediment is to enhance an existing analytical method that currently includes nearly half of the pesticide compounds in Tier 1 by adding as many additional Tier 1 compounds as are analytically compatible. About 35 percent of the Tier 1 compounds for sediment are high priority on the basis of measured occurrence. A total of 74 compounds, or 42 percent, are high priority on the basis of predicted likelihood of occurrence according to physical-chemical properties, and either have potential toxicity to aquatic life, high pesticide usage, or both. The remaining 22 percent of Tier 1 pesticide compounds were either degradates of Tier 1 parent compounds or included for other reasons. As with water, the Tier 1 pesticide compounds for sediment are distributed across the major pesticide-use groups; insecticides and their degradates are the largest fraction, making up 45 percent of Tier 1. In contrast to water, organochlorines, at 17 percent, are the largest chemical class for Tier 1 in sediment, which is to be expected because there is continued widespread detection in sediments of persistent organochlorine pesticides and their degradates at concentrations high enough for potential effects on aquatic life. Compared to water, there are fewer available benchmarks with which to compare contaminant concentrations in sediment, but a total of 19 Tier 1 compounds have at least one sediment benchmark or screening value for aquatic organisms. Of the 175 compounds in Tier 1, 77 percent have high aquatic-life toxicity, as defined for this process. This evaluation of pesticides and degradates resulted in two lists of compounds that are priorities for USGS analytical methods development, one for water and one for sediment. 
These lists will be used as the basis for redesigning and enhancing USGS analytical capabilities for pesticides in order to capture as many high-priority pesticide compounds as possible using an economically feasible approach.

  18. Hydroplaning on multi lane facilities.

    DOT National Transportation Integrated Search

    2012-11-01

    The primary findings of this research can be highlighted as follows. Models that provide estimates of wet weather speed reduction, as well as analytical and empirical methods for the prediction of hydroplaning speeds of trailers and heavy trucks, wer...

  19. Rapid Method Development in Hydrophilic Interaction Liquid Chromatography for Pharmaceutical Analysis Using a Combination of Quantitative Structure-Retention Relationships and Design of Experiments.

    PubMed

    Taraji, Maryam; Haddad, Paul R; Amos, Ruth I J; Talebi, Mohammad; Szucs, Roman; Dolan, John W; Pohl, Chris A

    2017-02-07

A design-of-experiment (DoE) model was developed, able to describe the retention times of a mixture of pharmaceutical compounds in hydrophilic interaction liquid chromatography (HILIC) under all possible combinations of acetonitrile content, salt concentration, and mobile-phase pH with R² > 0.95. Further, a quantitative structure-retention relationship (QSRR) model was developed to predict retention times for new analytes, based only on their chemical structures, with a root-mean-square error of prediction (RMSEP) as low as 0.81%. A compound classification based on the concept of similarity was applied prior to QSRR modeling. Finally, we utilized a combined QSRR-DoE approach to propose an optimal design space in a quality-by-design (QbD) workflow to facilitate the HILIC method development. The mathematical QSRR-DoE model was shown to be highly predictive when applied to an independent test set of unseen compounds in unseen conditions, with an RMSEP value of 5.83%. The QSRR-DoE computed retention times of pharmaceutical test analytes, and the subsequently calculated separation selectivity, were used to optimize the chromatographic conditions for efficient separation of targets. A Monte Carlo simulation was performed to evaluate the risk of uncertainty in the model's prediction, and to define the design space where the desired quality criterion was met. Experimental realization of peak selectivity between targets under the selected optimal working conditions confirmed the theoretical predictions. These results demonstrate how discovery of optimal conditions for the separation of new analytes can be accelerated by the use of appropriate theoretical tools.
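The Monte Carlo risk-evaluation step can be sketched as follows, assuming Gaussian prediction errors of RMSEP-like magnitude on each predicted retention time; the retention times, sigma, and resolution threshold below are hypothetical:

```python
# Monte Carlo estimate of the probability that two analytes with uncertain
# predicted retention times still meet a minimum-separation criterion.
# All numbers are illustrative, not from the paper.

import random

def prob_separated(t1, t2, sigma, min_gap, n_trials=20000, seed=1):
    """P(|t1' - t2'| >= min_gap) when each prediction gets Gaussian error."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        d = (t1 + rng.gauss(0.0, sigma)) - (t2 + rng.gauss(0.0, sigma))
        if abs(d) >= min_gap:
            hits += 1
    return hits / n_trials

# Hypothetical predicted retention times (min), prediction sigma, and gap.
p = prob_separated(t1=6.2, t2=5.4, sigma=0.3, min_gap=0.5)
print(f"probability the pair meets the separation criterion: {p:.2f}")
```

Repeating this over a grid of candidate conditions, and keeping only the conditions where the probability exceeds a chosen risk level, is one way to carve out a QbD-style design space.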

  20. Estimation of the curvature of the solid liquid interface during Bridgman crystal growth

    NASA Astrophysics Data System (ADS)

    Barat, Catherine; Duffar, Thierry; Garandet, Jean-Paul

    1998-11-01

An approximate solution for the solid/liquid interface curvature due to the crucible effect in crystal growth is derived from simple heat flux considerations. The numerical modelling of the problem, carried out with the help of the finite element code FIDAP, supports the predictions of our analytical expression and allows us to identify its range of validity. Experimental interface curvatures, measured in gallium antimonide samples grown by the vertical Bridgman method, are seen to compare satisfactorily to analytical and numerical results. Other literature data are also in fair agreement with the predictions of our models in the case where the amount of heat carried by the crucible is small compared to the overall heat flux.

  1. Static penetration resistance of soils

    NASA Technical Reports Server (NTRS)

    Durgunoglu, H. T.; Mitchell, J. K.

    1973-01-01

    Model test results were used to define the failure mechanism associated with the static penetration resistance of cohesionless and low-cohesion soils. Knowledge of this mechanism has permitted the development of a new analytical method for calculating the ultimate penetration resistance which explicitly accounts for penetrometer base apex angle and roughness, soil friction angle, and the ratio of penetration depth to base width. Curves relating the bearing capacity factors to the soil friction angle are presented for failure in general shear. Strength parameters and penetrometer interaction properties of a fine sand were determined and used as the basis for prediction of the penetration resistance encountered by wedge, cone, and flat-ended penetrometers of different surface roughness using the proposed analytical method. Because of the close agreement between predicted values and values measured in laboratory tests, it appears possible to deduce in-situ soil strength parameters and their variation with depth from the results of static penetration tests.

  3. Eigenvector centrality is a metric of elastomer modulus, heterogeneity, and damage

    DOE PAGES

    Welch, Jr., Paul Michael; Welch, Cynthia F.

    2017-04-27

Here, we present an application of eigenvector centrality to encode the connectivity of polymer networks resolved at the micro- and meso-scopic length scales. This method captures the relative importance of different nodes within the network structure and provides a route toward the development of a statistical mechanics model that correlates connectivity with mechanical response. This scheme may be informed by analytical and semi-analytical models for the network structure, or through direct experimental examination. It may be used to predict the reduction in mechanical performance for heterogeneous materials subjected to specific modes of damage. Here, we develop the method and demonstrate that it leads to the prediction of established trends in elastomers. We also apply the model to the case of a self-healing polymer network reported in the literature, extracting insight about the fraction of bonds broken and re-formed during strain and recovery.
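The eigenvector-centrality computation at the heart of the method can be sketched with plain power iteration on an adjacency matrix. The 5-node path network below is a toy stand-in for a resolved polymer network, not the paper's model:

```python
# Eigenvector centrality of a network via power iteration on its adjacency
# matrix (list of lists). Pure-Python sketch on a toy graph.

def eigenvector_centrality(adj, iters=500, tol=1e-12):
    """Return the dominant eigenvector (L1-normalised) of a symmetric
    non-negative adjacency matrix."""
    n = len(adj)
    v = [1.0 / n] * n
    for _ in range(iters):
        # Iterate on (A + I): the +I shift prevents oscillation on bipartite
        # graphs and leaves the dominant eigenvector unchanged.
        w = [v[i] + sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
        converged = max(abs(a - b) for a, b in zip(w, v)) < tol
        v = w
        if converged:
            break
    return v

# A 5-node path graph: the middle node carries the most connectivity.
path = [[0, 1, 0, 0, 0],
        [1, 0, 1, 0, 0],
        [0, 1, 0, 1, 0],
        [0, 0, 1, 0, 1],
        [0, 0, 0, 1, 0]]
c = eigenvector_centrality(path)
print(max(range(5), key=lambda i: c[i]))  # 2: the middle node
```

In the paper's setting the adjacency would encode cross-link connectivity, and deleting rows/columns (damage) lowers the centralities, which is the quantity correlated with mechanical response.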

  4. Statistical evaluation of forecasts

    NASA Astrophysics Data System (ADS)

    Mader, Malenka; Mader, Wolfgang; Gluckman, Bruce J.; Timmer, Jens; Schelter, Björn

    2014-08-01

    Reliable forecasts of extreme but rare events, such as earthquakes, financial crashes, and epileptic seizures, would render interventions and precautions possible. Therefore, forecasting methods have been developed which intend to raise an alarm if an extreme event is about to occur. In order to statistically validate the performance of a prediction system, it must be compared to the performance of a random predictor, which raises alarms independent of the events. Such a random predictor can be obtained by bootstrapping or analytically. We propose an analytic statistical framework which, in contrast to conventional methods, allows for validating independently the sensitivity and specificity of a forecasting method. Moreover, our method accounts for the periods during which an event has to remain absent or occur after a respective forecast.
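The comparison against an analytic random predictor can be sketched in discrete time: if alarms are raised independently with per-step probability q, and an event counts as predicted when an alarm fell within the preceding w steps, the chance-level sensitivity is 1 - (1 - q)^w. The alarm and event times below are invented:

```python
# Sensitivity of a forecaster versus the analytic chance level of a random
# predictor that raises alarms independently of the events.

def chance_sensitivity(q, w):
    """P(at least one random alarm in a w-step prediction window)."""
    return 1.0 - (1.0 - q) ** w

def sensitivity(alarm_steps, event_steps, w):
    """Fraction of events preceded by an alarm within w steps."""
    alarms = set(alarm_steps)
    hits = sum(1 for e in event_steps
               if any((e - k) in alarms for k in range(1, w + 1)))
    return hits / len(event_steps)

alarms = [3, 10, 25, 40]   # invented alarm times
events = [5, 12, 41, 70]   # invented event times
s = sensitivity(alarms, events, w=3)        # 3 of 4 events were preceded by an alarm
print(s, chance_sensitivity(q=0.05, w=3))   # compare to chance level ≈ 0.14
```

A forecaster is only validated if its sensitivity significantly exceeds this chance level at a matched alarm rate, which is the kind of comparison the analytic framework above makes separately for sensitivity and specificity.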

  5. Data Pre-Processing Method to Remove Interference of Gas Bubbles and Cell Clusters During Anaerobic and Aerobic Yeast Fermentations in a Stirred Tank Bioreactor

    NASA Astrophysics Data System (ADS)

    Princz, S.; Wenzel, U.; Miller, R.; Hessling, M.

    2014-11-01

One aerobic and four anaerobic batch fermentations of the yeast Saccharomyces cerevisiae were conducted in a stirred bioreactor and monitored inline by NIR spectroscopy and a transflectance dip probe. From the acquired NIR spectra, chemometric partial least squares regression (PLSR) models for predicting biomass, glucose and ethanol were constructed. The spectra were directly measured in the fermentation broth and successfully inspected for adulteration using our novel data pre-processing method. These adulterations manifested as strong fluctuations in the shape and offset of the absorption spectra. They resulted from cells, cell clusters, or gas bubbles intercepting the optical path of the dip probe. In the proposed data pre-processing method, adulterated signals are removed by passing the time-scanned non-averaged spectra through two filter algorithms with a 5% quantile cutoff. The filtered spectra containing meaningful data are then averaged. A second step checks whether the whole time scan is analyzable. If true, the average is calculated and used to prepare the PLSR models. This new method distinctly improved the prediction results. To dissociate possible correlations between analyte concentrations, such as glucose and ethanol, the feeding analytes were alternately supplied at different concentrations (spiking) at the end of the four anaerobic fermentations. This procedure yielded anaerobic PLSR models with low prediction errors: 0.31 g/l for biomass, 3.41 g/l for glucose, and 2.17 g/l for ethanol. The maximum concentrations were 14 g/l biomass, 167 g/l glucose, and 80 g/l ethanol. Data from the aerobic fermentation, carried out under high agitation and high aeration, were incorporated to realize combined PLSR models, which have not been previously reported to our knowledge.
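The quantile-cutoff idea behind the pre-processing can be sketched as follows. This is a simplified single-filter stand-in for the two-filter 5% quantile pipeline described above, with invented scan values:

```python
# Discard scans whose overall absorbance offset is extreme (e.g. a gas bubble
# or cell cluster crossing the probe), then average the surviving scans.

def filter_and_average(scans, cutoff=0.05):
    """scans: list of spectra (equal-length lists of absorbances).
    Drop scans whose mean absorbance falls in the lowest or highest `cutoff`
    quantile, then return the pointwise average of the rest."""
    offsets = sorted(sum(s) / len(s) for s in scans)
    lo = offsets[int(cutoff * len(scans))]
    hi = offsets[int((1.0 - cutoff) * len(scans)) - 1]
    kept = [s for s in scans if lo <= sum(s) / len(s) <= hi]
    n = len(kept)
    return [sum(s[i] for s in kept) / n for i in range(len(kept[0]))]

# 20 clean two-channel scans near [1, 2], plus two adulterated scans.
scans = [[1.0 + 0.001 * k, 2.0 + 0.001 * k] for k in range(20)]
scans += [[5.0, 6.0], [0.1, 0.2]]   # bubble / cluster artifacts
avg = filter_and_average(scans, cutoff=0.05)
print(avg)  # close to [1.01, 2.01]; the outliers are excluded
```

The averaged, artifact-free spectrum is what would then feed the PLSR model building.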

  6. Predictive simulation of guide-wave structural health monitoring

    NASA Astrophysics Data System (ADS)

    Giurgiutiu, Victor

    2017-04-01

    This paper presents an overview of recent developments on predictive simulation of guided wave structural health monitoring (SHM) with piezoelectric wafer active sensor (PWAS) transducers. The predictive simulation methodology is based on the hybrid global local (HGL) concept which allows fast analytical simulation in the undamaged global field and finite element method (FEM) simulation in the local field around and including the damage. The paper reviews the main results obtained in this area by researchers of the Laboratory for Active Materials and Smart Structures (LAMSS) at the University of South Carolina, USA. After thematic introduction and research motivation, the paper covers four main topics: (i) presentation of the HGL analysis; (ii) analytical simulation in 1D and 2D; (iii) scatter field generation; (iv) HGL examples. The paper ends with summary, discussion, and suggestions for future work.

  7. Adapting Surface Ground Motion Relations to Underground conditions: A case study for the Sudbury Neutrino Observatory in Sudbury, Ontario, Canada

    NASA Astrophysics Data System (ADS)

    Babaie Mahani, A.; Eaton, D. W.

    2013-12-01

    Ground Motion Prediction Equations (GMPEs) are widely used in Probabilistic Seismic Hazard Assessment (PSHA) to estimate ground-motion amplitudes at Earth's surface as a function of magnitude and distance. Certain applications, such as hazard assessment for caprock integrity in the case of underground storage of CO2, waste disposal sites, and underground pipelines, require subsurface estimates of ground motion; at present, such estimates depend upon theoretical modeling and simulations. The objective of this study is to derive correction factors for GMPEs to enable estimation of amplitudes in the subsurface. We use a semi-analytic approach along with finite-difference simulations of ground-motion amplitudes for surface and underground motions. Spectral ratios of underground to surface motions are used to calculate the correction factors. Two predictive methods are used. The first is a semi-analytic approach based on a quarter-wavelength method that is widely used for earthquake site-response investigations; the second is a numerical approach based on elastic finite-difference simulations of wave propagation. Both methods are evaluated using recordings of regional earthquakes by broadband seismometers installed at the surface and at depths of 1400 m and 2100 m in the Sudbury Neutrino Observatory, Canada. Overall, both methods provide a reasonable fit to the peaks and troughs observed in the ratios of real data. The finite-difference method, however, has the capability to simulate ground motion ratios more accurately than the semi-analytic approach.

  8. A new experimental method for the accelerated characterization of composite materials

    NASA Technical Reports Server (NTRS)

    Yeow, Y. T.; Morris, D. H.; Brinson, H. F.

    1978-01-01

    The use of composite materials for a variety of practical structural applications is presented and the need for an accelerated characterization procedure is assessed. A new experimental and analytical method is presented which allows the prediction of long term properties from short term tests. Some preliminary experimental results are presented.

  9. The calculation of transport properties in quantum liquids using the maximum entropy numerical analytic continuation method: Application to liquid para-hydrogen

    PubMed Central

    Rabani, Eran; Reichman, David R.; Krilov, Goran; Berne, Bruce J.

    2002-01-01

    We present a method based on augmenting an exact relation between a frequency-dependent diffusion constant and the imaginary time velocity autocorrelation function, combined with the maximum entropy numerical analytic continuation approach to study transport properties in quantum liquids. The method is applied to the case of liquid para-hydrogen at two thermodynamic state points: a liquid near the triple point and a high-temperature liquid. Good agreement for the self-diffusion constant and for the real-time velocity autocorrelation function is obtained in comparison to experimental measurements and other theoretical predictions. Improvement of the methodology and future applications are discussed. PMID:11830656

  10. BIG DATA ANALYTICS AND PRECISION ANIMAL AGRICULTURE SYMPOSIUM: Data to decisions.

    PubMed

    White, B J; Amrine, D E; Larson, R L

    2018-04-14

Big data are frequently used in many facets of business and agronomy to enhance knowledge needed to improve operational decisions. Livestock operations collect data of sufficient quantity to perform predictive analytics. Predictive analytics can be defined as a methodology and suite of data evaluation techniques to generate a prediction for specific target outcomes. The objective of this manuscript is to describe the process of using big data and the predictive analytic framework to create tools to drive decisions in livestock production, health, and welfare. The predictive analytic process involves selecting a target variable, managing the data, partitioning the data, then creating algorithms, refining algorithms, and finally comparing the accuracy of the created classifiers. The partitioning of the datasets allows model building and refining to occur prior to testing the predictive accuracy of the model with naive data to evaluate overall accuracy. Many different classification algorithms are available for predictive use, and testing multiple algorithms can lead to optimal results. Application of a systematic process for predictive analytics, using data that are currently collected or could be collected on livestock operations, will facilitate precision animal management through enhanced livestock operational decisions.
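The partition/train/evaluate loop described above can be sketched with a toy threshold classifier. The single numeric feature and binary outcome below are invented stand-ins for livestock operational data:

```python
# Partition data into train/test sets, fit a one-feature threshold
# "classifier" on the training set, then score it on naive (held-out) data.

import random

def train_threshold(rows):
    """Pick the feature threshold that maximises training accuracy."""
    best_t, best_acc = None, -1.0
    for x, _ in rows:
        acc = sum((xi >= x) == yi for xi, yi in rows) / len(rows)
        if acc > best_acc:
            best_t, best_acc = x, acc
    return best_t

def accuracy(rows, t):
    return sum((x >= t) == y for x, y in rows) / len(rows)

rng = random.Random(0)
# Toy data: outcome is 1 when the feature exceeds 5, plus 5% label noise.
data = [(x, (x > 5) ^ (rng.random() < 0.05))
        for x in [rng.uniform(0, 10) for _ in range(200)]]
rng.shuffle(data)
train, test = data[:150], data[150:]   # the partitioning step

t = train_threshold(train)
print(f"threshold={t:.2f}  naive-data accuracy={accuracy(test, t):.2f}")
```

Real applications would compare several algorithm families on the same partition and keep the classifier with the best naive-data accuracy, exactly as the abstract's workflow prescribes.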

  11. Solving the Schroedinger Equation of Atoms and Molecules without Analytical Integration Based on the Free Iterative-Complement-Interaction Wave Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakatsuji, H.; Nakashima, H.; Department of Synthetic Chemistry and Biological Chemistry, Graduate School of Engineering, Kyoto University, Nishikyo-ku, Kyoto 615-8510

    2007-12-14

    A local Schroedinger equation (LSE) method is proposed for solving the Schroedinger equation (SE) of general atoms and molecules without doing analytic integrations over the complement functions of the free ICI (iterative-complement-interaction) wave functions. Since the free ICI wave function is potentially exact, we can assume a flatness of its local energy. The variational principle is not applicable because the analytic integrations over the free ICI complement functions are very difficult for general atoms and molecules. The LSE method is applied to several 2- to 5-electron atoms and molecules, giving an accuracy of 10^-5 Hartree in total energy. The potential energy curves of the H2 and LiH molecules are calculated precisely with the free ICI LSE method. The results show the high potential of the free ICI LSE method for developing accurate predictive quantum chemistry with the solutions of the SE.

  12. Food adulteration analysis without laboratory prepared or determined reference food adulterant values.

    PubMed

    Kalivas, John H; Georgiou, Constantinos A; Moira, Marianna; Tsafaras, Ilias; Petrakis, Eleftherios A; Mousdis, George A

    2014-04-01

    Quantitative analysis of food adulterants is an important health and economic issue, and the analysis needs to be fast and simple. Spectroscopy has significantly reduced analysis time. However, calibration still requires preparing analyte calibration samples matrix-matched to the prediction samples, which can be laborious and costly. Reported in this paper is the application of a newly developed pure component Tikhonov regularization (PCTR) process that does not require laboratory-prepared calibration samples or reference analysis methods, and hence is a greener calibration method. The PCTR method requires an analyte pure component spectrum and non-analyte spectra. As a food analysis example, synchronous fluorescence spectra of extra virgin olive oil samples adulterated with sunflower oil are used. Results are shown to be better than those obtained using ridge regression with reference calibration samples. The flexibility of PCTR allows including reference samples, and the method is generic for use with other instrumental methods and food products. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. The Acoustic Analogy: A Powerful Tool in Aeroacoustics with Emphasis on Jet Noise Prediction

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Doty, Michael J.; Hunter, Craig A.

    2004-01-01

    The acoustic analogy introduced by Lighthill to study jet noise is now over 50 years old. In the present paper, Lighthill's acoustic analogy is revisited together with a brief evaluation of the state of the art of the subject and an exploration of possible further improvements in jet noise prediction from analytical methods, computational fluid dynamics (CFD) predictions, and measurement techniques. Experimental Particle Image Velocimetry (PIV) data are used both to evaluate turbulent statistics from Reynolds-averaged Navier-Stokes (RANS) CFD and to propose correlation models for the Lighthill stress tensor. The NASA Langley Jet3D code is used to study the effect of these models on jet noise prediction. The analytical investigation shows a retarded-time correction that reduces, by approximately 8 dB, Jet3D's over-prediction of aft-arc jet noise. In the experimental investigation, the PIV data agree well with the CFD mean-flow predictions, with room for improvement in the Reynolds stress predictions. Initial modifications to the form of the Jet3D correlation model, suggested by the PIV data, showed no noticeable improvement in jet noise prediction.

  14. Cylindrical optical resonators: fundamental properties and bio-sensing characteristics

    NASA Astrophysics Data System (ADS)

    Khozeymeh, Foroogh; Razaghi, Mohammad

    2018-04-01

    In this paper, a detailed theoretical analysis of cylindrical resonators is presented. As illustrated, these kinds of resonators can be used as optical bio-sensing devices. The proposed structure is analyzed using an analytical method based on Lam's approximation. This method is systematic and simplifies the tedious process of whispering-gallery mode (WGM) wavelength analysis in optical cylindrical biosensors. With this method, analysis of higher radial orders of high-angular-momentum WGMs becomes possible. Using closed-form analytical equations, resonance wavelengths of higher radial and angular order WGMs of TE and TM polarization waves are calculated. It is shown that high-angular-momentum WGMs are more appropriate for bio-sensing applications. Some of the calculations are done using a numerical non-linear Newton method. A match of 99.84% between the analytical and the numerical methods has been achieved. In order to verify the validity of the calculations, Meep simulations based on the finite-difference time-domain (FDTD) method are performed; in this case, a match of 96.70% between the analytical and FDTD results has been obtained. The analytical predictions are also in good agreement with other experimental work (99.99% match). These results validate the proposed analytical modelling for the fast design of optical cylindrical biosensors. It is shown that by extending the proposed two-layer resonator structure analysis scheme, a three-layer cylindrical resonator structure can be studied as well. Moreover, this method enables fast sensitivity optimization in cylindrical resonator-based biosensors. The sensitivity of the WGM resonances is analyzed as a function of the structural parameters of the cylindrical resonators. Based on the results, fourth radial order WGMs, with a resonator radius of 50 μm, display the highest bulk refractive-index sensitivity, 41.50 nm/RIU.

  15. On Nonlinear Combustion Instability in Liquid Propellant Rocket Motors

    NASA Technical Reports Server (NTRS)

    Sims, J. D. (Technical Monitor); Flandro, Gary A.; Majdalani, Joseph; Sims, Joseph D.

    2004-01-01

    All liquid propellant rocket instability calculations in current use have limited value in the predictive sense and serve mainly as a correlating framework for the available data sets. The well-known n-τ model first introduced by Crocco and Cheng in 1956 is still used as the primary analytical tool of this type. A multitude of attempts to establish practical analytical methods have achieved only limited success. These methods usually produce only stability boundary maps that are of little use in making critical design decisions in new motor development programs. Recent progress in understanding the mechanisms of combustion instability in solid propellant rockets provides a firm foundation for a new approach to prediction, diagnosis, and correction of the closely related problems in liquid motor instability. For predictive tools to be useful in the motor design process, they must have the capability to accurately determine: 1) the time evolution of the pressure oscillations and their limit amplitude, 2) the critical triggering pulse amplitude, and 3) unsteady heat transfer rates at injector surfaces and chamber walls. The method described in this paper relates these critical motor characteristics directly to system design parameters. Inclusion of mechanisms such as wave steepening, vorticity production and transport, and unsteady detonation wave phenomena greatly enhances the representation of key features of motor chamber oscillatory behavior. The basic theoretical model is described and preliminary computations are compared to experimental data. A plan to develop the new predictive method into a comprehensive analysis tool is also described.

  16. Load balancing prediction method of cloud storage based on analytic hierarchy process and hybrid hierarchical genetic algorithm.

    PubMed

    Zhou, Xiuze; Lin, Fan; Yang, Lvqing; Nie, Jing; Tan, Qian; Zeng, Wenhua; Zhang, Nian

    2016-01-01

    With the continuous expansion of the cloud computing platform scale and the rapid growth of users and applications, how to efficiently use system resources to improve the overall performance of cloud computing has become a crucial issue. To address this issue, this paper proposes a method that uses an analytic hierarchy process group decision (AHPGD) to evaluate the load state of server nodes. Training was carried out by using a hybrid hierarchical genetic algorithm (HHGA) to optimize a radial basis function neural network (RBFNN). The AHPGD produces an aggregate load indicator for the virtual machines in the cloud, which becomes the input to the predictive RBFNN. This paper also proposes a new dynamic load balancing scheduling algorithm combined with a weighted round-robin algorithm: it uses the periodically predicted load values of the nodes, obtained from AHPGD and the RBFNN optimized by HHGA, to calculate the corresponding node weights and update them continually. The scheme thereby keeps the advantages, and avoids the shortcomings, of the static weighted round-robin algorithm.
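The final scheduling step can be sketched in a few lines, assuming predicted per-node load values in [0, 1]; here the predictions are hard-coded rather than coming from an RBFNN, and the load-to-weight mapping is an invented placeholder:

```python
from itertools import islice

def weights_from_load(predicted_load, base=10):
    """Map predicted load in [0, 1] to an integer weight: lighter load -> larger weight.
    This linear mapping is a hypothetical stand-in for the paper's weight update rule."""
    return {node: max(1, round(base * (1.0 - load)))
            for node, load in predicted_load.items()}

def weighted_round_robin(weights):
    """Yield node names in proportion to their current weights, forever."""
    while True:
        for node, w in weights.items():
            for _ in range(w):
                yield node

# Hypothetical periodic load predictions for three nodes.
predicted = {"node-a": 0.2, "node-b": 0.5, "node-c": 0.8}
w = weights_from_load(predicted)
schedule = list(islice(weighted_round_robin(w), sum(w.values())))
```

One full cycle of the schedule dispatches eight requests to the lightly loaded node and only two to the heavily loaded one; re-running `weights_from_load` each prediction period gives the dynamic behavior the abstract describes.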

  17. Availability Analysis of Dual Mode Systems

    DOT National Transportation Integrated Search

    1974-04-01

    The analytical procedures presented define a method of evaluating the effects of failures in a complex dual-mode system based on a worst case steady-state analysis. The computed result is an availability figure of merit and not an absolute prediction...

  18. THEORETICAL METHODS FOR COMPUTING ELECTRICAL CONDITIONS IN WIRE-PLATE ELECTROSTATIC PRECIPITATORS

    EPA Science Inventory

    The paper describes a new semi-empirical, approximate theory for predicting electrical conditions. In the approximate theory, analytical expressions are derived for calculating voltage-current characteristics and electric potential, electric field, and space charge density distri...

  19. STICK-SLIP-SEPARATION Analysis and Non-Linear Stiffness and Damping Characterization of Friction Contacts Having Variable Normal Load

    NASA Astrophysics Data System (ADS)

    Yang, B. D.; Chu, M. L.; Menq, C. H.

    1998-03-01

    Mechanical systems in which moving components are mutually constrained through contacts often lead to complex contact kinematics involving tangential and normal relative motions. A friction contact model is proposed to characterize this type of contact kinematics, which imposes both friction non-linearity and intermittent-separation non-linearity on the system. The stick-slip friction phenomenon is analyzed by establishing analytical criteria that predict the transition between stick, slip, and separation of the interface. The established analytical transition criteria are particularly important to the proposed friction contact model because the transition conditions of the contact kinematics are complicated by the effect of normal load variation and possible interface separation. With these transition criteria, the induced friction force on the contact plane and the variable normal load perpendicular to it can be predicted for any given cyclic relative motion at the contact interface, and hysteresis loops can be produced to characterize the equivalent damping and stiffness of the friction contact. These non-linear damping and stiffness methods, along with the harmonic balance method, are then used to predict the resonant response of a frictionally constrained two-degree-of-freedom oscillator. The predicted results are compared with those of the time integration method, and the damping effect, the resonant frequency shift, and the jump phenomenon are examined.
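The stick/slip/separation decision can be illustrated with simplified Coulomb-type criteria; the paper's full criteria also track the history of the relative motion, whereas this sketch only checks instantaneous loads, and the friction coefficient and load values are made up:

```python
import math

def contact_state(normal_load, tangential_force, mu):
    """Classify the interface: separation when contact is lost, slip when the
    tangential demand reaches the Coulomb limit mu*N, stick otherwise."""
    if normal_load <= 0.0:
        return "separation"
    if abs(tangential_force) >= mu * normal_load:
        return "slip"
    return "stick"

# One cycle of variable normal load N(t) = N0 + dN*sin(t) with a fixed
# tangential demand: all three states appear within the cycle.
states = [contact_state(5.0 + 6.0 * math.sin(2 * math.pi * i / 16), 2.0, 0.5)
          for i in range(16)]
```

Sweeping the normal load through zero is exactly the situation in which the variable-normal-load criteria matter: the same tangential demand produces stick near the load peak, slip as the load drops, and separation when contact is lost.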

  20. Cellular dosimetry of (111)In using monte carlo N-particle computer code: comparison with analytic methods and correlation with in vitro cytotoxicity.

    PubMed

    Cai, Zhongli; Pignol, Jean-Philippe; Chan, Conrad; Reilly, Raymond M

    2010-03-01

    Our objective was to compare Monte Carlo N-particle (MCNP) self- and cross-doses from (111)In to the nucleus of breast cancer cells with doses calculated by reported analytic methods (Goddu et al. and Faraggi et al.). A further objective was to determine whether the MCNP-predicted surviving fraction (SF) of breast cancer cells exposed in vitro to (111)In-labeled diethylenetriaminepentaacetic acid human epidermal growth factor ((111)In-DTPA-hEGF) could accurately predict the experimentally determined values. MCNP was used to simulate the transport of electrons emitted by (111)In from the cell surface, cytoplasm, or nucleus. The doses to the nucleus per decay (S values) were calculated for single cells, closely packed monolayer cells, or cell clusters. The cell and nucleus dimensions of 6 breast cancer cell lines were measured, and cell line-specific S values were calculated. For self-doses, MCNP S values of nucleus to nucleus agreed very well with those of Goddu et al. (ratio of S values using analytic methods vs. MCNP = 0.962-0.995) and Faraggi et al. (ratio = 1.011-1.024). MCNP S values of cytoplasm and cell surface to nucleus compared fairly well with the reported values (ratio = 0.662-1.534 for Goddu et al.; 0.944-1.129 for Faraggi et al.). For cross-doses, the S values to the nucleus were independent of (111)In subcellular distribution but increased with cluster size. S values for monolayer cells were significantly different from those of single cells and cell clusters. The MCNP-predicted SF for monolayer MDA-MB-468, MDA-MB-231, and MCF-7 cells agreed with the experimental data (relative errors of 3.1%, -1.0%, and 1.7%). The single-cell and cell cluster models were less accurate in predicting the SF. For MDA-MB-468 cells, the relative error was 8.1% using the single-cell model and -54% to -67% using the cell cluster model. Individual cell-line dimensions had large effects on S values and were needed to estimate doses and SF accurately. MCNP simulation compared well with the reported analytic methods in the calculation of subcellular S values for single cells and cell clusters. Application of a monolayer model was most accurate in predicting the SF of breast cancer cells exposed in vitro to (111)In-DTPA-hEGF.

  1. Investigation of the validity of radiosity for sound-field prediction in cubic rooms

    NASA Astrophysics Data System (ADS)

    Nosal, Eva-Marie; Hodgson, Murray; Ashdown, Ian

    2004-12-01

    This paper explores acoustical (or time-dependent) radiosity using predictions made in four cubic enclosures. The methods and algorithms used are those presented in a previous paper by the same authors [Nosal, Hodgson, and Ashdown, J. Acoust. Soc. Am. 116(2), 970-980 (2004)]. First, the algorithm, methods, and conditions for convergence are investigated by comparison of numerous predictions for the four cubic enclosures. Here, variables and parameters used in the predictions are varied to explore the effect of absorption distribution, the necessary conditions for convergence of the numerical solution to the analytical solution, form-factor prediction methods, and the computational requirements. The predictions are also used to investigate the effect of absorption distribution on sound fields in cubic enclosures with diffusely reflecting boundaries. Acoustical radiosity is then compared to predictions made in the four enclosures by a ray-tracing model that can account for diffuse reflection. Comparisons are made of echograms, room-acoustical parameters, and discretized echograms.

  3. Peptide interfaces with graphene: an emerging intersection of analytical chemistry, theory, and materials.

    PubMed

    Russell, Shane R; Claridge, Shelley A

    2016-04-01

    Because noncovalent interface functionalization is frequently required in graphene-based devices, biomolecular self-assembly has begun to emerge as a route for controlling substrate electronic structure or binding specificity for soluble analytes. The remarkable diversity of structures that arise in biological self-assembly hints at the possibility of equally diverse and well-controlled surface chemistry at graphene interfaces. However, predicting and analyzing adsorbed monolayer structures at such interfaces raises substantial experimental and theoretical challenges. In contrast with the relatively well-developed monolayer chemistry and characterization methods applied at coinage metal surfaces, monolayers on graphene are both less robust and more structurally complex, levying more stringent requirements on characterization techniques. Theory presents opportunities to understand early binding events that lay the groundwork for full monolayer structure. However, predicting interactions between complex biomolecules, solvent, and substrate is necessitating a suite of new force fields and algorithms to assess likely binding configurations, solvent effects, and modulations to substrate electronic properties. This article briefly discusses emerging analytical and theoretical methods used to develop a rigorous chemical understanding of the self-assembly of peptide-graphene interfaces and prospects for future advances in the field.

  4. IUS solid rocket motor contamination prediction methods

    NASA Technical Reports Server (NTRS)

    Mullen, C. R.; Kearnes, J. H.

    1980-01-01

    A series of computer codes were developed to predict solid rocket motor produced contamination to spacecraft sensitive surfaces. Subscale and flight test data have confirmed some of the analytical results. Application of the analysis tools to a typical spacecraft has provided early identification of potential spacecraft contamination problems and provided insight into their solution; e.g., flight plan modifications, plume or outgassing shields and/or contamination covers.

  5. A computer program for cyclic plasticity and structural fatigue analysis

    NASA Technical Reports Server (NTRS)

    Kalev, I.

    1980-01-01

    A computerized tool for the analysis of time independent cyclic plasticity structural response, life to crack initiation prediction, and crack growth rate prediction for metallic materials is described. Three analytical items are combined: the finite element method with its associated numerical techniques for idealization of the structural component, cyclic plasticity models for idealization of the material behavior, and damage accumulation criteria for the fatigue failure.

  6. An Experimental and Theoretical Study on Cavitating Propellers.

    DTIC Science & Technology

    1982-10-01

    Keywords (report documentation page): cascade flow theory; supercavitating flow; performance prediction method; partially cavitating flow. ...the present work was to develop an analytical tool for predicting the off-design performance of supercavitating propellers over a wide range of operating conditions. Due to the complex nature of the flow phenomena, a lifting line theory simply combined with the two-dimensional supercavitating

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simonetto, Andrea; Dall'Anese, Emiliano

    This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.

  8. A Novel Calibration-Minimum Method for Prediction of Mole Fraction in Non-Ideal Mixture.

    PubMed

    Shibayama, Shojiro; Kaneko, Hiromasa; Funatsu, Kimito

    2017-04-01

    This article proposes a novel concentration prediction model that requires little training data and is useful for rapid process understanding. Process analytical technology is currently popular, especially in the pharmaceutical industry, for enhancement of process understanding and process control. A calibration-free method, iterative optimization technology (IOT), was proposed to predict pure component concentrations, because calibration methods such as partial least squares require a large number of training samples, leading to high costs. However, IOT cannot be applied to concentration prediction in non-ideal mixtures because its basic equation is derived from the Beer-Lambert law, which cannot be applied to non-ideal mixtures. We propose a novel method that realizes prediction of pure component concentrations in mixtures from a small number of training samples, assuming that spectral changes arising from molecular interactions can be expressed as a function of concentration. The proposed method is named IOT with virtual molecular interaction spectra (IOT-VIS) because it takes the spectral change into account as a virtual spectrum x_nonlin,i. It was confirmed through two case studies that the predictive accuracy of IOT-VIS was the highest among existing IOT methods.
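The Beer-Lambert starting point of IOT, a mixture spectrum as a concentration-weighted sum of pure-component spectra, reduces for two components to a 2x2 least-squares solve; the four-channel spectra below are invented for illustration:

```python
def predict_fractions(mix, pure1, pure2):
    """Least-squares solve of mix ~= c1*pure1 + c2*pure2 (Beer-Lambert mixing),
    via the 2x2 normal equations and Cramer's rule."""
    a11 = sum(p * p for p in pure1)
    a12 = sum(p * q for p, q in zip(pure1, pure2))
    a22 = sum(q * q for q in pure2)
    b1 = sum(p * m for p, m in zip(pure1, mix))
    b2 = sum(q * m for q, m in zip(pure2, mix))
    det = a11 * a22 - a12 * a12
    c1 = (a22 * b1 - a12 * b2) / det
    c2 = (a11 * b2 - a12 * b1) / det
    return c1, c2

# Invented pure-component spectra and an ideal 30/70 mixture of them.
s1 = [1.0, 0.2, 0.0, 0.4]
s2 = [0.1, 0.8, 0.6, 0.0]
mix = [0.3 * a + 0.7 * b for a, b in zip(s1, s2)]
c1, c2 = predict_fractions(mix, s1, s2)
```

For an ideal mixture this recovers the fractions exactly; the point of IOT-VIS is precisely that a non-ideal mixture breaks this linearity, which the virtual spectrum term is introduced to absorb.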

  9. Water hammer prediction and control: the Green's function method

    NASA Astrophysics Data System (ADS)

    Xuan, Li-Jun; Mao, Feng; Wu, Jie-Zhi

    2012-04-01

    Using the Green's function method, we show that water hammer (WH) can be analytically predicted for both laminar and turbulent flows (for the latter, with an eddy viscosity depending solely on the space coordinates), and thus its hazardous effect can be rationally controlled and minimized. To this end, we generalize a laminar water hammer equation of Wang et al. (J. Hydrodynamics, B2, 51, 1995) to include an arbitrary initial condition and variable viscosity, and obtain its solution by the Green's function method. The characteristic WH behaviors predicted by the solutions are in excellent agreement with both direct numerical simulation of the original governing equations and, by adjusting the eddy viscosity coefficient, experimentally measured turbulent flow data. An optimal WH control principle is thereby constructed and demonstrated.

  10. Communication — Modeling polymer-electrolyte fuel-cell agglomerates with double-trap kinetics

    DOE PAGES

    Pant, Lalit M.; Weber, Adam Z.

    2017-04-14

    A new semi-analytical agglomerate model is presented for polymer-electrolyte fuel-cell cathodes. The model uses double-trap kinetics for the oxygen-reduction reaction, which can capture the observed potential-dependent coverage and Tafel-slope changes. An iterative semi-analytical approach is used to obtain reaction rate constants from the double-trap kinetics, oxygen concentration at the agglomerate surface, and overall agglomerate reaction rate. The analytical method can predict reaction rates within 2% of the numerically simulated values for a wide range of oxygen concentrations, overpotentials, and agglomerate sizes, while saving simulation time compared to a fully numerical approach.
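The iterative idea, alternating between evaluating the surface reaction rate and updating the surface concentration from a transport balance, can be sketched with a first-order placeholder reaction in place of the double-trap ORR kinetics; all rate constants here are made up:

```python
def agglomerate_rate(c_bulk, k_rxn, k_film, tol=1e-10, max_iter=200):
    """Fixed-point iteration: the surface concentration c_s must balance a
    (hypothetical) first-order surface reaction against film mass transfer."""
    c_s = c_bulk
    for _ in range(max_iter):
        rate = k_rxn * c_s               # surface reaction rate at current guess
        c_new = c_bulk - rate / k_film   # film transport balance: flux = rate
        if abs(c_new - c_s) < tol:
            break
        c_s = c_new
    return rate, c_s

# Invented constants: bulk concentration 1.0, k_rxn = 0.5, k_film = 2.0.
rate, c_s = agglomerate_rate(1.0, 0.5, 2.0)
```

For this linear placeholder the fixed point has the closed form c_s = c_bulk / (1 + k_rxn/k_film) = 0.8, so the iteration can be checked exactly; with the double-trap rate expression the update is nonlinear and the same loop converges to the coupled solution.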

  11. Swarm intelligence metaheuristics for enhanced data analysis and optimization.

    PubMed

    Hanrahan, Grady

    2011-09-21

    The swarm intelligence (SI) computing paradigm has proven itself as a comprehensive means of solving complicated analytical chemistry problems by emulating biologically-inspired processes. As global optimum search metaheuristics, associated algorithms have been widely used in training neural networks, function optimization, prediction and classification, and in a variety of process-based analytical applications. The goal of this review is to provide readers with critical insight into the utility of swarm intelligence tools as methods for solving complex chemical problems. Consideration will be given to algorithm development, ease of implementation and model performance, detailing subsequent influences on a number of application areas in the analytical, bioanalytical and detection sciences.
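As a concrete instance of the swarm paradigm, here is a bare-bones particle swarm optimizer for a one-dimensional objective; the inertia and acceleration coefficients (0.7, 1.5, 1.5) are common textbook defaults, not values from the review:

```python
import random

def pso(f, bounds, n_particles=20, iters=100, seed=1):
    """Minimal particle swarm optimization minimizing f on a 1-D interval."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]  # positions
    vs = [0.0] * n_particles                                # velocities
    pbest = xs[:]                                           # personal bests
    gbest = min(xs, key=f)                                  # global best
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            # inertia + cognitive pull toward pbest + social pull toward gbest
            vs[i] = 0.7 * vs[i] + 1.5 * r1 * (pbest[i] - xs[i]) + 1.5 * r2 * (gbest - xs[i])
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
                if f(xs[i]) < f(gbest):
                    gbest = xs[i]
    return gbest

best = pso(lambda x: (x - 2.0) ** 2, (-10.0, 10.0))
```

The same loop structure generalizes to vector-valued positions, which is how PSO is used for the neural-network training and function-optimization applications the review surveys.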

  12. Cavitation in liquid cryogens. 4: Combined correlations for venturi, hydrofoil, ogives, and pumps

    NASA Technical Reports Server (NTRS)

    Hord, J.

    1974-01-01

    The results of a series of experimental and analytical cavitation studies are presented. Cross-correlation of the developed cavity data is performed for a venturi, a hydrofoil, and three scaled ogives. The new correlating parameter, MTWO, improves data correlation for these stationary bodies and for pumping equipment. Existing techniques for predicting the cavitating performance of pumping machinery were extended to include variations in flow coefficient, cavitation parameter, and equipment geometry. The new predictive formulations hold promise as a design tool and universal method for correlating pumping machinery performance. Application of these predictive formulas requires prescribed cavitation test data or an independent method of estimating the cavitation parameter for each pump. The latter would permit prediction of performance without testing; potential methods for evaluating the cavitation parameter prior to testing are suggested.

  13. Predictive modeling of complications.

    PubMed

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  14. A standardized method for the calibration of thermodynamic data for the prediction of gas chromatographic retention times.

    PubMed

    McGinitie, Teague M; Ebrahimi-Najafabadi, Heshmatollah; Harynuk, James J

    2014-02-21

    A new method for calibrating thermodynamic data to be used in the prediction of analyte retention times is presented. The method allows thermodynamic data collected on one column to be used in making predictions across columns of the same stationary phase but with varying geometries. This calibration is essential, as slight variances in column inner diameter and stationary phase film thickness between columns, or as a column ages, will adversely affect the accuracy of predictions. The calibration technique uses a Grob standard mixture along with a Nelder-Mead simplex algorithm and a previously developed model of GC retention times, based on a three-parameter thermodynamic model, to estimate both the inner diameter and the stationary phase film thickness. The calibration method is highly successful: the predicted retention times for a set of alkanes, ketones, and alcohols have an average error of 1.6 s across three columns. Copyright © 2014 Elsevier B.V. All rights reserved.
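The calibration idea, adjusting the assumed column geometry until modeled retention times match a standard mixture, can be sketched with a one-parameter toy retention model and a grid search standing in for the Nelder-Mead simplex; the model, diameters, and retention data below are all invented:

```python
def predicted_rt(diameter, k):
    """Toy retention model: hold-up time scales with column cross-section,
    retention time = hold-up * (1 + retention factor). Illustrative only."""
    t_m = 60.0 * (diameter / 0.25) ** 2   # hypothetical hold-up time, s
    return t_m * (1.0 + k)

# Invented standard-mixture data: (retention factor k, observed rt in s),
# generated from a "true" inner diameter of 0.256 mm.
measured = [(0.8, 113.25), (1.5, 157.29), (2.3, 207.62)]

def sse(diameter):
    """Sum of squared retention-time errors for a candidate diameter."""
    return sum((predicted_rt(diameter, k) - t) ** 2 for k, t in measured)

# One-parameter grid search standing in for the Nelder-Mead simplex.
best_d = min((0.20 + i * 0.0001 for i in range(1000)), key=sse)
```

The actual method fits two geometric parameters (inner diameter and film thickness) against a thermodynamic retention model, which is why a simplex optimizer is used rather than an exhaustive search.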

  15. Simultaneous determination of glucose, triglycerides, urea, cholesterol, albumin and total protein in human plasma by Fourier transform infrared spectroscopy: direct clinical biochemistry without reagents.

    PubMed

    Jessen, Torben E; Höskuldsson, Agnar T; Bjerrum, Poul J; Verder, Henrik; Sørensen, Lars; Bratholm, Palle S; Christensen, Bo; Jensen, Lene S; Jensen, Maria A B

    2014-09-01

    Direct measurement of chemical constituents in complex biologic matrices without the use of analyte-specific reagents could be a step toward the simplification of clinical biochemistry. Problems related to reagents, such as production errors, improper handling, and lot-to-lot variations, would be eliminated, as would errors occurring during assay execution. We describe and validate a reagent-free method for direct measurement of six analytes in human plasma based on Fourier-transform infrared spectroscopy (FTIR). Blood plasma is analyzed without any sample preparation. The FTIR spectrum of the raw plasma is recorded in a sampling cuvette specially designed for measurement of aqueous solutions. For each analyte, a mathematical calibration process is performed by a stepwise selection of wavelengths giving the optimal least-squares correlation between the measured FTIR signal and the analyte concentration measured by conventional clinical reference methods. The developed calibration algorithms are subsequently evaluated for their capability to predict the concentration of the six analytes in blinded patient samples. The correlations between the six FTIR methods and the corresponding reference methods were 0.87

  16. Application of person-centered analytic methodology in longitudinal research: exemplars from the Women's Health Initiative Clinical Trial data.

    PubMed

    Zaslavsky, Oleg; Cochrane, Barbara B; Herting, Jerald R; Thompson, Hilaire J; Woods, Nancy F; Lacroix, Andrea

    2014-02-01

    Despite the variety of available analytic methods, longitudinal research in nursing has been dominated by use of a variable-centered analytic approach. The purpose of this article is to present the utility of person-centered methodology using a large cohort of American women 65 and older enrolled in the Women's Health Initiative Clinical Trial (N = 19,891). Four distinct trajectories of energy/fatigue scores were identified. Levels of fatigue were closely linked to age, socio-demographic factors, comorbidities, health behaviors, and poor sleep quality. These findings were consistent regardless of the methodological framework. Finally, we demonstrated that energy/fatigue levels predicted future hospitalization in non-disabled elderly. Person-centered methods provide unique opportunities to explore and statistically model the effects of longitudinal heterogeneity within a population. © 2013 Wiley Periodicals, Inc.

  17. An Analytical Approach to Obtaining JWL Parameters from Cylinder Tests

    NASA Astrophysics Data System (ADS)

    Sutton, Ben; Ferguson, James

    2015-06-01

    An analytical method for determining parameters for the JWL equation of state (EoS) from cylinder test data is described. This method is applied to four datasets obtained from two 20.3 mm diameter EDC37 cylinder tests. The calculated parameters and pressure-volume (p-V) curves agree with those produced by hydro-code modelling. The calculated Chapman-Jouguet (CJ) pressure is 38.6 GPa, compared to the model value of 38.3 GPa; the CJ relative volume is 0.729 for both. The analytical pressure-volume curves produced agree with the one used in the model out to the commonly reported expansion of 7 relative volumes, as do the predicted energies generated by integrating under the p-V curve. The calculated and model energies are 8.64 GPa and 8.76 GPa respectively.
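
    The JWL principal-isentrope form and the energy integration under the p-V curve can be sketched as follows; the A, B, C, R1, R2, and omega values are illustrative placeholders, not the EDC37 parameters calculated in the paper:

```python
import numpy as np
from scipy.integrate import quad

# Illustrative JWL parameters (hypothetical placeholders, not the EDC37
# fit).  Pressures in GPa; V is volume relative to the unreacted explosive.
A, B, C = 600.0, 10.0, 1.0
R1, R2, omega = 4.5, 1.2, 0.3

def jwl_pressure(V):
    """Principal-isentrope form of the JWL equation of state (GPa)."""
    return A * np.exp(-R1 * V) + B * np.exp(-R2 * V) + C / V ** (1.0 + omega)

# Energy released between the CJ state and 7 relative volumes: the area
# under the p-V curve, in GPa (energy per unit initial volume).
V_cj = 0.729
energy, _ = quad(jwl_pressure, V_cj, 7.0)
print(round(jwl_pressure(V_cj), 2), "GPa at CJ;", round(energy, 2), "GPa released")
```

    Fitting the abstract's reported CJ pressure and energy would amount to adjusting these coefficients until the pressure at V_cj and the integral match the cylinder-test values.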

  18. Validation of finite element and boundary element methods for predicting structural vibration and radiated noise

    NASA Technical Reports Server (NTRS)

    Seybert, A. F.; Wu, X. F.; Oswald, Fred B.

    1992-01-01

    Analytical and experimental validation of methods to predict structural vibration and radiated noise are presented. A rectangular box excited by a mechanical shaker was used as a vibrating structure. Combined finite element method (FEM) and boundary element method (BEM) models of the apparatus were used to predict the noise radiated from the box. The FEM was used to predict the vibration, and the surface vibration was used as input to the BEM to predict the sound intensity and sound power. Vibration predicted by the FEM model was validated by experimental modal analysis. Noise predicted by the BEM was validated by sound intensity measurements. Three types of results are presented for the total radiated sound power: (1) sound power predicted by the BEM modeling using vibration data measured on the surface of the box; (2) sound power predicted by the FEM/BEM model; and (3) sound power measured by a sound intensity scan. The sound power predicted from the BEM model using measured vibration data yields an excellent prediction of radiated noise. The sound power predicted by the combined FEM/BEM model also gives a good prediction of radiated noise except for a shift of the natural frequencies that are due to limitations in the FEM model.
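
    The intensity-scan estimate of total radiated sound power amounts to an area-weighted sum of the normal intensity over the enclosing measurement surface. A minimal sketch with hypothetical patch data, not the measurements from the box experiment:

```python
import numpy as np

# Hypothetical intensity-scan data: normal sound intensity (W/m^2)
# measured on equal-area patches of a surface enclosing the structure.
normal_intensity = np.array([2.0e-6, 3.5e-6, 1.2e-6, 4.0e-6, 2.8e-6])
patch_areas = np.array([0.05, 0.05, 0.05, 0.05, 0.05])   # m^2

# Total radiated sound power and its level re 1 pW.
sound_power = np.sum(normal_intensity * patch_areas)      # watts
level_dB = 10.0 * np.log10(sound_power / 1.0e-12)
print(f"{sound_power:.3e} W, {level_dB:.1f} dB")
```

    The BEM prediction is compared against exactly this kind of scanned-intensity total in the validation.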

  19. Application of the Spectral Element Method to Interior Noise Problems

    NASA Technical Reports Server (NTRS)

    Doyle, James F.

    1998-01-01

    The primary effort of this research project was focused on the development of analytical methods for the accurate prediction of structural acoustic noise and response. Of particular interest was the development of curved frame and shell spectral elements for the efficient computation of structural response, and of schemes to match this response to the surrounding fluid.

  20. A finite-element method for large-amplitude, two-dimensional panel flutter at hypersonic speeds

    NASA Technical Reports Server (NTRS)

    Mei, Chuh; Gray, Carl E.

    1989-01-01

    The nonlinear flutter behavior of a two-dimensional panel in hypersonic flow is investigated analytically. An FEM formulation based on unsteady third-order piston theory (Ashley and Zartarian, 1956; McIntosh, 1970) and taking nonlinear structural and aerodynamic phenomena into account is derived; the solution procedure is outlined; and typical results are presented in extensive tables and graphs. A 12-element finite-element solution, obtained using an alternative method for linearizing the assumed limit-cycle time function, is shown to give predictions in good agreement with classical analytical results for large-amplitude vibration in a vacuum and for large-amplitude panel flutter using linear aerodynamics.

  1. Nonparametric method for failures diagnosis in the actuating subsystem of aircraft control system

    NASA Astrophysics Data System (ADS)

    Terentev, M. N.; Karpenko, S. S.; Zybin, E. Yu; Kosyanchuk, V. V.

    2018-02-01

    In this paper we design a nonparametric method for failure diagnosis in the aircraft control system that uses only measurements of the control signals and the aircraft states. It requires no a priori information about the aircraft model parameters, training, or statistical calculations, and is based on an analytical nonparametric one-step-ahead state prediction approach. This makes it possible to predict the behavior of unidentified and failed dynamic systems, to weaken the requirements on control signals, and to reduce the diagnostic time and problem complexity.
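
    The one-step-ahead idea can be sketched on a toy scalar system: fit a predictor from recent measurements alone, with no a priori model parameters, and flag a failure when the prediction residual jumps. Everything below (dynamics, failure time, threshold) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy scalar "aircraft" channel: state x driven by control u.  The
# actuator gain drops at step 120, emulating a failure.
n = 200
x = np.zeros(n)
u = np.sin(0.1 * np.arange(n))
for k in range(n - 1):
    gain = 0.5 if k < 120 else 0.1
    x[k + 1] = 0.9 * x[k] + gain * u[k] + rng.normal(scale=0.01)

window = 30
residuals = np.zeros(n)
for k in range(window, n - 1):
    # One-step-ahead predictor fitted from the trailing window of
    # measurements only -- no a priori model parameters are used.
    Phi = np.column_stack([x[k - window:k], u[k - window:k]])
    y = x[k - window + 1:k + 1]
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    residuals[k + 1] = abs(x[k + 1] - np.array([x[k], u[k]]) @ theta)

# Flag a failure when the residual far exceeds its pre-failure scale.
threshold = 8 * np.median(residuals[window + 1:120])
alarm = int(np.argmax(residuals > threshold))
print("failure flagged at step", alarm)
```
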

  2. Predicting adverse hemodynamic events in critically ill patients.

    PubMed

    Yoon, Joo H; Pinsky, Michael R

    2018-06-01

    The art of predicting future hemodynamic instability in the critically ill has rapidly become a science with the advent of advanced analytical processes based on computer-driven machine learning techniques. How these methods have progressed beyond severity scoring systems to interface with decision support is summarized. Data mining of large multidimensional clinical time-series databases using a variety of machine learning tools has led to our ability to identify alarm artifacts and filter them from bedside alarms, display real-time risk stratification at the bedside to aid in clinical decision-making, and predict the development of cardiorespiratory insufficiency hours before these events occur. This fast-evolving field is primarily limited by the linkage of high-quality granular data to physiologic rationale across heterogeneous clinical care domains. Using advanced analytic tools to glean knowledge from clinical data streams is rapidly becoming a reality whose potential clinical impact is great.

  3. Trends & Controversies: Sociocultural Predictive Analytics and Terrorism Deterrence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanfilippo, Antonio P.; McGrath, Liam R.

    2011-08-12

    The use of predictive analytics to model terrorist rhetoric is highly instrumental in developing a strategy to deter terrorism. Traditional (e.g., Cold War) deterrence methods are ineffective with terrorist groups such as al Qaida. Terrorists typically regard the prospect of death or loss of property as acceptable consequences of their struggle. Deterrence by threat of punishment is therefore fruitless. On the other hand, isolating terrorists from the community that may sympathize with their cause can have a decisive deterring outcome. Without the moral backing of a supportive audience, terrorism cannot be successfully framed as a justifiable political strategy and recruiting is curtailed. Ultimately, terrorism deterrence is more effectively enforced by exerting influence to neutralize the communicative reach of terrorists.

  4. Correlation of analytical and experimental hot structure vibration results

    NASA Technical Reports Server (NTRS)

    Kehoe, Michael W.; Deaton, Vivian C.

    1993-01-01

    High surface temperatures and temperature gradients can affect the vibratory characteristics and stability of aircraft structures. Aircraft designers are relying more on finite-element model analysis methods to ensure sufficient vehicle structural dynamic stability throughout the desired flight envelope. Analysis codes that predict these thermal effects must be correlated and verified with experimental data. Experimental modal data are presented for aluminum, titanium, and fiberglass plates heated under uniform, nonuniform, and transient heating conditions. The data show the effect of heat on each plate's modal characteristics, a comparison of predicted and measured plate vibration frequencies, the measured modal damping, and the effect of modeling material property changes and thermal stresses on the accuracy of the analytical results under nonuniform and transient heating conditions.

  5. Comparison of Analysis with Test for Static Loading of Two Hypersonic Inflatable Aerodynamic Decelerator Concepts

    NASA Technical Reports Server (NTRS)

    Lyle, Karen H.

    2015-01-01

    Acceptance of new spacecraft structural architectures and concepts requires validated design methods to minimize the expense involved with technology demonstration via flight-testing. Hypersonic Inflatable Aerodynamic Decelerator (HIAD) architectures are attractive for spacecraft deceleration because they are lightweight, store compactly, and utilize the atmosphere to decelerate a spacecraft during entry. However, designers are hesitant to include these inflatable approaches for large payloads or spacecraft because of the lack of flight validation. This publication summarizes comparisons of analytical results with test data for two concepts subjected to static loading representative of entry conditions. The level of agreement and the ability to predict the load distribution are considered sufficient to enable analytical predictions to be used in the design process.

  6. The liquid fuel jet in subsonic crossflow

    NASA Technical Reports Server (NTRS)

    Nguyen, T. T.; Karagozian, A. R.

    1990-01-01

    An analytical/numerical model is described which predicts the behavior of nonreacting and reacting liquid jets injected transversely into subsonic cross flow. The compressible flowfield about the elliptical jet cross section is solved at various locations along the jet trajectory by analytical means for free-stream local Mach number perpendicular to jet cross section smaller than 0.3 and by numerical means for free-stream local Mach number perpendicular to jet cross section in the range 0.3-1.0. External and internal boundary layers along the jet cross section are solved by integral and numerical methods, and the mass losses due to boundary layer shedding, evaporation, and combustion are calculated and incorporated into the trajectory calculation. Comparison of predicted trajectories is made with limited experimental observations.

  7. The Promise and Peril of Predictive Analytics in Higher Education: A Landscape Analysis

    ERIC Educational Resources Information Center

    Ekowo, Manuela; Palmer, Iris

    2016-01-01

    Predictive analytics in higher education is a hot-button topic among educators and administrators as institutions strive to better serve students by becoming more data-informed. In this paper, the authors describe how predictive analytics are used in higher education to identify students who need extra support, steer students in courses they will…

  8. Net analyte signal-based simultaneous determination of ethanol and water by quartz crystal nanobalance sensor.

    PubMed

    Mirmohseni, A; Abdollahi, H; Rostamizadeh, K

    2007-02-28

    A net analyte signal (NAS)-based method called HLA/GO was applied for the selective determination of a binary mixture of ethanol and water with a quartz crystal nanobalance (QCN) sensor. A full factorial design was applied for the formation of calibration and prediction sets in the concentration ranges 5.5-22.2 microg mL(-1) for ethanol and 7.01-28.07 microg mL(-1) for water. An optimal time range was selected by a procedure based on the calculation of the net analyte signal regression plot in each considered time window for each test sample. A moving-window strategy was used to search for the region with maximum linearity of the NAS regression plot (minimum error indicator) and minimum PRESS value. On the basis of the obtained results, the differences in the adsorption profiles in the time range between 1 and 600 s were used to determine mixtures of both compounds by the HLA/GO method. Calculation of the net analyte signal using the HLA/GO method allows determination of several figures of merit, such as selectivity, sensitivity, analytical sensitivity, and limit of detection, for each component. To check the ability of the proposed method to select linear regions of the adsorption profile, a test for detecting nonlinear regions of the adsorption profile data in the presence of methanol was also described. The results showed that the method was successfully applied to the determination of ethanol and water.
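
    The net analyte signal underlying HLA/GO is the part of an analyte's response orthogonal to the interferent space. A minimal sketch with made-up adsorption profiles, not the QCN data:

```python
import numpy as np

# Hypothetical adsorption profiles over the 1-600 s window (the paper's
# QCN data are not reproduced).
t = np.linspace(0.0, 600.0, 300)
ethanol = np.exp(-t / 200.0)
water = np.exp(-t / 50.0)

# Net analyte signal for ethanol: project its profile onto the orthogonal
# complement of the interferent (water) space.
W = water[:, None]
P = np.eye(len(t)) - W @ np.linalg.pinv(W)
nas = P @ ethanol

# Figures of merit: sensitivity is the NAS norm; selectivity compares it
# with the full signal norm (1.0 would mean no overlap with water).
sensitivity = np.linalg.norm(nas)
selectivity = sensitivity / np.linalg.norm(ethanol)
print(round(selectivity, 3))
```

    Only the NAS part of the response carries analyte-specific information, which is why the method searches for time windows where its regression plot is most linear.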

  9. Analytic Guided-Search Model of Human Performance Accuracy in Target- Localization Search Tasks

    NASA Technical Reports Server (NTRS)

    Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.

    2000-01-01

    Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
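
    The accuracy-based formulation can be illustrated with the plain signal-detection integral for M-location target localization, checked against the Monte-Carlo simulation it replaces. This is the generic SDT expression, not the paper's extended Guided Search equations:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def p_correct(d_prime, M):
    """Probability of correctly localizing the target among M locations:
    the target response must exceed all M - 1 distractor responses."""
    integrand = lambda t: norm.pdf(t - d_prime) * norm.cdf(t) ** (M - 1)
    value, _ = quad(integrand, -10.0, 10.0)
    return value

# Monte-Carlo check of the analytic integral.
rng = np.random.default_rng(0)
M, d_prime = 8, 2.0
responses = rng.normal(size=(200_000, M))
responses[:, 0] += d_prime                  # target at location 0
mc = np.mean(np.argmax(responses, axis=1) == 0)

analytic = p_correct(d_prime, M)
print(round(analytic, 3), round(mc, 3))
```

    The analytic form evaluates in milliseconds with no sampling noise, which is the practical advantage the abstract claims over Monte-Carlo fitting.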

  10. A Dynamic Calibration Method for Experimental and Analytical Hub Load Comparison

    NASA Technical Reports Server (NTRS)

    Kreshock, Andrew R.; Thornburgh, Robert P.; Wilbur, Matthew L.

    2017-01-01

    This paper presents the results from an ongoing effort to produce improved correlation between analytical hub force and moment prediction and those measured during wind-tunnel testing on the Aeroelastic Rotor Experimental System (ARES), a conventional rotor testbed commonly used at the Langley Transonic Dynamics Tunnel (TDT). A frequency-dependent transformation between loads at the rotor hub and outputs of the testbed balance is produced from frequency response functions measured during vibration testing of the system. The resulting transformation is used as a dynamic calibration of the balance to transform hub loads predicted by comprehensive analysis into predicted balance outputs. In addition to detailing the transformation process, this paper also presents a set of wind-tunnel test cases, with comparisons between the measured balance outputs and transformed predictions from the comprehensive analysis code CAMRAD II. The modal response of the testbed is discussed and compared to a detailed finite-element model. Results reveal that the modal response of the testbed exhibits a number of characteristics that make accurate dynamic balance predictions challenging, even with the use of the balance transformation.
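
    The frequency-dependent transformation amounts to a matrix-vector product per frequency bin: a measured FRF matrix maps predicted hub loads to predicted balance outputs. The sizes and values below are hypothetical stand-ins for the ARES calibration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: three rotor harmonics, six hub load components,
# six balance outputs.
freqs_hz = np.array([4.3, 8.6, 12.9])
H = rng.normal(size=(3, 6, 6)) + 1j * rng.normal(size=(3, 6, 6))    # FRFs
hub_loads = rng.normal(size=(3, 6)) + 1j * rng.normal(size=(3, 6))  # phasors

# Predicted balance outputs: one matrix-vector product per frequency bin.
balance_pred = np.einsum("fij,fj->fi", H, hub_loads)
print(balance_pred.shape)
```

    In the actual procedure H would come from the shake-test FRFs and hub_loads from the comprehensive analysis, with the complex phasors preserving relative phase between components.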

  11. External evaluation of population pharmacokinetic models of vancomycin in neonates: the transferability of published models to different clinical settings

    PubMed Central

    Zhao, Wei; Kaguelidou, Florentia; Biran, Valérie; Zhang, Daolun; Allegaert, Karel; Capparelli, Edmund V; Holford, Nick; Kimura, Toshimi; Lo, Yoke-Lin; Peris, José-Esteban; Thomson, Alison; Anker, John N; Fakhoury, May; Jacqz-Aigrain, Evelyne

    2013-01-01

    Aims Vancomycin is one of the most evaluated antibiotics in neonates using modeling and simulation approaches. However, no clear consensus on optimal dosing has been achieved. The objective of the present study was to perform an external evaluation of published models, in order to test their predictive performance in an independent dataset and to identify the possible study-related factors influencing the transferability of pharmacokinetic models to different clinical settings. Method Published neonatal vancomycin pharmacokinetic models were screened from the literature. The predictive performance of six models was evaluated using an independent dataset (112 concentrations from 78 neonates). The evaluation procedures used simulation-based diagnostics [visual predictive check (VPC) and normalized prediction distribution errors (NPDE)]. Results Differences in the predictive performance of the models for vancomycin pharmacokinetics in neonates were found. The mean NPDE for the six evaluated models was 1.35, −0.22, −0.36, 0.24, 0.66 and 0.48, respectively. These differences were explained, at least partly, by taking into account the method used to measure serum creatinine concentrations. The adult conversion factor of 1.3 (enzymatic to Jaffé) was tested, with an improvement in the VPC and NPDE, but it still needs to be evaluated and validated in neonates. Differences were also identified between analytical methods for vancomycin. Conclusion The importance of the analytical techniques for serum creatinine and vancomycin as predictors of vancomycin concentrations in neonates has been confirmed. Dosage individualization of vancomycin in neonates should consider not only patients' characteristics and clinical conditions, but also the methods used to measure serum creatinine and vancomycin. PMID:23148919
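
    The NPDE diagnostic can be sketched in simplified form: each observation's percentile within the model's own simulations is transformed to a normal score. The full NPDE additionally decorrelates repeated observations within a subject, which is omitted here, and the data below are synthetic:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Synthetic setting: if the model is adequate, each observation looks like
# a draw from its own simulated predictive distribution.
n_obs, n_sim = 112, 1000
simulated = rng.lognormal(mean=2.0, sigma=0.4, size=(n_sim, n_obs))
observed = rng.lognormal(mean=2.0, sigma=0.4, size=n_obs)

# Prediction discrepancy pd_i: fraction of simulations below the
# observation.  Its normal-score transform should look standard normal,
# with mean near 0, when the model fits.
pd = (simulated < observed).mean(axis=0)
pd = np.clip(pd, 1.0 / n_sim, 1.0 - 1.0 / n_sim)   # avoid +/- infinity
npd = norm.ppf(pd)
print(round(npd.mean(), 2), round(npd.std(), 2))
```

    A mean far from 0, as for the model with mean NPDE of 1.35 in the abstract, signals systematic misprediction in the external dataset.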

  12. Modeling of classical swirl injector dynamics

    NASA Astrophysics Data System (ADS)

    Ismailov, Maksud M.

    The knowledge of the dynamics of a swirl injector is crucial in designing a stable liquid rocket engine. Since the swirl injector is a complex fluid flow device in itself, not much work has been conducted to describe its dynamics either analytically or by using computational fluid dynamics techniques. Even experimental observation has been limited to date. Thus far, there exists an analytical linear theory by Bazarov [1], which is based on long-wave disturbances traveling on the free surface of the injector core. This theory does not account for variation of the nozzle reflection coefficient as a function of disturbance frequency, and yields a response function that is strongly dependent on the so-called artificial viscosity factor. This causes uncertainty in designing an injector for the given operational combustion instability frequencies in the rocket engine. In this work, the author has studied alternative techniques to describe the swirl injector response, both analytically and computationally. In the analytical part, by using linear small-perturbation analysis, the entire phenomenon of unsteady flow in swirl injectors is dissected into fundamental components: disturbance wave refraction and reflection, and vortex chamber resonance. This reveals the nature of the flow instability and the driving factors leading to maximum injector response. In the computational part, by employing the nonlinear boundary element method (BEM), the author sets the boundary conditions such that they closely simulate those in the analytical part. The simulation results then show distinct peak responses at frequencies coincident with the resonant frequencies predicted in the analytical part. Moreover, a cold flow test of the injector related to this study also shows a clear growth of instability with its maximum amplitude at the first fundamental frequency predicted both by the analytical methods and by BEM.
It shall be noted however that Bazarov's theory does not predict the resonant peaks. Overall this methodology provides clearer understanding of the injector dynamics compared to Bazarov's. Even though the exact value of response is not possible to obtain at this stage of theoretical, computational, and experimental investigation, this methodology sets the starting point from where the theoretical description of reflection/refraction, resonance, and their interaction between each other may be refined to higher order to obtain its more precise value.

  13. Improved partition equilibrium model for predicting analyte response in electrospray ionization mass spectrometry.

    PubMed

    Du, Lihong; White, Robert L

    2009-02-01

    A previously proposed partition equilibrium model for quantitative prediction of analyte response in electrospray ionization mass spectrometry is modified to yield an improved linear relationship. Analyte mass spectrometer response is modeled by a competition mechanism between analyte and background electrolytes that is based on partition equilibrium considerations. The correlation between analyte response and solution composition is described by the linear model over a wide concentration range and the improved model is shown to be valid for a wide range of experimental conditions. The behavior of an analyte in a salt solution, which could not be explained by the original model, is correctly predicted. The ion suppression effects of 16:0 lysophosphatidylcholine (LPC) on analyte signals are attributed to a combination of competition for excess charge and reduction of total charge due to surface tension effects. In contrast to the complicated mathematical forms that comprise the original model, the simplified model described here can more easily be employed to predict analyte mass spectrometer responses for solutions containing multiple components. Copyright (c) 2008 John Wiley & Sons, Ltd.

  14. Using Empirical Models for Communication Prediction of Spacecraft

    NASA Technical Reports Server (NTRS)

    Quasny, Todd

    2015-01-01

    A viable communication path to a spacecraft is vital for its successful operation. For human spaceflight, a reliable and predictable communication link between the spacecraft and the ground is essential not only for the safety of the vehicle and the success of the mission, but for the safety of the humans on board as well. However, analytical models of these communication links are challenged by unique characteristics of space and the vehicle itself. For example, radio-frequency effects during high-energy solar events, or as a signal travels through a solar array of a spacecraft, can be difficult to model and thus to predict. This presentation covers the use of empirical methods for communication link prediction, using the International Space Station (ISS) and its associated historical data as the verification platform and test bed. These empirical methods can then be incorporated into communication prediction and automation tools for the ISS in order to better understand the quality of the communication path given a myriad of variables, including solar array positions, line of sight to satellites, position of the sun, and other dynamic structures on the outside of the ISS. The image on the left below shows the current analytical model of one of the communication systems on the ISS. The image on the right shows a rudimentary empirical model of the same system based on historical archived data from the ISS.
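
    A rudimentary empirical model of the kind described can be sketched as a historical lookup table: bin archived link-quality data by a driving variable, such as solar array angle, and predict from bin averages. All numbers below are fabricated for illustration, not ISS telemetry:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated history: link margin (dB) vs. solar-array angle, with a
# blockage dip near 180 degrees.
angles = rng.uniform(0.0, 360.0, size=5000)
margin = (10.0 - 6.0 * np.exp(-((angles - 180.0) / 20.0) ** 2)
          + rng.normal(scale=0.5, size=angles.size))

# Empirical model: 10-degree bins, each predicting its historical mean.
bin_edges = np.arange(0.0, 361.0, 10.0)
which = np.digitize(angles, bin_edges) - 1
table = np.array([margin[which == b].mean() for b in range(36)])

def predict_margin(angle_deg):
    """Look up the historical mean margin for this array angle."""
    return table[int(angle_deg // 10) % 36]

print(round(predict_margin(180.0), 1), round(predict_margin(30.0), 1))
```

    The real tool would extend the key to the full set of variables the abstract lists (array positions, line of sight, sun position, external structures), but the lookup principle is the same.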

  15. Analytical and experimental vibration analysis of a faulty gear system

    NASA Astrophysics Data System (ADS)

    Choy, F. K.; Braun, M. J.; Polyshchuk, V.; Zakrajsek, J. J.; Townsend, D. P.; Handschuh, R. F.

    1994-10-01

    A comprehensive analytical procedure was developed for predicting faults in gear transmission systems under normal operating conditions. A gear tooth fault model is developed to simulate the effects of pitting and wear on the vibration signal under normal operating conditions. The model uses changes in the gear mesh stiffness to simulate the effects of gear tooth faults. The overall dynamics of the gear transmission system is evaluated by coupling the dynamics of each individual gear-rotor system through the gear mesh forces generated between each gear-rotor system and the bearing forces generated between the rotor and the gearbox structure. The predicted results were compared with experimental results obtained from a spiral bevel gear fatigue test rig at NASA Lewis Research Center. The Wigner-Ville distribution (WVD) was used to give a comprehensive comparison of the predicted and experimental results. The WVD method applied to the experimental results was also compared to other fault detection techniques to verify the WVD's ability to detect the pitting damage and to determine its relative performance. Overall results show good correlation between the experimental vibration data of the damaged test gear and the predicted vibration from the model with simulated gear tooth pitting damage. Results also verified that the WVD method can successfully detect and locate gear tooth wear and pitting damage.
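
    The discrete Wigner-Ville distribution itself can be sketched directly from its definition; the signal below is a toy amplitude burst standing in for a damaged tooth passing through mesh, not the test-rig data:

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution, straight from the definition:
    FFT over the lag tau of x[t + tau] * conj(x[t - tau]) at each t."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    W = np.zeros((n, n))
    for t in range(n):
        tau_max = min(t, n - 1 - t)
        kernel = np.zeros(n, dtype=complex)
        for tau in range(-tau_max, tau_max + 1):
            kernel[tau % n] = x[t + tau] * np.conj(x[t - tau])
        W[:, t] = np.fft.fft(kernel).real
    return W

# A 100 Hz tone whose amplitude bursts mid-record, mimicking a localized
# gear-tooth fault; the burst shows up at a specific time column.
fs, n = 1000, 256
t = np.arange(n) / fs
sig = np.sin(2 * np.pi * 100.0 * t)
sig[120:136] *= 3.0
W = wigner_ville(sig)
peak_time = int(np.argmax(W.max(axis=0)))
print("strongest time-frequency energy at sample", peak_time)
```

    Because the WVD localizes energy in both time and frequency, a local fault appears at a specific shaft position as well as at the mesh frequency, which is what makes it suitable for locating, not just detecting, tooth damage.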

  17. The critical period hypothesis in second language acquisition: a statistical critique and a reanalysis.

    PubMed

    Vanhove, Jan

    2013-01-01

    In second language acquisition research, the critical period hypothesis (cph) holds that the function between learners' age and their susceptibility to second language input is non-linear. This paper revisits the indistinctness found in the literature with regard to this hypothesis's scope and predictions. Even when its scope is clearly delineated and its predictions are spelt out, however, empirical studies, with few exceptions, use analytical (statistical) tools that are irrelevant with respect to the predictions made. This paper discusses statistical fallacies common in cph research and illustrates an alternative analytical method (piecewise regression) by means of a reanalysis of two datasets from a 2010 paper purporting to have found cross-linguistic evidence in favour of the cph. This reanalysis reveals that the specific age patterns predicted by the cph are not cross-linguistically robust. Applying the principle of parsimony, it is concluded that age patterns in second language acquisition are not governed by a critical period. To conclude, this paper highlights the role of confirmation bias in the scientific enterprise and appeals to second language acquisition researchers to reanalyse their old datasets using the methods discussed in this paper. The data and R commands that were used for the reanalysis are provided as supplementary materials.
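
    Piecewise (breakpoint) regression of the kind used in the reanalysis can be sketched by scanning candidate breakpoints and keeping the least-squares best; the age/attainment data below are simulated, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: attainment declines steeply with age of acquisition up
# to a breakpoint at 15, then flattens (invented for illustration).
age = rng.uniform(1.0, 50.0, size=300)
score = (np.where(age < 15, 100 - 3 * (age - 1), 100 - 3 * 14)
         + rng.normal(scale=3.0, size=age.size))

def fit_breakpoint(x, y, candidates):
    """Broken-stick fit: scan candidate breakpoints, keep the lowest RSS."""
    best_rss, best_bp, best_coef = np.inf, None, None
    for bp in candidates:
        # Basis: intercept, slope before bp, additional slope after bp.
        Xd = np.column_stack([np.ones_like(x), x, np.clip(x - bp, 0, None)])
        coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)
        rss = np.sum((y - Xd @ coef) ** 2)
        if rss < best_rss:
            best_rss, best_bp, best_coef = rss, bp, coef
    return best_bp, best_coef

bp, coef = fit_breakpoint(age, score, np.arange(5.0, 45.0, 0.5))
print("estimated breakpoint:", bp)
```

    The cph debate turns on whether such a discontinuity in slope exists at all, so a model that estimates the breakpoint and the slope change directly tests the prediction, unlike the correlational tools the paper criticizes.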

  18. Analytical and Experimental Vibration Analysis of a Faulty Gear System

    NASA Technical Reports Server (NTRS)

    Choy, F. K.; Braun, M. J.; Polyshchuk, V.; Zakrajsek, J. J.; Townsend, D. P.; Handschuh, R. F.

    1994-01-01

    A comprehensive analytical procedure was developed for predicting faults in gear transmission systems under normal operating conditions. A gear tooth fault model is developed to simulate the effects of pitting and wear on the vibration signal under normal operating conditions. The model uses changes in the gear mesh stiffness to simulate the effects of gear tooth faults. The overall dynamics of the gear transmission system is evaluated by coupling the dynamics of each individual gear-rotor system through the gear mesh forces generated between each gear-rotor system and the bearing forces generated between the rotor and the gearbox structure. The predicted results were compared with experimental results obtained from a spiral bevel gear fatigue test rig at NASA Lewis Research Center. The Wigner-Ville distribution (WVD) was used to give a comprehensive comparison of the predicted and experimental results. The WVD method applied to the experimental results was also compared to other fault detection techniques to verify the WVD's ability to detect the pitting damage and to determine its relative performance. Overall results show good correlation between the experimental vibration data of the damaged test gear and the predicted vibration from the model with simulated gear tooth pitting damage. Results also verified that the WVD method can successfully detect and locate gear tooth wear and pitting damage.

  19. Geospatial Analytics in Retail Site Selection and Sales Prediction.

    PubMed

    Ting, Choo-Yee; Ho, Chiung Ching; Yee, Hui Jia; Matsah, Wan Razali

    2018-03-01

    Studies have shown that certain features from geography, demography, trade area, and environment can play a vital role in retail site selection, largely because of the impact they exert on retail performance. Although the relevant features could be elicited by domain experts, determining the optimal feature set can be an intractable and labor-intensive exercise. The challenges center on (1) how to determine the features that are important to a particular retail business and (2) how to estimate retail sales performance at a new location. The challenges become apparent when the features vary across time. In this light, this study proposed a nonintervening approach employing feature selection algorithms followed by sales prediction through similarity-based methods. The results of the prediction were validated by domain experts. In this study, data sets from different sources were transformed and aggregated before an analytics data set ready for analysis could be obtained. The data sets included data about feature location, population count, property type, education status, and monthly sales from 96 branches of a telecommunication company in Malaysia. The findings suggested that (1) optimal retail performance can only be achieved through fulfillment of specific location features together with the surrounding trade-area characteristics and (2) similarity-based methods can provide a solution to retail sales prediction.
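
    The two-step approach, filter-style feature selection followed by similarity-based prediction, can be sketched on synthetic branch data (the telecommunication dataset is not public; feature names and coefficients are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic branches: invented features and sales coefficients.
feature_names = ["population", "competitors", "income", "noise1", "noise2"]
n = 96
X = rng.normal(size=(n, len(feature_names)))
sales = 50 + 8 * X[:, 0] - 6 * X[:, 1] + 4 * X[:, 2] + rng.normal(scale=2, size=n)

# Step 1: filter-style feature selection by absolute correlation with sales.
scores = np.array([abs(np.corrcoef(X[:, j], sales)[0, 1])
                   for j in range(X.shape[1])])
keep = np.argsort(scores)[::-1][:3]

# Step 2: similarity-based prediction -- mean sales of the k most similar
# existing branches in the selected-feature space.
def predict(site, k=5):
    distances = np.linalg.norm(X[:, keep] - site[keep], axis=1)
    return sales[np.argsort(distances)[:k]].mean()

new_site = np.zeros(len(feature_names))
print(sorted(feature_names[j] for j in keep), round(predict(new_site), 1))
```
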

  20. Application of Dynamic Analysis in Semi-Analytical Finite Element Method

    PubMed Central

    Oeser, Markus

    2017-01-01

    Analyses of dynamic responses are significantly important for the design, maintenance and rehabilitation of asphalt pavement. In order to evaluate the dynamic responses of asphalt pavement under moving loads, a specific computational program, SAFEM, was developed based on a semi-analytical finite element method. This method is three-dimensional and only requires a two-dimensional FE discretization by incorporating Fourier series in the third dimension. In this paper, the algorithm to apply the dynamic analysis to SAFEM was introduced in detail. Asphalt pavement models under moving loads were built in the SAFEM and commercial finite element software ABAQUS to verify the accuracy and efficiency of the SAFEM. The verification shows that the computational accuracy of SAFEM is high enough and its computational time is much shorter than ABAQUS. Moreover, experimental verification was carried out and the prediction derived from SAFEM is consistent with the measurement. Therefore, the SAFEM is feasible to reliably predict the dynamic response of asphalt pavement under moving loads, thus proving beneficial to road administration in assessing the pavement’s state. PMID:28867813

  1. Penning Ionization Electron Spectroscopy in Glow Discharge: A New Dimension for Gas Chromatography Detectors

    NASA Technical Reports Server (NTRS)

    Sheverev, V. A.; Khromov, N. A.; Kojiro, D. R.; Fonda, Mark (Technical Monitor)

    2002-01-01

    Admixtures to helium of 100 ppm and 5 ppm of nitrogen, and 100 ppm and 10 ppm of carbon monoxide, were identified and measured in the helium discharge afterglow using an electrical probe placed into the plasma. For nitrogen and carbon monoxide, the measured electron energy spectra display distinct characteristic peaks (fingerprints). The location of the peaks on the energy scale is determined by the ionization energies of the analyte molecules. Nitrogen and carbon monoxide fingerprints were also observed in a binary mixture of these gases in helium, and the relative concentrations of the analytes were predicted. The technically simple and durable method is considered a good candidate for a number of analytical applications, in particular in gas chromatography (GC) detection and in analytical flight instrumentation.

  2. Managing knowledge business intelligence: A cognitive analytic approach

    NASA Astrophysics Data System (ADS)

    Surbakti, Herison; Ta'a, Azman

    2017-10-01

    The purpose of this paper is to identify and analyze the integration of Knowledge Management (KM) and Business Intelligence (BI) in order to achieve a competitive edge in the context of intellectual capital. The methodology includes a review of the literature, analysis of interview data from managers in the corporate sector, and models established by different authors. BI technologies have a strong association with the process of KM for attaining competitive advantage. KM is strongly influenced by human and social factors, which an efficient system run under BI tactics and technologies can turn into the most valuable assets. The term predictive analytics, however, is rooted in the field of BI. Extracting tacit knowledge so that it can serve as a new source for BI analysis is a major challenge. Advanced analytic methods that address the diversity of the data corpus - structured and unstructured - require a cognitive approach to provide estimative results and to yield actionable descriptive, predictive and prescriptive results. This remains a major challenge today, and this paper elaborates on the details of this initial work.

  3. Analytical investigation of aerodynamic characteristics of highly swept wings with separated flow

    NASA Technical Reports Server (NTRS)

    Reddy, C. S.

    1980-01-01

    Many modern aircraft designed for supersonic speeds employ highly swept-back and low-aspect-ratio wings with sharp or thin edges. Flow separation occurs near the leading and tip edges of such wings at moderate to high angles of attack. Attempts have been made over the years to develop analytical methods for predicting the aerodynamic characteristics of such aircraft. Before any method can really be useful, it must be tested against a standard set of data to determine its capabilities and limitations. The present work undertakes such an investigation. Three methods are considered: the free-vortex-sheet method (Weber et al., 1975), the vortex-lattice method with suction analogy (Lamar and Gloss, 1975), and the quasi-vortex lattice method of Mehrotra (1977). Both flat and cambered wings of different configurations, for which experimental data are available, are studied and comparisons made.

  4. Blade loss transient dynamic analysis of turbomachinery

    NASA Technical Reports Server (NTRS)

    Stallone, M. J.; Gallardo, V.; Storace, A. F.; Bach, L. J.; Black, G.; Gaffney, E. F.

    1982-01-01

    This paper reports on work completed to develop an analytical method for predicting the transient non-linear response of a complete aircraft engine system due to the loss of a fan blade, and to validate the analysis by comparing the results against actual blade loss test data. The solution, which is based on the component element method, accounts for rotor-to-casing rubs, high damping, and the rapid deceleration rates associated with the blade loss event. A comparison of test results and predicted response shows good agreement except for an initial overshoot spike not observed in test. The method is effective for analysis of large systems.

  5. Scaling Student Success with Predictive Analytics: Reflections after Four Years in the Data Trenches

    ERIC Educational Resources Information Center

    Wagner, Ellen; Longanecker, David

    2016-01-01

    The metrics used in the US to track students do not include adults and part-time students. This has led to the development of a massive data initiative--the Predictive Analytics Reporting (PAR) framework--that uses predictive analytics to trace the progress of all types of students in the system. This development has allowed actionable,…

  6. Results of an integrated structure-control law design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1988-01-01

    Next generation air and space vehicle designs are driven by increased performance requirements, demanding a high level of design integration between traditionally separate design disciplines. Interdisciplinary analysis capabilities have been developed, for aeroservoelastic aircraft and large flexible spacecraft control for instance, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchical problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, which predicts the change in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if a parameter were to take some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing it with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.

  7. Estudio comparativo de los metodos analitico-sintetico y global en el aprendizaje de la lectura (Comparative Study of the Analytical-Synthetical (Phonics) and Global (Sight) Reading Methods).

    ERIC Educational Resources Information Center

    Carbonell de Grompone, Maria A.; And Others

    An investigation into the phonics and sight methods of reading instruction being taught in Uruguay schools seeks valid predictions in support of each approach. The study, written in Spanish, examines the progressive reading habits and abilities of 12 first-grade classes. Teachers assigned to teach each method uniformly had equivalent training and…

  8. Improved algorithms and methods for room sound-field prediction by acoustical radiosity in arbitrary polyhedral rooms.

    PubMed

    Nosal, Eva-Marie; Hodgson, Murray; Ashdown, Ian

    2004-08-01

    This paper explores acoustical (or time-dependent) radiosity--a geometrical-acoustics sound-field prediction method that assumes diffuse surface reflection. The literature of acoustical radiosity is briefly reviewed and the advantages and disadvantages of the method are discussed. A discrete form of the integral equation that results from meshing the enclosure boundaries into patches is presented and used in a discrete-time algorithm. Furthermore, an averaging technique is used to reduce computational requirements. To generalize to nonrectangular rooms, a spherical-triangle method is proposed as a means of evaluating the integrals over solid angles that appear in the discrete form of the integral equation. The evaluation of form factors, which also appear in the numerical solution, is discussed for rectangular and nonrectangular rooms. This algorithm and associated methods are validated by comparison of the steady-state predictions for a spherical enclosure to analytical solutions.


  10. A new method for the prediction of chatter stability lobes based on dynamic cutting force simulation model and support vector machine

    NASA Astrophysics Data System (ADS)

    Peng, Chong; Wang, Lun; Liao, T. Warren

    2015-10-01

    Chatter has become a critical factor limiting machining quality and productivity in machining processes. To avoid cutting chatter, a new method based on a dynamic cutting-force simulation model and a support vector machine (SVM) is presented for the prediction of chatter stability lobes. The cutting force is selected as the monitoring signal, and wavelet energy entropy theory is used to extract the feature vectors. A support vector machine is constructed using the MATLAB LIBSVM toolbox for pattern classification based on the feature vectors derived from the experimental cutting data. Combined with the dynamic cutting-force simulation model, the stability lobe diagram (SLD) can then be estimated. Finally, the predicted results are compared with existing methods such as the zero-order analytical (ZOA) and semi-discretization (SD) methods, as well as with actual cutting experiments, to confirm the validity of the new method.
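    The feature-extraction step can be illustrated with a multi-level Haar decomposition: a near-periodic (stable) cutting force concentrates its energy in a few wavelet bands, while chatter spreads it out and raises the energy entropy. This is a hedged, pure-Python stand-in for the paper's wavelet energy entropy computation, with toy signals in place of measured cutting forces:

```python
import math
import random

def haar_level(signal):
    """One level of the Haar wavelet transform: (approximation, detail)."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def wavelet_energy_entropy(signal, levels=3):
    """Shannon entropy of the energy distribution across wavelet sub-bands."""
    energies = []
    current = list(signal)
    for _ in range(levels):
        current, detail = haar_level(current)
        energies.append(sum(d * d for d in detail))
    energies.append(sum(a * a for a in current))  # final approximation band
    total = sum(energies)
    probs = [e / total for e in energies if e > 0]
    return -sum(p * math.log(p) for p in probs)

# Toy signals: a low-frequency periodic "stable cut" versus broadband noise
# standing in for chatter.
n = 256
periodic = [math.sin(2 * math.pi * 8 * i / n) for i in range(n)]
random.seed(0)
noisy = [random.uniform(-1, 1) for _ in range(n)]
h_stable = wavelet_energy_entropy(periodic)
h_chatter = wavelet_energy_entropy(noisy)
```

    The entropy values (not the raw forces) would then serve as the feature vectors fed to the SVM classifier.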

  11. The current preference for the immuno-analytical ELISA method for quantitation of steroid hormones (endocrine disruptor compounds) in wastewater in South Africa.

    PubMed

    Manickum, Thavrin; John, Wilson

    2015-07-01

    The availability of national test centers to offer a routine service for analysis and quantitation of some selected steroid hormones [natural estrogens (17-β-estradiol, E2; estrone, E1; estriol, E3), synthetic estrogen (17-α-ethinylestradiol, EE2), androgen (testosterone), and progestogen (progesterone)] in a wastewater matrix was investigated; corresponding internationally used chemical- and immuno-analytical test methods were reviewed. The enzyme-linked immunosorbent assay (ELISA) (an immuno-analytical technique) was also assessed for its suitability as a routine test method to quantitate the levels of these hormones at a sewage/wastewater treatment plant (WTP) (Darvill, Pietermaritzburg, South Africa) over a 2-year period. The method performance and other relevant characteristics of the immuno-analytical ELISA method were compared with conventional chemical-analytical methodology, such as gas/liquid chromatography-mass spectrometry (GC/LC-MS) and GC-LC/tandem mass spectrometry (MS-MS), for quantitation of the steroid hormones in wastewater and environmental waters. The national immuno-analytical ELISA technique was found to be sensitive (LOQ 5 ng/L, LOD 0.2-5 ng/L), accurate (mean recovery 96%), precise (RSD 7-10%), and cost-effective for screening and quantitation of these steroid hormones in wastewater and environmental water matrices. A survey of the most current international literature indicates a fairly equal use of the LC-MS/MS, GC-MS/MS (chemical-analytical), and ELISA (immuno-analytical) test methods for screening and quantitation of the target steroid hormones in both water and wastewater matrices. Internationally, the observed sensitivity, based on LOQ (ng/L), for the steroid estrogens E1, E2, and EE2 is, in decreasing order: LC-MSMS (0.08-9.54) > GC-MS (1) > ELISA (5) (chemical-analytical > immuno-analytical).
At the national level, the routine, unoptimized chemical-analytical LC-MSMS method was found to lack the required sensitivity for meeting environmental requirements for steroid hormone quantitation. Further optimization of the sensitivity of the chemical-analytical LC-tandem mass spectrometry methods, especially for wastewater screening, in South Africa is required. Risk assessment studies showed that it was not practical to propose standards or allowable limits for the steroid estrogens E1, E2, EE2, and E3; the use of predicted-no-effect concentration values of the steroid estrogens appears to be appropriate for use in their risk assessment in relation to aquatic organisms. For raw water sources, drinking water, raw and treated wastewater, the use of bioassays, with trigger values, is a useful screening tool option to decide whether further examination of specific endocrine activity may be warranted, or whether concentrations of such activity are of low priority, with respect to health concerns in the human population. The achievement of improved quantitation limits for immuno-analytical methods, like ELISA, used for compound quantitation, and standardization of the method for measuring E2 equivalents (EEQs) used for biological activity (endocrine: e.g., estrogenic) are some areas for future EDC research.

  12. Implementing Operational Analytics using Big Data Technologies to Detect and Predict Sensor Anomalies

    NASA Astrophysics Data System (ADS)

    Coughlin, J.; Mital, R.; Nittur, S.; SanNicolas, B.; Wolf, C.; Jusufi, R.

    2016-09-01

    Operational analytics, when combined with Big Data technologies and predictive techniques, has been shown to be valuable in detecting mission-critical sensor anomalies that might be missed by conventional analytical techniques. Our approach helps analysts and leaders make informed and rapid decisions by analyzing large volumes of complex data in near real-time and presenting it in a manner that facilitates decision making. It provides cost savings by being able to alert and predict when sensor degradations pass a critical threshold and impact mission operations. Operational analytics, which uses Big Data tools and technologies, can process very large data sets containing a variety of data types to uncover hidden patterns, unknown correlations, and other relevant information. When combined with predictive techniques, it provides a mechanism to monitor and visualize these data sets and provides insight into degradations encountered in large sensor systems such as the space surveillance network. In this study, data from a notional sensor are simulated, and we use Big Data technologies, predictive algorithms, and operational analytics to process the data and predict sensor degradations. This study uses data products that would commonly be analyzed at a site, and builds on a Big Data architecture that has previously proven valuable in detecting anomalies. This paper outlines our methodology for implementing an operational analytic solution through data discovery, learning and training of data modeling and predictive techniques, and deployment. Through this methodology, we implement a functional architecture focused on exploring available Big Data sets and determining practical analytic, visualization, and predictive technologies.
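    The degradation-alerting idea can be sketched with a simple trailing-window z-score detector standing in for the study's Big Data stack and predictive models. The simulated sensor trace, window size, and threshold below are all invented:

```python
import math

def rolling_anomalies(series, window=20, threshold=3.0):
    """Flag indices whose value deviates from the trailing-window mean by
    more than `threshold` sample standard deviations."""
    flags = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mean = sum(hist) / window
        var = sum((x - mean) ** 2 for x in hist) / (window - 1)
        std = math.sqrt(var)
        if std > 0 and abs(series[i] - mean) > threshold * std:
            flags.append(i)
    return flags

# Simulated sensor: a slow low-amplitude oscillation with one injected spike.
data = [0.1 * math.sin(i / 10.0) for i in range(200)]
data[150] += 5.0  # injected degradation event
hits = rolling_anomalies(data)
```

    In an operational setting the flagged indices would feed the alerting and visualization layer rather than a simple list.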

  13. Propeller noise prediction

    NASA Technical Reports Server (NTRS)

    Zorumski, W. E.

    1983-01-01

    Analytic propeller noise prediction involves a sequence of computations culminating in the application of acoustic equations. The prediction sequence currently used by NASA in its aircraft noise prediction program (ANOPP) is described. The elements of the sequence are called program modules. The first group of modules analyzes the propeller geometry, the aerodynamics, including both potential and boundary layer flow, the propeller performance, and the surface loading distribution. This group of modules is based entirely on aerodynamic strip theory. The next group of modules deals with the actual noise prediction, based on data from the first group. Deterministic predictions of periodic thickness and loading noise are made using Farassat's time-domain methods. Broadband noise is predicted by the semi-empirical Schlinker-Amiet method. Near-field predictions of fuselage surface pressures include the effects of boundary layer refraction and (for a cylinder) scattering. Far-field predictions include atmospheric and ground effects. Experimental data from subsonic and transonic propellers are compared, and NASA's future directions in propeller noise technology development are indicated.

  14. [Detection of rubella virus RNA in clinical material by real time polymerase chain reaction method].

    PubMed

    Domonova, É A; Shipulina, O Iu; Kuevda, D A; Larichev, V F; Safonova, A P; Burchik, M A; Butenko, A M; Shipulin, G A

    2012-01-01

    Development of a reagent kit for the detection of rubella virus RNA in clinical material by real-time PCR (PCR-RT). During development and determination of analytical specificity and sensitivity, DNA and RNA of 33 different microorganisms, including 4 rubella strains, were used. Comparison of the analytical sensitivity of virological and molecular-biological methods was performed using rubella virus strains Wistar RA 27/3, M-33, "Orlov", and Judith. Evaluation of the diagnostic informativity of rubella virus RNA isolation from various clinical materials by the PCR-RT method was performed in comparison with determination of virus-specific serum antibodies by enzyme immunoassay. A reagent kit for the detection of rubella virus RNA in clinical material by PCR-RT was developed. Analytical specificity was 100%; analytical sensitivity, 400 virus RNA copies per ml. The analytical sensitivity of the developed technique exceeds that of the Vero E6 cell-culture infection method for rubella virus strains Wistar RA 27/3 and "Orlov" by 1 lg and 3 lg, respectively, and is analogous for the M-33 and Judith strains. Diagnostic specificity is 100%. Diagnostic sensitivity for samples obtained within 5 days of rash onset was: peripheral blood sera, 20.9%; saliva, 92.5%; nasopharyngeal swabs, 70.1%; saliva and nasopharyngeal swabs combined, 97%. Positive and negative predictive values were shown depending on the type of clinical material tested. Application of the reagent kit will make it possible to increase the effectiveness of rubella diagnostics at early stages of the infectious process and to perform timely, high-quality differential diagnostics of exanthema diseases in support of anti-epidemic measures.

  15. Core Engine Noise Control Program. Volume III. Prediction Methods

    DTIC Science & Technology

    1974-08-01

    turbofan engines, and Method (C) is based on an analytical description of viscous wake interaction between adjoining blade rows. Turbine Tone/Jet ...levels for turbojet, turboshaft and turbofan engines. The turbojet data correlate highest and the turbofan data correlate lowest. Turbine Noise ...different engines were examined for combustor, jet and fan noise. Three turbojet, two turboshaft and two turbofan

  16. The Shock and Vibration Bulletin. Part 2. Invited Papers, Structural Dynamics

    DTIC Science & Technology

    1974-08-01

    VIKING LANDER DYNAMICS 41 Mr. Joseph C. Pohlen, Martin Marietta Aerospace, Denver, Colorado Structural Dynamics PERFORMANCE OF STATISTICAL ENERGY ANALYSIS 47 ...aerospace structures. Analytical prediction of these environments is beyond the current scope of classical modal techniques. Statistical energy analysis methods...have been developed that circumvent the difficulties of high-frequency modal analysis. These statistical energy analysis methods are evaluated

  17. S-curve networks and an approximate method for estimating degree distributions of complex networks

    NASA Astrophysics Data System (ADS)

    Guo, Jin-Li

    2010-12-01

    In the study of complex networks, almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. Based on statistics for China's Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model using the S curve (logistic curve) and forecasts the growing trend of IPv4 addresses in China. The results provide reference values for optimizing the distribution of IPv4 address resources and for the development of IPv6. Based on the laws of IPv4 growth, namely bulk growth and a finite growth limit, the paper proposes a finite network model with bulk growth, called an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., the Barabási-Albert method) is not suitable for this network. An approximate method is developed to predict the growth dynamics of the individual nodes and is used to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees well with simulations, obeying an approximately power-law form. This method overcomes a shortcoming of the Barabási-Albert method commonly used in current network research.
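    The S-curve (logistic) growth underlying the forecast can be sketched with a minimal discrete model: additions slow as the population approaches a carrying capacity. The rate and capacity below are invented, not the paper's IPv4 estimates:

```python
def logistic_series(x0, r, K, steps):
    """Iterate the discrete logistic map x_{t+1} = x_t + r*x_t*(1 - x_t/K)."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x + r * x * (1.0 - x / K))
    return xs

xs = logistic_series(x0=1.0, r=0.3, K=100.0, steps=100)
# Early growth is near-exponential; late growth saturates toward K.
early_gain = xs[10] - xs[0]
late_gain = xs[100] - xs[90]
```

    The finite limit K is what distinguishes this model from the infinitely growing networks assumed by most theoretical models.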

  18. Application of statistical classification methods for predicting the acceptability of well-water quality

    NASA Astrophysics Data System (ADS)

    Cameron, Enrico; Pilla, Giorgio; Stella, Fabio A.

    2018-06-01

    The application of statistical classification methods is investigated, also in comparison with spatial interpolation methods, for predicting the acceptability of well-water quality in a situation where an effective quantitative model of the hydrogeological system under consideration cannot be developed. In the example area in northern Italy, in particular, the aquifer is locally affected by saline water and the concentration of chloride is the main indicator of both saltwater occurrence and groundwater quality. The goal is to predict whether the chloride concentration in a water well will exceed the allowable concentration, making the water unfit for the intended use. A statistical classification algorithm achieved the best predictive performance, and the results of the study show that statistical classification methods provide further tools for dealing with groundwater quality problems concerning hydrogeological systems that are too difficult to describe analytically or to simulate effectively.
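    The exceedance-prediction task can be sketched with the simplest possible classifier, a one-feature decision stump. The wells, features, and allowable limit below are invented for illustration; the study used more capable statistical classifiers:

```python
LIMIT = 250.0  # allowable chloride concentration (mg/L, assumed)

# (distance_to_coast_km, well_depth_m) -> measured chloride (mg/L)
wells = [
    ((0.5, 30.0), 900.0),
    ((1.0, 45.0), 400.0),
    ((3.0, 60.0), 120.0),
    ((6.0, 50.0), 40.0),
    ((8.0, 80.0), 25.0),
    ((2.0, 35.0), 310.0),
]

def stump_accuracy(data, feature, thr):
    """Accuracy of the rule: predict 'exceeds limit' iff feature value < thr."""
    hits = sum((x[feature] < thr) == (conc > LIMIT) for x, conc in data)
    return hits / len(data)

def best_stump(data, feature):
    """Scan midpoints between sorted feature values for the best threshold."""
    values = sorted(x[feature] for x, _ in data)
    candidates = [(a + b) / 2.0 for a, b in zip(values, values[1:])]
    return max(candidates, key=lambda t: stump_accuracy(data, feature, t))

thr = best_stump(wells, feature=0)   # threshold on distance to saline source
acc = stump_accuracy(wells, 0, thr)
```

    On real data one would hold out a test set and compare against a spatial-interpolation baseline, as the study does.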

  19. IBM Watson Analytics: Automating Visualization, Descriptive, and Predictive Statistics.

    PubMed

    Hoyt, Robert Eugene; Snider, Dallas; Thompson, Carla; Mantravadi, Sarita

    2016-10-11

    We live in an era of explosive data generation that will continue to grow and involve all industries. One of the results of this explosion is the need for newer and more efficient data analytics procedures. Traditionally, data analytics required a substantial background in statistics and computer science. In 2015, International Business Machines Corporation (IBM) released the IBM Watson Analytics (IBMWA) software that delivered advanced statistical procedures based on the Statistical Package for the Social Sciences (SPSS). The latest entry of Watson Analytics into the field of analytical software products provides users with enhanced functions that are not available in many existing programs. For example, Watson Analytics automatically analyzes datasets, examines data quality, and determines the optimal statistical approach. Users can request exploratory, predictive, and visual analytics. Using natural language processing (NLP), users are able to submit additional questions for analyses in a quick response format. This analytical package is available free to academic institutions (faculty and students) that plan to use the tools for noncommercial purposes. The objective of this study was to report the features of IBMWA and discuss how this software subjectively and objectively compares to other data mining programs. The salient features of the IBMWA program were examined and compared with other common analytical platforms, using validated health datasets. Using a validated dataset, IBMWA delivered similar predictions compared with several commercial and open source data mining software applications. The visual analytics generated by IBMWA were similar to results from programs such as Microsoft Excel and Tableau Software. In addition, assistance with data preprocessing and data exploration was an inherent component of the IBMWA application. Sensitivity and specificity were not included in the IBMWA predictive analytics results, nor were odds ratios, confidence intervals, or a confusion matrix.
IBMWA is a new alternative for data analytics software that automates descriptive, predictive, and visual analytics. This program is very user-friendly but requires data preprocessing, statistical conceptual understanding, and domain expertise.

  20. Experimental study and analytical model of deformation of magnetostrictive films as applied to mirrors for x-ray space telescopes.

    PubMed

    Wang, Xiaoli; Knapp, Peter; Vaynman, S; Graham, M E; Cao, Jian; Ulmer, M P

    2014-09-20

    The desire for continuously gaining new knowledge in astronomy has pushed the frontier of engineering methods to deliver lighter, thinner, higher quality mirrors at an affordable cost for use in an x-ray observatory. To address these needs, we have been investigating the application of magnetic smart materials (MSMs) deposited as a thin film on mirror substrates. MSMs have some interesting properties that make their application to mirror substrates a promising solution for making the next generation of x-ray telescopes. Because of their ability to hold a shape under an impressed permanent magnetic field, MSMs have the potential to become the method used to make lightweight, affordable x-ray telescope mirrors. This paper presents the experimental setup for measuring the deformation of magnetostrictive bimorph specimens under an applied magnetic field, together with the analytical and numerical analysis of the deformation. As a first step in the development of tools to predict deflections, we deposited Terfenol-D on glass substrates. We then made measurements that were compared with the results from the analytical and numerical analysis. The surface profiles of thin-film specimens were measured under an external magnetic field with white light interferometry (WLI). The analytical model provides good predictions of film deformation behavior under various magnetic field strengths. This work establishes a solid foundation for further research to analyze the full three-dimensional deformation behavior of magnetostrictive thin films.

  1. Method for predicting dry mechanical properties from wet wood and standing trees

    DOEpatents

    Meglen, Robert R.; Kelley, Stephen S.

    2003-08-12

    A method for determining the dry mechanical strength for a green wood comprising: illuminating a surface of the wood to be determined with light between 350-2,500 nm, the wood having a green moisture content; analyzing the surface using a spectrometric method, the method generating a first spectral data, and using a multivariate analysis to predict the dry mechanical strength of green wood when dry by comparing the first spectral data with a calibration model, the calibration model comprising a second spectrometric method of spectral data obtained from a reference wood having a green moisture content, the second spectral data correlated with a known mechanical strength analytical result obtained from a reference wood when dried and having a dry moisture content.

  2. Research study on high energy radiation effect and environment solar cell degradation methods

    NASA Technical Reports Server (NTRS)

    Horne, W. E.; Wilkinson, M. C.

    1974-01-01

    The most detailed and comprehensively verified analytical model was used to evaluate the effects of simplifying assumptions on the accuracy of predictions made by the external damage coefficient method. It was found that the most serious discrepancies were present in heavily damaged cells, particularly proton damaged cells, in which a gradient in damage across the cell existed. In general, it was found that the current damage coefficient method tends to underestimate damage at high fluences. An exception to this rule was thick cover-slipped cells experiencing heavy degradation due to omnidirectional electrons. In such cases, the damage coefficient method overestimates the damage. Comparisons of degradation predictions made by the two methods and measured flight data confirmed the above findings.

  3. A correlation method to predict the surface pressure distribution on an infinite plate from which a jet is issuing. [effects of a lifting jet

    NASA Technical Reports Server (NTRS)

    Perkins, S. C., Jr.; Menhall, M. R.

    1978-01-01

    A correlation method to predict pressures induced on an infinite plate by a jet issuing from the plate into a subsonic free stream was developed. The complete method consists of an analytical method which models the blockage and entrainment properties of the jet and a correlation which accounts for the effects of separation. The method was developed for jet velocity ratios up to ten and for radial distances up to five diameters from the jet. Correlation curves and data comparisons are presented for jets issuing normally from a flat plate with velocity ratios one to twelve. Also, a list of references which deal with jets in a crossflow is presented.

  4. Analytical Modeling for the Bending Resonant Frequency of Multilayered Microresonators with Variable Cross-Section

    PubMed Central

    Herrera-May, Agustín L.; Aguilera-Cortés, Luz A.; Plascencia-Mora, Hector; Rodríguez-Morales, Ángel L.; Lu, Jian

    2011-01-01

    Multilayered microresonators commonly use sensitive coating or piezoelectric layers for detection of mass and gas. Most of these microresonators have a variable cross-section that complicates the prediction of their fundamental resonant frequency (generally of the bending mode) through conventional analytical models. In this paper, we present an analytical model to estimate the first resonant frequency and deflection curve of single-clamped multilayered microresonators with variable cross-section. The analytical model is obtained using the Rayleigh and Macaulay methods, as well as the Euler-Bernoulli beam theory. Our model is applied to two multilayered microresonators with piezoelectric excitation reported in the literature, each composed of layers of seven different materials. The results of our analytical model agree very well with those obtained from finite element models (FEMs) and experimental data. Our analytical model can be used to determine the suitable dimensions of the microresonator’s layers in order to obtain a microresonator that operates at a resonant frequency necessary for a particular application. PMID:22164071
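    The Rayleigh method the model builds on can be sketched for the simplest case: estimate a uniform cantilever's fundamental bending frequency from an assumed deflection shape phi(x) = (x/L)^2. This is a hedged illustration only; the beam properties are invented, and the paper extends the idea to multilayered, variable cross-sections via the Macaulay method:

```python
import math

def rayleigh_cantilever_omega(E, I, rho, A, L, n_quad=10000):
    """Rayleigh quotient: omega^2 = int EI*(phi'')^2 dx / int rho*A*phi^2 dx,
    evaluated by midpoint quadrature for the shape phi(x) = (x/L)^2."""
    dx = L / n_quad
    num = 0.0
    den = 0.0
    for i in range(n_quad):
        x = (i + 0.5) * dx
        phi = (x / L) ** 2
        phi_dd = 2.0 / L ** 2        # second derivative of (x/L)^2
        num += E * I * phi_dd ** 2 * dx
        den += rho * A * phi ** 2 * dx
    return math.sqrt(num / den)

# Uniform steel-like beam (illustrative numbers):
E, I, rho, A, L = 200e9, 1e-8, 7800.0, 1e-4, 0.5
omega = rayleigh_cantilever_omega(E, I, rho, A, L)
# Dimensionless coefficient, to compare with the exact value 3.516:
coeff = omega / math.sqrt(E * I / (rho * A * L ** 4))
```

    As expected for Rayleigh's method, the crude assumed shape gives an upper bound (about 4.47 versus the exact 3.516); a better trial shape tightens the estimate, which is why the paper also solves for the deflection curve.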

  5. METHODOLOGY TO EVALUATE THE POTENTIAL FOR GROUND WATER CONTAMINATION FROM GEOTHERMAL FLUID RELEASES

    EPA Science Inventory

    This report provides analytical methods and graphical techniques to predict potential ground water contamination from geothermal energy development. Overflows and leaks from ponds, pipe leaks, well blowouts, leaks from well casing, and migration from injection zones can be handled...

  6. Computational Fluid Dynamics Uncertainty Analysis Applied to Heat Transfer over a Flat Plate

    NASA Technical Reports Server (NTRS)

    Groves, Curtis Edward; Ilie, Marcel; Schallhorn, Paul A.

    2013-01-01

    There have been few discussions on using Computational Fluid Dynamics (CFD) without experimental validation. Pairing experimental data, uncertainty analysis, and analytical predictions provides a comprehensive approach to verification and is the current state of the art. With pressed budgets, collecting experimental data is rare or non-existent. This paper investigates and proposes a method to perform CFD uncertainty analysis only from computational data. The method uses current CFD uncertainty techniques coupled with the Student-t distribution to predict the heat transfer coefficient over a flat plate. The inputs to the CFD model are varied from a specified tolerance or bias error, and the differences in the results are used to estimate the uncertainty. The variation in each input is ranked from least to greatest to determine the order of importance. The results are compared to heat transfer correlations and conclusions drawn about the feasibility of using CFD without experimental data. The results provide a tactic to analytically estimate the uncertainty in a CFD model when experimental data are unavailable.
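    The Student-t step described above can be sketched in a few lines: treat the outputs of the perturbed-input CFD runs as a small sample and form a t-based confidence interval. This is a generic illustration, not the paper's procedure; the function name and the hard-coded 95 % two-sided critical values are assumptions standing in for a full t-quantile routine.

```python
import math
import statistics

# 95 % two-sided Student-t critical values, indexed by degrees of freedom
T95 = {1: 12.706, 2: 4.303, 3: 3.182, 4: 2.776, 5: 2.571,
       6: 2.447, 7: 2.365, 8: 2.306, 9: 2.262}

def cfd_uncertainty(samples):
    """Mean and 95 % confidence half-width of a CFD output quantity
    (e.g. a heat transfer coefficient) from a small set of runs whose
    inputs were perturbed by their tolerance/bias errors."""
    n = len(samples)
    mean = statistics.mean(samples)
    s = statistics.stdev(samples)            # sample standard deviation
    half = T95[n - 1] * s / math.sqrt(n)     # t interval with n-1 dof
    return mean, half
```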

  7. A Squeeze-film Damping Model for the Circular Torsion Micro-resonators

    NASA Astrophysics Data System (ADS)

    Yang, Fan; Li, Pu

    2017-07-01

    In recent years, MEMS devices have become widely used in many industries. The prediction of squeeze-film damping is very important for the design of high-quality-factor resonators. Many analytical models have been proposed to predict the squeeze-film damping of torsion micro-resonators; for the circular torsion micro-plate, however, work is very rare, the only model being that presented by Xia et al. [7] using the method of eigenfunction expansions. In this paper, a Bessel series solution is used to solve the Reynolds equation under the assumption of incompressible gas in the gap, and the pressure distribution of the gas between the two micro-plates is obtained. An analytical expression for the damping constant of the device is then derived. The present model matches very well with finite element method (FEM) solutions and with the results of Xia's model, validating its accuracy.

  8. Bayesian meta-analytical methods to incorporate multiple surrogate endpoints in drug development process.

    PubMed

    Bujkiewicz, Sylwia; Thompson, John R; Riley, Richard D; Abrams, Keith R

    2016-03-30

    A number of meta-analytical methods have been proposed that aim to evaluate surrogate endpoints. Bivariate meta-analytical methods can be used to predict the treatment effect for the final outcome from the treatment effect estimate measured on the surrogate endpoint while taking into account the uncertainty around the effect estimate for the surrogate endpoint. In this paper, extensions to multivariate models are developed, aiming to include multiple surrogate endpoints with the potential benefit of reducing the uncertainty when making predictions. In this Bayesian multivariate meta-analytic framework, the between-study variability is modelled in a formulation of a product of normal univariate distributions. This formulation is particularly convenient for including multiple surrogate endpoints and flexible for modelling the outcomes, which can be surrogate endpoints to the final outcome and potentially to one another. Two models are proposed: first, using an unstructured between-study covariance matrix by assuming the treatment effects on all outcomes are correlated and, second, using a structured between-study covariance matrix by assuming treatment effects on some of the outcomes are conditionally independent. While the two models are developed for the summary data on a study level, the individual-level association is taken into account by the use of Prentice's criteria (obtained from individual patient data) to inform the within-study correlations in the models. The modelling techniques are investigated using an example in relapsing remitting multiple sclerosis where disability worsening is the final outcome, while relapse rate and MRI lesions are potential surrogates for disability progression. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
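    The core of surrogate-based prediction in the bivariate case is a conditional normal distribution: given the observed effect on the surrogate, the expected effect on the final outcome shifts by the regression slope, and the residual variance shrinks by (1 − ρ²). The toy function below illustrates only that conditioning step, not the paper's full Bayesian multivariate model; all names are hypothetical.

```python
def predict_final_effect(surrogate_effect, mu, sd, rho):
    """Conditional mean and variance of the treatment effect on the final
    outcome given the observed effect on the surrogate endpoint, under a
    bivariate normal model of the between-study distribution."""
    mu_s, mu_f = mu      # means for (surrogate, final) effects
    sd_s, sd_f = sd      # between-study standard deviations
    mean = mu_f + rho * sd_f / sd_s * (surrogate_effect - mu_s)
    var = sd_f**2 * (1.0 - rho**2)   # uncertainty remaining after conditioning
    return mean, var
```

    Adding a second surrogate tightens the prediction further, which is the motivation for the multivariate extension described in the abstract.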

  9. An approximate analytical solution for describing surface runoff and sediment transport over hillslope

    NASA Astrophysics Data System (ADS)

    Tao, Wanghai; Wang, Quanjiu; Lin, Henry

    2018-03-01

    Soil and water loss from farmland causes land degradation and water pollution, thus continued efforts are needed to establish mathematical models for quantitative analysis of the relevant processes and mechanisms. In this study, an approximate analytical solution has been developed for an overland flow model and a sediment transport model, offering a simple and effective means to predict overland flow and erosion under natural rainfall conditions. In the overland flow model, the flow regime was considered to be transitional, with the value of the parameter β (in the kinematic wave model) taken as approximately two. The change rate of unit discharge with distance was assumed to be constant and equal to the runoff rate at the outlet of the plane. The excess rainfall was considered to be constant under uniform rainfall conditions. The overland flow model developed can be further applied to natural rainfall conditions by treating excess rainfall intensity as constant over a small time interval. For the sediment model, recommended values of the runoff erosion calibration constant (cr) and the splash erosion calibration constant (cf) have been given in this study so that it is easier to use the model. These recommended values are 0.15 and 0.12, respectively. Comparisons with observed results were carried out to validate the proposed analytical solution. The results showed that the approximate analytical solution developed in this paper closely matches the observed data, thus providing an alternative method of predicting runoff generation and sediment yield, and offering a more convenient method of analyzing the quantitative relationships between variables. Furthermore, the model developed in this study can be used as a theoretical basis for developing runoff and erosion control methods.
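    Under the assumptions stated above (unit discharge changing at a constant rate downslope, kinematic-wave rating q = αh^β with β ≈ 2), a steady-state depth profile follows directly. The sketch below is a minimal illustration of that assumed form, not the paper's full approximate solution; the function name and argument list are invented for the example.

```python
def overland_flow_profile(rain_excess, length, alpha, beta=2.0, n=5):
    """Steady-state kinematic-wave sketch: unit discharge grows linearly
    downslope, q(x) = r * x, and flow depth follows the rating curve
    h = (q / alpha) ** (1 / beta). Returns (x, depth) pairs."""
    xs = [length * i / n for i in range(1, n + 1)]
    return [(x, (rain_excess * x / alpha) ** (1.0 / beta)) for x in xs]
```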

  10. Predicting the velocity and azimuth of fragments generated by the range destruction or random failure of rocket casings and tankage

    NASA Technical Reports Server (NTRS)

    Eck, Marshall; Mukunda, Meera

    1988-01-01

    A calculational method is described which provides a powerful tool for predicting solid rocket motor (SRM) casing and liquid rocket tankage fragmentation response. The approach properly partitions the available impulse to each major system-mass component. It uses the Pisces code developed by Physics International to couple the forces generated by an Eulerian-modeled gas flow field to a Lagrangian-modeled fuel and casing system. The details of the predictive analytical modeling process and the development of normalized relations for momentum partition as a function of SRM burn time and initial geometry are discussed. Methods for applying similar modeling techniques to liquid-tankage-overpressure failures are also discussed. Good agreement between predictions and observations is obtained for five specific events.

  11. Higher order alchemical derivatives from coupled perturbed self-consistent field theory.

    PubMed

    Lesiuk, Michał; Balawender, Robert; Zachara, Janusz

    2012-01-21

    We present an analytical approach to treat higher order derivatives of Hartree-Fock (HF) and Kohn-Sham (KS) density functional theory energy in the Born-Oppenheimer approximation with respect to the nuclear charge distribution (so-called alchemical derivatives). Modified coupled perturbed self-consistent field theory is used to calculate a molecular system's response to the applied perturbation. Working equations for the second and the third derivatives of HF/KS energy are derived. Similarly, analytical forms of the first and second derivatives of orbital energies are reported. The second derivative of Kohn-Sham energy and up to the third derivative of Hartree-Fock energy with respect to the nuclear charge distribution were calculated. Some issues of practical calculations, in particular the dependence of the basis set and Becke weighting functions on the perturbation, are considered. For selected series of isoelectronic molecules, values of the available alchemical derivatives were computed and a Taylor series expansion was used to predict the energies of the "surrounding" molecules. The predicted values of the energies are in unexpectedly good agreement with the ones computed using HF/KS methods. The presented method allows one to predict orbital energies with an error of less than 1%, or even smaller for valence orbitals. © 2012 American Institute of Physics.
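    The prediction step at the end of the abstract is an ordinary Taylor expansion in the nuclear-charge perturbation; the chemistry lives in computing the derivatives, which the paper obtains from coupled perturbed SCF theory. The generic expansion itself can be sketched as follows (illustrative helper, not the authors' code):

```python
def alchemical_taylor(E0, derivatives, dZ):
    """Predict E(Z0 + dZ) from the energy E0 at the reference nuclear
    charge and a list of its derivatives [dE/dZ, d2E/dZ2, ...]:
    E ~ E0 + sum_n (d^nE/dZ^n) * dZ**n / n!"""
    energy, factorial = E0, 1
    for n, d in enumerate(derivatives, start=1):
        factorial *= n
        energy += d * dZ**n / factorial
    return energy
```

    For a quadratic energy surface the second-order expansion is exact, which makes a convenient sanity check.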

  12. A simple analytical method to estimate all exit parameters of a cross-flow air dehumidifier using liquid desiccant.

    PubMed

    Bassuoni, M M

    2014-03-01

    The dehumidifier is a key component in liquid desiccant air-conditioning systems. Analytical solutions have more advantages than numerical solutions in studying the dehumidifier performance parameters. This paper presents the performance results of exit parameters from an analytical model of an adiabatic cross-flow liquid desiccant air dehumidifier. Calcium chloride is used as the desiccant material in this investigation. A program performing the analytical solution is developed using the engineering equation solver software. Good accuracy has been found between the analytical solution and reliable experimental results, with a maximum deviation of +6.63% and -5.65% in the moisture removal rate. The method developed here can be used in the quick prediction of the dehumidifier performance. The exit parameters from the dehumidifier are evaluated under the effects of variables such as air temperature and humidity, desiccant temperature and concentration, and air to desiccant flow rates. The results show that hot humid air and desiccant concentration have the greatest impact on the performance of the dehumidifier. The moisture removal rate decreases with increasing air inlet temperature and desiccant temperature, while it increases with increasing air-to-solution mass ratio, inlet desiccant concentration, and inlet air humidity ratio.

  13. Estimating habitat volume of living resources using three-dimensional circulation and biogeochemical models

    NASA Astrophysics Data System (ADS)

    Smith, Katharine A.; Schlag, Zachary; North, Elizabeth W.

    2018-07-01

    Coupled three-dimensional circulation and biogeochemical models predict changes in water properties that can be used to define fish habitat, including physiologically important parameters such as temperature, salinity, and dissolved oxygen. However, methods for calculating the volume of habitat defined by the intersection of multiple water properties are not well established for coupled three-dimensional models. The objectives of this research were to examine multiple methods for calculating habitat volume from three-dimensional model predictions, select the most robust approach, and provide an example application of the technique. Three methods were assessed: the "Step," "Ruled Surface", and "Pentahedron" methods, the latter of which was developed as part of this research. Results indicate that the analytical Pentahedron method is exact, computationally efficient, and preserves continuity in water properties between adjacent grid cells. As an example application, the Pentahedron method was implemented within the Habitat Volume Model (HabVol) using output from a circulation model with an Arakawa C-grid and physiological tolerances of juvenile striped bass (Morone saxatilis). This application demonstrates that the analytical Pentahedron method can be successfully applied to calculate habitat volume using output from coupled three-dimensional circulation and biogeochemical models, and it indicates that the Pentahedron method has wide application to aquatic and marine systems for which these models exist and physiological tolerances of organisms are known.
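    Of the three methods compared above, the simplest to state is the "Step" approach: sum the volume of grid cells whose predicted water properties all fall inside the organism's tolerance limits (the paper's Pentahedron method improves on this by resolving where the tolerance surface cuts each cell). A minimal sketch of the step-counting idea, with an invented cell/limit representation:

```python
def habitat_volume_step(cells, limits):
    """'Step'-style habitat volume: each model grid cell is either entirely
    habitat or entirely not, judged by whether every water property lies
    within the species' tolerance interval.
    cells:  [{"temp": ..., "salinity": ..., "volume": ...}, ...]
    limits: {"temp": (lo, hi), "salinity": (lo, hi), ...}"""
    total = 0.0
    for cell in cells:
        if all(lo <= cell[p] <= hi for p, (lo, hi) in limits.items()):
            total += cell["volume"]
    return total
```

    The step estimate jumps discontinuously as a property crosses a cell's value, which is precisely the artifact the analytical Pentahedron method removes.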

  14. A Comparison of Lifting-Line and CFD Methods with Flight Test Data from a Research Puma Helicopter

    NASA Technical Reports Server (NTRS)

    Bousman, William G.; Young, Colin; Toulmay, Francois; Gilbert, Neil E.; Strawn, Roger C.; Miller, Judith V.; Maier, Thomas H.; Costes, Michel; Beaumier, Philippe

    1996-01-01

    Four lifting-line methods were compared with flight test data from a research Puma helicopter and the accuracy assessed over a wide range of flight speeds. Hybrid Computational Fluid Dynamics (CFD) methods were also examined for two high-speed conditions. A parallel analytical effort was performed with the lifting-line methods to assess the effects of modeling assumptions and this provided insight into the adequacy of these methods for load predictions.

  15. Proposed method for determining the thickness of glass in solar collector panels

    NASA Technical Reports Server (NTRS)

    Moore, D. M.

    1980-01-01

    An analytical method was developed for determining the minimum thickness for simply supported, rectangular glass plates subjected to uniform normal pressure environmental loads such as wind, earthquake, snow, and deadweight. The method consists of comparing an analytical prediction of the stress in the glass panel to a glass breakage stress determined from fracture mechanics considerations. Based on extensive analysis using the nonlinear finite element structural analysis program ARGUS, design curves for the structural analysis of simply supported rectangular plates were developed. These curves yield the center deflection, center stress and corner stress as a function of a dimensionless parameter describing the load intensity. A method of estimating the glass breakage stress as a function of a specified failure rate, degree of glass temper, design life, load duration time, and panel size is also presented.
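    The sizing logic above can be illustrated with the classical small-deflection plate formula: the maximum bending stress of a simply supported rectangular plate under uniform pressure q is σ = βq(b/t)², so the minimum thickness follows by setting σ equal to the allowable breakage stress. This is only a linear sketch of the idea; the paper's design curves come from nonlinear ARGUS finite element analyses and a fracture-mechanics breakage stress, and β here is an assumed plate coefficient.

```python
import math

def min_glass_thickness(q, b, beta, sigma_allow):
    """Minimum thickness t such that beta * q * (b/t)**2 <= sigma_allow,
    for a simply supported rectangular plate of short span b under
    uniform pressure q (linear small-deflection theory)."""
    return b * math.sqrt(beta * q / sigma_allow)
```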

  16. Conflict Probability Estimation for Free Flight

    NASA Technical Reports Server (NTRS)

    Paielli, Russell A.; Erzberger, Heinz

    1996-01-01

    The safety and efficiency of free flight will benefit from automated conflict prediction and resolution advisories. Conflict prediction is based on trajectory prediction and becomes less certain the farther in advance the prediction is made, however. An estimate is therefore needed of the probability that a conflict will occur, given a pair of predicted trajectories and their levels of uncertainty. A method is developed in this paper to estimate that conflict probability. The trajectory prediction errors are modeled as normally distributed, and the two error covariances for an aircraft pair are combined into a single equivalent covariance of the relative position. A coordinate transformation is then used to derive an analytical solution. Numerical examples and Monte Carlo validation are presented.
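    The covariance-combination step and the Monte Carlo validation lend themselves to a compact sketch: sum the two error covariances into one relative-position covariance, sample from it, and count samples falling inside a circular conflict zone. This is a 2-D toy cross-check of the idea, not the paper's analytical solution; the function and its arguments are invented for illustration.

```python
import math
import random

def conflict_probability_mc(rel_pos, cov_a, cov_b, radius, n=200_000, seed=7):
    """Monte Carlo estimate of the probability that the relative position
    of an aircraft pair falls within `radius`, with Gaussian prediction
    errors of covariance cov_a and cov_b (2x2, summed since independent)."""
    sxx = cov_a[0][0] + cov_b[0][0]
    sxy = cov_a[0][1] + cov_b[0][1]
    syy = cov_a[1][1] + cov_b[1][1]
    # Cholesky factor of the combined 2x2 covariance
    a = math.sqrt(sxx)
    b = sxy / a
    c = math.sqrt(syy - b * b)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        dx = rel_pos[0] + a * z1
        dy = rel_pos[1] + b * z1 + c * z2
        if dx * dx + dy * dy < radius * radius:
            hits += 1
    return hits / n
```

    With both covariances equal to the identity and the aircraft nominally coincident, the squared miss distance is 2·χ²₂-distributed, so the conflict probability inside radius 2 is 1 − e⁻¹ ≈ 0.632, a handy closed-form check.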

  17. On the Use of Accelerated Test Methods for Characterization of Advanced Composite Materials

    NASA Technical Reports Server (NTRS)

    Gates, Thomas S.

    2003-01-01

    A rational approach to the problem of accelerated testing for material characterization of advanced polymer matrix composites is discussed. The experimental and analytical methods provided should be viewed as a set of tools useful in the screening of material systems for long-term engineering properties in aerospace applications. Consideration is given to long-term exposure in extreme environments that include elevated temperature, reduced temperature, moisture, oxygen, and mechanical load. Analytical formulations useful for predictive models that are based on the principles of time-based superposition are presented. The need for reproducible mechanisms, indicator properties, and real-time data are outlined as well as the methodologies for determining specific aging mechanisms.

  18. Analytical prediction of the heat transfer from a blood vessel near the skin surface when cooled by a symmetrical cooling strip

    NASA Technical Reports Server (NTRS)

    Chato, J. C.; Shitzer, A.

    1971-01-01

    An analytical method was developed to estimate the amount of heat extracted from an artery running close to the skin surface which is cooled in a symmetrical fashion by a cooling strip. The results indicate that the optimum width of a cooling strip is approximately three times the depth to the centerline of the artery. The heat extracted from an artery with such a strip is about 0.9 W/m-°C, which is too small to affect significantly the temperature of the blood flow through a main blood vessel, such as the carotid artery. The method is applicable to veins as well.

  19. Analytical approaches for the characterization and quantification of nanoparticles in food and beverages.

    PubMed

    Mattarozzi, Monica; Suman, Michele; Cascio, Claudia; Calestani, Davide; Weigel, Stefan; Undas, Anna; Peters, Ruud

    2017-01-01

    Estimating consumer exposure to nanomaterials (NMs) in food products and predicting their toxicological properties are necessary steps in the assessment of the risks of this technology. To this end, analytical methods have to be available to detect, characterize and quantify NMs in food and materials related to food, e.g. food packaging and biological samples following metabolization of food. The challenge for the analytical sciences is that the characterization of NMs requires chemical as well as physical information. This article offers a comprehensive analysis of methods available for the detection and characterization of NMs in food and related products. Special attention was paid to the crucial role of sample preparation methods since these have been partially neglected in the scientific literature so far. The currently available instrumental methods are grouped as fractionation, counting and ensemble methods, and their advantages and limitations are discussed. We conclude that much progress has been made over the last 5 years but that many challenges still exist. Future perspectives and priority research needs are pointed out. Graphical Abstract Two possible analytical strategies for the sizing and quantification of Nanoparticles: Asymmetric Flow Field-Flow Fractionation with multiple detectors (allows the determination of true size and mass-based particle size distribution); Single Particle Inductively Coupled Plasma Mass Spectrometry (allows the determination of a spherical equivalent diameter of the particle and a number-based particle size distribution).

  20. An automated baseline correction protocol for infrared spectra of atmospheric aerosols collected on polytetrafluoroethylene (Teflon) filters

    NASA Astrophysics Data System (ADS)

    Kuzmiakova, Adele; Dillner, Ann M.; Takahama, Satoshi

    2016-06-01

    A growing body of research on statistical applications for characterization of atmospheric aerosol Fourier transform infrared (FT-IR) samples collected on polytetrafluoroethylene (PTFE) filters (e.g., Russell et al., 2011; Ruthenburg et al., 2014) and a rising interest in analyzing FT-IR samples collected by air quality monitoring networks call for an automated PTFE baseline correction solution. The existing polynomial technique (Takahama et al., 2013) is not scalable to a project with a large number of aerosol samples because it contains many parameters and requires expert intervention. Therefore, the question of how to develop an automated method for baseline correcting hundreds to thousands of ambient aerosol spectra given the variability in both environmental mixture composition and PTFE baselines remains. This study approaches the question by detailing the statistical protocol, which allows for the precise definition of analyte and background subregions, applies nonparametric smoothing splines to reproduce sample-specific PTFE variations, and integrates performance metrics from atmospheric aerosol and blank samples alike in the smoothing parameter selection. Referencing 794 atmospheric aerosol samples from seven Interagency Monitoring of PROtected Visual Environment (IMPROVE) sites collected during 2011, we start by identifying key FT-IR signal characteristics, such as non-negative absorbance or analyte segment transformation, to capture sample-specific transitions between background and analyte. While referring to qualitative properties of PTFE background, the goal of smoothing splines interpolation is to learn the baseline structure in the background region to predict the baseline structure in the analyte region. 
We then validate the model by comparing smoothing splines baseline-corrected spectra with uncorrected and polynomial baseline (PB)-corrected equivalents via three statistical applications: (1) clustering analysis, (2) functional group quantification, and (3) thermal optical reflectance (TOR) organic carbon (OC) and elemental carbon (EC) predictions. The discrepancy rate for a four-cluster solution is 10 %. For all functional groups but carboxylic COH the discrepancy is ≤ 10 %. Performance metrics obtained from TOR OC and EC predictions (R2 ≥ 0.94, bias ≤ 0.01 µg m-3, and error ≤ 0.04 µg m-3) are on a par with those obtained from uncorrected and PB-corrected spectra. The proposed protocol leads to visually and analytically similar estimates as those generated by the polynomial method. More importantly, the automated solution allows us and future users to evaluate its analytical reproducibility while minimizing reducible user bias. We anticipate the protocol will enable FT-IR researchers and data analysts to quickly and reliably analyze a large amount of data and connect them to a variety of available statistical learning methods to be applied to analyte absorbances isolated in atmospheric aerosol samples.

  1. Optimal Futility Interim Design: A Predictive Probability of Success Approach with Time-to-Event Endpoint.

    PubMed

    Tang, Zhongwen

    2015-01-01

    An analytical way to compute predictive probability of success (PPOS) together with credible interval at interim analysis (IA) is developed for big clinical trials with time-to-event endpoints. The method takes account of the fixed data up to IA, the amount of uncertainty in future data, and uncertainty about parameters. Predictive power is a special type of PPOS. The result is confirmed by simulation. An optimal design is proposed by finding optimal combination of analysis time and futility cutoff based on some PPOS criteria.
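    Predictive power, which the abstract notes is a special case of PPOS, has a well-known closed form under a normal approximation with a flat prior on the drift parameter: PPOS = Φ((Z_t/√t − z_crit)·√(t/(1−t))), where Z_t is the interim z-statistic and t the information fraction. The sketch below implements that textbook special case only; the paper's time-to-event method with credible intervals is more involved.

```python
import math
from statistics import NormalDist

def predictive_power(z_interim, info_frac, alpha=0.025):
    """Predictive probability of final-analysis success given the interim
    z-statistic, under a flat prior on the drift (normal approximation)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1.0 - alpha)   # one-sided success threshold
    return nd.cdf((z_interim / math.sqrt(info_frac) - z_crit)
                  * math.sqrt(info_frac / (1.0 - info_frac)))
```

    A useful boundary case: if the interim estimate of the drift exactly matches the final threshold (Z_t/√t = z_crit), the predictive power is 50 %.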

  2. An Improved Method of AGM for High Precision Geolocation of SAR Images

    NASA Astrophysics Data System (ADS)

    Zhou, G.; He, C.; Yue, T.; Huang, W.; Huang, Y.; Li, X.; Chen, Y.

    2018-05-01

    To take full advantage of SAR images, high-precision geolocation is necessary: it ensures the accuracy of geometric correction and allows effective mapping information to be extracted from the images. This paper presents an improved analytical geolocation method (IAGM) that determines the high-precision geolocation of each pixel in a digital SAR image. The method is based on the analytical geolocation method (AGM) proposed by X. K. Yuan, which aims at solving the RD model. Tests were conducted using a RADARSAT-2 SAR image. Comparing the predicted feature geolocation with the position determined from a high-precision orthophoto, the results indicate that an accuracy of 50 m is attainable with this method. Error sources are analyzed, and some recommendations for improving image location accuracy in future spaceborne SARs are given.

  3. A study of cell electrophoresis as a means of purifying growth hormone secreting cells

    NASA Technical Reports Server (NTRS)

    Plank, Lindsay D.; Hymer, W. C.; Kunze, M. Elaine; Marks, Gary M.; Lanham, J. Wayne

    1983-01-01

    Growth hormone secreting cells of the rat anterior pituitary are heavily laden with granules of growth hormone and can be partially purified on the basis of their resulting high density. Two methods of preparative cell electrophoresis were investigated as methods of enhancing the purification of growth hormone producing cells: density gradient electrophoresis and continuous flow electrophoresis. Both methods provided a two- to four-fold enrichment in growth hormone production per cell relative to that achieved by previous methods. Measurements of electrophoretic mobilities by two analytical methods, microscopic electrophoresis and laser-tracking electrophoresis, revealed very little distinction between unpurified anterior pituitary cell suspensions and somatotroph-enriched cell suspensions. Predictions calculated on the basis of analytical electrophoretic data are consistent with the hypothesis that sedimentation plays a significant role in both types of preparative electrophoresis, and the electrophoretic mobility of the growth hormone secreting subpopulation of cells remains unknown.

  4. Utilizing Photogrammetry and Strain Gage Measurement to Characterize Pressurization of an Inflatable Module

    NASA Technical Reports Server (NTRS)

    Valle, Gerard D.; Selig, Molly; Litteken, Doug; Oliveras, Ovidio

    2012-01-01

    This paper documents the integration of a large hatch penetration into an inflatable module. It also documents the comparison of analytical load predictions with measured results utilizing strain measurement. Strain was measured by utilizing photogrammetric measurement and through measurement obtained from strain gages mounted to selected clevises that interface with the structural webbings. Bench testing showed good correlation between strain measurement obtained from an extensometer and photogrammetric measurement, especially after the fabric has transitioned through the low load/high strain region of the curve. Test results for the full-scale torus were mixed in the lower load, and thus lower strain, regions. Overall strain, and thus load, measured by strain gages and photogrammetry tracked fairly well with analytical predictions. Methods and areas of improvement are discussed.

  5. Simultaneous determination of potassium guaiacolsulfonate, guaifenesin, diphenhydramine HCl and carbetapentane citrate in syrups by using HPLC-DAD coupled with partial least squares multivariate calibration.

    PubMed

    Dönmez, Ozlem Aksu; Aşçi, Bürge; Bozdoğan, Abdürrezzak; Sungur, Sidika

    2011-02-15

    A simple and rapid analytical procedure was proposed for the determination of chromatographic peaks by means of partial least squares multivariate calibration (PLS) of high-performance liquid chromatography with diode array detection (HPLC-DAD). The method is exemplified with analysis of quaternary mixtures of potassium guaiacolsulfonate (PG), guaifenesin (GU), diphenhydramine HCl (DP) and carbetapentane citrate (CP) in syrup preparations. In this method, the peak area does not need to be directly measured and predictions are more accurate. Though the chromatographic and spectral peaks of the analytes were heavily overlapped and interferents coeluted with the compounds studied, good recoveries of analytes could be obtained with HPLC-DAD coupled with PLS calibration. The method was tested by analyzing synthetic mixtures of PG, GU, DP and CP, with a classical HPLC method used for comparison. The proposed methods were applied to syrup samples containing the four drugs, and the obtained results were statistically compared with each other. Finally, the main advantages of the HPLC-PLS method over the classical HPLC method are emphasized: a simple mobile phase, shorter analysis time, and no need for an internal standard or gradient elution. Copyright © 2010 Elsevier B.V. All rights reserved.

  6. Aeroacoustic Prediction Codes

    NASA Technical Reports Server (NTRS)

    Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)

    2000-01-01

    This report describes work performed on Contract NAS3-27720AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise, and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semi-empirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition,
the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor-noise correlation model was developed from engine acoustic test results. This work provided several insights on potential approaches to reducing aircraft engine noise. Code development is described in this report, and those insights are discussed.

  7. Second derivatives for approximate spin projection methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, Lee M.; Hratchian, Hrant P., E-mail: hhratchian@ucmerced.edu

    2015-02-07

    The use of broken-symmetry electronic structure methods is required in order to obtain correct behavior of electronically strained open-shell systems, such as transition states, biradicals, and transition metals. This approach often has issues with spin contamination, which can lead to significant errors in predicted energies, geometries, and properties. Approximate projection schemes are able to correct for spin contamination and can often yield improved results. To fully make use of these methods and to carry out exploration of the potential energy surface, it is desirable to develop an efficient second energy derivative theory. In this paper, we formulate the analytical second derivatives for the Yamaguchi approximate projection scheme, building on recent work that has yielded an efficient implementation of the analytical first derivatives.

  8. Wildland Fire Prevention: Today, Intuition--Tomorrow, Management

    Treesearch

    Albert J. Simard; Linda R. Donoghue

    1987-01-01

    Describes, from a historical perspective, methods used to characterize fire prevention problems and evaluate prevention programs and discusses past research efforts to bolster these analytical and management efforts. Highlights research on the sociological perspectives of the wildfire problem and on quantitative fire occurrence prediction and program evaluation systems...

  9. L-shaped piezoelectric motor--part II: analytical modeling.

    PubMed

    Avirovik, Dragan; Karami, M Amin; Inman, Daniel; Priya, Shashank

    2012-01-01

    This paper develops an analytical model for an L-shaped piezoelectric motor. The motor structure has been described in detail in Part I of this study. The coupling of the bending vibration modes of the bimorphs results in an elliptical motion at the tip. The emphasis of this paper is on the development of a precise analytical model which can predict the dynamic behavior of the motor based on its geometry. The motor was first modeled mechanically to identify the natural frequencies and mode shapes of the structure. Next, an electromechanical model of the motor was developed to take into account the piezoelectric effect, and the dynamics of the L-shaped piezoelectric motor were obtained as a function of voltage and frequency. Finally, the analytical model was validated by comparing it to experimental results and the finite element method (FEM). © 2012 IEEE.

  10. Prediction of dynamical systems by symbolic regression

    NASA Astrophysics Data System (ADS)

    Quade, Markus; Abel, Markus; Shafi, Kamran; Niven, Robert K.; Noack, Bernd R.

    2016-07-01

    We study the modeling and prediction of dynamical systems based on conventional models derived from measurements. Such algorithms are highly desirable in situations where the underlying dynamics are hard to model from physical principles, or where simplified models need to be found. We focus on symbolic regression methods as a part of machine learning. These algorithms are capable of learning an analytically tractable model from data, a highly valuable property. Symbolic regression methods can be considered generalized regression methods. We investigate two particular algorithms: so-called fast function extraction, which is a generalized linear regression algorithm, and genetic programming, which is a very general method. Both are able to combine functions in such a way that a good model for the prediction of the temporal evolution of a dynamical system can be identified. We illustrate the algorithms by finding a prediction for the evolution of a harmonic oscillator based on measurements, by detecting an arriving front in an excitable system, and, as a real-world application, by predicting solar power production based on energy production observations at a given site together with the weather forecast.
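
    The fast-function-extraction idea in this record can be sketched in a few lines: because the model is linear in its coefficients, fitting reduces to ordinary least squares over a library of candidate basis functions, as in the harmonic-oscillator example. The library terms and data below are illustrative assumptions, not the authors' actual algorithm or code.

```python
import numpy as np

# "Measurements" of a harmonic oscillator (noise-free for clarity).
t = np.linspace(0.0, 4.0 * np.pi, 200)
x = 2.0 * np.cos(t) + 0.5 * np.sin(t)

# Candidate basis library: the model is linear in its coefficients,
# so fitting reduces to ordinary least squares over these columns.
library = {
    "cos(t)": np.cos(t),
    "sin(t)": np.sin(t),
    "t": t,
    "t^2": t ** 2,
}
A = np.column_stack(list(library.values()))
coef, *_ = np.linalg.lstsq(A, x, rcond=None)
model = dict(zip(library.keys(), coef))

# The fit should assign ~2.0 to cos(t), ~0.5 to sin(t), and ~0 weight
# to the irrelevant polynomial terms.
print({k: round(v, 3) for k, v in model.items()})
```

A genetic-programming variant would instead evolve the library itself; the linear-in-coefficients restriction is what makes this variant fast.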

  11. Benchmark solutions for the galactic heavy-ion transport equations with energy and spatial coupling

    NASA Technical Reports Server (NTRS)

    Ganapol, Barry D.; Townsend, Lawrence W.; Lamkin, Stanley L.; Wilson, John W.

    1991-01-01

    Nontrivial benchmark solutions are developed for the galactic heavy-ion transport equations in the straight-ahead approximation with energy and spatial coupling. Analytical representations of the ion fluxes are obtained for a variety of sources with the assumption that the nuclear interaction parameters are energy independent. The method utilizes an analytical Laplace transform inversion to yield a closed-form representation that is computationally efficient. The flux profiles are then used to predict ion dose profiles, which are important for shield design studies.

  12. A comparison of finite element and analytic models of acoustic scattering from rough poroelastic interfaces.

    PubMed

    Bonomo, Anthony L; Isakson, Marcia J; Chotiros, Nicholas P

    2015-04-01

    The finite element method is used to model acoustic scattering from rough poroelastic surfaces. Both monostatic and bistatic scattering strengths are calculated and compared with three analytic models: perturbation theory, the Kirchhoff approximation, and the small-slope approximation. It is found that the small-slope approximation is in very close agreement with the finite element results for all cases studied, and that perturbation theory and the Kirchhoff approximation can be considered valid in those instances where their predictions match those given by the small-slope approximation.

  13. High Rayleigh number convection in rectangular enclosures with differentially heated vertical walls and aspect ratios between zero and unity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kassemi, S.A.

    1988-04-01

    High Rayleigh number convection in a rectangular cavity with insulated horizontal surfaces and differentially heated vertical walls was analyzed for an arbitrary aspect ratio smaller than or equal to unity. Unlike previous analytical studies, a systematic method of solution based on a linearization technique and an analytical iteration procedure was developed to obtain approximate closed-form solutions for a wide range of aspect ratios. The predicted velocity and temperature fields are shown to be in excellent agreement with available experimental and numerical data.

  14. High Rayleigh number convection in rectangular enclosures with differentially heated vertical walls and aspect ratios between zero and unity

    NASA Technical Reports Server (NTRS)

    Kassemi, Siavash A.

    1988-01-01

    High Rayleigh number convection in a rectangular cavity with insulated horizontal surfaces and differentially heated vertical walls was analyzed for an arbitrary aspect ratio smaller than or equal to unity. Unlike previous analytical studies, a systematic method of solution based on a linearization technique and an analytical iteration procedure was developed to obtain approximate closed-form solutions for a wide range of aspect ratios. The predicted velocity and temperature fields are shown to be in excellent agreement with available experimental and numerical data.

  15. Elastic properties of rigid fiber-reinforced composites

    NASA Astrophysics Data System (ADS)

    Chen, J.; Thorpe, M. F.; Davis, L. C.

    1995-05-01

    We study the elastic properties of rigid fiber-reinforced composites with perfect bonding between fibers and matrix, and also with sliding boundary conditions. In the dilute region, there exists an exact analytical solution. Around the rigidity threshold we find the elastic moduli and Poisson's ratio by decomposing the deformation into a compression mode and a rotation mode. For perfect bonding, both modes are important, whereas only the compression mode is operative for sliding boundary conditions. We employ a digital-image-based method and finite element analysis to perform computer simulations, which confirm our analytical predictions.

  16. Satellite recovery - Attitude dynamics of the targets

    NASA Technical Reports Server (NTRS)

    Cochran, J. E., Jr.; Lahr, B. S.

    1986-01-01

    The problems of categorizing and modeling the attitude dynamics of uncontrolled artificial earth satellites which may be targets in recovery attempts are addressed. Methods of classification presented are based on satellite rotational kinetic energy, rotational angular momentum and orbit and on the type of control present prior to the benign failure of the control system. The use of approximate analytical solutions and 'exact' numerical solutions to the equations governing satellite attitude motions to predict uncontrolled attitude motion is considered. Analytical and numerical results are presented for the evolution of satellite attitude motions after active control termination.

  17. A Ricin Forensic Profiling Approach Based on a Complex Set of Biomarkers

    DOE PAGES

    Fredriksson, Sten-Ake; Wunschel, David S.; Lindstrom, Susanne Wiklund; ...

    2018-03-28

    A forensic method for the retrospective determination of preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids, and seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1 – PM4), ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation methods. Comprehensive analytical data were collected by use of a range of analytical methods, and robust orthogonal partial least squares discriminant analysis (OPLS-DA) models were constructed based on the calibration set. By the use of a decision tree and two OPLS-DA models, the sample preparation methods of test set samples were determined. The model statistics of the two models were good, and a 100% rate of correct predictions of the test set was achieved.

  18. A Ricin Forensic Profiling Approach Based on a Complex Set of Biomarkers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fredriksson, Sten-Ake; Wunschel, David S.; Lindstrom, Susanne Wiklund

    A forensic method for the retrospective determination of preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids, and seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1 – PM4), ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation methods. Comprehensive analytical data were collected by use of a range of analytical methods, and robust orthogonal partial least squares discriminant analysis (OPLS-DA) models were constructed based on the calibration set. By the use of a decision tree and two OPLS-DA models, the sample preparation methods of test set samples were determined. The model statistics of the two models were good, and a 100% rate of correct predictions of the test set was achieved.

  19. Mathematical and field analysis of longitudinal reservoir infill

    NASA Astrophysics Data System (ADS)

    Ke, W. T.; Capart, H.

    2016-12-01

    In reservoirs, severe problems are caused by infilled sediment deposits. In the long term, sediment accumulation reduces reservoir storage capacity and flood control benefits. In the short term, sediment deposits affect water-supply intakes and hydroelectricity generation. For reservoir management, it is important to understand the deposition process and then to predict sedimentation in the reservoir. To investigate the behavior of sediment deposits, we propose a simplified one-dimensional theory, derived from the Exner equation, to predict the longitudinal sedimentation distribution in idealized reservoirs. The theory models the geomorphic action of reservoir infill for three scenarios: delta progradation, near-dam bottom deposition, and final infill. These yield three self-similar analytical solutions for the reservoir bed profiles under different boundary conditions, composed of the error function, the complementary error function, and the imaginary error function, respectively. The theory is also computed by a finite volume method to test the analytical solutions. The theoretical and numerical predictions are in good agreement with a one-dimensional small-scale laboratory experiment. As the theory is simple to apply, with analytical solutions and numerical computation, we propose some applications that simulate the long-profile evolution of field reservoirs, focusing on the infill sediment deposit volume resulting in the uplift of the near-dam bottom elevation. The field reservoirs introduced here are Wushe Reservoir, Tsengwen Reservoir, and Mudan Reservoir in Taiwan; Lago Dos Bocas in Puerto Rico; and Sakuma Dam in Japan.
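
    As a hedged illustration of the error-function-family solutions this record mentions, the sketch below evaluates a complementary-error-function bed profile of self-similar form. The specific expression eta(x, t) = eta0 * erfc(x / (2*sqrt(K*t))) is an assumed stand-in for near-dam bottom deposition, not the paper's derived solution; eta0 and K are illustrative parameters.

```python
import math

def bed_profile(x, t, eta0=1.0, K=1.0):
    """Deposit thickness at distance x from the dam at time t.

    Assumed self-similar form eta = eta0 * erfc(x / (2*sqrt(K*t))):
    the bed builds toward eta0 near the dam and decays downstream.
    """
    return eta0 * math.erfc(x / (2.0 * math.sqrt(K * t)))

# Self-similarity: the profile depends on x and t only through x/sqrt(t),
# so stretching x by 2 and t by 4 leaves the thickness unchanged.
print(bed_profile(1.0, 1.0), bed_profile(2.0, 4.0))
```

The same collapse onto a single similarity variable is what lets one laboratory profile be rescaled to field reservoirs of very different size.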

  20. Predicting toxicity to Hyalella azteca in pyrogenic-impacted sediments-Do we need to analyze for all 34 PAHs?

    PubMed

    Geiger, Stephen C; Azzolina, Nicholas A; Nakles, David V; Hawthorne, Steven B

    2016-07-01

    Polycyclic aromatic hydrocarbons (PAHs) are major drivers of risk at many urban and/or industrialized sediment sites. The US Environmental Protection Agency (USEPA) currently recommends using measurements of 18 parent + 16 groups of alkylated PAHs (PAH-34) to assess the potential for sediment-bound PAHs to impact benthic organisms at these sites. ASTM Method D7363-13 was developed to directly measure low-level sediment porewater PAH concentrations. These concentrations are then compared to ambient water criteria (final chronic values [FCVs]) to assess the potential for impact to benthic organisms. The interlaboratory validation study that was used to finalize ASTM D7363-13 was developed using 24 of the 2-, 3-, and 4-ring PAHs (PAH-24) that are included in the USEPA PAH-34 analyte list. However, it is the responsibility of the user of ASTM Method D7363 to establish a test method to quantify the remaining 10 higher molecular weight PAHs that make up PAH-34. These higher molecular weight PAHs exhibit extremely low saturation solubilities that make their detection difficult in porewater, which has proven difficult to implement in a contract laboratory setting. As a result, commercial laboratories are hesitant to conduct the method on the entire PAH-34 analyte list. This article presents a statistical comparison of the ability of the PAH-24 and PAH-34 porewater results to predict survival of the freshwater amphipod Hyalella azteca, using the original 269 sediment samples used to gain ASTM D7363 Method approval. The statistical analysis shows that the PAH-24 are statistically indistinguishable from the PAH-34 for predicting toxicity. These results indicate that the analysis of freely dissolved porewater PAH-24 is sufficient for making risk-based decisions based on benthic invertebrate toxicity (survival and growth). This reduced target analyte list should result in a cost-saving for stakeholders and broader implementation of the method at PAH-impacted sediment sites. 
Integr Environ Assess Manag 2016;12:493-499. © 2015 SETAC.

  1. A Comparison of Tension and Compression Creep in a Polymeric Composite and the Effects of Physical Aging on Creep Behavior

    NASA Technical Reports Server (NTRS)

    Gates, Thomas S.; Veazie, David R.; Brinson, L. Catherine

    1996-01-01

    Experimental and analytical methods were used to investigate the similarities and differences in the effects of physical aging on the creep compliance of an IM7/K3B composite loaded in tension and compression. Two matrix-dominated loading modes, shear and transverse, were investigated for two load cases, tension and compression. The tests, run over a range of sub-glass-transition temperatures, provided material constants, material master curves, and aging-related parameters. Comparing results from the short-term data indicated that although trends in the data with respect to aging time and aging temperature are similar, differences exist due to load direction and mode. The analytical model used for predicting long-term behavior from short-term data worked equally well for the tension and compression loaded cases. Comparison of the loading modes indicated that the predictive model provided more accurate long-term predictions for the shear mode than for the transverse mode. Parametric studies showed the usefulness of the predictive model as a tool for investigating long-term performance and compliance acceleration due to temperature.

  2. A Proposed Method to Predict Preterm Birth Using Clinical Data, Standard Maternal Serum Screening, and Cholesterol

    PubMed Central

    ALLEMAN, Brandon W.; SMITH, Amanda R.; BYERS, Heather M.; BEDELL, Bruce; RYCKMAN, Kelli K.; MURRAY, Jeffrey C.; BOROWSKI, Kristi S.

    2013-01-01

    Objective To create a predictive model for preterm birth (PTB) from available clinical data and serum analytes. Study Design Serum analytes from routine pregnancy screening plus cholesterol, and corresponding health information, were linked to birth certificate data for a cohort of 2699 Iowa women with serum sampled in the first and second trimesters. Stepwise logistic regression was used to select the best predictive model for PTB. Results Serum screening markers remained significant predictors of PTB even after controlling for maternal characteristics. The best predictive model included maternal characteristics, first-trimester total cholesterol (TC), TC change between trimesters, and second-trimester alpha-fetoprotein and inhibin A. The model showed better discriminatory ability than PTB history alone and performed similarly in subgroups of women without past PTB. Conclusions Using clinical and serum screening data, a potentially useful predictor of PTB was constructed. Validation and replication in other populations, and incorporation of other measures that identify PTB risk, such as cervical length, can be a step toward identifying additional women who may benefit from new or currently available interventions. PMID:23500456

  3. An investigation into the two-stage meta-analytic copula modelling approach for evaluating time-to-event surrogate endpoints which comprise of one or more events of interest.

    PubMed

    Dimier, Natalie; Todd, Susan

    2017-09-01

    Clinical trials of experimental treatments must be designed with primary endpoints that directly measure clinical benefit for patients. In many disease areas, the recognised gold standard primary endpoint can take many years to mature, leading to challenges in the conduct and quality of clinical studies. There is increasing interest in using shorter-term surrogate endpoints as substitutes for costly long-term clinical trial endpoints; such surrogates need to be selected according to biological plausibility, as well as the ability to reliably predict the unobserved treatment effect on the long-term endpoint. A number of statistical methods to evaluate this prediction have been proposed; this paper uses a simulation study to explore one such method in the context of time-to-event surrogates for a time-to-event true endpoint. This two-stage meta-analytic copula method has been extensively studied for time-to-event surrogate endpoints with one event of interest, but thus far has not been explored for the assessment of surrogates which have multiple events of interest, such as those incorporating information directly from the true clinical endpoint. We assess the sensitivity of the method to various factors including strength of association between endpoints, the quantity of data available, and the effect of censoring. In particular, we consider scenarios where there exist very little data on which to assess surrogacy. Results show that the two-stage meta-analytic copula method performs well under certain circumstances and could be considered useful in practice, but demonstrates limitations that may prevent universal use. Copyright © 2017 John Wiley & Sons, Ltd.
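
    The copula machinery at the heart of the two-stage approach can be illustrated with a small sketch: dependent (surrogate, true-endpoint) uniforms are drawn from a Clayton copula by conditional inversion, and the implied rank correlation is checked against the closed-form Kendall's tau = theta/(theta + 2). The Clayton family and the parameter value are illustrative choices here, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(3)
theta = 2.0                      # Clayton dependence parameter (assumed)
n = 500

# Conditional-inversion sampling of the Clayton copula:
# given U and an independent uniform W, solve C(v | u) = w for V.
u = rng.random(n)
w = rng.random(n)
v = (u ** -theta * (w ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)

def kendall_tau(a, b):
    """Empirical Kendall's tau (O(n^2); no ties expected for continuous data)."""
    s = 0.0
    for i in range(len(a)):
        s += np.sum(np.sign((a[i] - a[i + 1:]) * (b[i] - b[i + 1:])))
    return 2.0 * s / (len(a) * (len(a) - 1))

# For the Clayton copula, Kendall's tau = theta / (theta + 2) = 0.5 here.
print(round(kendall_tau(u, v), 3))
```

In the surrogate-endpoint setting these uniforms would be the probability transforms of the two time-to-event marginals; the copula parameter then quantifies the individual-level association the method tries to estimate.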

  4. Study of Surface Wave Propagation in Fluid-Saturated Porous Solids.

    NASA Astrophysics Data System (ADS)

    Azcuaga, Valery Francisco Godinez

    1995-01-01

    This study addresses surface wave propagation phenomena in fluid-saturated porous solids. The analytical method for the calculation of surface wave velocities (Feng and Johnson, JASA, 74, 906, 1983) is extended to the case of a porous solid saturated with a wetting fluid in contact with a non-wetting fluid, in order to study a material combination suitable for experimental investigation. The analytical method is further extended to the case of a non-wetting fluid/wetting fluid-saturated porous solid interface with an arbitrary finite surface stiffness. These extensions of the analytical method allow the theoretical study of surface wave propagation phenomena during the saturation process. A modification to the 2-D space-time reflection Green's function (Feng and Johnson, JASA, 74, 915, 1983) is introduced in order to simulate the behavior of surface wave signals detected during the experimental investigation of surface wave propagation on fluid-saturated porous solids (Nagy, Appl. Phys. Lett., 60, 2735, 1992). This modification, together with the introduction of an excess attenuation for the Rayleigh surface mode, makes it possible to explain the apparent velocity changes observed in the surface wave signals during saturation. Experimental results concerning the propagation of surface waves on an alcohol-saturated porous glass are presented. These experiments were performed at frequencies of 500 and 800 kHz and show the simultaneous propagation of the two surface modes predicted by the extended analytical method. Finally, an analysis of the displacements associated with the different surface modes is presented. This analysis reveals that it is possible to favor the generation of the Rayleigh surface mode or of the slow surface mode simply by changing the type of transducer used in the generation of surface waves.
Calculations show that a shear transducer couples more energy into the Rayleigh mode, whereas a longitudinal transducer couples more energy into the slow surface mode. Experimental results obtained with the modified experimental system show a qualitative agreement with the theoretical predictions.

  5. Head-target tracking control of well drilling

    NASA Astrophysics Data System (ADS)

    Agzamov, Z. V.

    2018-05-01

    The method of directional drilling trajectory control for oil and gas wells using predictive models is considered in the paper. The developed method does not rely on optimization, and therefore has no need for high-performance computing. Nevertheless, it allows following the well-plan with high precision while taking into account process input saturation. The controller output is calculated both from the current target reference point of the well-plan and from a well-trajectory prediction obtained with the analytical model. This method allows following a well-plan not only in angular but also in Cartesian coordinates. Simulation of the control system has confirmed its high precision and performance under a wide range of random disturbances.
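
    The idea of steering against a model prediction rather than solving an optimization can be sketched on a toy kinematic model. Everything below is a hypothetical illustration: the two-state model, the gains, the saturation limit, and the ramp well-plan are invented, and none of it reproduces the paper's analytical model or controller.

```python
import numpy as np

ds = 1.0        # measured-depth step per iteration (m) -- illustrative
build = 0.02    # heading change per unit steering input (rad) -- illustrative
u_max = 1.0     # steering input saturation
horizon = 10    # prediction horizon (steps)

def predict_offset(x, theta, steps):
    """Predicted lateral offset if steering is held at zero."""
    for _ in range(steps):
        x += ds * np.sin(theta)
    return x

def plan(k):
    """Hypothetical well-plan: a lateral-offset ramp in Cartesian coordinates."""
    return 0.05 * k

x, theta = 0.0, 0.0
for k in range(300):
    # Steer against the predicted miss at the horizon; the clip models
    # process input saturation.  No optimization is involved.
    miss = predict_offset(x, theta, horizon) - plan(k + horizon)
    u = float(np.clip(-5.0 * miss, -u_max, u_max))
    theta += build * u
    x += ds * np.sin(theta)

print(abs(x - plan(300)))   # tracking error after the transient
```

The saturated input is active early in the run, yet the predicted-miss feedback still drives the Cartesian tracking error toward zero for the ramp plan.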

  6. Big data analytics : predicting traffic flow regimes from simulated connected vehicle messages using data analytics and machine learning.

    DOT National Transportation Integrated Search

    2016-12-25

    The key objectives of this study were to: 1. Develop advanced analytical techniques that make use of a dynamically configurable connected vehicle message protocol to predict traffic flow regimes in near-real time in a virtual environment and examine ...

  7. Analytical and numerical prediction of harmonic sound power in the inlet of aero-engines with emphasis on transonic rotation speeds

    NASA Astrophysics Data System (ADS)

    Lewy, Serge; Polacsek, Cyril; Barrier, Raphael

    2014-12-01

    Tone noise radiated through the inlet of a turbofan is mainly due to rotor-stator interactions at subsonic regimes (approach flight), and to the shock waves attached to each blade at supersonic helical tip speeds (takeoff). The axial compressor of a helicopter turboshaft engine is transonic as well and can be studied like turbofans at takeoff. The objective of the paper is to predict the sound power at the inlet radiating into the free field, with a focus on transonic conditions because sound levels are much higher there. Direct numerical computation of tone acoustic power is based on a RANS (Reynolds-averaged Navier-Stokes) solver followed by an integration of acoustic intensity over specified inlet cross-sections, derived from the Cantrell and Hart equations (valid in irrotational flows). In transonic regimes, sound power decreases along the intake because of nonlinear propagation, which must be discriminated from numerical dissipation. This is one of the reasons why an analytical approach is also suggested. It is based on three steps: (i) appraisal of the initial pressure jump of the shock waves; (ii) the 2D nonlinear propagation model of Morfey and Fisher; (iii) calculation of the sound power of the 3D ducted acoustic field. In this model, all the blades are assumed to be identical, such that only the blade passing frequency and its harmonics are predicted (as in the present numerical simulations). However, the transfer from the blade passing frequency to multiple pure tones can be evaluated in a fourth step through a statistical analysis of irregularities between blades. The interest of the analytical method is that it provides a good estimate of nonlinear acoustic propagation in the upstream duct while being easy and fast to compute. The various methods are applied to two turbofan models, in approach (subsonic) and takeoff (transonic) conditions respectively, and to a Turbomeca turboshaft engine (transonic case). The analytical method in the transonic regime appears to be quite reliable by comparison with the numerical solution and with available experimental data.

  8. Nearshore Tsunami Inundation Model Validation: Toward Sediment Transport Applications

    USGS Publications Warehouse

    Apotsos, Alex; Buckley, Mark; Gelfenbaum, Guy; Jaffe, Bruce; Vatvani, Deepak

    2011-01-01

    Model predictions from a numerical model, Delft3D, based on the nonlinear shallow water equations are compared with analytical results and laboratory observations from seven tsunami-like benchmark experiments, and with field observations from the 26 December 2004 Indian Ocean tsunami. The model accurately predicts the magnitude and timing of the measured water levels and flow velocities, as well as the magnitude of the maximum inundation distance and run-up, for both breaking and non-breaking waves. The shock-capturing numerical scheme employed describes well the total decrease in wave height due to breaking, but does not reproduce the observed shoaling near the break point. The maximum water levels observed onshore near Kuala Meurisi, Sumatra, following the 26 December 2004 tsunami are well predicted given the uncertainty in the model setup. The good agreement between the model predictions and the analytical results and observations demonstrates that the numerical solution and wetting and drying methods employed are appropriate for modeling tsunami inundation for breaking and non-breaking long waves. Extension of the model to include sediment transport may be appropriate for long, non-breaking tsunami waves. Using available sediment transport formulations, the sediment deposit thickness at Kuala Meurisi is predicted generally within a factor of 2.

  9. Using Long-Short-Term-Memory Recurrent Neural Networks to Predict Aviation Engine Vibrations

    NASA Astrophysics Data System (ADS)

    ElSaid, AbdElRahman Ahmed

    This thesis examines building viable Recurrent Neural Networks (RNNs) using Long Short-Term Memory (LSTM) neurons to predict aircraft engine vibrations. The different networks are trained on a large database of flight data records obtained from an airline, containing flights that suffered from excessive vibration. RNNs can provide a more generalizable and robust method for prediction than analytical calculations of engine vibration, as analytical calculations must be solved iteratively based on specific empirical engine parameters, and this database contains multiple types of engines. Further, LSTM RNNs provide a "memory" of the contribution of previous time series data, which can further improve predictions of future vibration values. LSTM RNNs were used over traditional RNNs, which suffer from vanishing/exploding gradients when trained with back propagation. The study managed to predict vibration values 1, 5, 10, and 20 seconds in the future, with 2.84%, 3.3%, 5.51%, and 10.19% mean absolute error, respectively. These neural networks provide a promising means for the future development of warning systems, so that suitable actions can be taken before the occurrence of excess vibration to avoid unfavorable situations during flight.
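
    The "memory" carried by LSTM neurons can be illustrated with a single-cell forward pass. The standard gate equations appear below with random placeholder weights and a synthetic vibration-like input; this is a generic LSTM sketch, not the thesis's trained network or data.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One step of a standard LSTM cell (gate equations only)."""
    z = W @ x + U @ h + b                       # stacked gate pre-activations
    i, f, o, g = np.split(z, 4)
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o = sig(i), sig(f), sig(o)            # input / forget / output gates
    c = f * c + i * np.tanh(g)                  # cell state: the "memory"
    h = o * np.tanh(c)                          # hidden state / output
    return h, c

rng = np.random.default_rng(1)
hidden, n_in = 8, 3
W = rng.normal(scale=0.1, size=(4 * hidden, n_in))   # placeholder weights
U = rng.normal(scale=0.1, size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)

h = np.zeros(hidden)
c = np.zeros(hidden)
for t in range(20):                             # a short vibration-like series
    x = np.array([np.sin(0.3 * t), np.cos(0.3 * t), 0.0])
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)
```

Because the forget gate multiplies rather than repeatedly composes activations, gradients through the cell state decay far more slowly than in a plain RNN, which is the vanishing-gradient advantage the abstract refers to.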

  10. Lock-in amplifier error prediction and correction in frequency sweep measurements.

    PubMed

    Sonnaillon, Maximiliano Osvaldo; Bonetto, Fabian Jose

    2007-01-01

    This article proposes an analytical algorithm for predicting errors in lock-in amplifiers (LIAs) working with time-varying reference frequency. Furthermore, a simple method for correcting such errors is presented. The reference frequency can be swept in order to measure the frequency response of a system within a given spectrum. The continuous variation of the reference frequency produces a measurement error that depends on three factors: the sweep speed, the LIA low-pass filters, and the frequency response of the measured system. The proposed error prediction algorithm is based on the final value theorem of the Laplace transform. The correction method uses a double-sweep measurement. A mathematical analysis is presented and validated with computational simulations and experimental measurements.
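
    A minimal sketch of the sweep-induced error and of a double-sweep correction, under assumed simplifications: the device under test is a one-pole low-pass, and the LIA's output filter is modeled as a first-order exponential smoother acting along the sweep. None of this reproduces the article's exact algorithm, which derives the error analytically from the final value theorem.

```python
import numpy as np

def lia_sweep(freqs, tau_lp, dt):
    """Swept-frequency magnitude seen through a 1st-order output filter.

    The device under test is a one-pole low-pass with unit corner
    frequency; the LIA low-pass is an exponential smoother with time
    constant tau_lp stepped along the sweep (time step dt).
    """
    true_mag = 1.0 / np.sqrt(1.0 + freqs ** 2)
    alpha = dt / (tau_lp + dt)
    out = np.empty_like(true_mag)
    y = true_mag[0]                    # start settled at the first point
    for k, m in enumerate(true_mag):
        y += alpha * (m - y)           # filter lag distorts the sweep
        out[k] = y
    return true_mag, out

freqs = np.linspace(0.1, 10.0, 500)
true_mag, up = lia_sweep(freqs, tau_lp=0.05, dt=0.01)

# Double-sweep correction: sweep down as well, then average the two
# readings taken at the same frequency; first-order lag errors cancel.
_, down = lia_sweep(freqs[::-1], tau_lp=0.05, dt=0.01)
corrected = 0.5 * (up + down[::-1])

err_single = np.max(np.abs(up - true_mag))
err_double = np.max(np.abs(corrected - true_mag))
print(err_single > err_double)
```

The up-sweep lags behind a falling response (overestimate) while the down-sweep lags behind a rising one (underestimate), so their average cancels the error to first order in the filter time constant.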

  11. Towards Adaptive Educational Assessments: Predicting Student Performance using Temporal Stability and Data Analytics in Learning Management Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thakur, Gautam; Olama, Mohammed M; McNair, Wade

    Data-driven assessments and adaptive feedback are becoming a cornerstone of research in educational data analytics, which involves developing methods for exploring the unique types of data that come from the educational context. For example, predicting college student performance is crucial for both the students and educational institutions. It can support timely intervention to prevent students from failing a course, increase the efficacy of advising functions, and improve course completion rates. In this paper, we present our efforts in using data analytics to enable educationists to design novel data-driven assessment and feedback mechanisms. In order to achieve this objective, we investigate the temporal stability of students' grades and perform predictive analytics on academic data collected from 2009 through 2013 in one of the most commonly used learning management systems, Moodle. First, we identified the data features useful for assessments and for predicting student outcomes, such as students' scores on homework assignments, quizzes, and exams, in addition to their activities in discussion forums and their total Grade Point Average (GPA) in the same term they enrolled in the course. Second, time series models in both the frequency and time domains are applied to characterize the progression as well as overall projections of the grades. In particular, the models analyzed the stability as well as the fluctuation of grades among students across the collegiate years (from freshman to senior) and across disciplines. Third, Logistic Regression and Neural Network predictive models are used to identify, as early as possible, students who are in danger of failing the course they are currently enrolled in. These models compute the likelihood of any given student failing (or passing) the current course. 
The time series analysis indicates that assessments and continuous feedback are more critical for freshmen and sophomores (even in easy courses) than for seniors, and those assessments may be provided using the predictive models. Numerical results are presented to evaluate and compare the performance of the developed models and their predictive accuracy. Our results show that there are strong ties associated with the first few weeks of coursework, and they have an impact on the design and distribution of individual modules.
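
    A minimal sketch of the logistic-regression early-warning idea on synthetic data; the three features stand in for early-term signals of the kind described (quiz scores, forum activity, GPA) and are assumptions, not the study's Moodle dataset or model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "early warning" data: three standardized features per student
# and a pass/fail label drawn from a logistic model.
n = 400
X = rng.normal(size=(n, 3))
true_w = np.array([1.5, 0.8, 2.0])
p_true = 1.0 / (1.0 + np.exp(-(X @ true_w - 0.5)))
y = (rng.random(n) < p_true).astype(float)      # 1 = passed the course

# Plain gradient-descent logistic regression with an intercept column.
Xb = np.column_stack([X, np.ones(n)])
w = np.zeros(4)
for _ in range(2000):
    pred = 1.0 / (1.0 + np.exp(-(Xb @ w)))
    w -= 0.1 * Xb.T @ (pred - y) / n

pred = 1.0 / (1.0 + np.exp(-(Xb @ w)))
accuracy = float(np.mean((pred > 0.5) == (y == 1.0)))
print(round(accuracy, 3))
```

The fitted model outputs a failure likelihood per student, which is the quantity an advising workflow would threshold to trigger an intervention.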

  12. An Interoperable Electronic Medical Record-Based Platform for Personalized Predictive Analytics

    ERIC Educational Resources Information Center

    Abedtash, Hamed

    2017-01-01

    Precision medicine refers to the delivering of customized treatment to patients based on their individual characteristics, and aims to reduce adverse events, improve diagnostic methods, and enhance the efficacy of therapies. Among efforts to achieve the goals of precision medicine, researchers have used observational data for developing predictive…

  13. Predicting Nursing Facility Transition Candidates Using AID: A Case Study

    ERIC Educational Resources Information Center

    James, Mary L.; Wiley, Elizabeth; Fries, Brant E.

    2007-01-01

    Purpose: Although the nursing facility transition literature is growing, little research has analyzed the characteristics of individuals so assisted or compared participants to those who remain institutionalized. This article describes an analytic method that researchers can apply to address these knowledge gaps, using the Arkansas Passages…

  14. Method of predicting mechanical properties of decayed wood

    DOEpatents

    Kelley, Stephen S.

    2003-07-15

    A method for determining the mechanical properties of decayed wood that has been exposed to wood decay microorganisms, comprising: a) illuminating a surface of decayed wood that has been exposed to wood decay microorganisms with wavelengths from the visible and near infrared (VIS-NIR) spectra; b) analyzing the surface of the decayed wood using a spectrometric method, the method generating first spectral data of wavelengths in the VIS-NIR spectra region; and c) using a multivariate analysis to predict mechanical properties of the decayed wood by comparing the first spectral data with a calibration model, the calibration model comprising second spectral data of wavelengths in the VIS-NIR spectra obtained by a second spectrometric method from a reference sample of decayed wood, the second spectral data being correlated with a known mechanical property analytical result obtained from the reference decayed wood.
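The multivariate calibration step in claim (c) can be sketched as a linear regression of a mechanical property on band absorbances from reference samples. Real NIR calibrations typically use PLS over many wavelengths; the two-band ordinary least squares below, and every number in it, are illustrative assumptions only.

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting (tiny dense systems only).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Fabricated calibration set: absorbance at two VIS-NIR bands for reference
# decayed-wood samples, with a made-up linear "true" property relationship.
spectra = [(0.12, 0.40), (0.30, 0.22), (0.45, 0.35), (0.60, 0.10), (0.25, 0.50)]
props = [2.0 * a1 - 1.5 * a2 + 0.3 for a1, a2 in spectra]

# Normal equations X^T X beta = X^T y, with an intercept column.
X = [(1.0, a1, a2) for a1, a2 in spectra]
XtX = [[sum(xi[r] * xi[c] for xi in X) for c in range(3)] for r in range(3)]
Xty = [sum(xi[r] * y for xi, y in zip(X, props)) for r in range(3)]
beta = solve(XtX, Xty)

def predict_property(a1, a2):
    # Predict the mechanical property of a new sample from its two bands.
    return beta[0] + beta[1] * a1 + beta[2] * a2
```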

  15. Numerical method to compute acoustic scattering effect of a moving source.

    PubMed

    Song, Hao; Yi, Mingxu; Huang, Jun; Pan, Yalin; Liu, Dawei

    2016-01-01

    In this paper, the aerodynamic characteristics of a ducted tail rotor in hover are numerically studied using a CFD method. An analytical time-domain formulation based on the Ffowcs Williams-Hawkings (FW-H) equation is derived for the prediction of the acoustic velocity field and used as a Neumann boundary condition on a rigid scattering surface. In order to predict the aerodynamic noise, a hybrid method combining computational aeroacoustics with an acoustic thin-body boundary element method is proposed. The aerodynamic results and the calculated sound pressure levels (SPLs) are compared with a known method for validation. Simulation results show that the duct can change the SPL values and the sound directivity. Compared with the isolated tail rotor, the SPLs of the ducted tail rotor are smaller at certain azimuths.

  16. Consideration of some factors affecting low-frequency fuselage noise transmission for propeller aircraft

    NASA Technical Reports Server (NTRS)

    Mixson, J. S.; Roussos, L. A.

    1986-01-01

    Possible reasons for disagreement between measured and predicted trends of sidewall noise transmission at low frequency are investigated using simplified analysis methods. An analytical model combining incident plane acoustic waves with an infinite flat panel is used to study the effects of sound incidence angle, plate structural properties, frequency, absorption, and the difference between noise reduction and transmission loss. Analysis shows that these factors have significant effects on noise transmission but they do not account for the differences between measured and predicted trends at low frequencies. An analytical model combining an infinite flat plate with a normally incident acoustic wave having exponentially decaying magnitude along one coordinate is used to study the effect of a localized source distribution such as is associated with propeller noise. Results show that the localization brings the predicted low-frequency trend of noise transmission into better agreement with measured propeller results. This effect is independent of low-frequency stiffness effects that have been previously reported to be associated with boundary conditions.

  17. Measurement and prediction of propeller flow field on the PTA aircraft at speeds of up to Mach 0.85. [Propfan Test Assessment

    NASA Technical Reports Server (NTRS)

    Aljabri, Abdullah S.

    1988-01-01

    High speed subsonic transports powered by advanced propellers provide significant fuel savings compared to turbofan powered transports. Unfortunately, however, propfans must operate in aircraft-induced nonuniform flow fields which can lead to high blade cyclic stresses, vibration, and noise. To optimize the design and installation of these advanced propellers, therefore, detailed knowledge of the complex flow field is required. As part of the NASA Propfan Test Assessment (PTA) program, a 1/9 scale semispan model of the Gulfstream II propfan test-bed aircraft was tested in the NASA-Lewis 8 x 6 supersonic wind tunnel to obtain propeller flow field data. Detailed radial and azimuthal surveys were made to obtain the total pressure in the flow and the three components of velocity. Data were acquired for Mach numbers ranging from 0.6 to 0.85. Analytical predictions were also made using a subsonic panel method, QUADPAN. Comparison of wind-tunnel measurements and analytical predictions shows good agreement throughout the Mach range.

  18. [Prediction of the side-cut product yield of atmospheric/vacuum distillation unit by NIR crude oil rapid assay].

    PubMed

    Wang, Yan-Bin; Hu, Yu-Zhong; Li, Wen-Le; Zhang, Wei-Song; Zhou, Feng; Luo, Zhi

    2014-10-01

    In the present paper, a method to predict the side-cut product yields of an atmospheric/vacuum distillation unit was developed based on the fast near-infrared (NIR) evaluation technique, combined with H/CAMS software. First, the NIR spectroscopy method for rapidly determining the true boiling point of crude oil was developed. Using a commercially available crude oil spectroscopy database and experimental tests from Guangxi Petrochemical Company, a calibration model was established, with a topological method used for calibration. The model can be employed to predict the true boiling point of crude oil. Second, the true boiling point from the NIR rapid assay was converted to the side-cut product yields of the atmospheric/vacuum distillation unit by the H/CAMS software. The predicted and actual yields of the distillation products (naphtha, diesel, wax, and residual oil) were compared over a 7-month period. The results showed that the NIR rapid crude assay can predict the side-cut product yields accurately. The NIR analytic method for predicting yield has the advantages of fast analysis, reliable results, and ease of online operation, and it can provide elementary data for refinery planning optimization and crude oil blending.

  19. Heterogeneous postsurgical data analytics for predictive modeling of mortality risks in intensive care units.

    PubMed

    Yun Chen; Hui Yang

    2014-01-01

    The rapid advancement of biomedical instrumentation and healthcare technology has resulted in data-rich environments in hospitals. However, the meaningful information extracted from these rich datasets is limited. There is a dire need to go beyond current medical practices and develop data-driven methods and tools that enable and help (i) the handling of big data, (ii) the extraction of data-driven knowledge, and (iii) the exploitation of acquired knowledge for optimizing clinical decisions. The present study focuses on the prediction of mortality rates in Intensive Care Units (ICU) using patient-specific healthcare recordings. It is worth mentioning that postsurgical monitoring in the ICU leads to massive datasets with unique properties, e.g., variable heterogeneity, patient heterogeneity, and time asynchronization. To cope with the challenges in ICU datasets, we developed a postsurgical decision support system with a series of analytical tools, including data categorization, data pre-processing, feature extraction, feature selection, and predictive modeling. Experimental results show that the proposed data-driven methodology outperforms traditional approaches, yielding better results in the evaluation of real-world ICU data from 4,000 subjects in the database. This research shows the great potential of data-driven analytics to improve the quality of healthcare services.

  20. Description and comparison of selected models for hydrologic analysis of ground-water flow, St Joseph River basin, Indiana

    USGS Publications Warehouse

    Peters, J.G.

    1987-01-01

    The Indiana Department of Natural Resources (IDNR) is developing water-management policies designed to assess the effects of irrigation and other water uses on water supply in the basin. In support of this effort, the USGS, in cooperation with IDNR, began a study to evaluate appropriate methods for analyzing the effects of pumping on ground-water levels and streamflow in the basin's glacial aquifer systems. Four analytical models describe drawdown for a nonleaky, confined aquifer and fully penetrating well; a leaky, confined aquifer and fully penetrating well; a leaky, confined aquifer and partially penetrating well; and an unconfined aquifer and partially penetrating well. Analytical equations, simplifying assumptions, and methods of application are described for each model. In addition to these four models, several other analytical models were used to predict the effects of ground-water pumping on water levels in the aquifer and on streamflow in local areas with up to two pumping wells. Analytical models for a variety of other hydrogeologic conditions are cited. A digital ground-water flow model was used to describe how a numerical model can be applied to a glacial aquifer system. The numerical model was used to predict the effects of six pumping plans in a 46.5-sq-mi area with as many as 150 wells. Water budgets for the six pumping plans were used to estimate the effect of pumping on streamflow reduction. Results of the analytical and numerical models indicate that, in general, the glacial aquifers in the basin are highly permeable. Radial hydraulic conductivity calculated with the analytical models ranged from 280 to 600 ft/day, compared with the 210 and 360 ft/day used in the numerical model. Maximum seasonal pumping for irrigation produced a maximum calculated drawdown of only one-fourth of the available drawdown and reduced streamflow by as much as 21%. 
Analytical models are useful for estimating aquifer properties and predicting local effects of pumping in areas with simple lithology and boundary conditions and with few pumping wells. Numerical models are useful in regional areas with complex hydrogeology and many pumping wells; they provide detailed water budgets useful for estimating the sources of water in pumping simulations and are also useful in constructing flow nets. The choice of which type of model to use is also based on the nature and scope of the questions to be answered and on the degree of accuracy required. (Author's abstract)
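The first of the four analytical models (nonleaky confined aquifer, fully penetrating well) is the classical Theis solution, s = Q W(u) / (4 pi T) with u = r^2 S / (4 T t). A sketch with a series expansion of the well function follows; all parameter values are illustrative assumptions, not figures from the report.

```python
import math

def well_function(u, terms=30):
    # Theis well function: W(u) = -gamma - ln(u) + sum_{n>=1} (-1)^(n+1) u^n / (n * n!)
    # The series converges well for the small u typical of pumping tests.
    gamma = 0.5772156649015329  # Euler-Mascheroni constant
    s = -gamma - math.log(u)
    sign, fact = 1.0, 1.0
    for n in range(1, terms + 1):
        fact *= n
        s += sign * u ** n / (n * fact)
        sign = -sign
    return s

def theis_drawdown(Q, T, S, r, t):
    # Drawdown at radius r and time t for pumping rate Q, transmissivity T,
    # storativity S (any consistent unit system).
    u = r * r * S / (4.0 * T * t)
    return Q * well_function(u) / (4.0 * math.pi * T)

# Illustrative values only: Q in ft^3/day, T in ft^2/day, r in ft, t in days.
s_near = theis_drawdown(Q=500.0, T=400.0, S=0.002, r=100.0, t=1.0)
s_far = theis_drawdown(Q=500.0, T=400.0, S=0.002, r=200.0, t=1.0)
```

As expected for a confined aquifer, the computed drawdown decreases with distance from the pumping well.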

  1. Development of an Analytical Method for the Determination of Amoxicillin in Commercial Drugs and Wastewater Samples, and Assessing its Stability in Simulated Gastric Digestion.

    PubMed

    Unutkan, Tugçe; Bakirdere, Sezgin; Keyf, Seyfullah

    2018-01-01

    A highly sensitive analytical HPLC-UV method was developed for the determination of amoxicillin in drugs and wastewater samples at a single wavelength (230 nm). In order to substantially predict the in vivo behavior of amoxicillin, drug samples were subjected to simulated gastric conditions. The calibration plot of the method was linear from 0.050 to 500 mg L-1 with a correlation coefficient of 0.9999. The limit of detection and limit of quantitation were found to be 16 and 54 μg L-1, respectively. The percentage recovery of amoxicillin in wastewater was found to be 97.0 ± 1.6%. The method was successfully applied for the qualitative and quantitative determination of amoxicillin in drug samples including tablets and suspensions.

  2. Exploring the Potential of Predictive Analytics and Big Data in Emergency Care.

    PubMed

    Janke, Alexander T; Overbeek, Daniel L; Kocher, Keith E; Levy, Phillip D

    2016-02-01

    Clinical research often focuses on resource-intensive causal inference, whereas the potential of predictive analytics with constantly increasing big data sources remains largely unexplored. Basic prediction, divorced from causal inference, is much easier with big data. Emergency care may benefit from this simpler application of big data. Historically, predictive analytics have played an important role in emergency care as simple heuristics for risk stratification. These tools generally follow a standard approach: parsimonious criteria, easy computability, and independent validation with distinct populations. Simplicity in a prediction tool is valuable, but technological advances make it no longer a necessity. Emergency care could benefit from clinical predictions built using data science tools with abundant potential input variables available in electronic medical records. Patients' risks could be stratified more precisely with large pools of data and lower resource requirements for comparing each clinical encounter to those that came before it, benefiting clinical decisionmaking and health systems operations. The largest value of predictive analytics comes early in the clinical encounter, in which diagnostic and prognostic uncertainty are high and resource-committing decisions need to be made. We propose an agenda for widening the application of predictive analytics in emergency care. Throughout, we express cautious optimism because there are myriad challenges related to database infrastructure, practitioner uptake, and patient acceptance. The quality of routinely compiled clinical data will remain an important limitation. Complementing big data sources with prospective data may be necessary if predictive analytics are to achieve their full potential to improve care quality in the emergency department.

  3. Influence versus intent for predictive analytics in situation awareness

    NASA Astrophysics Data System (ADS)

    Cui, Biru; Yang, Shanchieh J.; Kadar, Ivan

    2013-05-01

    Predictive analytics in situation awareness requires an element to comprehend and anticipate potential adversary activities that might occur in the future. Most work in high-level fusion or predictive analytics utilizes machine learning, pattern mining, Bayesian inference, and decision tree techniques to predict future actions or states. The emergence of social computing in broader contexts has drawn interest in bringing the hypotheses and techniques from social theory to algorithmic and computational settings for predictive analytics. This paper aims to answer the question of how the influence and attitude (sometimes interpreted as intent) of adversarial actors can be formulated and computed algorithmically, as a higher-level fusion process to provide predictions of future actions. The challenges in this interdisciplinary endeavor include drawing on existing understanding of influence and attitude in both the social science and computing fields, as well as the mathematical and computational formulation for the specific situational context to be analyzed. The study of `influence' has resurfaced in recent years due to the emergence of social networks in the virtualized cyber world. Theoretical analysis and techniques developed in this area are discussed in this paper in the context of predictive analysis. Meanwhile, the notion of intent, or `attitude' in social theory terminology, is a relatively uncharted area in the computing field. Note that a key objective of predictive analytics is to identify impending or planned attacks so their `impact' and `threat' can be prevented. In this spirit, indirect and direct observables are drawn and derived to infer the influence network and attitude to predict future threats. This work proposes an integrated framework that jointly assesses adversarial actors' influence network and their attitudes as a function of past actions and action outcomes. 
A preliminary set of algorithms was developed and tested using the Global Terrorism Database (GTD). Our results reveal the benefits of performing joint predictive analytics with both attitude and influence. At the same time, we discovered significant challenges in deriving influence and attitude from indirect observables for diverse adversarial behavior. These observations warrant further investigation of the optimal use of influence and attitude for predictive analytics, as well as the potential inclusion of other environmental or capability elements for the actors.

  4. Observability during planetary approach navigation

    NASA Technical Reports Server (NTRS)

    Bishop, Robert H.; Burkhart, P. Daniel; Thurman, Sam W.

    1993-01-01

    The objective of the research is to develop an analytic technique to predict the relative navigation capability of different Earth-based radio navigation measurements. In particular, the problem is to determine the relative ability of geocentric range and Doppler measurements to detect the effects of the target planet gravitational attraction on the spacecraft during the planetary approach and near-encounter mission phases. A complete solution to the two-dimensional problem has been developed. Relatively simple analytic formulas are obtained for range and Doppler measurements which describe the observability content of the measurement data along the approach trajectories. An observability measure is defined which is based on the observability matrix for nonlinear systems. The results show good agreement between the analytic observability analysis and the computational batch processing method.
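The paper's observability measure is built on the observability matrix for nonlinear systems; a minimal linear analogue shows the mechanics of stacking H, HF, HF^2, ... and checking rank. The two-state constant-velocity model and the measurement choices below are invented for illustration (real approach-navigation dynamics include the target planet's gravity), but they loosely echo the range-versus-Doppler comparison: with these toy dynamics, a range-like measurement makes the state observable while a rate-only measurement does not.

```python
def mat_mul(A, B):
    # Dense matrix product for small matrices.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def rank(M, eps=1e-9):
    # Row-reduction rank for small dense matrices.
    M = [row[:] for row in M]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if abs(M[i][col]) > eps), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and abs(M[i][col]) > eps:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Toy 1-D model: state x = [position, velocity], x_{k+1} = F x_k.
dt = 60.0
F = [[1.0, dt], [0.0, 1.0]]

def observability_matrix(H, n_states=2):
    rows, HFk = [], H
    for _ in range(n_states):
        rows.extend(HFk)
        HFk = mat_mul(HFk, F)
    return rows

range_obs = observability_matrix([[1.0, 0.0]])    # position (range-like) measurement
rate_obs = observability_matrix([[0.0, 1.0]])     # velocity (Doppler-like) measurement
```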

  5. Mechanical and analytical screening of braided composites for transport fuselage applications

    NASA Technical Reports Server (NTRS)

    Fedro, Mark J.; Gunther, Christian; Ko, Frank K.

    1991-01-01

    The mechanics of materials progress in support of the goal of understanding the application of braided composites in a transport aircraft fuselage are summarized. Composites consisting of both 2-D and 3-D braid patterns are investigated. Both consolidation of commingled graphite/PEEK and resin transfer molding of graphite-epoxy braided composite processes are studied. Mechanical tests were used to examine unnotched tension, open hole tension, compression, compression after impact, in-plane shear, out-of-plane tension, bearing, and crippling. Analytical methods are also developed and applied to predict the stiffness and strengths of test specimens. A preliminary study using the test data and analytical results is performed to assess the applicability of braided composites to a commercial aircraft fuselage.

  6. Formal and physical equivalence in two cases in contemporary quantum physics

    NASA Astrophysics Data System (ADS)

    Fraser, Doreen

    2017-08-01

    The application of analytic continuation in quantum field theory (QFT) is juxtaposed to T-duality and mirror symmetry in string theory. Analytic continuation-a mathematical transformation that takes the time variable t to negative imaginary time-it-was initially used as a mathematical technique for solving perturbative Feynman diagrams, and was subsequently the basis for the Euclidean approaches within mainstream QFT (e.g., Wilsonian renormalization group methods, lattice gauge theories) and the Euclidean field theory program for rigorously constructing non-perturbative models of interacting QFTs. A crucial difference between theories related by duality transformations and those related by analytic continuation is that the former are judged to be physically equivalent while the latter are regarded as physically inequivalent. There are other similarities between the two cases that make comparing and contrasting them a useful exercise for clarifying the type of argument that is needed to support the conclusion that dual theories are physically equivalent. In particular, T-duality and analytic continuation in QFT share the criterion for predictive equivalence that two theories agree on the complete set of expectation values and the mass spectra and the criterion for formal equivalence that there is a "translation manual" between the physically significant algebras of observables and sets of states in the two theories. The analytic continuation case study illustrates how predictive and formal equivalence are compatible with physical inequivalence, but not in the manner of standard underdetermination cases. Arguments for the physical equivalence of dual theories must cite considerations beyond predictive and formal equivalence. The analytic continuation case study is an instance of the strategy of developing a physical theory by extending the formal or mathematical equivalence with another physical theory as far as possible. 
That this strategy has resulted in developments in pure mathematics as well as theoretical physics is another feature that this case study has in common with dualities in string theory.

  7. Accelerated testing of space mechanisms

    NASA Technical Reports Server (NTRS)

    Murray, S. Frank; Heshmat, Hooshang

    1995-01-01

    This report contains a review of various existing life prediction techniques used for a wide range of space mechanisms. Life prediction techniques utilized in other non-space fields such as turbine engine design are also reviewed for applicability to many space mechanism issues. The development of new concepts on how various tribological processes are involved in the life of the complex mechanisms used for space applications is examined. A 'roadmap' for the complete implementation of a tribological prediction approach for complex mechanical systems, including standard procedures for test planning, analytical models for life prediction, and experimental verification of the life prediction and accelerated testing techniques, is discussed. A plan is presented to demonstrate a method for predicting the life and/or performance of a selected space mechanism mechanical component.

  8. Predicting Rib Fracture Risk With Whole-Body Finite Element Models: Development and Preliminary Evaluation of a Probabilistic Analytical Framework

    PubMed Central

    Forman, Jason L.; Kent, Richard W.; Mroz, Krystoffer; Pipkorn, Bengt; Bostrom, Ola; Segui-Gomez, Maria

    2012-01-01

    This study sought to develop a strain-based probabilistic method to predict rib fracture risk with whole-body finite element (FE) models, and to describe a method to combine the results with collision exposure information to predict injury risk and potential intervention effectiveness in the field. An age-adjusted ultimate strain distribution was used to estimate local rib fracture probabilities within an FE model. These local probabilities were combined to predict injury risk and severity within the whole ribcage. The ultimate strain distribution was developed from a literature dataset of 133 tests. Frontal collision simulations were performed with the THUMS (Total HUman Model for Safety) model with four levels of delta-V and two restraints: a standard 3-point belt and a progressive 3.5–7 kN force-limited, pretensioned (FL+PT) belt. The results of three simulations (29 km/h standard, 48 km/h standard, and 48 km/h FL+PT) were compared to matched cadaver sled tests. The numbers of fractures predicted for the comparison cases were consistent with those observed experimentally. Combining these results with field exposure information (ΔV, NASS-CDS 1992–2002) suggests an 8.9% probability of incurring AIS3+ rib fractures for a 60-year-old restrained by a standard belt in a tow-away frontal collision with this restraint, vehicle, and occupant configuration, compared to 4.6% for the FL+PT belt. This is the first study to describe a probabilistic framework to predict rib fracture risk based on strains observed in human-body FE models. Using this analytical framework, future efforts may incorporate additional subject or collision factors for multi-variable probabilistic injury prediction. PMID:23169122
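The step of combining local rib fracture probabilities into whole-ribcage injury risk can be sketched, under the simplifying assumption that fractures are independent across ribs (the paper's framework may treat correlation differently), as a Poisson-binomial distribution over the fracture count. The per-rib probabilities below are invented placeholders, not values from the study.

```python
def fracture_count_dist(p_ribs):
    # Distribution of the number of fractured ribs given independent
    # per-rib fracture probabilities (Poisson-binomial, built by DP).
    dist = [1.0]
    for p in p_ribs:
        nxt = [0.0] * (len(dist) + 1)
        for k, q in enumerate(dist):
            nxt[k] += q * (1.0 - p)      # this rib does not fracture
            nxt[k + 1] += q * p          # this rib fractures
        dist = nxt
    return dist

def prob_at_least(dist, k):
    # Probability of k or more fractures (e.g., an injury-severity threshold).
    return sum(dist[k:])

# Hypothetical per-rib probabilities from strain-based local risk estimates.
p_ribs = [0.05, 0.10, 0.30, 0.22, 0.08, 0.03]
dist = fracture_count_dist(p_ribs)
```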

  9. Predictive probability methods for interim monitoring in clinical trials with longitudinal outcomes.

    PubMed

    Zhou, Ming; Tang, Qi; Lang, Lixin; Xing, Jun; Tatsuoka, Kay

    2018-04-17

    In clinical research and development, interim monitoring is critical for better decision-making and minimizing the risk of exposing patients to possibly ineffective therapies. For interim futility or efficacy monitoring, predictive probability methods are widely adopted in practice. Those methods have been well studied for univariate variables. However, for longitudinal studies, predictive probability methods using univariate information from only completers may not be most efficient, and data from on-going subjects can be utilized to improve efficiency. On the other hand, leveraging information from on-going subjects could allow an interim analysis to be conducted once a sufficient number of subjects reach an earlier time point. For longitudinal outcomes, we derive closed-form formulas for predictive probabilities, including Bayesian predictive probability, predictive power, and conditional power, and also give closed-form solutions for the predictive probability of success in a future trial and the predictive probability of success of the best dose. When predictive probabilities are used for interim monitoring, we study their distributions and discuss their analytical cutoff values or stopping boundaries that have desired operating characteristics. We show that predictive probabilities utilizing all longitudinal information are more efficient for interim monitoring than those using information from completers only. To illustrate their practical application for longitudinal data, we analyze two real data examples from clinical trials.
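The closed forms derived in this paper are for longitudinal outcomes; the simpler and standard binary-endpoint analogue below conveys the core idea of Bayesian predictive probability: given interim data, average the end-of-trial success criterion over the posterior predictive distribution of future outcomes. The prior, counts, and success threshold are illustrative assumptions.

```python
import math

def log_beta(a, b):
    # log of the Beta function via log-gamma, for numerical stability.
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def predictive_success_prob(x, n, m, threshold, a=1.0, b=1.0):
    """Predictive probability that total responders among n + m subjects
    reach `threshold`, given x responders observed in the first n subjects
    and a Beta(a, b) prior on the response rate.

    Future responders y follow a beta-binomial(m, a + x, b + n - x)."""
    a_post, b_post = a + x, b + n - x
    pp = 0.0
    for y in range(m + 1):
        log_pmf = (math.log(math.comb(m, y))
                   + log_beta(a_post + y, b_post + m - y)
                   - log_beta(a_post, b_post))
        if x + y >= threshold:
            pp += math.exp(log_pmf)
    return pp
```

At an interim look, a trial might stop for futility when this quantity falls below a pre-specified boundary chosen for its operating characteristics.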

  10. Analytical method of waste allocation in waste management systems: Concept, method and case study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergeron, Francis C., E-mail: francis.b.c@videotron.ca

    Waste is no longer merely a rejected item to be disposed of but increasingly a secondary resource to exploit, influencing waste allocation among treatment operations in a waste management (WM) system. The aim of this methodological paper is to present a new method for the assessment of the WM system, the “analytical method of the waste allocation process” (AMWAP), based on the concept of the “waste allocation process,” defined as the aggregation of all processes of apportioning waste among alternative waste treatment operations inside or outside the spatial borders of a WM system. AMWAP contains a conceptual framework and an analytical approach. The conceptual framework includes, first, a descriptive model that focuses on the description and classification of the WM system. It includes, second, an explanatory model that serves to explain and to predict the operation of the WM system. The analytical approach consists of a step-by-step analysis for the empirical implementation of the conceptual framework. With its multiple purposes, AMWAP provides an innovative and objective modular method to analyse a WM system, which may be integrated in the framework of impact assessment methods and environmental systems analysis tools. Its originality comes from the interdisciplinary analysis of the WAP used to develop the conceptual framework. AMWAP is applied in the framework of an illustrative case study on the household WM system of Geneva (Switzerland), demonstrating that this method provides an in-depth and contextual knowledge of WM. - Highlights: • The study presents a new analytical method based on the waste allocation process. • The method provides an in-depth and contextual knowledge of the waste management system. • The paper provides a reproducible procedure for professionals, experts and academics. • It may be integrated into impact assessment or environmental system analysis tools. • An illustrative case study is provided based on household waste management in Geneva.

  11. Predicting Sasang Constitution Using Body-Shape Information

    PubMed Central

    Jang, Eunsu; Do, Jun-Hyeong; Jin, HeeJeong; Park, KiHyun; Ku, Boncho; Lee, Siwoo; Kim, Jong Yeol

    2012-01-01

    Objectives. Body measurement plays a pivotal role not only in the diagnosis of disease but also in the classification of typology. Sasang constitutional medicine, one of the forms of Traditional Korean Medicine, is considered to be strongly associated with body shape. We attempted to determine whether a Sasang constitutional analytic tool based on body shape information (SCAT-B) could predict Sasang constitution (SC). Methods. After surveying 23 Oriental medical clinics, 2,677 subjects were recruited and body shape information was collected. The SCAT-Bs for males and females were developed using multinomial logistic regression. Stepwise forward-variable selection was applied using the score statistic and Wald's test. Results. The predictive rates of the SCAT-B for the Tae-eumin (TE), Soeumin (SE), and Soyangin (SY) types were 80.2%, 56.9%, and 37.7% (males) and 69.3%, 38.9%, and 50.0% (females) in the training set, and 74%, 70.1%, and 35% (males) and 67.4%, 66.3%, and 53.7% (females) in the test set, respectively. Higher constitutional probability scores showed a trend toward association with higher predictability. Conclusions. This study shows that the Sasang constitutional analytic tool based on body shape information may be relatively highly predictive of the TE type but less predictive for the SY type. PMID:22792124
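At prediction time, a fitted multinomial logistic regression such as the SCAT-B reduces to a softmax over linear scores of the body-shape features. The coefficients and standardized feature names below are invented placeholders for illustration, not the fitted SCAT-B model.

```python
import math

# Hypothetical fitted coefficients: (bias, weight on standardized BMI,
# weight on a standardized body-shape ratio) per constitution type.
COEF = {
    "TE": (0.8, 1.5, -0.2),
    "SE": (-0.5, -1.2, 0.9),
    "SY": (0.1, 0.2, 0.4),
}

def constitution_probs(bmi_z, ratio_z):
    # Softmax over the linear scores, with max-shift for numerical stability.
    scores = {k: b + w1 * bmi_z + w2 * ratio_z
              for k, (b, w1, w2) in COEF.items()}
    mx = max(scores.values())
    exps = {k: math.exp(v - mx) for k, v in scores.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}
```

The predicted type is the argmax of these probabilities, and the probability itself plays the role of the "constitutional probability score" whose magnitude the study relates to predictability.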

  12. Predictive Analytics to Support Real-Time Management in Pathology Facilities.

    PubMed

    Lessard, Lysanne; Michalowski, Wojtek; Chen Li, Wei; Amyot, Daniel; Halwani, Fawaz; Banerjee, Diponkar

    2016-01-01

    Predictive analytics can provide valuable support to the effective management of pathology facilities. The introduction of new tests and technologies in anatomical pathology will increase the volume of specimens to be processed, as well as the complexity of pathology processes. In order for predictive analytics to address managerial challenges associated with the volume and complexity increases, it is important to pinpoint the areas where pathology managers would most benefit from predictive capabilities. We illustrate common issues in managing pathology facilities with an analysis of the surgical specimen process at the Department of Pathology and Laboratory Medicine (DPLM) at The Ottawa Hospital, which processes all surgical specimens for the Eastern Ontario Regional Laboratory Association. We then show how predictive analytics could be used to support management. Our proposed approach can be generalized beyond the DPLM, contributing to a more effective management of pathology facilities and in turn to quicker clinical diagnoses.

  14. Design of Biomedical Robots for Phenotype Prediction Problems

    PubMed Central

    deAndrés-Galiana, Enrique J.; Fernández-Martínez, Juan Luis; Sonis, Stephen T.

    2016-01-01

    Genomics has been used with varying degrees of success in the context of drug discovery and in defining mechanisms of action for diseases like cancer and neurodegenerative and rare diseases in the quest for orphan drugs. To improve its utility, accuracy, and cost-effectiveness optimization of analytical methods, especially those that translate to clinically relevant outcomes, is critical. Here we define a novel tool for genomic analysis termed a biomedical robot in order to improve phenotype prediction, identifying disease pathogenesis and significantly defining therapeutic targets. Biomedical robot analytics differ from historical methods in that they are based on melding feature selection methods and ensemble learning techniques. The biomedical robot mathematically exploits the structure of the uncertainty space of any classification problem conceived as an ill-posed optimization problem. Given a classifier, there exist different equivalent small-scale genetic signatures that provide similar predictive accuracies. We perform the sensitivity analysis to noise of the biomedical robot concept using synthetic microarrays perturbed by different kinds of noises in expression and class assignment. Finally, we show the application of this concept to the analysis of different diseases, inferring the pathways and the correlation networks. The final aim of a biomedical robot is to improve knowledge discovery and provide decision systems to optimize diagnosis, treatment, and prognosis. This analysis shows that the biomedical robots are robust against different kinds of noises and particularly to a wrong class assignment of the samples. Assessing the uncertainty that is inherent to any phenotype prediction problem is the right way to address this kind of problem. PMID:27347715

  15. Design of Biomedical Robots for Phenotype Prediction Problems.

    PubMed

    deAndrés-Galiana, Enrique J; Fernández-Martínez, Juan Luis; Sonis, Stephen T

    2016-08-01

    Genomics has been used with varying degrees of success in the context of drug discovery and in defining mechanisms of action for diseases like cancer and neurodegenerative and rare diseases in the quest for orphan drugs. To improve its utility, accuracy, and cost-effectiveness optimization of analytical methods, especially those that translate to clinically relevant outcomes, is critical. Here we define a novel tool for genomic analysis termed a biomedical robot in order to improve phenotype prediction, identifying disease pathogenesis and significantly defining therapeutic targets. Biomedical robot analytics differ from historical methods in that they are based on melding feature selection methods and ensemble learning techniques. The biomedical robot mathematically exploits the structure of the uncertainty space of any classification problem conceived as an ill-posed optimization problem. Given a classifier, there exist different equivalent small-scale genetic signatures that provide similar predictive accuracies. We perform the sensitivity analysis to noise of the biomedical robot concept using synthetic microarrays perturbed by different kinds of noises in expression and class assignment. Finally, we show the application of this concept to the analysis of different diseases, inferring the pathways and the correlation networks. The final aim of a biomedical robot is to improve knowledge discovery and provide decision systems to optimize diagnosis, treatment, and prognosis. This analysis shows that the biomedical robots are robust against different kinds of noises and particularly to a wrong class assignment of the samples. Assessing the uncertainty that is inherent to any phenotype prediction problem is the right way to address this kind of problem.

  16. [Systems epidemiology].

    PubMed

    Huang, T; Li, L M

    2018-05-10

    The era of medical big data, translational medicine and precision medicine brings new opportunities for studying the etiology of chronic complex diseases. How to implement evidence-based medicine, translational medicine and precision medicine is the challenge we are facing. Systems epidemiology, a new field of epidemiology, combines medical big data with systems biology and examines statistical models of disease risk and the simulation and prediction of future risk, using data at the molecular, cellular, population, social and ecological levels. Due to the diversity and complexity of big data sources, the development of study designs and analytic methods for systems epidemiology faces new challenges and opportunities. This paper summarizes the theoretical basis, concept, objectives, significance, research design and analytic methods of systems epidemiology and its application in the field of public health.

  17. Thermal/structural design verification strategies for large space structures

    NASA Technical Reports Server (NTRS)

    Benton, David

    1988-01-01

    Requirements for space structures of increasing size, complexity, and precision have engendered a search for thermal design verification methods that do not impose unreasonable costs, that fit within the capabilities of existing facilities, and that still adequately reduce technical risk. This calls for a combination of analytical and testing methods, pursued through two approaches. The first is to limit thermal testing to sub-elements of the total system, and only in a compact configuration (i.e., not fully deployed). The second is to use a simplified environment to correlate analytical models with test results. These models can then be used to predict flight performance. In practice, a combination of these approaches is needed to verify the thermal/structural design of future very large space systems.

  18. Social Web mining and exploitation for serious applications: Technosocial Predictive Analytics and related technologies for public health, environmental and national security surveillance.

    PubMed

    Kamel Boulos, Maged N; Sanfilippo, Antonio P; Corley, Courtney D; Wheeler, Steve

    2010-10-01

    This paper explores Technosocial Predictive Analytics (TPA) and related methods for Web "data mining", in which users' posts and queries are garnered from Social Web ("Web 2.0") tools such as blogs, micro-blogging and social networking sites to form coherent representations of real-time health events. The paper includes a brief introduction to commonly used Social Web tools such as mashups and aggregators, and maps their exponential growth as an open architecture of participation for the masses and an emerging way to gain insight into the collective health status of whole populations. Several health-related tools are described and demonstrated as practical means through which health professionals might create clear, location-specific pictures of epidemiological data such as flu outbreaks. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  19. Bayesian-based estimation of acoustic surface impedance: Finite difference frequency domain approach.

    PubMed

    Bockman, Alexander; Fackler, Cameron; Xiang, Ning

    2015-04-01

    Acoustic performance for an interior requires an accurate description of the boundary materials' surface acoustic impedance. Analytical methods may be applied to a small class of test geometries, but inverse numerical methods provide greater flexibility. The parameter estimation problem requires minimizing the discrepancy between predicted and observed acoustic field pressure. The Bayesian-network sampling approach presented here mitigates other methods' susceptibility to the noise inherent to the experiment, model, and numerics. A geometry-agnostic method is developed here and its parameter estimation performance is demonstrated for an air-backed micro-perforated panel in an impedance tube. Good agreement is found with predictions from the ISO standard two-microphone impedance-tube method and a theoretical model for the material. Data by-products exclusive to a Bayesian approach are analyzed to assess the sensitivity of the method to nuisance parameters.
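The inverse approach above scores candidate impedance parameters by the mismatch between predicted and measured pressure. A minimal grid-based Bayesian sketch of that idea follows; the one-parameter forward model, noise level, and parameter range are all invented stand-ins, not the paper's finite-difference frequency-domain solver or its sampling machinery:

```python
import math, random

# Hypothetical forward model standing in for an acoustic prediction code:
# predicted pressure as a smooth function of one impedance-like parameter.
def forward(theta, freqs):
    return [math.cos(theta * f) / (1.0 + f) for f in freqs]

random.seed(1)
freqs = [0.1 * k for k in range(1, 21)]
true_theta = 1.7
sigma = 0.02                                   # assumed measurement noise
data = [p + random.gauss(0, sigma) for p in forward(true_theta, freqs)]

# Uniform prior on a grid; posterior proportional to the Gaussian likelihood.
grid = [0.5 + 0.005 * i for i in range(501)]   # theta in [0.5, 3.0]
log_post = []
for th in grid:
    pred = forward(th, freqs)
    ll = -sum((d - p) ** 2 for d, p in zip(data, pred)) / (2 * sigma ** 2)
    log_post.append(ll)
m = max(log_post)                              # subtract max for stability
w = [math.exp(lp - m) for lp in log_post]
Z = sum(w)
post = [v / Z for v in w]

theta_map = grid[post.index(max(post))]
theta_mean = sum(t * p for t, p in zip(grid, post))
```

The normalized posterior also yields the credible intervals and nuisance-parameter marginals that the abstract calls "data by-products exclusive to a Bayesian approach".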

  20. An analytical and experimental study of sound propagation and attenuation in variable-area ducts. [reducing aircraft engine noise

    NASA Technical Reports Server (NTRS)

    Nayfeh, A. H.; Kaiser, J. E.; Marshall, R. L.; Hurst, L. J.

    1978-01-01

    The performance of sound suppression techniques in ducts that produce refraction effects due to axial velocity gradients was evaluated. A computer code based on the method of multiple scales was used to calculate the influence of axial variations due to slow changes in the cross-sectional area as well as transverse gradients due to the wall boundary layers. An attempt was made to verify the analytical model through direct comparison of experimental and computational results and the analytical determination of the influence of axial gradients on optimum liner properties. However, the analytical studies were unable to examine the influence of non-parallel ducts on the optimum liner conditions. For liner properties not close to optimum, the analytical predictions and the experimental measurements were compared. The circumferential variations of pressure amplitudes and phases at several axial positions were examined in straight and variable-area ducts, in hard-wall and lined sections, with and without a mean flow. Reasonable agreement between the theoretical and experimental results was obtained.

  1. Noise Certification Predictions for FJX-2-Powered Aircraft Using Analytic Methods

    NASA Technical Reports Server (NTRS)

    Berton, Jeffrey J.

    1999-01-01

    Williams International Co. is currently developing the 700-pound thrust class FJX-2 turbofan engine for the General Aviation Propulsion Program's Turbine Engine Element. As part of the 1996 NASA-Williams cooperative working agreement, NASA agreed to analytically calculate the noise certification levels of the FJX-2-powered V-Jet II test bed aircraft. Although the V-Jet II is a demonstration aircraft that is unlikely to be produced and certified, the noise results presented here may be considered to be representative of the noise levels of small, general aviation jet aircraft that the FJX-2 would power. A single engine variant of the V-Jet II, the V-Jet I concept airplane, is also considered. Reported in this paper are the analytically predicted FJX-2/V-Jet noise levels appropriate for Federal Aviation Regulation certification. Also reported are FJX-2/V-Jet noise levels using noise metrics appropriate for the propeller-driven aircraft that will be its major market competition, as well as a sensitivity analysis of the certification noise levels to major system uncertainties.

  2. Estimating statistical isotropy violation in CMB due to non-circular beam and complex scan in minutes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pant, Nidhi; Das, Santanu; Mitra, Sanjit

    Mild, unavoidable deviations from circular symmetry of instrumental beams, along with the scan strategy, can give rise to measurable Statistical Isotropy (SI) violation in Cosmic Microwave Background (CMB) experiments. If not accounted for properly, this spurious signal can complicate the extraction of other SI violation signals (if any) in the data. However, estimation of this effect through exact numerical simulation is computationally intensive and time consuming. A generalized analytical formalism not only provides a quick way of estimating this signal, but also gives a detailed understanding connecting the leading beam anisotropy components to a measurable BipoSH characterisation of SI violation. In this paper, we provide an approximate generic analytical method for estimating the SI violation generated due to a non-circular (NC) beam and arbitrary scan strategy, in terms of the Bipolar Spherical Harmonic (BipoSH) spectra. Our analytical method can predict almost all the features introduced by a NC beam in a complex scan and thus reduces the need for extensive numerical simulation worth tens of thousands of CPU hours to minutes-long calculations. As an illustrative example, we use WMAP beams and scanning strategy to demonstrate the ease of use and efficiency of our method. We test all our analytical results against those from exact numerical simulations.

  3. Rapid Harmonic Analysis of Piezoelectric MEMS Resonators.

    PubMed

    Puder, Jonathan M; Pulskamp, Jeffrey S; Rudy, Ryan Q; Cassella, Cristian; Rinaldi, Matteo; Chen, Guofeng; Bhave, Sunil A; Polcawich, Ronald G

    2018-06-01

    This paper reports on a novel simulation method combining the speed of analytical evaluation with the accuracy of finite-element analysis (FEA). This method is known as the rapid analytical-FEA technique (RAFT). The ability of the RAFT to accurately predict frequency response orders of magnitude faster than conventional simulation methods while providing deeper insights into device design not possible with other types of analysis is detailed. Simulation results from the RAFT across wide bandwidths are compared to measured results of resonators fabricated with various materials, frequencies, and topologies with good agreement. These include resonators targeting beam extension, disk flexure, and Lamé beam modes. An example scaling analysis is presented and other applications enabled are discussed as well. The supplemental material includes example code for implementation in ANSYS, although any commonly employed FEA package may be used.

  4. Ultra-Short-Term Wind Power Prediction Using a Hybrid Model

    NASA Astrophysics Data System (ADS)

    Mohammed, E.; Wang, S.; Yu, J.

    2017-05-01

    This paper aims to develop and apply a hybrid model of two data-analytical methods, multiple linear regression and least squares (MLR&LS), for ultra-short-term wind power prediction (WPP), taking Northeast China electricity demand as an example. The data were obtained from the historical records of wind power from an offshore region and from a wind farm of the wind power plant in the areas. The WPP is achieved in two stages: first, the ratios of wind power are forecasted using the proposed hybrid method, and then these ratios are transformed to obtain the forecasted values. The hybrid model combines the persistence method, MLR and LS. The proposed method includes two prediction types, multi-point prediction and single-point prediction. WPP is tested against different models such as the autoregressive moving average (ARMA), autoregressive integrated moving average (ARIMA) and artificial neural network (ANN) models. By comparing the results of the above models, the validity of the proposed hybrid model is confirmed in terms of error and correlation coefficient. Additionally, forecasting errors were computed and compared to improve understanding of how to depict highly variable WPP and the correlations between actual and predicted wind power.
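The MLR component above can be illustrated with ordinary least squares solved via the normal equations. This is a sketch on synthetic data; the persistence term, the two-stage ratio transformation, and the real predictor variables of the paper are not modeled, and the feature interpretation (wind speed and its lag) is an assumption:

```python
import random

def fit_mlr(X, y):
    """Ordinary least squares for y ~ b0 + b1*x1 + ... via the normal
    equations, solved with Gaussian elimination (fine for a few predictors)."""
    Xb = [[1.0] + row for row in X]
    k = len(Xb[0])
    A = [[sum(r[i] * r[j] for r in Xb) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(Xb, y)) for i in range(k)]
    for col in range(k):                       # elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):             # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

def predict(beta, x):
    return beta[0] + sum(bi * xi for bi, xi in zip(beta[1:], x))

# Synthetic stand-in: wind-power ratio driven by wind speed and a lagged value.
random.seed(2)
X = [[random.uniform(3, 12), random.uniform(3, 12)] for _ in range(200)]
y = [0.1 + 0.06 * x1 + 0.02 * x2 + random.gauss(0, 0.01) for x1, x2 in X]
beta = fit_mlr(X, y)
rmse = (sum((predict(beta, x) - t) ** 2 for x, t in zip(X, y)) / len(X)) ** 0.5
```

The fitted ratios would then be scaled back to power values in the paper's second stage; error metrics like this RMSE are what the model comparison in the abstract reports.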

  5. Double-multiple streamtube model for studying vertical-axis wind turbines

    NASA Astrophysics Data System (ADS)

    Paraschivoiu, Ion

    1988-08-01

    This work describes the present state-of-the-art in double-multiple streamtube method for modeling the Darrieus-type vertical-axis wind turbine (VAWT). Comparisons of the analytical results with the other predictions and available experimental data show a good agreement. This method, which incorporates dynamic-stall and secondary effects, can be used for generating a suitable aerodynamic-load model for structural design analysis of the Darrieus rotor.

  6. Structural response synthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ozisik, H.; Keltie, R.F.

    The open-loop control technique of predicting a conditioned input signal based on a specified output response for a second-order system has been analyzed both analytically and numerically to gain a firm understanding of the method. Differences between this method of control and digital closed-loop control using pole cancellation were investigated as a follow-up to previous experimental work. Application of the technique to diamond turning using a fast tool is also discussed.

  7. Ray Tracing and Modal Methods for Modeling Radio Propagation in Tunnels With Rough Walls

    PubMed Central

    Zhou, Chenming

    2017-01-01

    At the ultrahigh frequencies common to portable radios, tunnels such as mine entries are often modeled by hollow dielectric waveguides. The roughness condition of the tunnel walls has an influence on radio propagation, and therefore should be taken into account when an accurate power prediction is needed. This paper investigates how wall roughness affects radio propagation in tunnels, and presents a unified ray tracing and modal method for modeling radio propagation in tunnels with rough walls. First, general analytical formulas for modeling the influence of wall roughness are derived, based on the modal method and the ray tracing method, respectively. Second, the equivalence of the ray tracing and modal methods in the presence of wall roughness is mathematically proved, by showing that the ray tracing-based analytical formula converges to the modal-based formula through the Poisson summation formula. The derivation and findings are verified by simulation results based on ray tracing and modal methods. PMID:28935995

  8. Rectification of depth measurement using pulsed thermography with logarithmic peak second derivative method

    NASA Astrophysics Data System (ADS)

    Li, Xiaoli; Zeng, Zhi; Shen, Jingling; Zhang, Cunlin; Zhao, Yuejin

    2018-03-01

    The logarithmic peak second derivative (LPSD) method is the most popular method for depth prediction in pulsed thermography, and it is widely accepted to be independent of defect size. The theoretical model underlying the LPSD method is based on the one-dimensional solution of heat conduction, which neglects the effect of defect size. When a decay term accounting for defect aspect ratio is introduced into the solution to correct for the three-dimensional thermal diffusion effect, the analytical model shows that the LPSD method is in fact affected by defect size. Furthermore, we constructed the relation between the characteristic time of the LPSD method and the defect aspect ratio, and verified it with experimental results from stainless steel and glass-fiber-reinforced plate (GFRP) samples. We also propose an improved LPSD method for depth prediction that takes the effect of defect size into account; rectification results for the stainless steel and GFRP samples are presented and discussed.
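The ideal one-dimensional theory behind the LPSD method can be sketched numerically: build the image-source solution for the surface temperature above a defect, take the second derivative of ln T with respect to ln t, and locate its peak. In the pure 1-D model (no defect-size decay term, which is precisely what this paper adds), the peak time scales with depth squared, so doubling the depth should quadruple it. The diffusivity and reflection coefficient below are arbitrary stand-ins:

```python
import math

def surface_temp(t, L, alpha=1e-6, R=0.5, terms=20):
    """Ideal 1-D pulsed-thermography surface temperature above a defect at
    depth L (image-source series; R is an assumed reflection coefficient)."""
    s = sum((R ** n) * math.exp(-((n * L) ** 2) / (alpha * t))
            for n in range(1, terms + 1))
    return (1.0 + 2.0 * s) / math.sqrt(t)

def lpsd_peak_time(L, alpha=1e-6, npts=2000):
    """Peak time of d^2(ln T)/d(ln t)^2, found by centred second differences
    on a grid that is uniform in ln t."""
    x0, x1 = math.log(1e-3), math.log(1e3)
    xs = [x0 + (x1 - x0) * i / (npts - 1) for i in range(npts)]
    lnT = [math.log(surface_temp(math.exp(x), L, alpha)) for x in xs]
    h = xs[1] - xs[0]
    d2 = [(lnT[i - 1] - 2.0 * lnT[i] + lnT[i + 1]) / h ** 2
          for i in range(1, npts - 1)]
    i_pk = max(range(len(d2)), key=lambda i: d2[i])
    return math.exp(xs[i_pk + 1])

t1 = lpsd_peak_time(L=1e-3)   # 1 mm deep defect
t2 = lpsd_peak_time(L=2e-3)   # 2 mm deep defect
ratio = t2 / t1               # pure 1-D theory: depth-squared scaling, ~4
```

The paper's correction perturbs exactly this characteristic time as a function of defect aspect ratio; the sketch shows only the uncorrected baseline it starts from.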

  9. The analytical and numerical approaches to the theory of the Moon's librations: Modern analysis and results

    NASA Astrophysics Data System (ADS)

    Petrova, N.; Zagidullin, A.; Nefedyev, Y.; Kosulin, V.; Andreev, A.

    2017-11-01

    Observing the physical librations of celestial bodies such as the Moon represents one of the astronomical methods of remotely assessing the internal structure of a celestial body without conducting expensive space experiments. The paper contains a review of recent advances in studying the Moon's structure using various methods of obtaining and applying lunar physical libration (LPhL) data. In this article, LPhL simulation methods for assessing the viscoelastic and dissipative properties of the lunar body and the lunar core parameters, whose existence has recently been confirmed during reprocessing of the "Apollo" mission seismic data, are described. Much attention is paid to the physical interpretation of the free librations phenomenon and the methods for its determination. The practical application of the most accurate analytical LPhL tables (Rambaux and Williams, 2011) is discussed. The tables were built on the basis of complex analytical processing of the residual differences obtained when comparing long-term series of laser observations with the numerical ephemeris DE421. An efficiency analysis of two approaches to LPhL theory is conducted: the numerical and the analytical one. It is shown that in lunar investigations the two approaches complement each other: the numerical approach provides the high accuracy required for proper processing of modern observations, while the analytical approach allows one to comprehend the essence of the phenomena in lunar rotation and to predict and interpret new effects in observations of the lunar body and lunar core parameters.

  10. Petermann I and II spot size: Accurate semi analytical description involving Nelder-Mead method of nonlinear unconstrained optimization and three parameter fundamental modal field

    NASA Astrophysics Data System (ADS)

    Roy Choudhury, Raja; Roy Choudhury, Arundhati; Kanti Ghose, Mrinal

    2013-01-01

    A semi-analytical model with three optimizing parameters and a novel non-Gaussian function as the fundamental modal field solution has been proposed to predict various propagation parameters of graded-index fibers accurately, with less computational burden than numerical methods. In our semi-analytical formulation, the optimization of the core parameter U, which is usually uncertain, noisy or even discontinuous, is carried out by the Nelder-Mead method of nonlinear unconstrained minimization, an efficient and compact direct-search method that does not need any derivative information. Three optimizing parameters are included in the formulation of the fundamental modal field of an optical fiber to make it more flexible and accurate than other available approximations. Employing a variational technique, Petermann I and II spot sizes have been evaluated for triangular- and trapezoidal-index fibers with the proposed fundamental modal field. It has been demonstrated that the results of the proposed solution identically match the numerical results over a wide range of normalized frequencies. This approximation can also be used in the study of doped and nonlinear fiber amplifiers.
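The Nelder-Mead simplex search named above is derivative-free, which is what makes it suitable for a noisy or discontinuous core parameter. A compact, simplified variant (standard reflection, expansion, contraction, and shrink steps) is sketched here on a hypothetical smooth 2-parameter objective, not the paper's modal-field functional:

```python
def nelder_mead(f, x0, step=0.5, iters=200):
    """Compact Nelder-Mead simplex minimisation with the standard
    coefficients (reflect 1, expand 2, contract 0.5, shrink 0.5)."""
    n = len(x0)
    simplex = [list(x0)] + [
        [x0[j] + (step if j == i else 0.0) for j in range(n)] for i in range(n)
    ]
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = [sum(p[j] for p in simplex[:-1]) / n for j in range(n)]
        refl = [2 * centroid[j] - worst[j] for j in range(n)]
        if f(refl) < f(best):
            expd = [centroid[j] + 2 * (refl[j] - centroid[j]) for j in range(n)]
            simplex[-1] = expd if f(expd) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            contr = [centroid[j] + 0.5 * (worst[j] - centroid[j]) for j in range(n)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:  # shrink all vertices toward the best one
                simplex = [best] + [
                    [(p[j] + best[j]) / 2 for j in range(n)] for p in simplex[1:]
                ]
    simplex.sort(key=f)
    return simplex[0]

# Hypothetical smooth objective standing in for the modal-field mismatch.
f = lambda p: (p[0] - 1.3) ** 2 + 2 * (p[1] + 0.4) ** 2 + 0.7
xmin = nelder_mead(f, [0.0, 0.0])
```

Because no gradient is evaluated, the same routine keeps working when the objective is only available as a noisy black box, which is the situation the abstract describes for U.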

  11. A Predictive Model of Daily Seismic Activity Induced by Mining, Developed with Data Mining Methods

    NASA Astrophysics Data System (ADS)

    Jakubowski, Jacek

    2014-12-01

    The article presents the development and evaluation of a predictive classification model of daily seismic energy emissions induced by longwall mining in sector XVI of the Piast coal mine in Poland. The model uses data on tremor energy, basic characteristics of the longwall face, and mined output in this sector over the period from July 1987 to March 2011. The predicted binary variable is the occurrence of a daily sum of tremor seismic energies in a longwall greater than or equal to the threshold value of 10^5 J. Three data mining analytical methods were applied: logistic regression, neural networks, and stochastic gradient boosted trees. The boosted trees model was chosen as the best for the purposes of the prediction. The validation sample results showed its good predictive capability, taking the complex nature of the phenomenon into account. This may indicate the applied model's suitability for sequential, short-term prediction of mining-induced seismic activity.
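Boosted trees, the method selected above, can be sketched with depth-1 regression trees (stumps) boosted under squared loss on +/-1 exceedance labels. The two features and the decision boundary below are synthetic, not the longwall and output variables of the study:

```python
import random

def best_stump(X, r):
    """Fit a depth-1 regression tree (stump) to residuals r by exhaustive
    search over midpoints between consecutive distinct feature values."""
    best = None
    for j in range(len(X[0])):
        vals = sorted(set(x[j] for x in X))
        for a, b in zip(vals, vals[1:]):
            thr = (a + b) / 2
            left = [ri for x, ri in zip(X, r) if x[j] <= thr]
            right = [ri for x, ri in zip(X, r) if x[j] > thr]
            lm, rm = sum(left) / len(left), sum(right) / len(right)
            sse = (sum((ri - lm) ** 2 for ri in left) +
                   sum((ri - rm) ** 2 for ri in right))
            if best is None or sse < best[0]:
                best = (sse, j, thr, lm, rm)
    return best[1:]

def boost(X, y, rounds=50, lr=0.3):
    """L2 gradient boosting: each round fits a stump to the current residual."""
    F = [0.0] * len(X)
    ensemble = []
    for _ in range(rounds):
        residual = [yi - fi for yi, fi in zip(y, F)]
        j, thr, lm, rm = best_stump(X, residual)
        ensemble.append((j, thr, lr * lm, lr * rm))
        for i, x in enumerate(X):
            F[i] += lr * lm if x[j] <= thr else lr * rm
    return ensemble

def predict(ensemble, x):
    score = sum(lv if x[j] <= thr else rv for j, thr, lv, rv in ensemble)
    return 1 if score >= 0 else -1

# Synthetic daily records: label +1 when the (hypothetical) energy-driving
# combination of two features exceeds a threshold.
random.seed(3)
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if 2 * x1 + x2 > 1.5 else -1 for x1, x2 in X]
ensemble = boost(X, y)
train_acc = sum(predict(ensemble, x) == t for x, t in zip(X, y)) / len(X)
```

Production implementations of stochastic gradient boosting additionally subsample rows per round and use deeper trees; this sketch keeps only the core fit-to-residual loop.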

  12. A simple analytical method to estimate all exit parameters of a cross-flow air dehumidifier using liquid desiccant

    PubMed Central

    Bassuoni, M.M.

    2013-01-01

    The dehumidifier is a key component in liquid desiccant air-conditioning systems. Analytical solutions have advantages over numerical solutions in studying dehumidifier performance parameters. This paper presents the exit-parameter performance results from an analytical model of an adiabatic cross-flow liquid desiccant air dehumidifier. Calcium chloride is used as the desiccant material in this investigation. A program performing the analytical solution is developed using the Engineering Equation Solver software. Good agreement has been found between the analytical solution and reliable experimental results, with a maximum deviation of +6.63% and −5.65% in the moisture removal rate. The method developed here can be used for quick prediction of dehumidifier performance. The exit parameters from the dehumidifier are evaluated under the effects of variables such as air temperature and humidity, desiccant temperature and concentration, and air-to-desiccant flow rates. The results show that hot humid air and desiccant concentration have the greatest impact on the performance of the dehumidifier. The moisture removal rate decreases with increasing air inlet temperature and desiccant temperature, while it increases with increasing air-to-solution mass ratio, inlet desiccant concentration, and inlet air humidity ratio. PMID:25685485

  13. Connecting clinical and actuarial prediction with rule-based methods.

    PubMed

    Fokkema, Marjolein; Smits, Niels; Kelderman, Henk; Penninx, Brenda W J H

    2015-06-01

    Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction methods for clinical practice. We argue that rule-based methods may be more useful than the linear main-effect models usually employed in prediction studies, from a data- and decision-analytic as well as a practical perspective. In addition, decision rules derived with rule-based methods can be represented as fast and frugal trees, which, unlike main-effect models, can be used in a sequential fashion, reducing the number of cues that have to be evaluated before making a prediction. We illustrate the usability of rule-based methods by applying RuleFit, an algorithm for deriving decision rules for classification and regression problems, to a dataset on the prediction of the course of depressive and anxiety disorders from Penninx et al. (2011). The RuleFit algorithm provided a model consisting of 2 simple decision rules, requiring evaluation of only 2 to 4 cues. Predictive accuracy of the 2-rule model was very similar to that of a logistic regression model incorporating 20 predictor variables, originally applied to the dataset. In addition, the 2-rule model required, on average, evaluation of only 3 cues. Therefore, the RuleFit algorithm appears to be a promising method for creating decision tools that are less time consuming and easier to apply in psychological practice, with accuracy comparable to traditional actuarial methods. (c) 2015 APA, all rights reserved.
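A fast and frugal tree, as described above, checks cues sequentially and allows an immediate exit at each level; only the last cue exits both ways. The cue names, cut-offs, and outcome labels in this sketch are purely illustrative and are not the two RuleFit rules fitted in the study:

```python
def ffq_tree(cues):
    """Hedged sketch of a fast-and-frugal tree for a two-class course
    prediction. Cues are checked in a fixed order; the first two can
    trigger an early exit, the last cue exits both ways. All thresholds
    are hypothetical."""
    severity, duration_months, support = cues
    if severity >= 20:            # first cue: exit immediately
        return "chronic"
    if duration_months >= 12:     # second cue: exit immediately
        return "chronic"
    if support >= 3:              # final cue: exits both ways
        return "remitting"
    return "chronic"

# A patient with high symptom severity exits at the very first cue,
# so only one of the three cues ever has to be evaluated.
course = ffq_tree((25, 2, 4))
```

This sequential structure is what gives the 2-rule model its low average cue count (about 3 in the paper): most cases exit before all cues are inspected.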

  14. Impact of active controls technology on structural integrity

    NASA Technical Reports Server (NTRS)

    Noll, Thomas; Austin, Edward; Donley, Shawn; Graham, George; Harris, Terry

    1991-01-01

    This paper summarizes the findings of The Technical Cooperation Program to assess the impact of active controls technology on the structural integrity of aeronautical vehicles and to evaluate the present state-of-the-art for predicting the loads caused by a flight-control system modification and the resulting change in the fatigue life of the flight vehicle. The potential for active controls to adversely affect structural integrity is described, and load predictions obtained using two state-of-the-art analytical methods are given.

  15. Dynamic properties and damping predictions for laminated plates: High order theories - Timoshenko beam

    NASA Astrophysics Data System (ADS)

    Diveyev, Bohdan; Konyk, Solomija; Crocker, Malcolm J.

    2018-01-01

    The main aim of this study is to predict the elastic and damping properties of composite laminated plates. This problem has an exact elasticity solution for simple uniform bending and transverse loading conditions. This paper presents a new stress analysis method for the accurate determination of the detailed stress distributions in laminated plates subjected to cylindrical bending. Some approximate methods for the stress state predictions for laminated plates are presented here. The present method is adaptive and does not rely on strong assumptions about the model of the plate. The theoretical model described here incorporates deformations of each sheet of the lamina, which account for the effects of transverse shear deformation, transverse normal strain-stress and nonlinear variation of displacements with respect to the thickness coordinate. Predictions of the dynamic and damping values of laminated plates for various geometrical, mechanical and fastening properties are presented. Comparison with the Timoshenko beam theory is systematically made for analytical and approximation variants.

  16. Comparison of integrated clustering methods for accurate and stable prediction of building energy consumption data

    DOE PAGES

    Hsu, David

    2015-09-27

    Clustering methods are often used to model energy consumption for two reasons. First, clustering is often used to process data and to improve the predictive accuracy of subsequent energy models. Second, stable clusters that are reproducible with respect to non-essential changes can be used to group, target, and interpret observed subjects. However, it is well known that clustering methods are highly sensitive to the choice of algorithms and variables. This can lead to misleading assessments of predictive accuracy and misinterpretation of clusters in policymaking. This paper therefore introduces two methods to the modeling of energy consumption in buildings: clusterwise regression, also known as latent class regression, which integrates clustering and regression simultaneously; and cluster validation methods to measure stability. Using a large dataset of multifamily buildings in New York City, clusterwise regression is compared to common two-stage algorithms that use K-means and model-based clustering with linear regression. Predictive accuracy is evaluated using 20-fold cross validation, and the stability of the perturbed clusters is measured using the Jaccard coefficient. These results show that there seems to be an inherent tradeoff between prediction accuracy and cluster stability. This paper concludes by discussing which clustering methods may be appropriate for different analytical purposes.
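The stability half of the comparison above can be sketched directly: cluster the data, re-cluster a perturbed copy, and compute the Jaccard coefficient on co-membership pairs. The sketch below uses a tiny 1-D k-means on synthetic data rather than the paper's building dataset, and it omits the per-cluster regression stage entirely:

```python
import random

def kmeans_1d(xs, iters=25):
    """Tiny 1-D k-means with k=2 and deterministic min/max initialisation."""
    centers = [min(xs), max(xs)]
    labels = [0] * len(xs)
    for _ in range(iters):
        labels = [0 if abs(x - centers[0]) <= abs(x - centers[1]) else 1
                  for x in xs]
        for c in (0, 1):
            members = [x for x, l in zip(xs, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels

def jaccard_stability(l1, l2):
    """Jaccard coefficient on co-membership pairs; 1.0 = identical groupings."""
    n = len(l1)
    p1 = {(i, j) for i in range(n) for j in range(i + 1, n) if l1[i] == l1[j]}
    p2 = {(i, j) for i in range(n) for j in range(i + 1, n) if l2[i] == l2[j]}
    return len(p1 & p2) / len(p1 | p2)

# Two well-separated synthetic "energy use intensity" groups.
random.seed(4)
xs = ([random.gauss(50, 3) for _ in range(40)] +
      [random.gauss(120, 5) for _ in range(40)])
base = kmeans_1d(xs)
# Perturb the data slightly and re-cluster: a stable clustering keeps the
# Jaccard coefficient near 1 under such non-essential changes.
perturbed = [x + random.gauss(0, 1) for x in xs]
stability = jaccard_stability(base, kmeans_1d(perturbed))
```

Clusterwise (latent class) regression replaces this two-stage pipeline with a single model that fits the cluster assignments and the regressions jointly, which is the contrast the paper evaluates.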

  17. An analytical model for pressure of volume fractured tight oil reservoir with horizontal well

    NASA Astrophysics Data System (ADS)

    Feng, Qihong; Dou, Kaiwen; Zhang, Xianmin; Xing, Xiangdong; Xia, Tian

    2017-05-01

    The properties of tight oil reservoirs are poorer than those of conventional reservoirs: porosity and permeability are low and the flow behavior is complex, so ordinary depletion methods are ineffective. Volume fracturing goes beyond the conventional EOR mechanism by amplifying the contact area between the fractures and the reservoir, thereby improving the production of each individual well. To forecast production effectively, we use the traditional dual-porosity model, build an analytical model for the production of a volume-fractured tight oil reservoir with a horizontal well, and obtain the analytical solution in the Laplace domain. We then construct the log-log plot of dimensionless pressure versus time by Stehfest numerical inversion. After that, we discuss the factors influencing pressure; cross flow, skin factor, and threshold pressure gradient are analyzed in the article. This model provides a useful method for tight oil production forecasting and offers guidance for production capacity prediction and dynamic analysis.
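Laplace-domain solutions like the one above are commonly brought back to the time domain with the Gaver-Stehfest algorithm. A self-contained sketch follows, checked against simple transforms with known inverses rather than the paper's dual-porosity solution:

```python
import math

def stehfest_invert(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace-domain function F(s).
    N must be even; N = 10-14 is typical for smooth, non-oscillatory f(t)."""
    ln2 = math.log(2.0)
    total = 0.0
    for k in range(1, N + 1):
        # Stehfest weight V_k (large, alternating-sign integers).
        Vk = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            Vk += (j ** (N // 2) * math.factorial(2 * j) /
                   (math.factorial(N // 2 - j) * math.factorial(j) *
                    math.factorial(j - 1) * math.factorial(k - j) *
                    math.factorial(2 * j - k)))
        total += (-1) ** (k + N // 2) * Vk * F(k * ln2 / t)
    return total * ln2 / t

# Sanity check against a transform with a known inverse: 1/s^2  <->  f(t) = t.
approx = stehfest_invert(lambda s: 1.0 / s ** 2, t=2.0)
```

In the paper's workflow, F would be the dimensionless-pressure solution evaluated at each Laplace variable, and the inverted values are what populate the log-log type curves.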

  18. Completing the Link between Exposure Science and Toxicology for Improved Environmental Health Decision Making: The Aggregate Exposure Pathway Framework

    EPA Science Inventory

    Driven by major scientific advances in analytical methods, biomonitoring, computation, and a newly articulated vision for a greater impact in public health, the field of exposure science is undergoing a rapid transition from a field of observation to a field of prediction. Deploy...

  19. Tornadoes and transmission reliability planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teles, J.E.; Anderson, S.W.; Landgren, G.L.

    1980-01-01

    The objective of this paper is to introduce an analytical approach for predicting overhead transmission line outages that are caused by tornadoes. The method is presently being used to determine the effects of tornadoes on various right-of-way configurations associated with a generating station project or the supply to a major substation. 2 refs.

  20. A Meta-Analytic Review of Research on Gender Differences in Sexuality, 1993-2007

    ERIC Educational Resources Information Center

    Petersen, Jennifer L.; Hyde, Janet Shibley

    2010-01-01

    In 1993 Oliver and Hyde conducted a meta-analysis on gender differences in sexuality. The present study updated that analysis with more recent research and methods. Evolutionary psychology, cognitive social learning theory, social structural theory, and the gender similarities hypothesis provided predictions about gender differences in sexuality. We…

  1. Prediction of the Thermal Conductivity of Refrigerants by Computational Methods and Artificial Neural Network.

    PubMed

    Ghaderi, Forouzan; Ghaderi, Amir H; Ghaderi, Noushin; Najafi, Bijan

    2017-01-01

    Background: The thermal conductivity of fluids can be calculated by several computational methods. However, these methods are reliable only at confined levels of density, and there is no specific computational method for calculating thermal conductivity over wide ranges of density. Methods: In this paper, two methods, an Artificial Neural Network (ANN) approach and a computational method established upon the Rainwater-Friend theory, were used to predict the value of thermal conductivity in all ranges of density. The thermal conductivity of six refrigerants, R12, R14, R32, R115, R143, and R152 was predicted by these methods and the effectiveness of the models was specified and compared. Results: The results show that the computational method is a usable method for predicting thermal conductivity at low levels of density. However, the efficiency of this model is considerably reduced in the mid-range of density. It means that this model cannot be used at density levels higher than 6. On the other hand, the ANN approach is a reliable method for thermal conductivity prediction in all ranges of density. The best accuracy of ANN is achieved when the number of units in the hidden layer is increased. Conclusion: The results of the computational method indicate that the regular dependence between thermal conductivity and density at higher densities is eliminated. It can develop a nonlinear problem. Therefore, analytical approaches are not able to predict thermal conductivity in wide ranges of density. Instead, a nonlinear approach such as ANN is a valuable method for this purpose.

  2. Prediction of the Thermal Conductivity of Refrigerants by Computational Methods and Artificial Neural Network

    PubMed Central

    Ghaderi, Forouzan; Ghaderi, Amir H.; Ghaderi, Noushin; Najafi, Bijan

    2017-01-01

    Background: The thermal conductivity of fluids can be calculated by several computational methods. However, these methods are reliable only at confined levels of density, and there is no specific computational method for calculating thermal conductivity over wide ranges of density. Methods: In this paper, two methods, an Artificial Neural Network (ANN) approach and a computational method established upon the Rainwater-Friend theory, were used to predict the value of thermal conductivity in all ranges of density. The thermal conductivity of six refrigerants, R12, R14, R32, R115, R143, and R152 was predicted by these methods and the effectiveness of the models was specified and compared. Results: The results show that the computational method is a usable method for predicting thermal conductivity at low levels of density. However, the efficiency of this model is considerably reduced in the mid-range of density. It means that this model cannot be used at density levels higher than 6. On the other hand, the ANN approach is a reliable method for thermal conductivity prediction in all ranges of density. The best accuracy of ANN is achieved when the number of units in the hidden layer is increased. Conclusion: The results of the computational method indicate that the regular dependence between thermal conductivity and density at higher densities is eliminated. It can develop a nonlinear problem. Therefore, analytical approaches are not able to predict thermal conductivity in wide ranges of density. Instead, a nonlinear approach such as ANN is a valuable method for this purpose. PMID:29188217

  3. Temperature field for radiative tomato peeling

    NASA Astrophysics Data System (ADS)

    Cuccurullo, G.; Giordano, L.

    2017-01-01

    Nowadays, peeling of tomatoes is performed using steam or lye, which are expensive and polluting techniques, so sustainable dry-peeling alternatives are being sought; among them, radiative heating seems a fairly promising method. This paper aims to speed up the prediction of the surface temperatures needed to realize dry peeling, so a 1D analytical model for the unsteady temperature field in a rotating tomato exposed to a radiative heating source is presented. Since only short times are of interest for the problem at hand, the model involves a semi-infinite slab cooled by convective heat transfer while heated by a pulsating heat source. The model being linear, the solution is derived following the Laplace transform method. A 3D finite element model of the rotating tomato is introduced as well in order to validate the analytical solution. A satisfactory agreement is attained. Therefore, two different ways to predict the onset of the peeling conditions are available, which can help in the proper design of peeling plants. Particular attention is paid to surface temperature uniformity, which is a critical parameter for realizing easy tomato peeling.
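The semi-infinite-slab idealization used in this record has a classical closed-form building block: the temperature field in a semi-infinite solid with a convective surface boundary condition. The sketch below uses that textbook solution with illustrative tomato-like property values; the paper's full model adds the pulsating radiative source, which is not reproduced here:

```python
from math import erfc, exp, sqrt

def surface_theta(x, t, alpha, k, h):
    """Dimensionless temperature (T - Ti)/(Tinf - Ti) in a semi-infinite
    solid with convective boundary condition (classical solution):
    theta = erfc(a) - exp(h*x/k + b^2) * erfc(a + b),
    with a = x / (2*sqrt(alpha*t)) and b = h*sqrt(alpha*t)/k."""
    a = x / (2.0 * sqrt(alpha * t)) if t > 0 else float("inf")
    b = h * sqrt(alpha * t) / k
    return erfc(a) - exp(h * x / k + b * b) * erfc(a + b)

# surface response (x = 0) for water-like thermal diffusivity and mild convection
print(surface_theta(0.0, 1.0, alpha=1.4e-7, k=0.6, h=25.0))
print(surface_theta(0.0, 100.0, alpha=1.4e-7, k=0.6, h=25.0))  # grows with time
```

The property values (alpha, k, h) above are assumptions chosen only to exercise the formula, not the paper's fitted parameters.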

  4. Mathematical model to estimate risk of calcium-containing renal stones

    NASA Technical Reports Server (NTRS)

    Pietrzyk, R. A.; Feiveson, A. H.; Whitson, P. A.

    1999-01-01

    BACKGROUND/AIMS: Astronauts exposed to microgravity during the course of spaceflight undergo physiologic changes that alter the urinary environment so as to increase the risk of renal stone formation. This study was undertaken to identify a simple method with which to evaluate the potential risk of renal stone development during spaceflight. METHOD: We used a large database of urinary risk factors obtained from 323 astronauts before and after spaceflight to generate a mathematical model with which to predict the urinary supersaturation of calcium stone forming salts. RESULT: This model, which involves the fewest possible analytical variables (urinary calcium, citrate, oxalate, phosphorus, and total volume), reliably and accurately predicted the urinary supersaturation of the calcium stone forming salts when compared to results obtained from a group of 6 astronauts who collected urine during flight. CONCLUSIONS: The use of this model will simplify both routine medical monitoring during spaceflight as well as the evaluation of countermeasures designed to minimize renal stone development. This model also can be used for Earth-based applications in which access to analytical resources is limited.

  5. Diffusiophoresis in one-dimensional solute gradients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ault, Jesse T.; Warren, Patrick B.; Shin, Sangwoo

    Here, the diffusiophoretic motion of suspended colloidal particles under one-dimensional solute gradients is solved using numerical and analytical techniques. Similarity solutions are developed for the injection and withdrawal dynamics of particles into semi-infinite pores. Furthermore, a method of characteristics formulation of the diffusion-free particle transport model is presented and integrated to realize particle trajectories. Analytical solutions are presented for the limit of small particle diffusiophoretic mobility Γp relative to the solute diffusivity Ds for particle motions in both semi-infinite and finite domains. Results confirm the build up of local maxima and minima in the propagating particle front dynamics. The method of characteristics is shown to successfully predict particle motions and the position of the particle front, although it fails to accurately predict suspended particle concentrations in the vicinity of sharp gradients, such as at the particle front peak seen in some injection cases, where particle diffusion inevitably plays an important role. Results inform the design of applications in which the use of applied solute gradients can greatly enhance particle injection into and withdrawal from pores.

  6. Joint nonlinearity effects in the design of a flexible truss structure control system

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1986-01-01

    Nonlinear effects are introduced in the dynamics of large space truss structures by the connecting joints which are designed with rather important tolerances to facilitate the assembly of the structures in space. The purpose was to develop means to investigate the nonlinear dynamics of the structures, particularly the limit cycles that might occur when active control is applied to the structures. An analytical method was sought and derived to predict the occurrence of limit cycles and to determine their stability. This method is mainly based on the quasi-linearization of every joint using describing functions. This approach was proven successful when simple dynamical systems were tested. Its applicability to larger systems depends on the amount of computations it requires, and estimates of the computational task tend to indicate that the number of individual sources of nonlinearity should be limited. Alternate analytical approaches, which do not account for every single nonlinearity, or the simulation of a simplified model of the dynamical system should, therefore, be investigated to determine a more effective way to predict limit cycles in large dynamical systems with an important number of distributed nonlinearities.
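The quasi-linearization step described in this record replaces each joint nonlinearity with an amplitude-dependent gain. As an illustration, the describing function of a unit-slope deadzone, a common model for joint free-play, is sketched below; the paper's actual joint model may differ:

```python
from math import asin, pi, sqrt

def deadzone_df(A, delta):
    """Describing function N(A) of a unit-slope deadzone of half-width delta:
    the equivalent gain seen by a sinusoid of amplitude A (zero inside the
    deadzone, approaching 1 for large amplitudes)."""
    if A <= delta:
        return 0.0
    r = delta / A
    return 1.0 - (2.0 / pi) * (asin(r) + r * sqrt(1.0 - r * r))

# gain approaches 1 for large amplitudes, 0 near the deadzone half-width
print(deadzone_df(10.0, 0.1), deadzone_df(0.11, 0.1))
```

A limit cycle is then predicted at any amplitude A and frequency ω satisfying the harmonic-balance condition N(A) G(jω) = -1, where G is the linear dynamics seen by the joint.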

  7. Determining noise temperatures in beam waveguide systems

    NASA Technical Reports Server (NTRS)

    Imbriale, W.; Veruttipong, W.; Otoshi, T.; Franco, M.

    1994-01-01

    A new 34-m research and development antenna was fabricated and tested as a precursor to introducing beam waveguide (BWG) antennas and Ka-band (32 GHz) frequencies into the NASA/JPL Deep Space Network. For deep space use, system noise temperature is a critical parameter. There are thought to be two major contributors to noise temperature in a BWG system: the spillover past the mirrors, and the conductivity loss in the walls. However, to date, there are no generally accepted methods for computing noise temperatures in a beam waveguide system. An extensive measurement program was undertaken to determine noise temperatures in such a system along with a correspondent effort in analytic prediction. Utilizing a very sensitive radiometer, noise temperature measurements were made at the Cassegrain focus, an intermediate focal point, and the focal point in the basement pedestal room. Several different horn diameters were used to simulate different amounts of spillover past the mirrors. Two analytic procedures were developed for computing noise temperature, one utilizing circular waveguide modes and the other a semiempirical approach. The results of both prediction methods are compared to the experimental data.

  8. Extension of the Helmholtz-Smoluchowski velocity to the hydrophobic microchannels with velocity slip.

    PubMed

    Park, H M; Kim, T W

    2009-01-21

    Electrokinetic flows through hydrophobic microchannels experience velocity slip at the microchannel wall, which affects volumetric flow rate and solute retention time. The usual method of predicting the volumetric flow rate and velocity profile for hydrophobic microchannels is to solve the Navier-Stokes equation and the Poisson-Boltzmann equation for the electric potential with the boundary condition of velocity slip expressed by the Navier slip coefficient, which is computationally demanding and defies analytic solutions. In the present investigation, we have devised a simple method of predicting the velocity profiles and volumetric flow rates of electrokinetic flows by extending the concept of the Helmholtz-Smoluchowski velocity to microchannels with Navier slip. The extended Helmholtz-Smoluchowski velocity is simple to use and yields accurate results as compared to the exact solutions. Employing the extended Helmholtz-Smoluchowski velocity, the analytical expressions for volumetric flow rate and velocity profile for electrokinetic flows through rectangular microchannels with Navier slip have been obtained at high values of zeta potential. The range of validity of the extended Helmholtz-Smoluchowski velocity is also investigated.
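One commonly quoted form of the slip-modified electroosmotic velocity multiplies the classical Helmholtz-Smoluchowski result by (1 + b/λD), with b the Navier slip length and λD the Debye length. The exact extension derived in the paper may differ, so the factor used below is an illustrative assumption:

```python
def helmholtz_smoluchowski(eps, zeta, E, mu, slip_length=0.0, debye_length=None):
    """Electroosmotic slip velocity u = (eps*zeta*E/mu) * (1 + b/lambda_D):
    the classical Helmholtz-Smoluchowski velocity, amplified by a commonly
    quoted Navier-slip correction factor (assumed form, for illustration)."""
    u = eps * zeta * E / mu
    if slip_length and debye_length:
        u *= 1.0 + slip_length / debye_length
    return u

# water-like numbers: eps ~ 7e-10 F/m, zeta = 50 mV, E = 10 kV/m, mu = 1e-3 Pa s
u_noslip = helmholtz_smoluchowski(7e-10, 0.05, 1e4, 1e-3)
u_slip = helmholtz_smoluchowski(7e-10, 0.05, 1e4, 1e-3,
                                slip_length=20e-9, debye_length=10e-9)
print(u_noslip, u_slip)  # slip triples the electroosmotic velocity here
```

Because the slip length of hydrophobic walls (tens of nanometers) can exceed the Debye length, the correction factor can be large, which is why slip matters for volumetric flow rate.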

  9. Comparison of the performance of the CMS Hierarchical Condition Category (CMS-HCC) risk adjuster with the Charlson and Elixhauser comorbidity measures in predicting mortality.

    PubMed

    Li, Pengxiang; Kim, Michelle M; Doshi, Jalpa A

    2010-08-20

    The Centers for Medicare and Medicaid Services (CMS) has implemented the CMS-Hierarchical Condition Category (CMS-HCC) model to risk adjust Medicare capitation payments. This study intends to assess the performance of the CMS-HCC risk adjustment method and to compare it to the Charlson and Elixhauser comorbidity measures in predicting in-hospital and six-month mortality in Medicare beneficiaries. The study used the 2005-2006 Chronic Condition Data Warehouse (CCW) 5% Medicare files. The primary study sample included all community-dwelling fee-for-service Medicare beneficiaries with a hospital admission between January 1st, 2006 and June 30th, 2006. Additionally, four disease-specific samples consisting of subgroups of patients with principal diagnoses of congestive heart failure (CHF), stroke, diabetes mellitus (DM), and acute myocardial infarction (AMI) were also selected. Four analytic files were generated for each sample by extracting inpatient and/or outpatient claims for each patient. Logistic regressions were used to compare the methods. Model performance was assessed using the c-statistic, the Akaike's information criterion (AIC), the Bayesian information criterion (BIC) and their 95% confidence intervals estimated using bootstrapping. The CMS-HCC had statistically significant higher c-statistic and lower AIC and BIC values than the Charlson and Elixhauser methods in predicting in-hospital and six-month mortality across all samples in analytic files that included claims from the index hospitalization. Exclusion of claims for the index hospitalization generally led to drops in model performance across all methods with the highest drops for the CMS-HCC method. However, the CMS-HCC still performed as well or better than the other two methods. The CMS-HCC method demonstrated better performance relative to the Charlson and Elixhauser methods in predicting in-hospital and six-month mortality. 
The CMS-HCC model is preferred over the Charlson and Elixhauser methods if information about the patient's diagnoses prior to the index hospitalization is available and used to code the risk adjusters. However, caution should be exercised in studies evaluating inpatient processes of care and where data on pre-index admission diagnoses are unavailable.
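The model-comparison statistics used in this study (c-statistic, AIC, BIC) are straightforward to compute from fitted predictions. A minimal sketch with toy data, not the study's Medicare sample:

```python
from math import log

def c_statistic(y_true, y_score):
    """Concordance (c-statistic / AUC): fraction of event/non-event pairs
    in which the event has the higher predicted risk (ties count 1/2)."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def aic_bic(log_likelihood, n_params, n_obs):
    """Akaike and Bayesian information criteria (lower is better)."""
    return (2 * n_params - 2 * log_likelihood,
            n_params * log(n_obs) - 2 * log_likelihood)

y = [1, 1, 0, 0, 0]            # observed mortality
p = [0.9, 0.6, 0.7, 0.2, 0.1]  # predicted risks from some fitted model
print(c_statistic(y, p))       # 5 of 6 pairs concordant -> 0.833...
print(aic_bic(-2.5, 3, 100))
```

Confidence intervals for these statistics, as in the study, would come from recomputing them over bootstrap resamples of the data.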

  10. Dynamic Rod Worth Measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chao, Y.A.; Chapman, D.M.; Hill, D.J.

    2000-12-15

    The dynamic rod worth measurement (DRWM) technique is a method of quickly validating the predicted bank worth of control rods and shutdown rods. The DRWM analytic method is based on three-dimensional, space-time kinetic simulations of the rapid rod movements. Its measurement data is processed with an advanced digital reactivity computer. DRWM has been used as the method of bank worth validation at numerous plant startups with excellent results. The process and methodology of DRWM are described, and the measurement results of using DRWM are presented.

  11. High-performance heat pipes for heat recovery applications

    NASA Technical Reports Server (NTRS)

    Saaski, E. W.; Hartl, J. H.

    1980-01-01

    Methods to improve the performance of reflux heat pipes for heat recovery applications were examined both analytically and experimentally. Various models for the estimation of reflux heat pipe transport capacity were surveyed in the literature and compared with experimental data. A high transport capacity reflux heat pipe was developed that provides up to a factor of 10 capacity improvement over conventional open tube designs; analytical models were developed for this device and incorporated into a computer program HPIPE. Good agreement of the model predictions with data for R-11 and benzene reflux heat pipes was obtained.

  12. Measurement of Plastic Stress and Strain for Analytical Method Verification (MSFC Center Director's Discretionary Fund Project No. 93-08)

    NASA Technical Reports Server (NTRS)

    Price, J. M.; Steeve, B. E.; Swanson, G. R.

    1999-01-01

    The analytical prediction of stress, strain, and fatigue life at locations experiencing local plasticity is full of uncertainties. Much of this uncertainty arises from the material models and their use in the numerical techniques used to solve plasticity problems. Experimental measurements of actual plastic strains would allow the validity of these models and solutions to be tested. This memorandum describes how experimental plastic residual strain measurements were used to verify the results of a thermally induced plastic fatigue failure analysis of a space shuttle main engine fuel pump component.

  13. Analytical Modeling of Groundwater Seepages to St. Lucie Estuary

    NASA Astrophysics Data System (ADS)

    Lee, J.; Yeh, G.; Hu, G.

    2008-12-01

    In this paper, six analytical models describing the hydraulic interaction of stream-aquifer systems were applied to the St. Lucie Estuary (SLE). These are analytical solutions for: (1) flow from a finite aquifer to a canal, (2) flow from an infinite aquifer to a canal, (3) the linearized Laplace system in a seepage surface, (4) wave propagation in the aquifer, (5) potential flow through stratified unconfined aquifers, and (6) flow through stratified confined aquifers. Input data for the analytical solutions were obtained from monitoring wells and river stages at seepage-meter sites. Four transects in the study area are available: Club Med, Harbour Ridge, Lutz/MacMillan, and Pendarvis Cove, located in the St. Lucie River. The analytical models were first calibrated with seepage meter measurements and then used to estimate groundwater discharges into the St. Lucie River. From this process, analytical relationships between the seepage rate and river stages and/or groundwater tables were established to predict the seasonal and monthly variation in groundwater seepage into the SLE. The seepage rate estimates from the analytical models agreed well with measured data in some cases but only fairly in others. This is not unexpected because analytical solutions have inherently simplified assumptions, which may be more valid for some cases than for others. From analytical calculations, it is possible to predict approximate seepage rates in the study domain when the assumptions underlying these analytical models are valid. The finite and infinite aquifer models and the linearized Laplace method perform well for the Pendarvis Cove and Lutz/MacMillan sites, but only fairly for the other two. The wave propagation model gave very good agreement in phase but only fair agreement in magnitude at all four sites. The stratified unconfined and confined aquifer models gave similarly good agreement with measurements at three sites but poor agreement at the Club Med site; none of the analytical models presented here can fit the data at that site. To obtain better estimates at all sites, numerical models that couple river hydraulics and groundwater flow, with fewer simplifications and assumptions about the system, may have to be adopted.

  14. Pavement Performance : Approaches Using Predictive Analytics

    DOT National Transportation Integrated Search

    2018-03-23

    Acceptable pavement condition is paramount to road safety. Using predictive analytics techniques, this project attempted to develop models that provide an assessment of pavement condition based on an array of indicators that include pavement distress,...

  15. Analytical Finite Element Simulation Model for Structural Crashworthiness Prediction

    DOT National Transportation Integrated Search

    1974-02-01

    The analytical development and appropriate derivations are presented for a simulation model of vehicle crashworthiness prediction. Incremental equations governing the nonlinear elasto-plastic dynamic response of three-dimensional frame structures are...

  16. New robust bilinear least squares method for the analysis of spectral-pH matrix data.

    PubMed

    Goicoechea, Héctor C; Olivieri, Alejandro C

    2005-07-01

    A new second-order multivariate method has been developed for the analysis of spectral-pH matrix data, based on a bilinear least-squares (BLLS) model achieving the second-order advantage and handling multiple calibration standards. A simulated Monte Carlo study of synthetic absorbance-pH data allowed comparison of the newly proposed BLLS methodology with constrained parallel factor analysis (PARAFAC) and with the combined multivariate curve resolution-alternating least-squares (MCR-ALS) technique under different conditions of sample-to-sample pH mismatch and analyte-background ratio. The results indicate an improved prediction ability for the new method. Experimental data generated by measuring absorption spectra of several calibration standards of ascorbic acid and samples of orange juice were subjected to second-order calibration analysis with PARAFAC, MCR-ALS, and the new BLLS method. The results indicate that the latter method provides the best analytical results in regard to analyte recovery in samples of complex composition requiring strict adherence to the second-order advantage. Linear dependencies appear when multivariate data are produced by using the pH or a reaction time as one of the data dimensions, posing a challenge to classical multivariate calibration models. The presently discussed algorithm is useful for these latter systems.
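The bilinear core of a BLLS-type model can be illustrated with a rank-1 alternating least-squares fit, X ≈ a bᵀ, where a plays the role of a concentration (pH) profile and b a spectral profile. This is a toy sketch of the bilinear idea only; the paper's algorithm additionally handles multiple calibration standards and the second-order advantage:

```python
def bilinear_als(X, iters=50):
    """Rank-1 bilinear least-squares fit X ≈ outer(a, b) by alternating
    closed-form updates of the row factor a and column factor b."""
    rows, cols = len(X), len(X[0])
    b = [1.0] * cols
    a = [0.0] * rows
    for _ in range(iters):
        bb = sum(v * v for v in b)
        a = [sum(X[i][j] * b[j] for j in range(cols)) / bb for i in range(rows)]
        aa = sum(v * v for v in a)
        b = [sum(X[i][j] * a[i] for i in range(rows)) / aa for j in range(cols)]
    return a, b

# exact rank-1 data is recovered (up to a scaling shared between a and b)
X = [[2.0, 4.0, 6.0],
     [6.0, 12.0, 18.0]]
a, b = bilinear_als(X)
print([a[i] * b[j] for i in range(2) for j in range(3)])  # ≈ [2, 4, 6, 6, 12, 18]
```

The factorization is only defined up to scaling, which is why second-order methods normalize one factor or anchor it to calibration standards.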

  17. An analytical and experimental investigation of sandwich composites subjected to low-velocity impact

    NASA Astrophysics Data System (ADS)

    Anderson, Todd Alan

    1999-12-01

    This study involves an experimental and analytical investigation of low-velocity impact phenomenon in sandwich composite structures. The analytical solution of a three-dimensional finite-geometry multi-layer specially orthotropic panel subjected to static and transient transverse loading cases is presented. The governing equations of the static and dynamic formulations are derived from Reissner's functional and solved by enforcing the continuity of traction and displacement components between adjacent layers. For the dynamic loading case, the governing equations are solved by applying Fourier or Laplace transformation in time. Additionally, the static solution is extended to solve the contact problem between the sandwich laminate and a rigid sphere. An iterative method is employed to determine the sphere's unknown contact area and pressure distribution. A failure criterion is then applied to the sandwich laminate's stress and strain field to predict impact damage. The analytical accuracy of the present study is verified through comparisons with finite element models, other analyses, and through experimentation. Low-velocity impact tests were conducted to characterize the type and extent of the damage observed in a variety of sandwich configurations with graphite/epoxy face sheets and foam or honeycomb cores. Correlation of the residual indentation and cross-sectional views of the impacted specimens provides a criterion for the extent of damage. Quasi-static indentation tests are also performed and show excellent agreement when compared with the analytical predictions. Finally, piezoelectric polyvinylidene fluoride (PVF2) film sensors are found to be effective in detecting low-velocity impact.

  18. An analytical method to simulate the H I 21-cm visibility signal for intensity mapping experiments

    NASA Astrophysics Data System (ADS)

    Sarkar, Anjan Kumar; Bharadwaj, Somnath; Marthi, Visweshwar Ram

    2018-01-01

    Simulations play a vital role in testing and validating H I 21-cm power spectrum estimation techniques. Conventional methods use techniques like N-body simulations to simulate the sky signal which is then passed through a model of the instrument. This makes it necessary to simulate the H I distribution in a large cosmological volume, and incorporate both the light-cone effect and the telescope's chromatic response. The computational requirements may be particularly large if one wishes to simulate many realizations of the signal. In this paper, we present an analytical method to simulate the H I visibility signal. This is particularly efficient if one wishes to simulate a large number of realizations of the signal. Our method is based on theoretical predictions of the visibility correlation which incorporate both the light-cone effect and the telescope's chromatic response. We have demonstrated this method by applying it to simulate the H I visibility signal for the upcoming Ooty Wide Field Array Phase I.

  19. The Role of Teamwork in the Analysis of Big Data: A Study of Visual Analytics and Box Office Prediction.

    PubMed

    Buchanan, Verica; Lu, Yafeng; McNeese, Nathan; Steptoe, Michael; Maciejewski, Ross; Cooke, Nancy

    2017-03-01

    Historically, domains such as business intelligence would require a single analyst to engage with data, develop a model, answer operational questions, and predict future behaviors. However, as the problems and domains become more complex, organizations are employing teams of analysts to explore and model data to generate knowledge. Furthermore, given the rapid increase in data collection, organizations are struggling to develop practices for intelligence analysis in the era of big data. Currently, a variety of machine learning and data mining techniques are available to model data and to generate insights and predictions, and developments in the field of visual analytics have focused on how to effectively link data mining algorithms with interactive visuals to enable analysts to explore, understand, and interact with data and data models. Although studies have explored the role of single analysts in the visual analytics pipeline, little work has explored the role of teamwork and visual analytics in the analysis of big data. In this article, we present an experiment integrating statistical models, visual analytics techniques, and user experiments to study the role of teamwork in predictive analytics. We frame our experiment around the analysis of social media data for box office prediction problems and compare the prediction performance of teams, groups, and individuals. Our results indicate that a team's performance is mediated by the team's characteristics such as openness of individual members to others' positions and the type of planning that goes into the team's analysis. These findings have important implications for how organizations should create teams in order to make effective use of information from their analytic models.

  20. Real-time determination of critical quality attributes using near-infrared spectroscopy: a contribution for Process Analytical Technology (PAT).

    PubMed

    Rosas, Juan G; Blanco, Marcel; González, Josep M; Alcalà, Manel

    2012-08-15

    Process Analytical Technology (PAT) is playing a central role in current regulations on pharmaceutical production processes. Proper understanding of all operations and variables connecting the raw materials to end products is one of the keys to ensuring quality of the products and continuous improvement in their production. Near infrared spectroscopy (NIRS) has been successfully used to develop faster, non-invasive quantitative methods for the real-time prediction of critical quality attributes (CQA) of pharmaceutical granulates (API content, pH, moisture, flowability, angle of repose and particle size). NIR spectra were acquired from the bin blender after the granulation process in a non-classified area without the need for sample withdrawal. The methodology used for data acquisition, calibration modelling and method application in this context is relatively inexpensive and can be easily implemented by most pharmaceutical laboratories. For this purpose, the Partial Least-Squares (PLS) algorithm was used to calculate multivariate calibration models, which provided acceptable Root Mean Square Error of Prediction (RMSEP) values (RMSEP(API)=1.0 mg/g; RMSEP(pH)=0.1; RMSEP(Moisture)=0.1%; RMSEP(Flowability)=0.6 g/s; RMSEP(Angle of repose)=1.7° and RMSEP(Particle size)=2.5%), allowing application to routine analyses of production batches. The proposed method affords quality assessment of end products and the determination of important parameters with a view to understanding production processes used by the pharmaceutical industry. As shown here, the NIRS technique is a highly suitable tool for Process Analytical Technologies. Copyright © 2012 Elsevier B.V. All rights reserved.
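The RMSEP figures quoted above follow the standard definition: the root mean square deviation between reference (laboratory) values and NIR predictions over an external validation set. A sketch with hypothetical numbers, not the study's validation data:

```python
from math import sqrt

def rmsep(y_ref, y_pred):
    """Root Mean Square Error of Prediction between reference values and
    model predictions for a validation set."""
    n = len(y_ref)
    return sqrt(sum((r - p) ** 2 for r, p in zip(y_ref, y_pred)) / n)

# e.g. API content in mg/g: predictions within ~1 mg/g of the reference assay
print(rmsep([100.0, 102.0, 98.0], [101.0, 101.0, 99.0]))  # 1.0
```

An RMSEP is only meaningful relative to the working range and the reference method's own uncertainty, which is how acceptance values like those above are judged.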

  1. A multiplex PCR assay for the rapid and sensitive detection of methicillin-resistant Staphylococcus aureus and simultaneous discrimination of Staphylococcus aureus from coagulase-negative staphylococci.

    PubMed

    Xu, Benjin; Liu, Ling; Liu, Li; Li, Xinping; Li, Xiaofang; Wang, Xin

    2012-11-01

    Methicillin-resistant Staphylococcus aureus (MRSA) is a global health concern and has been detected in food and food production animals. Conventional testing for detection of MRSA takes 3 to 5 d to yield complete information on the organism and its antibiotic sensitivity pattern. A rapid method is therefore needed to diagnose and treat MRSA infections. The present study focused on the development of a multiplex PCR assay for the rapid and sensitive detection of MRSA. The assay simultaneously detected 4 genes, namely, 16S rRNA of the Staphylococcus genus, femA of S. aureus, mecA that encodes methicillin resistance, and one internal control. It was rapid and yielded results within 4 h. The analytical sensitivity and specificity of the multiplex PCR assay were evaluated by comparing it with the conventional method. The analytical sensitivity of the multiplex PCR assay at the DNA level was 10 ng DNA. The analytical specificity was evaluated with 10 reference staphylococci strains and was 100%. The diagnostic evaluation of MRSA was carried out using 360 foodborne staphylococci isolates, and showed 99.1% specificity, 96.4% sensitivity, 97.5% positive predictive value, and 97.3% negative predictive value compared to the conventional method. The inclusion of an internal control in the multiplex PCR assay is important to exclude false-negative cases. This test can be used as an effective diagnostic and surveillance tool to investigate the spread and emergence of MRSA. © 2012 Institute of Food Technologists®
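The diagnostic statistics reported for the assay come from a standard 2×2 comparison against the reference method. A sketch follows; the per-cell counts below are illustrative guesses sized to the 360-isolate panel, not the study's actual table:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table comparing a
    new assay (rows) against the reference method (columns)."""
    return {
        "sensitivity": tp / (tp + fn),   # true positives found
        "specificity": tn / (tn + fp),   # true negatives found
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# illustrative counts only (the study's per-cell counts are not given above)
print(diagnostic_metrics(tp=107, fp=2, fn=4, tn=247))
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the prevalence of MRSA in the panel tested, so they do not transfer directly to populations with different prevalence.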

  2. Predicting the chromatographic retention of polymers: poly(methyl methacrylate)s and polyacrylate blends.

    PubMed

    Bashir, Mubasher A; Radke, Wolfgang

    2007-09-07

    The suitability of a retention model designed specifically for polymers is investigated to describe and predict the chromatographic retention behavior of poly(methyl methacrylate)s as a function of mobile phase composition and gradient steepness. It is found that three simple yet rationally chosen chromatographic experiments suffice to extract the analyte-specific model parameters necessary to calculate retention volumes. This allows accurate retention volumes to be predicted from a minimum number of initial experiments, so methods for polymer separations can be developed in a relatively short time. The suitability of the virtual chromatography approach for predicting the separation of a polymer blend is demonstrated for the first time using a blend of different polyacrylates.

  3. Prediction of vortex shedding from circular and noncircular bodies in subsonic flow

    NASA Technical Reports Server (NTRS)

    Mendenhall, Michael R.; Lesieutre, Daniel J.

    1987-01-01

    An engineering prediction method and associated computer code VTXCLD are presented which predict nose vortex shedding from circular and noncircular bodies in subsonic flow at angles of attack and roll. The axisymmetric body is represented by point sources and doublets, and noncircular cross sections are transformed to a circle by either analytical or numerical conformal transformations. The leeward vortices are modeled by discrete vortices in crossflow planes along the body; thus, the three-dimensional steady flow problem is reduced to a two-dimensional, unsteady, separated flow problem for solution. Comparisons of measured and predicted surface pressure distributions, flowfield surveys, and aerodynamic characteristics are presented for bodies with circular and noncircular cross-sectional shapes.

  4. Prediction of Experimental Surface Heat Flux of Thin Film Gauges using ANFIS

    NASA Astrophysics Data System (ADS)

    Sarma, Shrutidhara; Sahoo, Niranjan; Unal, Aynur

    2018-05-01

    Precise quantification of surface heat fluxes in highly transient environments is of paramount importance for the design of engineering equipment such as thermal protection and cooling systems. Such environments are simulated in experimental facilities by exposing the surface to transient heat loads, typically step or impulsive in nature. The surface heating rates are then determined from the highly transient temperature history captured by efficient surface temperature sensors. The classical approach is to use thin film gauges (TFGs), in which temperature variations are acquired within milliseconds, allowing calculation of surface heat flux based on the theory of one-dimensional heat conduction in a semi-infinite body. Following recent developments in soft computing, the present study applies an intelligent system technique, the adaptive neuro-fuzzy inference system (ANFIS), to recover surface heat fluxes from a given temperature history recorded by TFGs without the need to solve lengthy analytical equations. Experiments were carried out by applying a known impulse heat load to the TFGs with a laser beam. The corresponding voltage signals were acquired and surface heat fluxes estimated through the classical analytical approach. These signals were then used to train the ANFIS model, which subsequently predicted outputs for test values. Results from both methods were compared, and the surface heat fluxes were used to characterize the non-linear relationship between the thermal and electrical properties of the gauges, which is highly pertinent to the design of efficient TFGs. Surface plots were created to give insight into the dimensionality of the non-linear dependence of the thermal and electrical parameters on each other. It is observed that a properly optimized ANFIS model can predict the impulsive heat profiles with significant accuracy. This paper thus shows the appropriateness of soft computing techniques as a practical alternative to tedious analytical formulation and, hence, effectively quantifies the modeling of TFGs.
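The classical analytical approach mentioned above — recovering heat flux from a surface temperature history under one-dimensional semi-infinite conduction — is commonly discretized with the Cook-Felderman formula. A minimal sketch with an illustrative thermal product (the material values are assumptions, not the gauges in the paper):

```python
import numpy as np

def heat_flux_cook_felderman(t, T, rho_c_k):
    """Recover surface heat flux from a surface temperature history, assuming
    1-D conduction into a semi-infinite body (discrete Cook-Felderman form)."""
    q = np.zeros_like(T)
    coef = 2.0 * np.sqrt(rho_c_k) / np.sqrt(np.pi)
    for n in range(1, len(t)):
        i = np.arange(1, n + 1)
        denom = np.sqrt(t[n] - t[i]) + np.sqrt(t[n] - t[i - 1])
        q[n] = coef * np.sum((T[i] - T[i - 1]) / denom)
    return q

# Consistency check against the analytic response to a constant flux q0:
# T(t) = 2*q0*sqrt(t/pi)/sqrt(rho*c*k)
rho_c_k = 1.5e6        # thermal product rho*c*k, illustrative SI value
q0 = 5.0e4             # W/m^2
t = np.linspace(0.0, 0.5, 2001)
T = 2.0 * q0 * np.sqrt(t / np.pi) / np.sqrt(rho_c_k)
q = heat_flux_cook_felderman(t, T, rho_c_k)   # should recover ~q0 at late times
```

The reconstructed flux converges to the imposed constant flux as the time step shrinks.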

  5. Review on failure prediction techniques of composite single lap joint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ab Ghani, A.F., E-mail: ahmadfuad@utem.edu.my; Rivai, Ahmad, E-mail: ahmadrivai@utem.edu.my

    2016-03-29

    Adhesive bonding is the most appropriate joining method in the construction of composite structures. The use of reliable design and prediction techniques will produce better-performing bonded joints. Several recent papers and journal articles have been reviewed and synthesized to understand the current state of the art in this area. The review studies the most relevant analytical solutions for composite adherends, starting with the most fundamental ones based on beam/plate theory, then extending to single-lap-joint nonlinearity and failure prediction, and finally to failure prediction for the composite single lap joint. The review also encompasses finite element modelling as a tool to predict the elastic response of the composite single lap joint and to predict failure numerically.

  6. Contact-coupled impact of slender rods: analysis and experimental validation

    PubMed Central

    Tibbitts, Ira B.; Kakarla, Deepika; Siskey, Stephanie; Ochoa, Jorge A.; Ong, Kevin L.; Brannon, Rebecca M.

    2013-01-01

    To validate models of contact mechanics in low speed structural impact, slender rods were impacted in a drop tower, and measurements of the contact and vibration were compared to analytical and finite element (FE) models. The contact area was recorded using a novel thin-film transfer technique, and the contact duration was measured using electrical continuity. Strain gages recorded the vibratory strain in one rod, and a laser Doppler vibrometer measured speed. The experiment was modeled analytically on a one-dimensional spatial domain using a quasi-static Hertzian contact law and a system of delay differential equations. The three-dimensional FE model used hexahedral elements, a penalty contact algorithm, and explicit time integration. A small submodel taken from the initial global FE model economically refined the analysis in the small contact region. Measured contact areas were within 6% of both models’ predictions, peak speeds within 2%, cyclic strains within 12 με (RMS value), and contact durations within 2 μs. The global FE model and the measurements revealed small disturbances, not predicted by the analytical model, believed to be caused by interactions of the non-planar stress wavefront with the rod’s ends. The accuracy of the predictions for this simple test, as well as the versatility of the diagnostic tools, validates the theoretical and computational models, corroborates instrument calibration, and establishes confidence that the same methods may be used in experimental and computational study of contact mechanics during impact of more complicated structures. Recommendations are made for applying the methods to a particular biomechanical problem: the edge-loading of a loose prosthetic hip joint which can lead to premature wear and prosthesis failure. PMID:24729630
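The quasi-static Hertzian contact law used in the analytical model above can be illustrated with a lumped impact sketch: a contact force proportional to indentation to the 3/2 power decelerates the impacting mass. The mass, contact stiffness, and speed below are illustrative values, not the paper's rods:

```python
# Lumped quasi-static Hertzian impact sketch: m*x'' = -k_h*x^(3/2) during contact.
# m, k_h, v0 are illustrative values, not the experiment's rods.
m, k_h, v0 = 0.1, 5.0e9, 2.0       # kg, N/m^1.5, m/s
dt = 1.0e-8                        # s
x, v, t = 0.0, v0, 0.0
x_max_sim, f_peak = 0.0, 0.0
while v > 0.0 or x > 0.0:          # integrate until the bodies separate
    f = k_h * max(x, 0.0) ** 1.5   # Hertz contact force
    v -= f / m * dt                # semi-implicit (symplectic) Euler step
    x += v * dt
    t += dt
    x_max_sim = max(x_max_sim, x)
    f_peak = max(f_peak, f)
contact_duration = t

# Energy balance gives the analytic peak indentation for comparison:
# (1/2)*m*v0^2 = (2/5)*k_h*x_max^(5/2)
x_max_analytic = (5.0 * m * v0 ** 2 / (4.0 * k_h)) ** 0.4
```

The simulated peak indentation matches the energy-balance value, the same kind of closed-form check the paper's delay-differential model is validated against.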

  7. Micromechanics Analysis Code With Generalized Method of Cells (MAC/GMC): User Guide. Version 3

    NASA Technical Reports Server (NTRS)

    Arnold, S. M.; Bednarcyk, B. A.; Wilt, T. E.; Trowbridge, D.

    1999-01-01

    The ability to accurately predict the thermomechanical deformation response of advanced composite materials continues to play an important role in the development of these strategic materials. Analytical models that predict the effective behavior of composites are used not only by engineers performing structural analysis of large-scale composite components but also by material scientists developing new material systems. For an analytical model to fulfill these two distinct functions it must be based on a micromechanics approach which utilizes physically based deformation and life constitutive models and allows one to generate the average (macro) response of a composite material given the properties of the individual constituents and their geometric arrangement. Here the user guide is described for the recently developed, computationally efficient and comprehensive micromechanics analysis code, MAC, whose predictive capability rests entirely upon the fully analytical generalized method of cells, GMC, micromechanics model. MAC/GMC is a versatile form of research software that "drives" the doubly or triply periodic micromechanics constitutive models based upon GMC. MAC/GMC enhances the basic capabilities of GMC by providing a modular framework wherein 1) various thermal, mechanical (stress or strain control) and thermomechanical load histories can be imposed, 2) different integration algorithms may be selected, 3) a variety of material constitutive models (both deformation and life) may be utilized and/or implemented, 4) a variety of fiber architectures (unidirectional, laminate, and woven) may be easily accessed through their corresponding representative volume elements contained within the supplied library of RVEs or input directly by the user, and 5) graphical post processing of the macro and/or micro field quantities is made available.

  8. A General Simulation Method for Multiple Bodies in Proximate Flight

    NASA Technical Reports Server (NTRS)

    Meakin, Robert L.

    2003-01-01

    Methods of unsteady aerodynamic simulation for an arbitrary number of independent bodies flying in close proximity are considered. A novel method to efficiently detect collision contact points is described. A method to compute body trajectories in response to aerodynamic loads, applied loads, and inter-body collisions is also given. The physical correctness of the methods is verified by comparison to a set of analytic solutions. The methods, combined with a Navier-Stokes solver, are used to demonstrate the possibility of predicting the unsteady aerodynamics and flight trajectories of moving bodies that involve rigid-body collisions.

  9. Helios: Understanding Solar Evolution Through Text Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Randazzese, Lucien

    This proof-of-concept project focused on developing, testing, and validating a range of bibliometric, text analytic, and machine-learning based methods to explore the evolution of three photovoltaic (PV) technologies: Cadmium Telluride (CdTe), Dye-Sensitized solar cells (DSSC), and Multi-junction solar cells. The analytical approach to the work was inspired by previous work by the same team to measure and predict the scientific prominence of terms and entities within specific research domains. The goal was to create tools that could assist domain-knowledgeable analysts in investigating the history and path of technological developments in general, with a focus on analyzing step-function changes in performance, or "breakthroughs," in particular. The text-analytics platform developed during this project was dubbed Helios. The project relied on computational methods for analyzing large corpora of technical documents. For this project we ingested technical documents from the following sources into Helios: Thomson Scientific Web of Science (papers), the U.S. Patent & Trademark Office (patents), the U.S. Department of Energy (technical documents), the U.S. National Science Foundation (project funding summaries), and a hand-curated set of full-text documents from Thomson Scientific and other sources.

  10. Flow through three-dimensional arrangements of cylinders with alternating streamwise planar tilt

    NASA Astrophysics Data System (ADS)

    Sahraoui, M.; Marshall, H.; Kaviany, M.

    1993-09-01

    In this report, fluid flow through a three-dimensional model of fibrous filters is examined. In this model, the three-dimensional Stokes equation with the appropriate periodic boundary conditions is solved using the finite volume method. In addition to the numerical solution, we attempt to model this flow analytically by using the two-dimensional extended analytic solution in each of the unit cells of the three-dimensional structure. Particle trajectories computed using the superimposed analytic solution of the flow field are close to those computed using the numerical solution of the flow field. The numerical results show that the pressure drop is not affected significantly by the relative angle of rotation of the cylinders for the high porosities used in this study (epsilon = 0.8 and epsilon = 0.95). The numerical solution and the superimposed analytic solution are also compared in terms of the particle capture efficiency. The results show that the efficiency predictions using the two methods are within 10% for St = 0.01 and 5% for St = 100. As the porosity decreases, the three-dimensional effect becomes more significant and a difference of 35% is obtained for epsilon = 0.8.
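The role of the Stokes number in particle capture can be illustrated with a toy trajectory integration. The flow field below is a generic 2-D stagnation-point flow, an assumption standing in for the filter's cell flow, not the paper's solution:

```python
import numpy as np

def streamline_deviation(St, t_end=2.0, dt=1e-3):
    """Euler integration of a heavy particle with Stokes drag in the 2-D
    stagnation-point flow u = (x, -y); dv/dt = (u - v)/St. Returns the drift
    of the streamline invariant x*y, which a fluid tracer conserves exactly."""
    pos = np.array([-1.0, 1.0])
    vel = np.array([pos[0], -pos[1]])   # released at the local fluid velocity
    c0 = pos[0] * pos[1]
    for _ in range(int(t_end / dt)):
        u = np.array([pos[0], -pos[1]])
        vel = vel + (u - vel) / St * dt     # Euler is adequate since dt < St here
        pos = pos + vel * dt
    return abs(pos[0] * pos[1] - c0)

dev_low = streamline_deviation(0.01)   # near-tracer: follows the streamline
dev_high = streamline_deviation(10.0)  # inertial: crosses streamlines (capture regime)
```

Small-St particles track the flow (small invariant drift), while large-St particles cross streamlines, which is why capture efficiency is sensitive to St.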

  11. A comparison of experiment and theory for sound propagation in variable area ducts

    NASA Technical Reports Server (NTRS)

    Nayfeh, A. H.; Kaiser, J. E.; Marshall, R. L.; Hurst, C. J.

    1980-01-01

    An experimental and analytical program has been carried out to evaluate sound suppression techniques in ducts that produce refraction effects due to axial velocity gradients. The analytical program employs a computer code based on the method of multiple scales to calculate the influence of axial variations due to slow changes in the cross-sectional area as well as transverse gradients due to the wall boundary layers. Detailed comparisons between the analytical predictions and the experimental measurements have been made. The circumferential variations of pressure amplitudes and phases at several axial positions have been examined in straight and variable area ducts, with hard walls and lined sections, and with and without a mean flow. Reasonable agreement between the theoretical and experimental results has been found.

  12. Least Square Regression Method for Estimating Gas Concentration in an Electronic Nose System

    PubMed Central

    Khalaf, Walaa; Pace, Calogero; Gaudioso, Manlio

    2009-01-01

    We describe an Electronic Nose (ENose) system which is able to identify the type of analyte and to estimate its concentration. The system consists of seven sensors, five of them being gas sensors (supplied with different heater voltage values), the remainder being a temperature and a humidity sensor, respectively. To identify a new analyte sample and then to estimate its concentration, we use both some machine learning techniques and the least square regression principle. In fact, we apply two different training models; the first one is based on the Support Vector Machine (SVM) approach and is aimed at teaching the system how to discriminate among different gases, while the second one uses the least squares regression approach to predict the concentration of each type of analyte. PMID:22573980
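The concentration-estimation stage of this pipeline — least-squares regression from multi-sensor responses — can be sketched in a few lines. The sensor responses and coefficients below are synthetic assumptions, not the paper's ENose data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical training data: 5 gas-sensor responses vs. known concentrations
true_w = np.array([0.8, 1.5, -0.3, 0.6, 0.2])
R = rng.uniform(0.0, 1.0, size=(40, 5))               # sensor response matrix
conc = R @ true_w + rng.normal(scale=0.01, size=40)   # known concentrations

# Least-squares calibration (the regression stage; the SVM stage that first
# identifies the gas type is omitted from this sketch)
w_fit, *_ = np.linalg.lstsq(R, conc, rcond=None)

new_response = np.array([0.5, 0.4, 0.1, 0.9, 0.3])    # unseen sample
estimated_conc = new_response @ w_fit
```

Once the SVM has identified the analyte, a per-analyte coefficient vector like `w_fit` maps any new response vector to a concentration estimate.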

  13. Using Text Analytics of AJPE Article Titles to Reveal Trends In Pharmacy Education Over the Past Two Decades.

    PubMed

    Pedrami, Farnoush; Asenso, Pamela; Devi, Sachin

    2016-08-25

    Objective. To identify trends in pharmacy education during the last two decades using text mining. Methods. Articles published in the American Journal of Pharmaceutical Education (AJPE) in the past two decades were compiled in a database. Custom text analytics software was written in the Visual Basic programming language in the Visual Basic for Applications (VBA) editor of Excel 2007. The frequency of words appearing in article titles was calculated using the custom VBA software, and the data were analyzed to identify emerging trends in pharmacy education. Results. Three educational trends emerged: active learning, interprofessional education, and cultural competency. Conclusion. The text analytics program successfully identified trends in article topics and may be a useful compass to predict the future course of pharmacy education.
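The core of this title-frequency analysis is a word count over a stopword-filtered corpus, which is a few lines in any language. A minimal sketch with a handful of invented titles standing in for two decades of AJPE titles:

```python
from collections import Counter
import re

# Hypothetical article titles (illustrative, not drawn from AJPE)
titles = [
    "Active Learning Strategies in a Pharmacotherapy Course",
    "Interprofessional Education in Pharmacy Curricula",
    "Assessing Cultural Competency in Pharmacy Students",
    "Active Learning in Large Classrooms",
]
stopwords = {"a", "in", "of", "the", "and"}
words = Counter(
    w for title in titles
    for w in re.findall(r"[a-z]+", title.lower())
    if w not in stopwords
)
top = words.most_common(3)   # the most frequent title words
```

Ranking `words.most_common()` over each publication year, rather than the whole corpus at once, is what surfaces trends such as the rise of "interprofessional".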

  14. Flight and analytical investigations of a structural mode excitation system on the YF-12A airplane

    NASA Technical Reports Server (NTRS)

    Goforth, E. A.; Murphy, R. C.; Beranek, J. A.; Davis, R. A.

    1987-01-01

    A structural excitation system, using an oscillating canard vane to generate force, was mounted on the forebody of the YF-12A airplane. The canard vane was used to excite the airframe structural modes during flight in the subsonic, transonic, and supersonic regimes. Structural modal responses generated by the canard vane forces were measured at the flight test conditions by airframe-mounted accelerometers. Correlations of analytical and experimental aeroelastic results were made. Doublet lattice, steady state double lattice with uniform lag, Mach box, and piston theory all produced acceptable analytical aerodynamic results within the restrictions that apply to each. In general, the aerodynamic theory methods, carefully applied, were found to predict the dynamic behavior of the YF-12A aircraft adequately.

  15. A ricin forensic profiling approach based on a complex set of biomarkers.

    PubMed

    Fredriksson, Sten-Åke; Wunschel, David S; Lindström, Susanne Wiklund; Nilsson, Calle; Wahl, Karen; Åstot, Crister

    2018-08-15

    A forensic method for the retrospective determination of preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids, and seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1-PM4), ranging from simple disintegration of the castor beans to multi-step preparations including different protein precipitation methods. Comprehensive analytical data were collected using a range of analytical methods, and robust orthogonal partial least squares-discriminant analysis (OPLS-DA) models were constructed from the calibration set. Using a decision tree and two OPLS-DA models, the sample preparation methods of the test set samples were determined. The statistics of the two models were good, and a 100% rate of correct predictions on the test set was achieved. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. In vitro enantioselective human liver microsomal metabolism and prediction of in vivo pharmacokinetic parameters of tetrabenazine by DLLME-CE.

    PubMed

    Bocato, Mariana Zuccherato; de Lima Moreira, Fernanda; de Albuquerque, Nayara Cristina Perez; de Gaitani, Cristiane Masetto; de Oliveira, Anderson Rodrigo Moraes

    2016-09-05

    A new capillary electrophoresis method for the enantioselective analysis of cis- and trans-dihydrotetrabenazine (diHTBZ) after in vitro metabolism by human liver microsomes (HLMs) was developed. The chiral electrophoretic separations were performed using tris-phosphate buffer (pH 2.5) containing 1% (w/v) carboxymethyl-β-CD as background electrolyte, with an applied voltage of +15kV and the capillary temperature kept at 15°C. Dispersive liquid-liquid microextraction was employed to extract the analytes from HLMs, with dichloromethane as extraction solvent (75μL) and acetone as disperser solvent (150μL). The method was validated according to official guidelines and was shown to be linear over the concentration range of 0.29-19.57μmolL(-1) (r=0.9955) for each metabolite enantiomer. Within- and between-day precision and accuracy, evaluated by relative standard deviation and relative error, were lower than 15% for all enantiomers. The stability assay showed that the analytes remained stable under handling, storage and metabolism conditions. After method validation, an enantioselective in vitro metabolism and in vivo pharmacokinetic prediction was carried out. This study showed a stereoselective metabolism, and the observed kinetic profile indicated substrate inhibition behavior. DiHTBZ enantiomers were catalyzed mainly by CYP2C19, and the predicted clearance suggests that liver metabolism is the main route for TBZ elimination, which supports the literature data. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. How health leaders can benefit from predictive analytics.

    PubMed

    Giga, Aliyah

    2017-11-01

    Predictive analytics can support a better integrated health system providing continuous, coordinated, and comprehensive person-centred care to those who could benefit most. In addition to dollars saved, using a predictive model in healthcare can generate opportunities for meaningful improvements in efficiency, productivity, costs, and better population health with targeted interventions toward patients at risk.

  18. Rapid determination of thermodynamic parameters from one-dimensional programmed-temperature gas chromatography for use in retention time prediction in comprehensive multidimensional chromatography.

    PubMed

    McGinitie, Teague M; Ebrahimi-Najafabadi, Heshmatollah; Harynuk, James J

    2014-01-17

    A new method for estimating the thermodynamic parameters ΔH(T0), ΔS(T0), and ΔCP for use in thermodynamic modeling of GC×GC separations has been developed. The method is an alternative to the traditional isothermal separations required to fit a three-parameter thermodynamic model to retention data. Herein, a non-linear optimization technique is used to estimate the parameters from a series of temperature-programmed separations using the Nelder-Mead simplex algorithm. With this method, the time required to obtain estimates of thermodynamic parameters for a series of analytes is significantly reduced. The new method allows for precise predictions of retention time, with an average error of only 0.2 s for 1D separations. Predictions for GC×GC separations were also in agreement with experimental measurements, having an average relative error of 0.37% for (1)tr and 2.1% for (2)tr. Copyright © 2013 Elsevier B.V. All rights reserved.
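The fitting scheme described here — simplex optimization of a three-parameter retention model against temperature-programmed runs — can be sketched end to end. The retention model parameterisation (ln k = a + b/T + c·ln T), the ramp rates, and all numerical values below are illustrative assumptions, not the paper's:

```python
import numpy as np

def retention_time(params, ramp, T0=400.0, t_M=10.0, dt=0.01, t_max=200.0):
    """Retention time (s) under a linear ramp T = T0 + ramp*t, found by
    accumulating the migrated fraction dt/(1+k(T)) until it reaches the
    hold-up time t_M. Illustrative model ln k = a + b/T + c*ln(T)."""
    a, b, c = params
    t = np.arange(dt, t_max, dt)
    T = T0 + ramp * t
    k = np.exp(a + b / T + c * np.log(T))
    frac = np.cumsum(dt / (1.0 + k))
    idx = np.searchsorted(frac, t_M)
    return t[min(idx, len(t) - 1)]

def nelder_mead(f, x0, iters=200, step=0.05):
    """Minimal Nelder-Mead simplex minimiser (reflect/expand/contract/shrink)."""
    x0 = np.asarray(x0, dtype=float)
    simplex = [x0]
    for i in range(len(x0)):
        p = x0.copy()
        p[i] += step * (abs(p[i]) if p[i] != 0 else 1.0)
        simplex.append(p)
    fv = [f(p) for p in simplex]
    for _ in range(iters):
        order = np.argsort(fv)
        simplex = [simplex[i] for i in order]
        fv = [fv[i] for i in order]
        cen = np.mean(simplex[:-1], axis=0)
        xr = cen + (cen - simplex[-1]); fr = f(xr)         # reflection
        if fr < fv[0]:
            xe = cen + 2.0 * (cen - simplex[-1]); fe = f(xe)   # expansion
            simplex[-1], fv[-1] = (xe, fe) if fe < fr else (xr, fr)
        elif fr < fv[-2]:
            simplex[-1], fv[-1] = xr, fr
        else:
            xc = cen + 0.5 * (simplex[-1] - cen); fc = f(xc)   # contraction
            if fc < fv[-1]:
                simplex[-1], fv[-1] = xc, fc
            else:                                              # shrink toward best
                simplex = [simplex[0]] + [simplex[0] + 0.5 * (p - simplex[0])
                                          for p in simplex[1:]]
                fv = [fv[0]] + [f(p) for p in simplex[1:]]
    best = int(np.argmin(fv))
    return simplex[best], fv[best]

true_params = np.array([-2.5, 1200.0, 0.1])
ramps = [0.2, 0.5, 1.0]                        # K/s calibration ramp rates
observed = [retention_time(true_params, r) for r in ramps]

def objective(p):
    return sum((retention_time(p, r) - tr) ** 2 for r, tr in zip(ramps, observed))

x0 = true_params * 1.05                        # rough initial guess
fitted, resid = nelder_mead(objective, x0)
held_out_error = abs(retention_time(fitted, 0.35)
                     - retention_time(true_params, 0.35))
```

Three programmed runs constrain the three parameters, after which retention times at unseen ramp rates can be predicted, mirroring the workflow in the abstract.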

  19. Social Web mining and exploitation for serious applications: Technosocial Predictive Analytics and related technologies for public health, environmental and national security surveillance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamel Boulos, Maged; Sanfilippo, Antonio P.; Corley, Courtney D.

    2010-03-17

    This paper explores techno-social predictive analytics (TPA) and related methods for Web "data mining", in which users' posts and queries are garnered from Social Web ("Web 2.0") tools such as blogs, microblogging and social networking sites to form coherent representations of real-time health events. The paper includes a brief introduction to commonly used Social Web tools such as mashups and aggregators, and maps their exponential growth as an open architecture of participation for the masses and an emerging way to gain insight into the collective health status of whole populations. Several health-related tool examples are described and demonstrated as practical means through which health professionals might create clear, location-specific pictures of epidemiological data such as flu outbreaks.

  20. Acoustic impedance of micro perforated membranes: Velocity continuity condition at the perforation boundary.

    PubMed

    Li, Chenxi; Cazzolato, Ben; Zander, Anthony

    2016-01-01

    The classic analytical model for the sound absorption of micro perforated materials is well developed and is based on a boundary condition where the velocity of the material is assumed to be zero, which is accurate when the material vibration is negligible. This paper develops an analytical model for finite-sized circular micro perforated membranes (MPMs) by applying a boundary condition such that the velocity of air particles on the hole wall boundary is equal to the membrane vibration velocity (a zero-slip condition). The acoustic impedance of the perforation, which varies with its position, is investigated. A prediction method for the overall impedance of the holes and the combined impedance of the MPM is also provided. The experimental results for four different MPM configurations are used to validate the model and good agreement between the experimental and predicted results is achieved.

  1. Analytical modeling of eddy-current losses caused by pulse-width-modulation switching in permanent-magnet brushless direct-current motors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, F.; Nehl, T.W.

    1998-09-01

    Because of its high efficiency and power density, the PM brushless dc motor is a strong candidate for electric and hybrid vehicle propulsion systems. An analytical approach is developed to predict the eddy-current losses caused by the inverter's high-frequency pulse-width-modulation (PWM) switching in a permanent magnet brushless dc motor. The model uses polar coordinates to take curvature effects into account, and is also capable of including the space-harmonic effect of the stator magnetic field and the effect of the stator laminations on the losses. The model was applied to an existing motor design and was verified with the finite element method. Good agreement was achieved between the two approaches. Hence, the model is expected to be very helpful in predicting PWM switching losses in permanent magnet machine design.
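Why PWM switching matters for eddy losses can be seen from the classic thin-lamination loss formula, in which loss density scales with frequency squared. This sketch uses that textbook formula summed over a hypothetical harmonic spectrum; it is a linear, skin-effect-free approximation, not the paper's polar-coordinate model:

```python
import numpy as np

def eddy_loss_density(harmonics, sigma, d):
    """Thin-lamination eddy-current loss density, P = sigma*d^2*w^2*B^2/24 per
    sinusoidal flux component, summed over a harmonic spectrum (W/m^3)."""
    return sum(sigma * d ** 2 * (2.0 * np.pi * f) ** 2 * B ** 2 / 24.0
               for f, B in harmonics)

sigma = 2.0e6     # S/m, illustrative lamination conductivity
d = 0.35e-3       # m, lamination thickness
fundamental = [(100.0, 1.0)]                    # 100 Hz, 1 T
sidebands = [(9900.0, 0.02), (10100.0, 0.02)]   # hypothetical PWM sidebands
loss_fund = eddy_loss_density(fundamental, sigma, d)
loss_pwm = eddy_loss_density(sidebands, sigma, d)
```

Even 0.02 T sidebands near 10 kHz dissipate more than the 1 T fundamental because the loss grows with the square of frequency, which is why switching harmonics dominate this loss mechanism.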

  2. Investigation of chaos and its control in a Duffing-type nano beam model

    NASA Astrophysics Data System (ADS)

    Jha, Abhishek Kumar; Dasgupta, Sovan Sundar

    2018-04-01

    The prediction of chaos in a harmonically excited nano beam is investigated. Using the Galerkin method, a nonlinear lumped model of a clamped-clamped nano beam with cubic stiffness is obtained; this is a Duffing system with hardening nonlinearity. Based on the energy function and the phase portrait of the system, the resonator dynamics are categorized into four regimes. Using the Melnikov function, an analytical criterion for homoclinic intersection is written as an inequality in terms of the system parameters. A numerical study including the largest Lyapunov exponent, Poincaré diagrams and phase portraits confirms the analytical prediction of chaos and the effect of the forcing amplitude. Subsequently, a linear velocity feedback controller is introduced into the system, which suppresses the chaotic motion more rapidly at larger values of the gain parameter.
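The velocity-feedback control idea is easy to demonstrate on a generic hardening Duffing oscillator: the feedback term adds to the effective damping, collapsing the forced response. All parameter values below are illustrative, not the nano-beam's:

```python
import math

def duffing_amplitude(gain, F=0.35, steps=100000, dt=0.002):
    """RK4 integration of a hardening Duffing oscillator with linear velocity
    feedback: x'' + (delta + gain)*x' + x + x**3 = F*cos(omega*t).
    Returns max |x| over the second half of the run (post-transient)."""
    delta, omega = 0.1, 1.2

    def acc(t, x, v):
        return F * math.cos(omega * t) - (delta + gain) * v - x - x ** 3

    x, v, amp = 0.1, 0.0, 0.0
    for i in range(steps):
        t = i * dt
        k1x, k1v = v, acc(t, x, v)
        k2x, k2v = v + dt/2*k1v, acc(t + dt/2, x + dt/2*k1x, v + dt/2*k1v)
        k3x, k3v = v + dt/2*k2v, acc(t + dt/2, x + dt/2*k2x, v + dt/2*k2v)
        k4x, k4v = v + dt*k3v, acc(t + dt, x + dt*k3x, v + dt*k3v)
        x += dt / 6 * (k1x + 2*k2x + 2*k3x + k4x)
        v += dt / 6 * (k1v + 2*k2v + 2*k3v + k4v)
        if i > steps // 2:
            amp = max(amp, abs(x))
    return amp

amp_uncontrolled = duffing_amplitude(gain=0.0)
amp_controlled = duffing_amplitude(gain=2.0)   # feedback u = -gain*x' adds damping
```

Larger gains drive the response amplitude down faster, consistent with the abstract's observation that the controller works more quickly at higher gain.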

  3. The structure of separated flow regions occurring near the leading edge of airfoils - including transition

    NASA Technical Reports Server (NTRS)

    1989-01-01

    Laser Doppler velocimeter data, static pressure data, and smoke flow visualization data were obtained and analyzed to correlate with separation bubble data. The study focused on the Eppler 387 airfoil at a chord Reynolds number of 100,000 and an angle of attack of 2 deg. Additional data were obtained from the NACA 663-018 airfoil at a chord Reynolds number of 160,000 and an angle of attack of 12 deg. The structure and behavior of the transitional separation bubble were documented, along with the redeveloping boundary layer after reattachment, over an airfoil at low Reynolds numbers. These complex flow phenomena were examined so that analytic methods for predicting their formation and development can be improved. Such analytic techniques have applications in the design and performance prediction of airfoils operating in the low Reynolds number flight regime.

  4. An analytically-based method for predicting the noise generated by the interaction between turbulence and a serrated leading edge

    NASA Astrophysics Data System (ADS)

    Mathews, J. R.; Peake, N.

    2018-05-01

    This paper considers the interaction of turbulence with a serrated leading edge. We investigate the noise produced by an aerofoil moving through a turbulent perturbation to uniform flow by considering the scattered pressure from the leading edge. We model the aerofoil as an infinite half plane with a leading edge serration, and develop an analytical model using a Green's function based upon the work of Howe. This allows us to consider both deterministic eddies and synthetic turbulence interacting with the leading edge. We show that it is possible to reduce the noise by using a serrated leading edge compared with a straight edge, but the optimal noise-reducing choice of serration is hard to predict due to the complex interaction. We also consider the effect of angle of attack, and find that in general the serrations are less effective at higher angles of attack.

  5. Evaluation of plasma proteomic data for Alzheimer disease state classification and for the prediction of progression from mild cognitive impairment to Alzheimer disease.

    PubMed

    Llano, Daniel A; Devanarayan, Viswanath; Simon, Adam J

    2013-01-01

    Previous studies that have examined the potential of plasma markers to serve as biomarkers for Alzheimer disease (AD) have studied single analytes, focused on the amyloid-β and τ isoforms, and have failed to yield conclusive results. In this study, we performed a multivariate analysis of 146 plasma analytes (the Human DiscoveryMAP v 1.0 from Rules-Based Medicine) in 527 subjects with AD, mild cognitive impairment (MCI), or cognitively normal elderly subjects from the Alzheimer's Disease Neuroimaging Initiative database. We identified 4 different proteomic signatures, each using 5 to 14 analytes, that differentiate AD from control patients with sensitivity and specificity ranging from 74% to 85%. Five analytes were common to all 4 signatures: apolipoprotein A-II, apolipoprotein E, serum glutamic oxaloacetic transaminase, α-1-microglobulin, and brain natriuretic peptide. None of the signatures adequately predicted progression from MCI to AD over 12- and 24-month periods. A new panel of analytes, optimized to predict MCI to AD conversion, was able to provide 55% to 60% predictive accuracy. These data suggest that a simple panel of plasma analytes may provide an adjunctive tool to differentiate AD from controls and may provide mechanistic insights into the etiology of AD, but cannot adequately predict MCI to AD conversion.
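A multivariate signature of the kind described — a handful of analytes combined into one classifier, scored by sensitivity and specificity — can be sketched with a simple logistic model. The synthetic 5-analyte panel below is an assumption for illustration, not the published signature:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 400
# Hypothetical 5-analyte plasma panel; "cases" are shifted in two analytes
X = rng.normal(size=(n, 5))
y = rng.integers(0, 2, size=n)
X[y == 1, 0] += 1.5
X[y == 1, 3] -= 1.5

# Logistic regression fitted by gradient descent -- a simple stand-in for the
# study's multivariate signatures
w, b = np.zeros(5), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * X.T @ (p - y) / n
    b -= 0.5 * np.mean(p - y)

pred = (X @ w + b) >= 0.0            # decision threshold at probability 0.5
sensitivity = np.mean(pred[y == 1])  # cases correctly flagged
specificity = np.mean(~pred[y == 0]) # controls correctly cleared
```

Sweeping the decision threshold instead of fixing it at 0.5 traces out the sensitivity/specificity trade-off reported for each signature.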

  6. A new method for determining acoustic-liner admittance in a rectangular duct with grazing flow from experimental data

    NASA Technical Reports Server (NTRS)

    Watson, W. R.

    1984-01-01

    A method is developed for determining acoustic liner admittance in a rectangular duct with grazing flow. The axial propagation constant, cross-mode order, and mean flow profile are measured. These measured data are then input into an analytical program that determines the unknown admittance value. The program is based upon a finite element discretization of the acoustic field and a recasting of the problem as a linear eigenvalue problem in the admittance. Gaussian elimination is employed to solve this eigenvalue problem. The method is extendable to grazing flows with boundary layers in both transverse directions of an impedance tube (or duct). Predicted admittance values are compared both with exact values that can be obtained for uniform mean flow profiles and with those from a Runge-Kutta integration technique for cases involving a one-dimensional boundary layer.

  7. Reliability analysis of composite structures

    NASA Technical Reports Server (NTRS)

    Kan, Han-Pin

    1992-01-01

    A probabilistic static stress analysis methodology has been developed to estimate the reliability of a composite structure. Closed form stress analysis methods are the primary analytical tools used in this methodology. These structural mechanics methods are used to identify independent variables whose variations significantly affect the performance of the structure. Once these variables are identified, scatter in their values is evaluated and statistically characterized. The scatter in applied loads and the structural parameters are then fitted to appropriate probabilistic distribution functions. Numerical integration techniques are applied to compute the structural reliability. The predicted reliability accounts for scatter due to variability in material strength, applied load, fabrication and assembly processes. The influence of structural geometry and mode of failure are also considerations in the evaluation. Example problems are given to illustrate various levels of analytical complexity.
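The core computation in this kind of probabilistic methodology — the probability that scatter in load exceeds scatter in strength — can be sketched with Monte Carlo sampling and checked against the closed form for two normal distributions. The distribution parameters are illustrative assumptions:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(42)
N = 400000
# Hypothetical scatter: composite strength and applied stress, both normal
mu_s, sd_s = 600.0, 40.0    # MPa, strength mean / std dev
mu_l, sd_l = 450.0, 35.0    # MPa, load mean / std dev
strength = rng.normal(mu_s, sd_s, N)
load = rng.normal(mu_l, sd_l, N)
pf_mc = np.mean(strength < load)          # Monte Carlo failure probability
reliability = 1.0 - pf_mc

# Closed form for two normals: reliability index beta and Phi(-beta)
beta = (mu_s - mu_l) / sqrt(sd_s ** 2 + sd_l ** 2)
pf_exact = 0.5 * (1.0 - erf(beta / sqrt(2.0)))
```

For non-normal scatter (e.g. Weibull strength), the closed form disappears but the sampling estimate works unchanged, which is why numerical integration or simulation is used in practice.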

  8. Meteorological Measurements from Satellite Platforms. [stability and control of flexible stochastic satellites

    NASA Technical Reports Server (NTRS)

    Suomi, V. E.

    1975-01-01

    The stability of stochastic satellites and the stability and control of flexible satellites were investigated. The effects of random environmental torques and noise in the moments of inertia of spinning and three-axis-stabilized satellites were first compared analytically by four methods and by analog simulations. Among the analytical methods, it was shown that the Fokker-Planck formulation yields predictions which coincide most closely with the simulation results. It was then shown that, under the assumption that the environmental and control torques experienced by the satellite are random, the required stability criterion of a satellite is quite different from that obtained by a deterministic approach. Finally, it was demonstrated that, by monitoring the deformations of the flexible elements of a satellite, the effectiveness of the satellite control system can be increased considerably.

  9. Applying Sequential Analytic Methods to Self-Reported Information to Anticipate Care Needs.

    PubMed

    Bayliss, Elizabeth A; Powers, J David; Ellis, Jennifer L; Barrow, Jennifer C; Strobel, MaryJo; Beck, Arne

    2016-01-01

    Identifying care needs for newly enrolled or newly insured individuals is important under the Affordable Care Act. Systematically collected patient-reported information can potentially identify subgroups with specific care needs prior to service use. We conducted a retrospective cohort investigation of 6,047 individuals who completed a 10-question needs assessment upon initial enrollment in Kaiser Permanente Colorado (KPCO), a not-for-profit integrated delivery system, through the Colorado State Individual Exchange. We used responses from the Brief Health Questionnaire (BHQ) to develop a predictive model for the probability of incurring costs in the top 25 percent, then applied cluster analytic techniques to identify distinct high-cost subpopulations. Per-member, per-month cost was measured from 6 to 12 months following BHQ response. BHQ responses significantly predictive of high-cost care included self-reported health status, functional limitations, medication use, presence of 0-4 chronic conditions, self-reported emergency department (ED) use during the prior year, and lack of prior insurance. Age, gender, and deductible-based insurance product were also predictive. The largest possible range of predicted probabilities of being in the top 25 percent of cost was 3.5 percent to 96.4 percent. Within the top cost quartile, examples of potentially actionable clusters of patients included those with high morbidity, prior utilization, depression risk, and financial constraints; previously uninsured individuals with high morbidity and few financial constraints; and relatively healthy, previously insured individuals with medication needs. Applying sequential predictive modeling and cluster analytic techniques to patient-reported information can identify subgroups of individuals within heterogeneous populations who may benefit from specific interventions to optimize initial care delivery.
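
    A minimal sketch of the two-stage analytic sequence, a predictive model followed by clustering of the predicted high-cost group, is shown below on synthetic data. The features and thresholds are invented stand-ins for the BHQ items, not the study's actual variables or fitted model.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 4))   # stand-ins for e.g. health status, meds, ED use
# Synthetic flag for "top cost quartile", driven by the first two features
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n)) > 1.1

# Stage 1: predictive model for high-cost membership
clf = LogisticRegression().fit(X, y)
high_risk = X[clf.predict_proba(X)[:, 1] > 0.5]

# Stage 2: cluster the predicted high-cost subgroup into candidate subpopulations
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(high_risk)
print(high_risk.shape[0], sorted(set(km.labels_.tolist())))
```

In practice the cluster centroids would then be inspected (as in the abstract's "high morbidity, prior utilization" groupings) to decide which interventions fit which subgroup.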

  10. Space vehicle acoustics prediction improvement for payloads. [space shuttle

    NASA Technical Reports Server (NTRS)

    Dandridge, R. E.

    1979-01-01

    The modal analysis method was extensively modified for the prediction of space vehicle noise reduction in the shuttle payload enclosure, and this program was adapted to the IBM 360 computer. The predicted noise reduction levels for two test cases were compared with experimental results to determine the validity of the analytical model for predicting space vehicle payload noise environments in the 10 Hz one-third octave band regime. The prediction approach for the two test cases generally gave reasonable magnitudes and trends when compared with the measured noise reduction spectra. The discrepancies in the predictions could be corrected primarily by improved modeling of the vehicle structural walls and of the enclosed acoustic space to obtain a more accurate assessment of normal modes. Techniques for improving and expanding the noise prediction for a payload environment are also suggested.
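
    The normal modes whose accurate assessment the abstract calls for can be estimated, for an idealized rigid-walled rectangular enclosure, from the standard mode formula. The dimensions below are illustrative assumptions, not the shuttle payload bay's.

```python
import math

def cavity_mode_hz(nx, ny, nz, Lx, Ly, Lz, c=343.0):
    """Natural frequency (Hz) of mode (nx, ny, nz) of a rigid rectangular cavity."""
    return (c / 2.0) * math.sqrt((nx / Lx) ** 2 + (ny / Ly) ** 2 + (nz / Lz) ** 2)

# Fundamental axial mode of an assumed 5 m long enclosure: c/(2*L) = 34.3 Hz
print(round(cavity_mode_hz(1, 0, 0, 5.0, 2.0, 2.0), 1))  # 34.3
```

Modes of the real structure-acoustic system shift from these rigid-wall values, which is exactly why the abstract attributes the prediction discrepancies to wall and cavity modeling.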

  11. Centrifugal ultrafiltration of human serum for improving immunoglobulin A quantification using attenuated total reflectance infrared spectroscopy.

    PubMed

    Elsohaby, Ibrahim; McClure, J Trenton; Riley, Christopher B; Bryanton, Janet; Bigsby, Kathryn; Shaw, R Anthony

    2018-02-20

    Attenuated total reflectance infrared (ATR-IR) spectroscopy is a simple, rapid and cost-effective method for the analysis of serum. However, the complex nature of serum remains a limiting factor to the reliability of this method. We investigated the benefits of coupling centrifugal ultrafiltration with ATR-IR spectroscopy for quantification of human serum IgA concentration. Human serum samples (n = 196) were analyzed for IgA using an immunoturbidimetric assay. ATR-IR spectra were acquired for whole serum samples and for the retentate (residue) reconstituted with saline following 300 kDa centrifugal ultrafiltration. IR-based analytical methods were developed for each of the two spectroscopic datasets, and the accuracies of the two methods were compared. The analytical methods were based upon partial least squares regression (PLSR) calibration models: one with 5 PLS factors (for whole serum) and one with 9 PLS factors (for the reconstituted retentate). Comparison of the two sets of IR-based analytical results to reference IgA values revealed improvements in the Pearson correlation coefficient (from 0.66 to 0.76) and in the root mean squared error of prediction of IR-based IgA concentrations (from 102 to 79 mg/dL) for the ultrafiltration retentate-based method as compared to the method built upon whole serum spectra. Depleting low-molecular-weight proteins from human serum using a 300 kDa centrifugal filter thus enhances the accuracy of IgA quantification by ATR-IR spectroscopy. Further evaluation and optimization of this general approach may ultimately lead to routine analysis of a range of high-molecular-weight analytical targets that are otherwise unsuitable for IR-based analysis. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. An Approximate Model for the Performance and Acoustic Predictions of Counterrotating Propeller Configurations. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Denner, Brett William

    1989-01-01

    An approximate method was developed to analyze and predict the acoustics of a counterrotating propeller configuration. The method employs the analytical techniques of Lock and Theodorsen as described by Davidson to predict the steady performance of a counterrotating configuration. Then, a modification of the method of Lesieutre is used to predict the unsteady forces on the blades. Finally, the steady and unsteady loads are used in the numerical method of Succi to predict the unsteady acoustics of the propeller. The numerical results are compared with experimental acoustic measurements of a counterrotating propeller configuration by Gazzaniga operating under several combinations of advance ratio, blade pitch, and number of blades. In addition, a constant-speed commuter-class propeller configuration was designed with the Davidson method and the acoustics analyzed at three advance ratios. Noise levels and frequency spectra were calculated at a number of locations around the configuration. The directivity patterns of the harmonics in both the horizontal and vertical planes were examined, with the conclusion that the noise levels of the even harmonics are relatively independent of direction whereas the noise levels of the odd harmonics are extremely dependent on azimuthal direction in the horizontal plane. The equations of Succi are examined to explain this behavior.

  13. Micromechanics Analysis Code (MAC) User Guide: Version 1.0

    NASA Technical Reports Server (NTRS)

    Wilt, T. E.; Arnold, S. M.

    1994-01-01

    The ability to accurately predict the thermomechanical deformation response of advanced composite materials continues to play an important role in the development of these strategic materials. Analytical models that predict the effective behavior of composites are used not only by engineers performing structural analysis of large-scale composite components but also by material scientists in developing new material systems. For an analytical model to fulfill these two distinct functions it must be based on a micromechanics approach which utilizes physically based deformation and life constitutive models and allows one to generate the average (macro) response of a composite material given the properties of the individual constituents and their geometric arrangement. Here the user guide is described for the recently developed, computationally efficient and comprehensive micromechanics analysis code, MAC, whose predictive capability rests entirely upon the fully analytical generalized method of cells (GMC) micromechanics model. MAC is a versatile form of research software that 'drives' the doubly or triply periodic micromechanics constitutive models based upon GMC. MAC enhances the basic capabilities of GMC by providing a modular framework wherein (1) various thermal, mechanical (stress or strain control), and thermomechanical load histories can be imposed; (2) different integration algorithms may be selected; (3) a variety of constituent constitutive models may be utilized and/or implemented; and (4) a variety of fiber architectures may be easily accessed through their corresponding representative volume elements.
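
    GMC itself resolves the constituent geometry in detail; as a much simpler point of comparison, the elementary constituent-averaging bounds on a composite's effective axial modulus can be computed directly from constituent properties and volume fraction. The fiber/matrix moduli and volume fraction below are illustrative values, not MAC inputs.

```python
def voigt_modulus(Ef, Em, Vf):
    """Upper-bound (equal-strain, rule-of-mixtures) estimate: E = Vf*Ef + (1-Vf)*Em."""
    return Vf * Ef + (1.0 - Vf) * Em

def reuss_modulus(Ef, Em, Vf):
    """Lower-bound (equal-stress) estimate: 1/E = Vf/Ef + (1-Vf)/Em."""
    return 1.0 / (Vf / Ef + (1.0 - Vf) / Em)

# Illustrative graphite/epoxy-like constituents (GPa), 60% fiber volume fraction
Ef, Em, Vf = 230.0, 3.5, 0.6
print(round(voigt_modulus(Ef, Em, Vf), 1), round(reuss_modulus(Ef, Em, Vf), 2))
```

Any micromechanics prediction of the effective axial modulus, including GMC's, should fall between these two bounds.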

  14. Micromechanics Analysis Code (MAC). User Guide: Version 2.0

    NASA Technical Reports Server (NTRS)

    Wilt, T. E.; Arnold, S. M.

    1996-01-01

    The ability to accurately predict the thermomechanical deformation response of advanced composite materials continues to play an important role in the development of these strategic materials. Analytical models that predict the effective behavior of composites are used not only by engineers performing structural analysis of large-scale composite components but also by material scientists in developing new material systems. For an analytical model to fulfill these two distinct functions it must be based on a micromechanics approach which utilizes physically based deformation and life constitutive models and allows one to generate the average (macro) response of a composite material given the properties of the individual constituents and their geometric arrangement. Here the user guide is described for the recently developed, computationally efficient and comprehensive micromechanics analysis code (MAC), whose predictive capability rests entirely upon the fully analytical generalized method of cells (GMC) micromechanics model. MAC is a versatile form of research software that 'drives' the doubly or triply periodic micromechanics constitutive models based upon GMC. MAC enhances the basic capabilities of GMC by providing a modular framework wherein (1) various thermal, mechanical (stress or strain control) and thermomechanical load histories can be imposed, (2) different integration algorithms may be selected, (3) a variety of constituent constitutive models may be utilized and/or implemented, and (4) a variety of fiber and laminate architectures may be easily accessed through their corresponding representative volume elements.

  15. An analytical and experimental study of crack extension in center-notched composites

    NASA Technical Reports Server (NTRS)

    Beuth, Jack L., Jr.; Herakovich, Carl T.

    1987-01-01

    The normal stress ratio theory for crack extension in anisotropic materials is studied analytically and experimentally. The theory is applied within a microscopic-level analysis of a single center notch of arbitrary orientation in a unidirectional composite material. The bulk of the analytical work of this study applies an elasticity solution for an infinite plate with a center crack to obtain critical stress and crack growth direction predictions. An elasticity solution for an infinite plate with a center elliptical flaw is also used to obtain qualitative predictions of the location of crack initiation on the border of a rounded notch tip. The analytical portion of the study includes the formulation of a new crack growth theory that includes local shear stress. Normal stress ratio theory predictions are obtained for notched unidirectional tensile coupons and unidirectional Iosipescu shear specimens. These predictions are subsequently compared to experimental results.

  16. Experimental Evaluation of Tuned Chamber Core Panels for Payload Fairing Noise Control

    NASA Technical Reports Server (NTRS)

    Schiller, Noah H.; Allen, Albert R.; Herlan, Jonathan W.; Rosenthal, Bruce N.

    2015-01-01

    Analytical models have been developed to predict the sound absorption and sound transmission loss of tuned chamber core panels. The panels are constructed of two facesheets sandwiching a corrugated core. When ports are introduced through one facesheet, the long chambers within the core can be used as an array of low-frequency acoustic resonators. To evaluate the accuracy of the analytical models, absorption and sound transmission loss tests were performed on flat panels. Measurements show that the acoustic resonators embedded in the panels improve both the absorption and transmission loss of the sandwich structure at frequencies near the natural frequency of the resonators. Analytical predictions for absorption closely match measured data. However, transmission loss predictions miss important features observed in the measurements. This suggests that higher-fidelity analytical or numerical models will be needed to supplement transmission loss predictions in the future.
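
    As a rough sketch of why the chambered core acts as a low-frequency resonator array: a long chamber ported at one end behaves approximately as a quarter-wave resonator. The chamber length below is an assumed example, and the simple formula neglects end corrections and port effects.

```python
def quarter_wave_hz(length_m, c=343.0):
    """Approximate natural frequency (Hz) of a chamber acting as a
    quarter-wave resonator: f = c / (4 * L). End corrections are ignored."""
    return c / (4.0 * length_m)

# An assumed 0.5 m chamber tunes near 171.5 Hz; longer chambers tune lower.
print(round(quarter_wave_hz(0.5), 1))  # 171.5
```

This is why the abstract reports absorption and transmission-loss improvements concentrated "near the natural frequency of the resonators": the chamber length fixes that frequency.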

  17. Analytical solutions of hypersonic type IV shock - shock interactions

    NASA Astrophysics Data System (ADS)

    Frame, Michael John

    An analytical model has been developed to predict the effects of a type IV shock interaction at high Mach numbers. This interaction occurs when an impinging oblique shock wave intersects the most normal portion of a detached bow shock. The flowfield which develops is complicated and contains an embedded jet of supersonic flow, which may be unsteady. The jet impinges on the blunt body surface, causing very high pressure and heating loads. Understanding this type of interaction is vital to the designers of cowl lips and leading edges on air-breathing hypersonic vehicles. This analytical model represents the first known attempt at predicting the geometry of the interaction explicitly, without knowing beforehand the jet dimensions, including the length of the transmitted shock where the jet originates. The model uses a hyperbolic equation for the bow shock and, by matching mass continuity, flow directions and pressure throughout the flowfield, a prediction of the interaction geometry can be derived. The model has been shown to agree well with the flowfield patterns and properties of experiments and CFD, but the prediction for where the peak pressure is located, and its value, can be significantly in error due to a lack of sophistication in the model of the jet fluid stagnation region. Therefore it is recommended that this region of the flowfield be modeled in more detail and that more accurate experimental and CFD measurements be used for validation. However, the analytical model has been shown to be a fast and economical prediction tool, suitable for preliminary design or for understanding the interaction's effects, including the basic physics of the interaction, such as the jet unsteadiness. The model has been used to examine a wide parametric space of possible interactions, including different Mach numbers, impinging shock strengths and locations, and cylinder radii.
It has also been used to examine the interaction on power-law shaped blunt bodies, a possible candidate for hypersonic leading edges. The formation of vortices at the termination shock of the supersonic jet has been modeled using the analytical method. The vortices lead to deflections in the jet terminating flow, and the presence of the cylinder surface seems to cause the vortices to break off the jet, resulting in an oscillation in the jet flow.
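
    A sketch of the kind of gas-dynamic relation such a model matches across the flowfield: the static pressure ratio across a shock computed from the normal component of the upstream Mach number (perfect gas). This is a textbook building block, not the author's full interaction model.

```python
import math

def shock_pressure_ratio(M1, wave_angle_deg=90.0, g=1.4):
    """p2/p1 across an oblique shock with the given wave angle;
    90 degrees recovers the normal-shock case. g is the ratio of
    specific heats (1.4 assumed for air)."""
    Mn = M1 * math.sin(math.radians(wave_angle_deg))  # normal Mach component
    return 1.0 + 2.0 * g / (g + 1.0) * (Mn ** 2 - 1.0)

# Mach 2 normal shock in air: p2/p1 = 4.5
print(round(shock_pressure_ratio(2.0), 3))  # 4.5
```

Chaining relations like this across the transmitted shock, jet shear layers, and jet terminating shock, while matching pressure and flow direction, is the general style of analysis the abstract describes.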

  18. Continuous Metabolic Monitoring Based on Multi-Analyte Biomarkers to Predict Exhaustion

    PubMed Central

    Kastellorizios, Michail; Burgess, Diane J.

    2015-01-01

    This work introduces the concept of multi-analyte biomarkers for continuous metabolic monitoring. The importance of using more than one marker lies in the ability to obtain a holistic understanding of the metabolism. This is showcased for the detection and prediction of exhaustion during intense physical exercise. The findings presented here indicate that when glucose and lactate changes over time are combined into multi-analyte biomarkers, their monitoring trends are more sensitive in the subcutaneous tissue, an implantation-friendly peripheral tissue, compared to the blood. This unexpected observation was confirmed in normal as well as type 1 diabetic rats. This study was designed to be of direct value to continuous monitoring biosensor research, where single analytes are typically monitored. These findings can be implemented in new multi-analyte continuous monitoring technologies for more accurate insulin dosing, as well as for exhaustion prediction studies based on objective data rather than the subject’s perception. PMID:26028477

  19. Continuous metabolic monitoring based on multi-analyte biomarkers to predict exhaustion.

    PubMed

    Kastellorizios, Michail; Burgess, Diane J

    2015-06-01

    This work introduces the concept of multi-analyte biomarkers for continuous metabolic monitoring. The importance of using more than one marker lies in the ability to obtain a holistic understanding of the metabolism. This is showcased for the detection and prediction of exhaustion during intense physical exercise. The findings presented here indicate that when glucose and lactate changes over time are combined into multi-analyte biomarkers, their monitoring trends are more sensitive in the subcutaneous tissue, an implantation-friendly peripheral tissue, compared to the blood. This unexpected observation was confirmed in normal as well as type 1 diabetic rats. This study was designed to be of direct value to continuous monitoring biosensor research, where single analytes are typically monitored. These findings can be implemented in new multi-analyte continuous monitoring technologies for more accurate insulin dosing, as well as for exhaustion prediction studies based on objective data rather than the subject's perception.

  20. Automated Predictive Big Data Analytics Using Ontology Based Semantics.

    PubMed

    Nural, Mustafa V; Cotterell, Michael E; Peng, Hao; Xie, Rui; Ma, Ping; Miller, John A

    2015-10-01

    Predictive analytics in the big data era is taking on an increasingly important role. Issues related to the choice of modeling technique, estimation procedure (or algorithm), and efficient execution can present significant challenges. For example, selection of appropriate and optimal models for big data analytics often requires careful investigation and considerable expertise which might not always be readily available. In this paper, we propose to use semantic technology to assist data analysts and data scientists in selecting appropriate modeling techniques and building specific models, as well as in documenting the rationale for the techniques and models selected. To formally describe the modeling techniques, models and results, we developed the Analytics Ontology that supports inferencing for semi-automated model selection. The SCALATION framework, which currently supports over thirty modeling techniques for predictive big data analytics, is used as a testbed for evaluating the use of semantic technology.

  1. Automated Predictive Big Data Analytics Using Ontology Based Semantics

    PubMed Central

    Nural, Mustafa V.; Cotterell, Michael E.; Peng, Hao; Xie, Rui; Ma, Ping; Miller, John A.

    2017-01-01

    Predictive analytics in the big data era is taking on an increasingly important role. Issues related to the choice of modeling technique, estimation procedure (or algorithm), and efficient execution can present significant challenges. For example, selection of appropriate and optimal models for big data analytics often requires careful investigation and considerable expertise which might not always be readily available. In this paper, we propose to use semantic technology to assist data analysts and data scientists in selecting appropriate modeling techniques and building specific models, as well as in documenting the rationale for the techniques and models selected. To formally describe the modeling techniques, models and results, we developed the Analytics Ontology that supports inferencing for semi-automated model selection. The SCALATION framework, which currently supports over thirty modeling techniques for predictive big data analytics, is used as a testbed for evaluating the use of semantic technology. PMID:29657954

  2. Simultaneous determination of eight flavonoids in propolis using chemometrics-assisted high performance liquid chromatography-diode array detection.

    PubMed

    Sun, Yan-Mei; Wu, Hai-Long; Wang, Jian-Yao; Liu, Zhi; Zhai, Min; Yu, Ru-Qin

    2014-07-01

    A fast analytical strategy, a second-order calibration method based on the alternating trilinear decomposition algorithm (ATLD) applied to high performance liquid chromatography with diode array detection (HPLC-DAD), was established for the simultaneous determination of eight flavonoids (rutin, quercetin, luteolin, kaempferol, isorhamnetin, apigenin, galangin and chrysin) in propolis capsule samples. The chromatographic separation was implemented on a Wondasil™ C18 column (250 mm × 4.6 mm, 5 μm) within 13 min with a binary mobile phase composed of water with 1% formic acid and methanol at a flow rate of 1.0 mL min(-1), after the flavonoids were extracted only with methanol by ultrasound extraction for 15 min. The baseline problem was overcome by treating background drift as additional compositions or factors alongside the target analytes, and ATLD was employed to handle the overlapping peaks from analytes of interest or from analytes and co-eluting matrix compounds. The linearity was good, with correlation coefficients no less than 0.9947; the limits of detection (LODs), within the range of 3.39-33.05 ng mL(-1), were sufficiently low; and the accuracy was confirmed by recoveries ranging from 91.9% to 110.2% and root-mean-square errors of prediction (RMSEPs) less than 1.1 μg/mL. The results indicated that the chromatographic method with the aid of ATLD is efficient, sensitive and cost-effective and can achieve the resolution and accurate quantification of flavonoids even in the presence of interferences, thus providing an alternative method for accurate quantification of analytes, especially when complete separation is not easily accomplished. The method was successfully applied to propolis capsule samples and satisfactory results were obtained. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Solution of the Odderon Problem for Arbitrary Conformal Weights

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wosiek, J.; Janik, R.A.

    1997-10-01

    A new method is applied to solve the Baxter equation for three coupled, noncompact spins. Because of the equivalence with the system of three Reggeized gluons, the intercept of the odderon trajectory is predicted for the first time, as an analytic function of the two relevant parameters. Copyright 1997 The American Physical Society.

  4. Disciplinary Identity as Analytic Construct and Design Goal: Making Learning Sciences Matter

    ERIC Educational Resources Information Center

    Carlone, Heidi B.

    2017-01-01

    Bent Flyvbjerg (2001), in his book "Making Social Science Matter: Why Social Inquiry Fails and How It Can Succeed Again," argues that social science's aims and methods are currently, and perhaps always will be, ill suited to the type of cumulative and predictive theory that characterizes inquiry and knowledge generation in the natural…

  5. Prediction of Moisture Content for Congou Black Tea Withering Leaves Using Image Features and Nonlinear Method.

    PubMed

    Liang, Gaozhen; Dong, Chunwang; Hu, Bin; Zhu, Hongkai; Yuan, Haibo; Jiang, Yongwen; Hao, Guoshuang

    2018-05-18

    Withering is the first step in the processing of congou black tea. Given the deficiencies of traditional water content detection methods, a machine-vision-based non-destructive testing (NDT) method was established to detect the moisture content of withered leaves. First, visible-light images of the tea leaf surfaces were collected in time sequence with a computer vision system, and color and texture characteristics were extracted from the spatial changes of the colors. Quantitative prediction models for the moisture content of withered tea leaves were then established through linear partial least squares (PLS) and nonlinear support vector machine (SVM) methods. The results showed correlation coefficients higher than 0.8 between the water contents and the green component mean value (G), lightness component mean value (L*) and uniformity (U), which means that the extracted characteristics have great potential to predict the water contents. The performance parameters of the SVM prediction model, the correlation coefficient of the prediction set (Rp), root-mean-square error of prediction (RMSEP), and ratio of performance to deviation (RPD), are 0.9314, 0.0411 and 1.8004, respectively. The nonlinear modeling method better describes the quantitative analytical relations between the image and the water content. With superior generalization and robustness, the method provides a new approach and theoretical basis for online water content monitoring in the automated production of black tea.
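
    A minimal sketch of the nonlinear modeling step on synthetic data (not the paper's images): regress moisture content on three image-feature stand-ins with an RBF-kernel SVM and report the Rp, RMSEP, and RPD figures of merit, taking RPD as the ratio of the reference-value standard deviation to RMSEP. The feature construction and hyperparameters are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
n = 300
features = rng.uniform(0, 1, size=(n, 3))   # stand-ins for G, L*, and U
# Synthetic moisture fraction with a nonlinear dependence on the first feature
moisture = 0.75 - 0.3 * features[:, 0] ** 2 + rng.normal(scale=0.01, size=n)

train, test = slice(0, 200), slice(200, None)
model = SVR(kernel="rbf", C=10.0, epsilon=0.005).fit(features[train], moisture[train])
pred = model.predict(features[test])

rmsep = float(np.sqrt(np.mean((moisture[test] - pred) ** 2)))
rp = float(np.corrcoef(moisture[test], pred)[0, 1])
rpd = float(np.std(moisture[test]) / rmsep)
print(rp > 0.9, rpd > 1.5)
```

The paper's RPD of 1.8004 sits in the range usually read as "adequate for screening"; the same three metrics computed here are what such a claim rests on.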

  6. Transition Studies on a Swept-Wing Model

    NASA Technical Reports Server (NTRS)

    Saric, William S.

    1996-01-01

    The present investigation contributes to the understanding of boundary-layer stability and transition by providing detailed measurements of carefully-produced stationary crossflow vortices. It is clear that a successful prediction of transition in swept-wing flows must include an understanding of the detailed physics involved. Receptivity and nonlinear effects must not be ignored. Linear stability theory correctly predicts the expected wavelengths and mode shapes for stationary crossflow, but fails to predict the growth rates, even for low amplitudes. As new computational and analytical methods are developed to deal with three-dimensional boundary layers, the data provided by this experiment will serve as a useful benchmark for comparison.

  7. GalaxyGPCRloop: Template-Based and Ab Initio Structure Sampling of the Extracellular Loops of G-Protein-Coupled Receptors.

    PubMed

    Won, Jonghun; Lee, Gyu Rie; Park, Hahnbeom; Seok, Chaok

    2018-06-07

    The second extracellular loops (ECL2s) of G-protein-coupled receptors (GPCRs) are often involved in GPCR functions, and their structures have important implications in drug discovery. However, structure prediction of ECL2 is difficult because of its long length and the structural diversity among different GPCRs. In this study, a new ECL2 conformational sampling method involving both template-based and ab initio sampling was developed. Inspired by the observation of similar ECL2 structures of closely related GPCRs, a template-based sampling method employing loop structure templates selected from the structure database was developed. A new metric for evaluating similarity of the target loop to templates was introduced for template selection. An ab initio loop sampling method was also developed to treat cases without highly similar templates. The ab initio method is based on the previously developed fragment assembly and loop closure method. A new sampling component that takes advantage of secondary structure prediction was added. In addition, a conserved disulfide bridge restraining ECL2 conformation was predicted and analytically incorporated into sampling, reducing the effective dimension of the conformational search space. The sampling method was combined with an existing energy function for comparison with previously reported loop structure prediction methods, and the benchmark test demonstrated outstanding performance.

  8. An analytical method for predicting postwildfire peak discharges

    USGS Publications Warehouse

    Moody, John A.

    2012-01-01

    An analytical method that predicts postwildfire peak discharge is presented; it was developed from analysis of paired rainfall and runoff measurements collected from selected burned basins. Data were collected from 19 mountainous basins burned by eight wildfires in different hydroclimatic regimes in the western United States (California, Colorado, Nevada, New Mexico, and South Dakota). Most of the data were collected for the year of the wildfire and for 3 to 4 years after the wildfire. These data provide some estimate of the changes with time of postwildfire peak discharges, which are known to be transient but have received little documentation. The only required inputs for the analytical method are the burned area and a quantitative measure of soil burn severity (change in the normalized burn ratio), which is derived from Landsat reflectance data and is available from either the U.S. Department of Agriculture Forest Service or the U.S. Geological Survey. The method predicts the postwildfire peak discharge per unit burned area for the year of a wildfire, the first year after a wildfire, and the second year after a wildfire. It can be used at three levels of information depending on the data available to the user; each subsequent level requires either more data or more processing of the data. Level 1 requires only the burned area. Level 2 requires the burned area and the basin-average value of the change in the normalized burn ratio. Level 3 requires the burned area and the calculation of the hydraulic functional connectivity, a variable that incorporates the sequence of soil burn severity along hillslope flow paths within the burned basin. Measurements indicate that the unit peak discharge response increases abruptly when the 30-minute maximum rainfall intensity is greater than about 5 millimeters per hour (0.2 inches per hour). This threshold may relate to a change in runoff generation from saturation-excess to infiltration-excess overland flow.
The threshold value was about 7.6 millimeters per hour for the year of the wildfire and the first year after the wildfire, and it was about 11.1 millimeters per hour for the second year after the wildfire.
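
    A purely illustrative sketch of the intensity-threshold behavior described: unit peak discharge responds only above a threshold 30-minute rainfall intensity. The linear-in-excess form and the coefficient k are hypothetical placeholders, not the report's fitted relation; only the threshold values (about 7.6 and 11.1 mm/h) come from the text.

```python
def unit_peak_discharge(i30_mm_per_hr, threshold_mm_per_hr=7.6, k=1.0):
    """Hypothetical unit peak discharge response, zero below the threshold.

    threshold_mm_per_hr: about 7.6 for the fire year and first year after,
    about 11.1 for the second year after (values from the report). k and
    the linear-in-excess form are illustrative assumptions only.
    """
    excess = i30_mm_per_hr - threshold_mm_per_hr
    return k * excess if excess > 0 else 0.0

# Below threshold -> no response; above threshold -> abrupt onset
print(unit_peak_discharge(5.0), round(unit_peak_discharge(12.0), 2))  # 0.0 4.4
```

Shifting the threshold from 7.6 to 11.1 mm/h in the second postfire year reproduces, qualitatively, the report's finding that basins become less responsive as they recover.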

  9. Development of a Nano-Satellite Micro-Coupling Mechanism with Characterization of a Shape Memory Alloy Interference Joint

    DTIC Science & Technology

    2010-12-01

    …satellite incorporation are explored by assembly and experimentation. Research on pseudoelastic material properties, analytical predictions, and tests of coupling strengths… (Table 2: Material Properties Used in Micro-Coupling Predicted Strength Calculations.)

  10. Study of Spray Disintegration in Accelerating Flow Fields

    NASA Technical Reports Server (NTRS)

    Nurick, W. H.

    1972-01-01

    An analytical and experimental investigation was conducted to perform "proof of principle" experiments to establish the effects of propellant combustion gas velocity on propellant atomization characteristics. The propellants were gaseous oxygen (GOX) and Shell Wax 270. The fuel was thus the same fluid used in earlier primary cold-flow atomization studies using the frozen wax method. Experiments were conducted over a range in L* (30 to 160 inches) at two contraction ratios (2 and 6). Characteristic exhaust velocity (c*) efficiencies varied from 80 to 90 percent. The hot-fire experimental performance characteristics at a contraction ratio of 6.0, in conjunction with analytical predictions from the droplet heat-up version of the Distributed Energy Release (DER) combustion computer program, showed that the apparent initial dropsize compared well with cold-flow predictions (if adjusted for the gas velocity effects). The results also compared very well with the trend in performance as predicted with the model. Significant propellant wall impingement at the contraction ratio of 2.0 precluded complete evaluation of the effect of gross changes in combustion gas velocity on spray dropsize.

  11. A correlative study between analysis and experiment on the fracture behavior of graphite/epoxy composites

    NASA Technical Reports Server (NTRS)

    Yeow, Y. T.; Morris, D. H.; Brinson, H. F.

    1979-01-01

    The paper compares the fracture behavior of a composite material by using the analytical models of Waddoups et al. (1971), Whitney and Nuismer (1974, 1975), and Snyder and Cruse (1975) with experimental results from tests performed on center-notched tensile strips. Laminate configurations of (0 deg)8s, (0 deg/90 deg)4s, (+ and -45 deg)4s, and (0 deg/+ and -45 deg/0 deg)2s from T300/934 graphite/epoxy are tested. These particular configurations are used so that the effect of various degrees of anisotropy can be studied. The procedure adopted uses the results from one test for crack size aspect ratio to predict the results of tests of other aspect ratios. For those methods that use a characteristic dimension, predictions are made by assuming the magnitude of this dimension to be constant. The validity of this assumption for a laminate is assessed by comparing predicted and experimental results. Analytical models using a characteristic dimension are compared to the model developed by Cruse (1973).

  12. High-frequency, high-intensity photoionization

    NASA Astrophysics Data System (ADS)

    Reiss, H. R.

    1996-02-01

    Two analytical methods for computing ionization by high-frequency fields are compared. Predicted ionization rates compare well, but energy predictions for the onset of ionization differ radically. The difference is shown to arise from the use of a transformation in one of the methods that alters the zero from which energy is measured. This alteration leads to an apparent energy threshold for ionization that can, especially in the stabilization regime, differ strongly from the laboratory measurement. It is concluded that channel closings in intense-field ionization can occur at high as well as low frequencies. It is also found that the stabilization phenomenon at high frequencies, very prominent for hydrogen, is absent in a short-range potential.

  13. An unsteady rotor/fuselage interaction method

    NASA Technical Reports Server (NTRS)

    Egolf, T. Alan; Lorber, Peter F.

    1987-01-01

    An analytical method has been developed to treat unsteady helicopter rotor, wake, and fuselage interaction aerodynamics. An existing lifting line/prescribed wake rotor analysis and a source panel fuselage analysis were modified to predict vibratory fuselage airloads. The analyses were coupled through the induced flow velocities of the rotor and wake on the fuselage and the fuselage on the rotor. A prescribed displacement technique was used to distort the rotor wake about the fuselage. Sensitivity studies were performed to determine the influence of wake and body geometry on the computed airloads. Predicted and measured mean and unsteady pressures on a cylindrical body in the wake of a two-bladed rotor were compared. Initial results show good qualitative agreement.

  14. A study of fracture phenomena in fiber composite laminates. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Konish, H. J., Jr.

    1973-01-01

    The extension of linear elastic fracture mechanics from ostensibly homogeneous isotropic metallic alloys to heterogeneous anisotropic advanced fiber composites is considered. It is analytically demonstrated that the effects of material anisotropy do not alter the principal characteristics exhibited by a crack in an isotropic material. The heterogeneity of fiber composites is experimentally shown to have a negligible effect on the behavior of a sufficiently long crack. A method is proposed for predicting the fracture strengths of a large class of composite laminates; the values predicted by this method show good agreement with limited experimental data. The limits imposed by material heterogeneity are briefly discussed, and areas for further study are recommended.

  15. Predicting concentrations of trace organic compounds in municipal wastewater treatment plant sludge and biosolids using the PhATE™ model.

    PubMed

    Cunningham, Virginia L; D'Aco, Vincent J; Pfeiffer, Danielle; Anderson, Paul D; Buzby, Mary E; Hannah, Robert E; Jahnke, James; Parke, Neil J

    2012-07-01

    This article presents the capability expansion of the PhATE™ (pharmaceutical assessment and transport evaluation) model to predict concentrations of trace organics in sludges and biosolids from municipal wastewater treatment plants (WWTPs). PhATE was originally developed as an empirical model to estimate potential concentrations of active pharmaceutical ingredients (APIs) in US surface and drinking waters that could result from patient use of medicines. However, many compounds, including pharmaceuticals, are not completely transformed in WWTPs and remain in biosolids that may be applied to land as a soil amendment. This practice leads to concerns about potential exposures of people who may come into contact with amended soils and also about potential effects to plants and animals living in or contacting such soils. The model estimates the mass of API in WWTP influent based on the population served, the API per capita use, and the potential loss of the compound associated with human use (e.g., metabolism). The mass of API on the treated biosolids is then estimated based on partitioning to primary and secondary solids, potential loss due to biodegradation in secondary treatment (e.g., activated sludge), and potential loss during sludge treatment (e.g., aerobic digestion, anaerobic digestion, composting). Simulations using 2 surrogate compounds show that predicted environmental concentrations (PECs) generated by PhATE are in very good agreement with measured concentrations, i.e., well within 1 order of magnitude. Model simulations were then carried out for 18 APIs representing a broad range of chemical and use characteristics. 
These simulations yielded 4 categories of results: 1) PECs are in good agreement with measured data for 9 compounds with high analytical detection frequencies, 2) PECs are greater than measured data for 3 compounds with high analytical detection frequencies, possibly as a result of as yet unidentified depletion mechanisms, 3) PECs are less than analytical reporting limits for 5 compounds with low analytical detection frequencies, and 4) the PEC is greater than the analytical method reporting limit for 1 compound with a low analytical detection frequency, possibly again as a result of insufficient depletion data. Overall, these results demonstrate that PhATE has the potential to be a very useful tool in the evaluation of APIs in biosolids. Possible applications include: prioritizing APIs for assessment even in the absence of analytical methods; evaluating sludge processing scenarios to explore potential mitigation approaches; use in risk assessments; and developing realistic nationwide concentrations, because PECs can be represented as a cumulative probability distribution. Finally, comparison of PECs to measured concentrations can also be used to identify the need for fate studies of compounds of interest in biosolids. Copyright © 2011 SETAC.
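The mass-balance chain the abstract describes (influent mass from population and per-capita use, then sequential losses through sorption, biodegradation, and sludge treatment) can be sketched as below. This is an illustrative reconstruction, not the PhATE model itself; every parameter value is hypothetical.

```python
# Hypothetical sketch of a PhATE-style mass balance for an API in biosolids.
# All numbers below are illustrative placeholders, not PhATE model values.

def api_mass_in_influent(population, per_capita_use_mg_day, fraction_excreted):
    """Mass of API entering the WWTP (mg/day) after human-use losses."""
    return population * per_capita_use_mg_day * fraction_excreted

def api_mass_on_biosolids(influent_mass, f_sorbed_to_solids,
                          f_lost_biodegradation, f_lost_sludge_treatment):
    """Mass of API remaining on treated biosolids (mg/day)."""
    sorbed = influent_mass * f_sorbed_to_solids          # partitioning to solids
    after_secondary = sorbed * (1.0 - f_lost_biodegradation)
    return after_secondary * (1.0 - f_lost_sludge_treatment)

influent = api_mass_in_influent(100_000, 1.0, 0.3)       # 30,000 mg/day
on_solids = api_mass_on_biosolids(influent, 0.5, 0.2, 0.1)
biosolids_tonnes_day = 10.0                              # assumed sludge production
pec_mg_per_kg = on_solids / (biosolids_tonnes_day * 1000.0)
```

A predicted environmental concentration (PEC) produced this way would then be compared against measured biosolids concentrations, as in the four result categories above.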

  16. A semi-analytical model for the acoustic impedance of finite length circular holes with mean flow

    NASA Astrophysics Data System (ADS)

    Yang, Dong; Morgans, Aimee S.

    2016-12-01

    The acoustic response of a circular hole with mean flow passing through it is highly relevant to Helmholtz resonators, fuel injectors, perforated plates, screens, liners and many other engineering applications. A widely used analytical model [M. S. Howe, "On the theory of unsteady high Reynolds number flow through a circular aperture", Proc. of the Royal Soc. A, 366 (1725), 1979, 205-223] which assumes an infinitesimally short hole was recently shown to be insufficient for predicting the impedance of holes with a finite length. In the present work, an analytical model based on the Green's function method is developed to take the hole length into consideration for "short" holes. The importance of capturing the modified vortex noise accurately is shown. The vortices shed at the hole inlet edge are convected to the hole outlet and further downstream to form a vortex sheet. These couple with the acoustic waves, and this coupling has the potential to generate as well as absorb acoustic energy in the low-frequency region. The impedance predicted by this model shows the importance of capturing the path of the shed vortex. When the vortex path is captured accurately, the impedance predictions agree well with previous experimental and CFD results, for example predicting the potential for generation of acoustic energy at higher frequencies. For "long" holes, a simplified model which combines Howe's model with plane acoustic waves within the hole is developed. It is shown that the most important effect in this case is the acoustic non-compactness of the hole.

  17. Wettability of graphitic-carbon and silicon surfaces: MD modeling and theoretical analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramos-Alvarado, Bladimir; Kumar, Satish; Peterson, G. P.

    2015-07-28

    The wettability of graphitic carbon and silicon surfaces was numerically and theoretically investigated. A multi-response method has been developed for the analysis of conventional molecular dynamics (MD) simulations of droplet wettability. The contact angle and indicators of the quality of the computations are tracked as a function of the data sets analyzed over time. This method of analysis allows accurate calculations of the contact angle obtained from the MD simulations. Analytical models were also developed for the calculation of the work of adhesion using the mean-field theory, accounting for the interfacial entropy changes. A calibration method is proposed to provide better predictions of the respective contact angles under different solid-liquid interaction potentials. Estimations of the binding energy between a water monomer and graphite match those previously reported. In addition, a breakdown in the relationship between the binding energy and the contact angle was observed. The macroscopic contact angles obtained from the MD simulations were found to match those predicted by the mean-field model for graphite under different wettability conditions, as well as the contact angles of Si(100) and Si(111) surfaces. Finally, an assessment of the effect of the Lennard-Jones cutoff radius was conducted to provide guidelines for future comparisons between numerical simulations and analytical models of wettability.
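The abstract links the work of adhesion to the contact angle. A standard bridge between the two quantities (not necessarily the mean-field form used in the paper) is the Young-Dupré relation, sketched here with illustrative numbers:

```python
# Young-Dupre relation: W_SL = gamma * (1 + cos(theta)), solved for theta.
# The adhesion and surface-tension values below are illustrative only.
import math

def contact_angle_from_adhesion(work_of_adhesion, surface_tension):
    """Contact angle (degrees) from work of adhesion and liquid surface tension,
    both in the same units (e.g. mJ/m^2)."""
    cos_t = work_of_adhesion / surface_tension - 1.0
    cos_t = max(-1.0, min(1.0, cos_t))        # clamp against rounding
    return math.degrees(math.acos(cos_t))

# Example: water (gamma ~ 72.8 mJ/m^2) on a moderately wetting surface.
theta = contact_angle_from_adhesion(100.0, 72.8)   # ~68 degrees
```

Higher work of adhesion lowers the contact angle; at W_SL = 2*gamma the relation predicts complete wetting (theta = 0).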

  18. A method for direct, semi-quantitative analysis of gas phase samples using gas chromatography-inductively coupled plasma-mass spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, Kimberly E; Gerdes, Kirk

    2013-07-01

    A new and complete GC–ICP-MS method is described for direct analysis of trace metals in a gas phase process stream. The proposed method is derived from standard analytical procedures developed for ICP-MS, which are regularly exercised in standard ICP-MS laboratories. In order to implement the method, a series of empirical factors were generated to calibrate detector response with respect to a known concentration of an internal standard analyte. Calibrated responses are ultimately used to determine the concentration of metal analytes in a gas stream using a semi-quantitative algorithm. The method was verified using a traditional gas injection from a GC sampling valve and a standard gas mixture containing either a 1 ppm Xe + Kr mix with helium balance or 100 ppm Xe with helium balance. Data collected for Xe and Kr gas analytes revealed that agreement of 6–20% with the actual concentration can be expected for various experimental conditions. To demonstrate the method using a relevant “unknown” gas mixture, experiments were performed for continuous 4 and 7 hour periods using a Hg-containing sample gas that was co-introduced into the GC sample loop with the xenon gas standard. System performance and detector response to the dilute concentration of the internal standard were pre-determined, which allowed semi-quantitative evaluation of the analyte. The calculated analyte concentrations varied during the course of the 4 hour experiment, particularly during the first hour of the analysis where the actual Hg concentration was underpredicted by up to 72%. Calculated concentrations improved to within 30–60% for data collected after the first hour of the experiment. Similar results were seen during the 7 hour test, with the deviation from the actual concentration being 11–81% during the first hour and then decreasing for the remaining period.
The method detection limit (MDL) was determined for mercury by injecting the sample gas into the system following a period of equilibration. The MDL for Hg was calculated as 6.8 μg·m⁻³. This work describes the first complete GC–ICP-MS method to directly analyze gas phase samples, and detailed sample calculations and comparisons to conventional ICP-MS methods are provided.
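The core of a semi-quantitative internal-standard calculation is a response ratio scaled by an empirical relative response factor. The sketch below shows that arithmetic only; the counts and the Hg/Xe response factor are hypothetical, not values from the paper.

```python
# Hedged sketch of a semi-quantitative internal-standard calculation.
# The relative response factor is hypothetical; a real method calibrates it
# per isotope and per plasma condition.

def semi_quant_concentration(analyte_counts, standard_counts,
                             standard_conc, relative_response_factor):
    """Estimate analyte concentration from the ratio of detector responses
    to an internal standard of known concentration."""
    return (analyte_counts / standard_counts) * standard_conc / relative_response_factor

# Example: 5e4 counts for Hg vs 2e5 counts for a 1 ppm Xe internal standard,
# with an assumed Hg/Xe relative response factor of 0.5.
est_ppm = semi_quant_concentration(5e4, 2e5, 1.0, 0.5)
```

The 6–20% (Xe, Kr) and larger Hg deviations reported above reflect how sensitive this ratio is to drift in the empirical response factors over a run.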

  19. A comparison between GO/aperture-field and physical-optics methods for offset reflectors

    NASA Technical Reports Server (NTRS)

    Rahmat-Samii, Y.

    1984-01-01

    Both geometrical optics (GO)/aperture-field and physical-optics (PO) methods are used extensively in the diffraction analysis of offset parabolic and dual reflectors. An analytical/numerical comparative study is performed to demonstrate the limitations of the GO/aperture-field method for accurately predicting the sidelobe and null positions and levels. In particular, it is shown that for offset parabolic reflectors and for feeds located at the focal point, the predicted far-field patterns (amplitude) by the GO/aperture-field method will always be symmetric even in the offset plane. This, of course, is inaccurate for the general case and it is shown that the physical-optics method can result in asymmetric patterns for cases in which the feed is located at the focal point. Representative numerical data are presented and a comparison is made with available measured data.

  20. An analytical method for predicting the geometrical and optical properties of the human lens under accommodation

    PubMed Central

    Sheil, Conor J.; Bahrami, Mehdi; Goncharov, Alexander V.

    2014-01-01

    We present an analytical method to describe the accommodative changes in the human crystalline lens. The method is based on the geometry-invariant lens model, in which the gradient-index (GRIN) iso-indicial contours are coupled to the external shape. This feature ensures that any given number of iso-indicial contours does not change with accommodation, which preserves the optical integrity of the GRIN structure. The coupling also enables us to define the GRIN structure if the radii and asphericities of the external lens surfaces are known. As an example, the accommodative changes in lenticular radii and central thickness were taken from the literature, while the asphericities of the external surfaces were derived analytically by adhering to the basic physical conditions of constant lens volume and its axial position. The resulting changes in lens geometry are consistent with experimental data, and the optical properties are in line with expected values for optical power and spherical aberration. The aim of the paper is to provide an anatomically and optically accurate lens model that is valid for 3 mm pupils and can be used as a new tool for better understanding of accommodation. PMID:24877022

  1. An analytical method for predicting the geometrical and optical properties of the human lens under accommodation.

    PubMed

    Sheil, Conor J; Bahrami, Mehdi; Goncharov, Alexander V

    2014-05-01

    We present an analytical method to describe the accommodative changes in the human crystalline lens. The method is based on the geometry-invariant lens model, in which the gradient-index (GRIN) iso-indicial contours are coupled to the external shape. This feature ensures that any given number of iso-indicial contours does not change with accommodation, which preserves the optical integrity of the GRIN structure. The coupling also enables us to define the GRIN structure if the radii and asphericities of the external lens surfaces are known. As an example, the accommodative changes in lenticular radii and central thickness were taken from the literature, while the asphericities of the external surfaces were derived analytically by adhering to the basic physical conditions of constant lens volume and its axial position. The resulting changes in lens geometry are consistent with experimental data, and the optical properties are in line with expected values for optical power and spherical aberration. The aim of the paper is to provide an anatomically and optically accurate lens model that is valid for 3 mm pupils and can be used as a new tool for better understanding of accommodation.

  2. An Analytic Approach for Optimal Geometrical Design of GaAs Nanowires for Maximal Light Harvesting in Photovoltaic Cells

    PubMed Central

    Wu, Dan; Tang, Xiaohong; Wang, Kai; Li, Xianqiang

    2017-01-01

    Semiconductor nanowires (NWs) with subwavelength-scale diameters have demonstrated superior light trapping features, which unravel a new pathway for low-cost, high-efficiency future-generation solar cells. Unlike other published work, a fully analytic design is proposed for the first time for the optimal geometrical parameters of vertically aligned GaAs NW arrays for maximal energy harvesting. Using photocurrent density as the light-absorption evaluation standard, the multiple diameters and periodicity of 2 μm long NW arrays are quantitatively identified, achieving the maximal value of 29.88 mA/cm2 under solar illumination. The method also proves widely applicable to NW arrays with single, double, and four different diameters for highest photon energy harvesting. To validate the analytical method, intensive numerical three-dimensional finite-difference time-domain simulations of the NWs’ light harvesting are also carried out. Compared with the simulation results, the predicted maximal photocurrent densities lie within 1.5% tolerance for all cases. Along with this high accuracy, by directly disclosing the exact geometrical dimensions of NW arrays, this method provides an effective and efficient route for high-performance photovoltaic design. PMID:28425488

  3. Green analytical determination of emerging pollutants in environmental waters using excitation-emission photoinduced fluorescence data and multivariate calibration.

    PubMed

    Hurtado-Sánchez, María Del Carmen; Lozano, Valeria A; Rodríguez-Cáceres, María Isabel; Durán-Merás, Isabel; Escandar, Graciela M

    2015-03-01

    An eco-friendly strategy for the simultaneous quantification of three emerging pharmaceutical contaminants is presented. The proposed analytical method, which involves photochemically induced fluorescence matrix data combined with second-order chemometric analysis, was used for the determination of carbamazepine, ofloxacin and piroxicam in water samples of different complexity without the need of chromatographic separation. Excitation-emission photoinduced fluorescence matrices were obtained after UV irradiation, and processed with second-order algorithms. Only one of the tested algorithms was able to overcome the strong spectral overlapping among the studied pollutants and allowed their successful quantitation in very interferent media. The method sensitivity in superficial and underground water samples was enhanced by a simple solid-phase extraction with C18 membranes, which was successful for the extraction/preconcentration of the pollutants at trace levels. Detection limits in preconcentrated (1:125) real water samples ranged from 0.04 to 0.3 ng mL(-1). Relative prediction errors around 10% were achieved. The proposed strategy is significantly simpler and greener than liquid chromatography-mass spectrometry methods, without compromising the analytical quality of the results. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. NASA/FAA general aviation crash dynamics program

    NASA Technical Reports Server (NTRS)

    Thomson, R. G.; Hayduk, R. J.; Carden, H. D.

    1981-01-01

    The program involves controlled full scale crash testing, nonlinear structural analyses to predict large deflection elastoplastic response, and load attenuating concepts for use in improved seat and subfloor structure. Both analytical and experimental methods are used to develop expertise in these areas. Analyses include simplified procedures for estimating energy dissipating capabilities and comprehensive computerized procedures for predicting airframe response. These analyses are developed to provide designers with methods for predicting accelerations, loads, and displacements on collapsing structure. Tests on typical full scale aircraft and on full and subscale structural components are performed to verify the analyses and to demonstrate load attenuating concepts. A special apparatus was built to test emergency locator transmitters when attached to representative aircraft structure. The apparatus is shown to provide a good simulation of the longitudinal crash pulse observed in full scale aircraft crash tests.

  5. Thermodynamic aspects of reformulation of automotive fuels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zudkevitch, D.; Murthy, A.K.S.; Gmehling, J.

    1995-09-01

    A study of procedures for measuring and predicting the RVP and the initial vapor emissions of reformulated gasoline blends which contain one or more oxygenated compounds, viz., ethanol, MTBE, ETBE, and TAME is discussed. Two computer simulation methods were programmed and tested. In one method, Method A, the D-86 distillation data on the blend are used for predicting the blend's RVP from a simulation of the Mini RVPE (RVP Equivalent) experiment. The other method, Method B, relies on analytical information (PIANO analyses) on the nature of the base gasoline and utilizes classical thermodynamics to simulate the same Mini RVPE experiment. Method B also predicts the composition and other properties of the initial vapor emission from the fuel. The results indicate that predictions made with both methods agree very well with experimental values. The predictions with Method B illustrate that the admixture of an oxygenate to a gasoline blend changes the volatility of the blend and also the composition of the vapor emission. From the example simulations, a blend with 10 vol % ethanol increases the RVP by about 0.8 psi. The accompanying vapor emission will contain about 15% ethanol. Similarly, the vapor emission of a fuel blend with 11 vol % MTBE was calculated to contain about 11 vol % MTBE. Predictions of the behavior of blends with ETBE and ETBE+ethanol are also presented and discussed. Recognizing that considerable effort has been invested in developing empirical correlations for predicting RVP, the writers consider the purpose of this paper to be pointing out that the methods of classical thermodynamics are adequate and that additional work is needed to develop certain fundamental data that are still lacking.
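The thermodynamic effect described above (a modest ethanol fraction disproportionately enriching the vapor) can be illustrated with a modified Raoult's-law sketch. This is a two-pseudo-component simplification, not the paper's Method B; the vapor pressures and the ethanol activity coefficient below are assumed values chosen only to show the mechanism.

```python
# Illustrative modified Raoult's law: p_i = x_i * gamma_i * P_i_sat.
# Ethanol's large activity coefficient in hydrocarbons boosts its partial
# pressure far beyond the ideal-solution value. All numbers are hypothetical.

def partial_pressure(x, gamma, p_sat):
    """Partial pressure of component i above the liquid blend."""
    return x * gamma * p_sat

# Crude mole fractions for a ~10 vol% ethanol blend (assumed):
x_etoh, x_gasoline = 0.05, 0.95
p_etoh = partial_pressure(x_etoh, 8.0, 2.3)      # psia; gamma ~ 8 assumed
p_gasoline = partial_pressure(x_gasoline, 1.0, 7.0)  # single pseudo-component
total = p_etoh + p_gasoline
y_etoh = p_etoh / total                          # ethanol fraction in the vapor
```

Even at 5 mol% in the liquid, the assumed activity coefficient pushes ethanol to roughly 12% of the vapor, in line with the abstract's ~15% figure for the actual blend.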

  6. Analytical study of striated nozzle flow with small radius of curvature ratio throats

    NASA Technical Reports Server (NTRS)

    Norton, D. J.; White, R. E.

    1972-01-01

    An analytical method was developed which is capable of estimating the chamber and throat conditions in a nozzle with a low radius of curvature throat. The method was programmed in standard FORTRAN IV and includes chemical equilibrium calculation subprograms (modified NASA Lewis program CEC71) as an integral part. The method determines detailed and gross rocket characteristics in the presence of striated flows and gives detailed results for the motor chamber and throat plane with as many as 20 discrete zones. The method employs a simultaneous solution of the mass, momentum, and energy equations and allows propellant types, O/F ratios, propellant distribution, nozzle geometry, and injection schemes to be varied so as to predict spatial velocity, density, pressure, and other thermodynamic variable distributions in the chamber as well as the throat. Results for small radius of curvature throats have shown good agreement with experimental results. Both gaseous and liquid injection may be considered with frozen or equilibrium flow calculations.

  7. Broadband impedance boundary conditions for the simulation of sound propagation in the time domain.

    PubMed

    Bin, Jonghoon; Yousuff Hussaini, M; Lee, Soogab

    2009-02-01

    An accurate and practical surface impedance boundary condition in the time domain has been developed for application to broadband-frequency simulation in aeroacoustic problems. To show the capability of this method, two kinds of numerical simulations are performed and compared with the analytical/experimental results: one is acoustic wave reflection by a monopole source over an impedance surface and the other is acoustic wave propagation in a duct with a finite impedance wall. Both single-frequency and broadband-frequency simulations are performed within the framework of linearized Euler equations. A high-order dispersion-relation-preserving finite-difference method and a low-dissipation, low-dispersion Runge-Kutta method are used for spatial discretization and time integration, respectively. The results show excellent agreement with the analytical/experimental results at various frequencies. The method accurately predicts both the amplitude and the phase of acoustic pressure and ensures the well-posedness of the broadband time-domain impedance boundary condition.

  8. Mono-isotope Prediction for Mass Spectra Using Bayes Network.

    PubMed

    Li, Hui; Liu, Chunmei; Rwebangira, Mugizi Robert; Burge, Legand

    2014-12-01

    Mass spectrometry is one of the most widely utilized methods to study protein functions and components. The challenge of mono-isotope pattern recognition from large-scale protein mass spectral data needs computational algorithms and tools to speed up the analysis and improve the analytic results. We utilized a naïve Bayes network as the classifier, with the assumption that the selected features are independent, to predict mono-isotope patterns from mass spectrometry. Mono-isotopes detected from validated theoretical spectra were used as prior information in the Bayes method. Three main features extracted from the dataset were employed as independent variables in our model. The application of the proposed algorithm to a public Mo dataset demonstrates that our naïve Bayes classifier is advantageous over existing methods in both accuracy and sensitivity.
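A naïve Bayes classifier over three independent features, as the abstract describes, can be sketched as a minimal Gaussian model. The features and training points below are synthetic stand-ins; the paper's actual features and priors are not reproduced here.

```python
# Minimal Gaussian naive Bayes sketch for two classes (mono-isotope vs. not),
# with three features assumed conditionally independent. Data are synthetic.
import math

def fit(X, y):
    """Per-class prior, feature means, and feature variances."""
    model = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        var = [sum((v - mu) ** 2 for v in col) / n + 1e-9   # floor avoids /0
               for col, mu in zip(zip(*rows), means)]
        model[c] = (n / len(y), means, var)
    return model

def predict(model, x):
    """Pick the class with the highest log-posterior."""
    def log_post(c):
        prior, means, var = model[c]
        ll = math.log(prior)
        for v, mu, s2 in zip(x, means, var):
            ll += -0.5 * math.log(2 * math.pi * s2) - (v - mu) ** 2 / (2 * s2)
        return ll
    return max(model, key=log_post)

# Synthetic training set: label 1 = mono-isotope pattern, 0 = not.
X = [[1.0, 0.2, 3.0], [1.1, 0.25, 2.9], [0.2, 1.0, 0.5], [0.1, 1.1, 0.4]]
y = [1, 1, 0, 0]
nb = fit(X, y)
```

The independence assumption makes the joint likelihood a product of per-feature Gaussians, which is what keeps the method fast on large spectral datasets.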

  9. Big data analytics to improve cardiovascular care: promise and challenges.

    PubMed

    Rumsfeld, John S; Joynt, Karen E; Maddox, Thomas M

    2016-06-01

    The potential for big data analytics to improve cardiovascular quality of care and patient outcomes is tremendous. However, the application of big data in health care is at a nascent stage, and the evidence to date demonstrating that big data analytics will improve care and outcomes is scant. This Review provides an overview of the data sources and methods that comprise big data analytics, and describes eight areas of application of big data analytics to improve cardiovascular care, including predictive modelling for risk and resource use, population management, drug and medical device safety surveillance, disease and treatment heterogeneity, precision medicine and clinical decision support, quality of care and performance measurement, and public health and research applications. We also delineate the important challenges for big data applications in cardiovascular care, including the need for evidence of effectiveness and safety, the methodological issues such as data quality and validation, and the critical importance of clinical integration and proof of clinical utility. If big data analytics are shown to improve quality of care and patient outcomes, and can be successfully implemented in cardiovascular practice, big data will fulfil its potential as an important component of a learning health-care system.

  10. Two Approaches in the Lunar Libration Theory: Analytical vs. Numerical Methods

    NASA Astrophysics Data System (ADS)

    Petrova, Natalia; Zagidullin, Arthur; Nefediev, Yurii; Kosulin, Valerii

    2016-10-01

    Observation of the physical libration of the Moon and other celestial bodies is one of the astronomical methods to remotely evaluate the internal structure of a celestial body without using expensive space experiments. A review of the results obtained from the study of physical libration is presented in the report. The main emphasis is placed on the description of successful lunar laser ranging for libration determination and on methods of simulating the physical libration. As a result, estimations of the viscoelastic and dissipative properties of the lunar body and of the lunar core parameters were made. The core's existence was confirmed by the recent reprocessing of seismic data from the Apollo missions. Attention is paid to the physical interpretation of the phenomenon of free libration and methods of its determination. A significant part of the report is devoted to describing the practical application of the most accurate analytical tables of lunar libration to date, built by comprehensive analytical processing of the residual differences obtained when comparing long-term series of laser observations with the numerical ephemeris DE421 [1]. In general, the basic outline of the report reflects the effectiveness of two approaches in libration theory: numerical and analytical solution. It is shown that the two approaches complement each other in studying the Moon in different aspects: the numerical approach provides the high accuracy of theory necessary for adequate treatment of modern high-accuracy observations, while the analytic approach makes it possible to see the essence of various kinds of manifestations in the lunar rotation, and to predict and interpret new effects in observations of physical libration [2]. [1] Rambaux, N., J. G. Williams, 2011, The Moon's physical librations and determination of their free modes, Celest. Mech. Dyn. Astron., 109, 85-100. [2] Petrova, N., A. Zagidullin, Yu. Nefediev, 2014, Analysis of long-periodic variations of lunar libration parameters on the basis of analytical theory, The Russian-Japanese Workshop, 20-25 October, Tokyo (Mitaka) - Mizusawa, Japan.

  11. Prediction task guided representation learning of medical codes in EHR.

    PubMed

    Cui, Liwen; Xie, Xiaolei; Shen, Zuojun

    2018-06-18

    There have been rapidly growing applications using machine learning models for predictive analytics in Electronic Health Records (EHR) to improve the quality of hospital services and the efficiency of healthcare resource utilization. A fundamental and crucial step in developing such models is to convert medical codes in EHR to feature vectors. These medical codes are used to represent diagnoses or procedures. Their vector representations have a tremendous impact on the performance of machine learning models. Recently, some researchers have utilized representation learning methods from Natural Language Processing (NLP) to learn vector representations of medical codes. However, most previous approaches are unsupervised, i.e. the generation of medical code vectors is independent from prediction tasks. Thus, the obtained feature vectors may be inappropriate for a specific prediction task. Moreover, unsupervised methods often require a lot of samples to obtain reliable results, but most practical problems have very limited patient samples. In this paper, we develop a new method called Prediction Task Guided Health Record Aggregation (PTGHRA), which aggregates health records guided by prediction tasks, to construct training corpus for various representation learning models. Compared with unsupervised approaches, representation learning models integrated with PTGHRA yield a significant improvement in predictive capability of generated medical code vectors, especially for limited training samples. Copyright © 2018. Published by Elsevier Inc.

  12. Linear modeling of steady-state behavioral dynamics.

    PubMed Central

    Palya, William L; Walter, Donald; Kessel, Robert; Lucke, Robert

    2002-01-01

    The observed steady-state behavioral dynamics supported by unsignaled periods of reinforcement within repeating 2,000-s trials were modeled with a linear transfer function. These experiments employed refined schedule forms and analytical methods that improved the precision of the measured transfer function compared to previous work. The refinements include both the use of multiple reinforcement periods, which improve spectral coverage, and the averaging of independently determined transfer functions. A linear analysis was then used to predict behavior observed for three different test schedules, and the fidelity of these predictions was determined. PMID:11831782
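The averaging of independently determined transfer functions can be sketched with FFT ratios: estimate H(f) = Y(f)/X(f) for each session and average across sessions. The signals below are invented; a noise-free linear system with gain 0.5 is used purely to make the recovered transfer function easy to check.

```python
import numpy as np

def transfer_function(x, y):
    """Empirical transfer function estimate H(f) = Y(f)/X(f)
    from one input/output record (schedule input x, response y)."""
    X = np.fft.rfft(x)
    Y = np.fft.rfft(y)
    return Y / X

def averaged_transfer_function(trials):
    """Average independently determined transfer functions, as the
    abstract describes, to improve precision (illustrative sketch)."""
    return np.mean([transfer_function(x, y) for x, y in trials], axis=0)

# A memoryless linear system y = 0.5 * x recovers a flat gain of 0.5.
rng = np.random.default_rng(0)
trials = []
for _ in range(4):
    x = rng.standard_normal(256)
    trials.append((x, 0.5 * x))
H = averaged_transfer_function(trials)
```

With real behavioral data one would average cross- and auto-spectra rather than raw ratios to suppress noise, but the ratio form keeps the sketch short.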

  13. TWT transmitter fault prediction based on ANFIS

    NASA Astrophysics Data System (ADS)

    Li, Mengyan; Li, Junshan; Li, Shuangshuang; Wang, Wenqing; Li, Fen

    2017-11-01

    Fault prediction is an important component of health management and plays an important role in guaranteeing the reliability of complex electronic equipment. The transmitter is a unit with a high failure rate, and degraded cathode performance of the TWT is a common transmitter fault. In this paper, a model based on a set of key parameters of the TWT is proposed. By choosing proper parameters and applying an adaptive neuro-fuzzy training model, this method, combined with the analytic hierarchy process (AHP), provides a useful reference for the overall health assessment of TWT transmitters.
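The AHP step that weights the key parameters can be sketched with the standard geometric-mean approximation of priority weights from a pairwise-comparison matrix. The matrix entries and the parameter names in the comments are invented for illustration; they are not taken from the paper.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise-comparison matrix using
    the geometric-mean (logarithmic least squares) approximation."""
    A = np.asarray(pairwise, dtype=float)
    g = np.prod(A, axis=1) ** (1.0 / A.shape[0])  # row geometric means
    return g / g.sum()

# Hypothetical comparison of three TWT health parameters, e.g.
# cathode current vs. helix current vs. output power.
A = [[1, 3, 5],
     [1 / 3, 1, 2],
     [1 / 5, 1 / 2, 1]]
w = ahp_weights(A)
```

The weights sum to one and preserve the dominance ordering expressed in the comparison matrix.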

  14. Viscoelastic behavior and life-time predictions

    NASA Technical Reports Server (NTRS)

    Dillard, D. A.; Brinson, H. F.

    1985-01-01

    Fiber reinforced plastics were considered for many structural applications in automotive, aerospace, and other industries. A major concern was, and remains, the failure modes associated with the polymer matrix, which serves to bind the fibers together and transfer the load through connections, from fiber to fiber and ply to ply. An accelerated characterization procedure for the prediction of delayed failures was developed. This method utilizes time-temperature-stress-moisture superposition principles in conjunction with laminated plate theory. Because failures are inherently nonlinear, the testing and analytic modeling for both moduli and strength are based upon nonlinear viscoelastic concepts.
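Time-temperature superposition, one ingredient of the accelerated characterization described above, shifts isothermal data to a reference temperature via a shift factor. A common choice is the WLF form shown below with its nominal "universal" constants; note the paper's procedure also includes stress and moisture shifts, which this sketch omits.

```python
def wlf_shift(T, T_ref, C1=17.44, C2=51.6):
    """log10 of the WLF time-temperature shift factor a_T,
    log a_T = -C1 (T - T_ref) / (C2 + T - T_ref).
    The constants C1, C2 are only nominal 'universal' values."""
    return -C1 * (T - T_ref) / (C2 + (T - T_ref))

def shifted_time(t, T, T_ref):
    """Reduced (equivalent) time at the reference temperature:
    data at elevated T maps to longer times at T_ref."""
    return t / (10.0 ** wlf_shift(T, T_ref))
```

For example, a creep test run 10 degrees above the reference temperature maps onto a longer equivalent time at the reference temperature, which is exactly how short hot tests predict long-term behavior.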

  15. Accurate Estimate of Some Propagation Characteristics for the First Higher Order Mode in Graded Index Fiber with Simple Analytic Chebyshev Method

    NASA Astrophysics Data System (ADS)

    Dutta, Ivy; Chowdhury, Anirban Roy; Kumbhakar, Dharmadas

    2013-03-01

    Using a Chebyshev power series approach, accurate descriptions of the first higher order (LP11) mode of graded index fibers with three different profile shape functions are presented in this paper and applied to predict their propagation characteristics. These characteristics include the fractional power guided through the core, the excitation efficiency, and the Petermann I and II spot sizes, with approximate analytic formulations for each. We show that while approximations using two and three Chebyshev points give fairly accurate results, calculations involving four Chebyshev points match the available exact numerical results excellently.
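The qualitative behavior reported, a few Chebyshev points already giving good accuracy and one more point improving it markedly, can be illustrated with NumPy's Chebyshev utilities. The smooth test profile below is only a stand-in, not the LP11 field of the paper.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Fit Chebyshev series of increasing degree to a smooth mode-like
# profile and watch the maximum approximation error drop.
x = np.linspace(-1, 1, 200)
f = np.exp(-x**2) * (1 - x**2)          # illustrative field profile

errors = []
for degree in (2, 3, 4):
    coef = C.chebfit(x, f, degree)
    errors.append(np.max(np.abs(C.chebval(x, coef) - f)))
```

Because Chebyshev coefficients of smooth functions decay rapidly, each added degree of freedom buys a large accuracy gain, mirroring the two/three-point versus four-point comparison in the abstract.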

  16. Analytical study of fractional equations describing anomalous diffusion of energetic particles

    NASA Astrophysics Data System (ADS)

    Tawfik, A. M.; Fichtner, H.; Schlickeiser, R.; Elhanbaly, A.

    2017-06-01

    To describe the main influence of anomalous diffusion on energetic particle propagation, a fractional-derivative model of transport is developed by deriving the fractional modified telegraph and Rayleigh equations. Analytical solutions of these equations, which are defined in terms of Caputo fractional derivatives, are obtained by using the Laplace transform and the Mittag-Leffler function method. The solutions are given in terms of special functions such as Fox's H, Mittag-Leffler, Hermite and hypergeometric functions. The predicted travelling pulse solutions are discussed in each case for different values of the fractional order.
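The one-parameter Mittag-Leffler function that is central to these solutions, E_alpha(z) = sum over k of z^k / Gamma(alpha k + 1), can be evaluated by direct series summation for modest arguments; this naive sketch is not suitable for large |z|, where asymptotic or integral representations are needed.

```python
import math

def mittag_leffler(alpha, z, n_terms=60):
    """One-parameter Mittag-Leffler function
    E_alpha(z) = sum_{k>=0} z**k / Gamma(alpha*k + 1),
    by direct series summation (adequate for small |z|)."""
    return sum(z**k / math.gamma(alpha * k + 1) for k in range(n_terms))
```

Two sanity checks follow from the definition: E_1(z) reduces to exp(z), and E_alpha(0) = 1 for any alpha, which is why the function interpolates between exponential and power-law relaxation as alpha varies.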

  17. Analytical Bistatic k Space Images Compared to Experimental Swept Frequency EAR Images

    NASA Technical Reports Server (NTRS)

    Shaeffer, John; Cooper, Brett; Hom, Kam

    2004-01-01

    A case study of flat plate scattering images obtained by the analytical bistatic k space and experimental swept frequency ISAR methods is presented. The key advantage of the bistatic k space image is that a single excitation is required, i.e., one frequency / one angle. This means that prediction approaches such as MOM need only compute one solution at a single frequency. Bistatic image Fourier transform data are obtained by computing the scattered field at various bistatic positions about the body in k space. Experimental image Fourier transform data are obtained from the measured response to a bandwidth of frequencies over a target rotation range.

  18. Experimental investigation of elastic mode control on a model of a transport aircraft

    NASA Technical Reports Server (NTRS)

    Abramovitz, M.; Heimbaugh, R. M.; Nomura, J. K.; Pearson, R. M.; Shirley, W. A.; Stringham, R. H.; Tescher, E. L.; Zoock, I. E.

    1981-01-01

    A 4.5 percent DC-10 derivative flexible model with active controls is fabricated, developed, and tested to investigate the ability to suppress flutter and reduce gust loads with active controlled surfaces. The model is analyzed and tested in both semispan and complete model configuration. Analytical methods are refined and control laws are developed and successfully tested on both versions of the model. A 15 to 25 percent increase in flutter speed due to the active system is demonstrated. The capability of an active control system to significantly reduce wing bending moments due to turbulence is demonstrated. Good correlation is obtained between test and analytical prediction.

  19. Blade loss transient dynamics analysis, volume 1. Task 1: Survey and perspective. [aircraft gas turbine engines

    NASA Technical Reports Server (NTRS)

    Gallardo, V. C.; Gaffney, E. F.; Bach, L. J.; Stallone, M. J.

    1981-01-01

    An analytical technique was developed to predict the behavior of a rotor system subjected to sudden unbalance. The technique is implemented in the Turbine Engine Transient Rotor Analysis (TETRA) computer program using the component element method. The analysis was particularly aimed toward blade-loss phenomena in gas turbine engines. A dual-rotor, casing, and pylon structure can be modeled by the computer program. Blade tip rubs, Coriolis forces, and mechanical clearances are included. The analytical system was verified by modeling and simulating actual test conditions for a rig test as well as a full-engine, blade-release demonstration.

  20. Contamination in food from packaging material.

    PubMed

    Lau, O W; Wong, S K

    2000-06-16

    Packaging has become an indispensable element in the food manufacturing process, and different types of additives, such as antioxidants, stabilizers, lubricants, anti-static and anti-blocking agents, have been developed to improve the performance of polymeric packaging materials. Recently, the packaging itself has been found to be a source of contamination through the migration of substances from the packaging into the food. Various analytical methods have been developed to analyze these migrants in foodstuffs, and migration evaluation procedures based on theoretical prediction of migration from plastic food-contact materials were also introduced. In this paper, the regulatory control, analytical methodology, factors affecting migration, and migration evaluation are reviewed.

  1. Transient excitation and mechanical admittance test techniques for prediction of payload vibration environments

    NASA Technical Reports Server (NTRS)

    Kana, D. D.; Vargas, L. M.

    1977-01-01

    Transient excitation forces were applied separately to simple beam-and-mass launch vehicle and payload models to develop complex admittance functions for the interface and other appropriate points on the structures. These measured admittances were then analytically combined by a matrix representation to obtain a description of the coupled system dynamic characteristics. The response of the payload model to excitation of the launch vehicle model was predicted and compared with results measured on the combined models. These results are also compared with results of earlier work in which a similar procedure was employed except that steady-state sinusoidal excitation techniques were used. It is found that the method employing transient tests produces results that are better overall than the steady-state methods. Furthermore, the transient method requires far less time to implement and provides far better resolution in the data. However, the data acquisition and handling problem is more complex for this method. It is concluded that the transient test and admittance matrix prediction method can be a valuable tool for the development of payload vibration tests.
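The admittance-combination step has a compact form in the simplest case: for rigid coupling at a single interface degree of freedom, dynamic stiffnesses add, so the coupled receptance is the "parallel" combination of the component receptances. The 1-DOF substructures and parameter values below are invented to illustrate that identity; the paper's matrix formulation generalizes it to many interface points.

```python
import numpy as np

def receptance_1dof(omega, m, k, c):
    """Driving-point receptance of a 1-DOF system."""
    return 1.0 / (k - m * omega**2 + 1j * c * omega)

def coupled_receptance(Ha, Hb):
    """Rigid coupling at a single interface DOF: dynamic stiffnesses
    add, so the coupled admittance is the parallel combination."""
    return 1.0 / (1.0 / Ha + 1.0 / Hb)

omega = np.linspace(1.0, 100.0, 500)
Ha = receptance_1dof(omega, m=1.0, k=400.0, c=0.5)    # payload-like model
Hb = receptance_1dof(omega, m=2.0, k=3000.0, c=1.0)   # vehicle-like model
Hc = coupled_receptance(Ha, Hb)
```

For these 1-DOF components the coupled result is exactly the receptance of the summed mass, stiffness, and damping, which is a convenient check on the coupling formula.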

  2. Nonplanar Method for Predicting Incompressible Aerodynamic Coefficients of Rectangular Wings with Circular-Arc Camber. Ph.D. Thesis - Virginia Polytechnic Institute

    NASA Technical Reports Server (NTRS)

    Lamar, J. E.

    1971-01-01

    The development of a nonplanar lifting surface method having a continuous distribution of singularities and satisfying the tangent flow boundary condition on the mean camber surface is given. The method predicts some incompressible longitudinal aerodynamic coefficients of rectangular wings which have circular-arc camber. The solution method is of the integral-equation type, and the resulting surface integrals are evaluated using either numerical or analytical techniques, as appropriate. Applications are made and the results compared with those from an exact two-dimensional circular-arc camber solution, a three-dimensional flat-wing solution which represents the camber by a projected slope onto the flat surface, and a flat-wing experiment. From these comparisons, the present method is found to predict the flat-wing experiment and limiting values well, in addition to the center of pressure variation at an angle of attack of zero for any camber. For wings having camber ratios larger than about 1.25% and moderate to high aspect ratios, the results deteriorate owing to the inadequacy of the lifting pressure modes employed.
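The two-dimensional limit used for comparison can be illustrated with thin-airfoil theory for a parabolic camber line, the standard shallow approximation to a circular arc: cl = 2*pi*alpha + 4*pi*(h/c), with zero-lift angle alpha_L0 = -2 h/c. This is a textbook sketch of the limiting result, not the paper's nonplanar method.

```python
import math

def cl_thin_airfoil_parabolic(alpha_rad, camber_ratio):
    """Thin-airfoil lift coefficient for a parabolic-arc camber line
    z/c = 4(h/c)(x/c)(1 - x/c), a standard shallow approximation to a
    circular arc: cl = 2*pi*alpha + 4*pi*(h/c)."""
    return 2 * math.pi * alpha_rad + 4 * math.pi * camber_ratio

def zero_lift_angle(camber_ratio):
    """Zero-lift angle of attack in radians: alpha_L0 = -2 h/c."""
    return -2 * camber_ratio
```

At the 1.25% camber ratio mentioned in the abstract, the camber contribution alone is 4*pi*0.0125, about 0.157 in cl, at zero angle of attack.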

  3. Proxy-SU(3) symmetry in heavy deformed nuclei

    NASA Astrophysics Data System (ADS)

    Bonatsos, Dennis; Assimakis, I. E.; Minkov, N.; Martinou, Andriana; Cakirli, R. B.; Casten, R. F.; Blaum, K.

    2017-06-01

    Background: Microscopic calculations of heavy nuclei face considerable difficulties due to the sizes of the matrices that need to be solved. Various approximation schemes have been invoked, for example by truncating the spaces, imposing seniority limits, or appealing to various symmetry schemes such as pseudo-SU(3). This paper proposes a new symmetry scheme also based on SU(3). This proxy-SU(3) can be applied to well-deformed nuclei, is simple to use, and can yield analytic predictions. Purpose: To present the new scheme and its microscopic motivation, and to test it using a Nilsson model calculation with the original shell model orbits and with the new proxy set. Method: We invoke an approximate, analytic, treatment of the Nilsson model, that allows the above vetting and yet is also transparent in understanding the approximations involved in the new proxy-SU(3). Results: It is found that the new scheme yields a Nilsson diagram for well-deformed nuclei that is very close to the original Nilsson diagram. The specific levels of approximation in the new scheme are also shown, for each major shell. Conclusions: The new proxy-SU(3) scheme is a good approximation to the full set of orbits in a major shell. Being able to replace a complex shell model calculation with a symmetry-based description now opens up the possibility to predict many properties of nuclei analytically and often in a parameter-free way. The new scheme works best for heavier nuclei, precisely where full microscopic calculations are most challenged. Some cases in which the new scheme can be used, often analytically, to make specific predictions, are shown in a subsequent paper.

  4. Analytical modeling and feasibility study of a multi-GPU cloud-based server (MGCS) framework for non-voxel-based dose calculations.

    PubMed

    Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A

    2017-04-01

    In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for parallelization and acceleration of computationally and time-intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate theoretical MGCS performance acceleration and intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 Gigabit per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation times matched experimental observations within 1-5%. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences by distributing the work among several processes and implementing optimization strategies. The results showed that a cloud-based computation engine is a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods to many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.
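A first-order performance model of the kind the paper describes can be sketched as compute time divided among GPUs plus a fixed communication cost over the interconnect; the functional form, parameter names, and values below are assumptions for illustration, not the paper's actual model.

```python
def predicted_time(work, gpus, t_compute_per_unit, t_comm_fixed):
    """Hypothetical first-order model: compute scales down with the
    number of GPUs while a fixed communication/transfer cost over the
    interconnect does not."""
    return work * t_compute_per_unit / gpus + t_comm_fixed

def speedup(work, gpus, t_compute_per_unit, t_comm_fixed):
    """Acceleration relative to a single GPU under the same model."""
    t1 = predicted_time(work, 1, t_compute_per_unit, t_comm_fixed)
    tn = predicted_time(work, gpus, t_compute_per_unit, t_comm_fixed)
    return t1 / tn
```

The model captures the behavior stated in the abstract: when the communication term is negligible relative to compute, the speedup approaches the GPU count (14 in the reported setup), and it falls below that limit as transfers start to dominate.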

  5. Immunogenicity of therapeutics: a matter of efficacy and safety.

    PubMed

    Nechansky, Andreas; Kircheis, Ralf

    2010-11-01

    The unwanted immunogenicity of therapeutic proteins is a major concern regarding patient safety. Furthermore, pharmacokinetics, pharmacodynamics and clinical efficacy can be seriously affected by the immunogenicity of therapeutic proteins. Authorities have fully recognized this issue and demand appropriate and well-characterized assays to detect anti-drug antibodies (ADAs). We provide an overview of the immunogenicity topic in general, the regulatory background, and insight into the underlying immunological mechanisms and the limited ability to predict clinical immunogenicity a priori. Furthermore, we comment on the analytical testing approach and the status quo of appropriate method validation. The review provides insight regarding the analytical approach that is expected by regulatory authorities overseeing immunogenicity testing requirements. Additionally, the factors influencing immunogenicity are summarized and key references regarding immunogenicity testing approaches and method validation are discussed. The unwanted immunogenicity of protein therapeutics is of major concern because of its potential to affect patient safety and drug efficacy. Analytical testing is sophisticated and requires more than one assay. Because immunogenicity in humans is hard to predict, assay development has to start in a timely fashion, and for clinical studies immunogenicity assay validation is mandatory prior to analyzing patient serum samples. Regarding ADAs, the question remains as to when such antibodies are regarded as clinically relevant and what levels, if any, are acceptable. In summary, the detection of ADAs should raise the awareness of the physician concerning patient safety and of the sponsor/manufacturer concerning the immunogenic potential of the drug product.

  6. Prediction of high temperature metal matrix composite ply properties

    NASA Technical Reports Server (NTRS)

    Caruso, J. J.; Chamis, C. C.

    1988-01-01

    The application of the finite element method (superelement technique) in conjunction with basic concepts from mechanics of materials theory is demonstrated to predict the thermomechanical behavior of high temperature metal matrix composites (HTMMC). The simulated behavior is used as a basis to establish characteristic properties of a unidirectional composite idealized as an equivalent homogeneous material. The ply properties predicted include thermal properties (thermal conductivities and thermal expansion coefficients) and mechanical properties (moduli and Poisson's ratio). These properties are compared with those predicted by a simplified, analytical composite micromechanics model. The predictive capabilities of the finite element method and the simplified model are illustrated through the simulation of the thermomechanical behavior of a P100-graphite/copper unidirectional composite at room temperature and near the matrix melting temperature. The advantage of the finite element analysis approach is its ability to more precisely represent the composite local geometry and hence capture the subtle effects that depend on it. The closed-form micromechanics model does a good job of representing the average behavior of the constituents to predict composite behavior.

  7. Active Control of Inlet Noise on the JT15D Turbofan Engine

    NASA Technical Reports Server (NTRS)

    Smith, Jerome P.; Hutcheson, Florence V.; Burdisso, Ricardo A.; Fuller, Chris R.

    1999-01-01

    This report presents the key results obtained by the Vibration and Acoustics Laboratories at Virginia Tech over the year from November 1997 to December 1998 on the Active Noise Control of Turbofan Engines research project funded by NASA Langley Research Center. The concept of implementing active noise control techniques with fuselage-mounted error sensors is investigated both analytically and experimentally. The analytical part of the project involves the continued development of an advanced modeling technique to provide prediction and design guidelines for application of active noise control techniques to large, realistic high bypass engines of the type on which active control methods are expected to be applied. Results from the advanced analytical model are presented that show the effectiveness of the control strategies, and the analytical results presented for fuselage error sensors show good agreement with the experimentally observed results and provide additional insight into the control phenomena. Additional analytical results are presented for active noise control used in conjunction with a wavenumber sensing technique. The experimental work is carried out on a running JT15D turbofan jet engine in a test stand at Virginia Tech. The control strategy used in these tests was the feedforward Filtered-X LMS algorithm. The control inputs were supplied by single and multiple circumferential arrays of acoustic sources equipped with neodymium iron cobalt magnets mounted upstream of the fan. The reference signal was obtained from an inlet mounted eddy current probe. The error signals were obtained from a number of pressure transducers flush-mounted in a simulated fuselage section mounted in the engine test cell. The active control methods are investigated when implemented with the control sources embedded within the acoustically absorptive material on a passively-lined inlet. 
The experimental results show that the combination of active control techniques with fuselage-mounted error sensors and passive control techniques is an effective means of reducing radiated noise from turbofan engines. Strategic selection of the location of the error transducers is shown to be effective for reducing the radiation towards particular directions in the farfield. An analytical model is used to predict the behavior of the control system and to guide the experimental design configurations, and the analytical results presented show good agreement with the experimentally observed results.
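The feedforward Filtered-X LMS controller used in these tests can be sketched for a single channel. Everything below is invented for illustration: the tonal signals, filter lengths, and step size are arbitrary, and the secondary-path estimate is taken to be exact, whereas in the engine tests it would be identified from measurements.

```python
import numpy as np

def fxlms(x, d, s_path, s_hat, n_taps=16, mu=0.01):
    """Minimal single-channel filtered-x LMS sketch: x is the
    reference (e.g. an inlet probe signal), d the disturbance at the
    error microphone, s_path the true secondary path and s_hat its
    estimate. Returns the residual error signal."""
    w = np.zeros(n_taps)               # adaptive control filter
    xbuf = np.zeros(n_taps)            # reference history
    fxbuf = np.zeros(n_taps)           # filtered-reference history
    sbuf = np.zeros(len(s_path))       # controller-output history
    e = np.zeros(len(x))
    for n in range(len(x)):
        xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
        y = w @ xbuf                   # anti-noise sample
        sbuf = np.roll(sbuf, 1); sbuf[0] = y
        e[n] = d[n] + s_path @ sbuf    # error sensed at the microphone
        xf = s_hat @ xbuf[:len(s_hat)]  # reference filtered through s_hat
        fxbuf = np.roll(fxbuf, 1); fxbuf[0] = xf
        w -= mu * e[n] * fxbuf         # LMS update on the control filter
    return e

# Tonal disturbance correlated with the reference, ideal path model.
n = 4000
t = np.arange(n)
x = np.sin(0.1 * t)
d = 0.8 * np.sin(0.1 * t + 0.3)
s = np.array([1.0, 0.4])               # assumed secondary path
e = fxlms(x, d, s_path=s, s_hat=s)
```

Filtering the reference through the secondary-path estimate is what distinguishes FXLMS from plain LMS: without it, the phase lag of the path between control source and error sensor destabilizes the update.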

  8. Towards accurate cosmological predictions for rapidly oscillating scalar fields as dark matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ureña-López, L. Arturo; Gonzalez-Morales, Alma X., E-mail: lurena@ugto.mx, E-mail: alma.gonzalez@fisica.ugto.mx

    2016-07-01

    As we are entering the era of precision cosmology, it is necessary to count on accurate cosmological predictions from any proposed model of dark matter. In this paper we present a novel approach to the cosmological evolution of scalar fields that eases their analytic and numerical analysis at the background and at the linear order of perturbations. The new method makes use of appropriate angular variables that simplify the writing of the equations of motion, and which also show that the usual field variables play a secondary role in the cosmological dynamics. We apply the method to a scalar field endowed with a quadratic potential and revisit its properties as dark matter. Some of the results known in the literature are recovered, and a better understanding of the physical properties of the model is provided. It is confirmed that there exists a Jeans wavenumber k_J, directly related to the suppression of linear perturbations at wavenumbers k > k_J, and which is verified to be k_J = a√(mH). We also discuss some semi-analytical results that are well satisfied by the full numerical solutions obtained from an amended version of the CMB code CLASS. Finally we draw some of the implications that this new treatment of the equations of motion may have in the prediction of cosmological observables from scalar field dark matter models.

  9. Ab Initio Anharmonic Analysis of Vibrational Spectra of Uracil Using the Numerical-Analytic Implementation of Operator Van Vleck Perturbation Theory.

    PubMed

    Krasnoshchekov, Sergey V; Vogt, Natalja; Stepanov, Nikolay F

    2015-06-25

    The numerical-analytic implementation of the operator version of the canonical Van Vleck second-order vibrational perturbation theory (CVPT2) is employed for a purely ab initio prediction and interpretation of the infrared (IR) and Raman anharmonic spectra of a medium-size molecule of the diketo tautomer of uracil (2,4(1H,3H)-pyrimidinedione), which has high biological importance as one of the four RNA nucleobases. A nonempirical, semidiagonal quartic potential energy surface (PES) expressed in normal coordinates was evaluated at the MP2/cc-pVTZ level of theory. The quality of the PES was improved by replacing the harmonic frequencies with the "best" estimated CCSD(T)-based values taken from the literature. The theoretical method is enhanced by an accurate treatment of multiple Fermi and Darling-Dennison resonances with evaluation of the corresponding resonance constants W and K (CVPT2+WK method). A prediction of the anharmonic frequencies as well as IR and Raman intensities was used for a detailed interpretation of the experimental spectra of uracil. Very good agreement between predicted and observed vibrational frequencies has been achieved (RMSD ∼4.5 cm⁻¹). The model employed gave a theoretically robust treatment of the multiple resonances in the 1680-1790 cm⁻¹ region. Our new analysis gives the most reliable reassignments of IR and Raman spectra of uracil available to date.

  10. Prediction of subsonic vortex shedding from forebodies with chines

    NASA Technical Reports Server (NTRS)

    Mendenhall, Michael R.; Lesieutre, Daniel J.

    1990-01-01

    An engineering prediction method and associated computer code VTXCHN to predict nose vortex shedding from circular and noncircular forebodies with sharp chine edges in subsonic flow at angles of attack and roll are presented. Axisymmetric bodies are represented by point sources and doublets, and noncircular cross sections are transformed to a circle by either analytical or numerical conformal transformations. The lee side vortex wake is modeled by discrete vortices in crossflow planes along the body; thus the three-dimensional steady flow problem is reduced to a two-dimensional, unsteady, separated flow problem for solution. Comparison of measured and predicted surface pressure distributions, flow field surveys, and aerodynamic characteristics are presented for noncircular bodies alone and forebodies with sharp chines.
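The analytical conformal transformations mentioned above can be illustrated with the classic Joukowski map, which sends the circle |zeta| = a onto a flat-plate slit from -2a to +2a, the simplest example of relating a sharp-edged cross section (edges playing the role of chines) to a circle. VTXCHN's actual transformations are profile-specific; this sketch is only the textbook special case.

```python
import numpy as np

# Joukowski transformation z = zeta + a**2 / zeta maps the circle
# |zeta| = a onto the segment [-2a, 2a] of the real axis, so the two
# sharp edges of the mapped section correspond to zeta = +-a.
a = 1.0
theta = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
zeta = a * np.exp(1j * theta)          # points on the circle
z = zeta + a**2 / zeta                 # mapped cross-section boundary
```

Working in the circle plane is what makes the vortex-tracking tractable: image-vortex constructions that enforce the body boundary condition are simple for a circle and carry over to the physical section through the map.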

  11. A review of the analytical simulation of aircraft crash dynamics

    NASA Technical Reports Server (NTRS)

    Fasanella, Edwin L.; Carden, Huey D.; Boitnott, Richard L.; Hayduk, Robert J.

    1990-01-01

    A large number of full scale tests of general aviation aircraft, helicopters, and one unique air-to-ground controlled impact of a transport aircraft were performed. Additionally, research was conducted on seat dynamic performance, load-limiting seats, load-limiting subfloor designs, and emergency locator transmitters (ELTs). Computer programs were developed to provide designers with methods for predicting accelerations, velocities, and displacements of collapsing structure and for estimating the human response to crash loads. The results of full scale aircraft and component tests were used to verify and guide the development of analytical simulation tools and to demonstrate impact load attenuating concepts. Analytical simulation of metal and composite aircraft crash dynamics is addressed. Finite element models are examined to determine their degree of corroboration by experimental data and to reveal deficiencies requiring further development.

  12. Closed-form analytical solutions of high-temperature heat pipe startup and frozen startup limitation

    NASA Technical Reports Server (NTRS)

    Cao, Y.; Faghri, A.

    1992-01-01

    Previous numerical and experimental studies indicate that the high-temperature heat pipe startup process is characterized by a moving hot zone with relatively sharp fronts. Based on the above observation, a flat-front model for an approximate analytical solution is proposed. A closed-form solution related to the temperature distribution in the hot zone and the hot zone length as a function of time are obtained. The analytical results agree well with the corresponding experimental data, and provide a quick prediction method for the heat pipe startup performance. Finally, a heat pipe limitation related to the frozen startup process is identified, and an explicit criterion for the high-temperature heat pipe startup is derived. The frozen startup limit identified in this paper provides a fundamental guidance for high-temperature heat pipe design.

  13. Application of a voltammetric electronic tongue and near infrared spectroscopy for a rapid umami taste assessment.

    PubMed

    Bagnasco, Lucia; Cosulich, M Elisabetta; Speranza, Giovanna; Medini, Luca; Oliveri, Paolo; Lanteri, Silvia

    2014-08-15

    The relationships between sensory attributes and analytical measurements, performed by electronic tongue (ET) and near-infrared spectroscopy (NIRS), were investigated in order to develop a rapid method for the assessment of umami taste. Commercially available umami products and some amino acids were submitted to sensory analysis. The results were analysed in comparison with the outcomes of the analytical measurements. Multivariate exploratory analysis was performed by principal component analysis (PCA). Calibration models for prediction of the umami taste on the basis of ET and NIR signals were obtained using partial least squares (PLS) regression. Different approaches for merging data from the two analytical instruments were considered. Both techniques were shown to provide information related to umami taste. In particular, ET signals showed the higher correlation with the umami attribute. Data fusion was found to be slightly beneficial - not so significantly as to justify the coupled use of the two analytical techniques. Copyright © 2014 Elsevier Ltd. All rights reserved.
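Low-level data fusion with a PLS calibration can be sketched as: autoscale each instrument's block, concatenate the blocks, and fit a PLS model on the fused matrix. Below is a bare-bones one-component PLS1 in NumPy with entirely mock ET and NIR data; it is a stand-in for, not a reproduction of, the paper's calibration models.

```python
import numpy as np

def pls1_one_component(X, y):
    """Single-latent-variable PLS1 (NIPALS form): weight vector from
    the covariance of X with y, scores, then regression through the
    score. Returns the regression vector and centering terms."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    w = Xc.T @ yc
    w /= np.linalg.norm(w)             # weight vector
    t = Xc @ w                         # scores
    q = (t @ yc) / (t @ t)             # y-loading
    return w * q, X.mean(axis=0), y.mean()

def pls1_predict(X, b, x_mean, y_mean):
    return (X - x_mean) @ b + y_mean

# Mock instrument blocks and a sensory score driven by two channels.
rng = np.random.default_rng(1)
et = rng.standard_normal((30, 5))      # mock ET signals
nir = rng.standard_normal((30, 8))     # mock NIR spectra
y = 2.0 * et[:, 0] + 0.5 * nir[:, 1] + 0.05 * rng.standard_normal(30)

# Low-level fusion: autoscale each block, then concatenate.
fused = np.hstack([(blk - blk.mean(0)) / blk.std(0) for blk in (et, nir)])
b, xm, ym = pls1_one_component(fused, y)
yhat = pls1_predict(fused, b, xm, ym)
```

Block-wise autoscaling before concatenation prevents the instrument with the larger numeric range from dominating the fused model, which is the usual pitfall of low-level fusion.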

  14. Review and assessment of the HOST turbine heat transfer program

    NASA Technical Reports Server (NTRS)

    Gladden, Herbert J.

    1988-01-01

    The objectives of the HOST Turbine Heat Transfer subproject were to obtain a better understanding of the physics of the aerothermodynamic phenomena occurring in high-performance gas turbine engines and to assess and improve the analytical methods used to predict the fluid dynamics and heat transfer phenomena. At the time the HOST project was initiated, an across-the-board improvement in turbine design technology was needed. Therefore, a building-block approach was utilized, with research ranging from the study of fundamental phenomena and analytical modeling to experiments in simulated real-engine environments. Experimental research accounted for 75 percent of the project, and analytical efforts accounted for approximately 25 percent. Extensive experimental datasets were created depicting the three-dimensional flow field, high free-stream turbulence, boundary-layer transition, blade tip region heat transfer, film cooling effects in a simulated engine environment, rough-wall cooling enhancement in a rotating passage, and rotor-stator interaction effects. In addition, analytical modeling of these phenomena was initiated using boundary-layer assumptions as well as Navier-Stokes solutions.

  15. Discordance between net analyte signal theory and practical multivariate calibration.

    PubMed

    Brown, Christopher D

    2004-08-01

    Lorber's concept of net analyte signal is reviewed in the context of classical and inverse least-squares approaches to multivariate calibration. It is shown that, in the presence of device measurement error, the classical and inverse calibration procedures have radically different theoretical prediction objectives, and the assertion that the popular inverse least-squares procedures (including partial least squares, principal components regression) approximate Lorber's net analyte signal vector in the limit is disproved. Exact theoretical expressions for the prediction error bias, variance, and mean-squared error are given under general measurement error conditions, which reinforce the very discrepant behavior between these two predictive approaches, and Lorber's net analyte signal theory. Implications for multivariate figures of merit and numerous recently proposed preprocessing treatments involving orthogonal projections are also discussed.
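Lorber's net analyte signal has a compact linear-algebra form: the part of the analyte's pure spectrum orthogonal to the subspace spanned by the other components' spectra, nas = (I - S S⁺) s_k. The Gaussian band shapes below are invented purely to exercise the projection.

```python
import numpy as np

def net_analyte_signal(s_k, S_others):
    """Net analyte signal: remove from the analyte spectrum s_k its
    projection onto the column space of the interferent spectra."""
    P = S_others @ np.linalg.pinv(S_others)   # projector onto interferents
    return s_k - P @ s_k

wav = np.linspace(0, 1, 50)
s_k = np.exp(-((wav - 0.5) / 0.1) ** 2)       # analyte band
s_i = np.exp(-((wav - 0.3) / 0.1) ** 2)       # overlapping interferent
nas = net_analyte_signal(s_k, s_i[:, None])
```

The resulting vector is orthogonal to every interferent spectrum and shorter than the original analyte spectrum; its norm relative to the noise level is what enters figures of merit such as selectivity and limit of detection.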

  16. Light aircraft lift, drag, and moment prediction: A review and analysis

    NASA Technical Reports Server (NTRS)

    Smetana, F. O.; Summey, D. C.; Smith, N. S.; Carden, R. K.

    1975-01-01

    The historical development of analytical methods for predicting the lift, drag, and pitching moment of complete light aircraft configurations in cruising flight is reviewed. Theoretical methods, based in part on techniques described in the literature and in part on original work, are developed. These methods form the basis for understanding the computer programs given to: (1) compute the lift, drag, and moment of conventional airfoils, (2) extend these two-dimensional characteristics to three dimensions for moderate-to-high aspect ratio unswept wings, (3) plot complete configurations, (4) convert the fuselage geometric data to the correct input format, (5) compute the fuselage lift and drag, (6) compute the lift and moment of symmetrical airfoils to M = 1.0 by a simplified semi-empirical procedure, and (7) compute, in closed form, the pressure distribution over a prolate spheroid at alpha = 0. Comparisons of the predictions with experiment indicate excellent lift and drag agreement for conventional airfoils and wings. Limited comparisons of body-alone drag characteristics yield reasonable agreement. Also included are discussions for interference effects and techniques for summing the results above to obtain predictions for complete configurations.
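Step (2) above, extending two-dimensional airfoil characteristics to a finite unswept wing, classically uses the lifting-line correction to the lift-curve slope; the span-efficiency value and aspect ratio below are illustrative choices, not figures from the report.

```python
import math

def lift_curve_slope(a0, aspect_ratio, e=0.9):
    """Classical finite-wing correction applied when extending 2-D
    airfoil data to a moderate-to-high aspect ratio unswept wing:
    a = a0 / (1 + a0 / (pi * e * AR)), per lifting-line theory."""
    return a0 / (1.0 + a0 / (math.pi * e * aspect_ratio))

a0 = 2 * math.pi                       # 2-D thin-airfoil slope, per radian
a = lift_curve_slope(a0, aspect_ratio=7.0)
```

The correction always reduces the slope below its two-dimensional value and recovers it in the infinite-aspect-ratio limit, which is the behavior light-aircraft prediction methods rely on.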

  17. Predictive analysis of beer quality by correlating sensory evaluation with higher alcohol and ester production using multivariate statistics methods.

    PubMed

    Dong, Jian-Jun; Li, Qing-Liang; Yin, Hua; Zhong, Cheng; Hao, Jun-Guang; Yang, Pan-Fei; Tian, Yu-Hong; Jia, Shi-Ru

    2014-10-15

    Sensory evaluation is regarded as a necessary procedure to ensure a reproducible quality of beer. Meanwhile, high-throughput analytical methods provide a powerful tool to analyse various flavour compounds, such as higher alcohols and esters. In this study, the relationship between flavour compounds and sensory evaluation was established by non-linear models such as partial least squares (PLS), genetic algorithm back-propagation neural network (GA-BP), and support vector machine (SVM). It was shown that SVM with a radial basis function (RBF) kernel achieved better prediction accuracy for both the calibration set (94.3%) and the validation set (96.2%) than the other models. Relatively lower prediction abilities were observed for GA-BP (52.1%) and PLS (31.7%). In addition, the choice of kernel function played an essential role in model training: with a polynomial kernel, the prediction accuracy of SVM fell to 32.9%. As a powerful multivariate statistics method, SVM holds great potential to assess beer quality. Copyright © 2014 Elsevier Ltd. All rights reserved.
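    An SVM with an RBF kernel for this kind of composition-to-score regression can be sketched with scikit-learn (an assumption — the paper does not specify its software). The data below are synthetic stand-ins for flavour-compound concentrations and a sensory score; feature scaling before the RBF kernel is the standard practice shown here.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(1)

    # Hypothetical data: three flavour-compound concentrations (e.g. a higher
    # alcohol and two esters) as features, a non-linear sensory score as target.
    X = rng.uniform(0, 1, size=(120, 3))
    y = 3.0 * X[:, 0] - 2.0 * X[:, 1] ** 2 + 0.5 * np.sin(4 * X[:, 2])

    # RBF-kernel SVM regression; C and gamma would normally be tuned.
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, gamma="scale"))
    model.fit(X[:90], y[:90])              # calibration set
    r2 = model.score(X[90:], y[90:])       # validation set
    print(f"validation R^2 = {r2:.3f}")
    ```

    Swapping `kernel="rbf"` for `kernel="poly"` reproduces the kind of kernel-sensitivity comparison the study reports.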

  18. Comparison of baseline removal methods for laser-induced breakdown spectroscopy of geological samples

    NASA Astrophysics Data System (ADS)

    Dyar, M. Darby; Giguere, Stephen; Carey, CJ; Boucher, Thomas

    2016-12-01

    This project examines the causes, effects, and optimization of continuum removal in laser-induced breakdown spectroscopy (LIBS) to produce the best possible prediction accuracy of elemental composition in geological samples. We compare the prediction accuracy resulting from several different techniques for baseline removal, including asymmetric least squares (ALS), adaptive iteratively reweighted penalized least squares (Air-PLS), fully automatic baseline correction (FABC), continuous wavelet transformation, median filtering, polynomial fitting, the iterative thresholding Dietrich method, convex hull/rubber-band techniques, and a newly developed custom baseline removal (BLR) technique. We assess the predictive performance of these methods using partial least-squares analysis for 13 elements of geological interest, expressed as the weight percentages of SiO2, Al2O3, TiO2, FeO, MgO, CaO, Na2O, K2O, and the parts-per-million concentrations of Ni, Cr, Zn, Mn, and Co. We find that previously published methods for baseline subtraction generally produce equivalent prediction accuracies for major elements. When those pre-existing methods are used, automated optimization of their adjustable parameters is always necessary to wring the best predictive accuracy out of a data set; ideally, it should be done for each individual variable. The new BLR technique produces significant improvements in prediction accuracy over existing methods across varying geological data sets, instruments, and analytical conditions. These results also illustrate the dual objectives of the continuum-removal problem: removing a smooth underlying signal to fit individual peaks (univariate analysis) versus selecting only those channels that contribute to the best prediction accuracy (multivariate analysis). Overall, the current practice of using generalized, one-method-fits-all-spectra baseline removal results in poorer predictive performance for all methods. The extra steps needed to optimize baseline removal for each predicted variable, and so to supply multivariate techniques with the best possible input data, are well worth the slight increase in computation and complexity.
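    Of the baseline methods compared, asymmetric least squares is perhaps the most widely reimplemented; a minimal sketch after Eilers and Boelens is shown below on a synthetic LIBS-like spectrum. The smoothness (`lam`) and asymmetry (`p`) settings are illustrative defaults, not the tuned, per-variable values the study argues for.

    ```python
    import numpy as np
    from scipy import sparse
    from scipy.sparse.linalg import spsolve

    def als_baseline(y, lam=1e5, p=0.01, n_iter=10):
        """Asymmetric least squares baseline (after Eilers & Boelens).

        lam : second-difference smoothness penalty (larger -> smoother baseline)
        p   : asymmetry; small values make the baseline hug the lower envelope
        """
        L = len(y)
        D = sparse.diags([1, -2, 1], [0, -1, -2], shape=(L, L - 2))
        penalty = lam * (D @ D.T)
        w = np.ones(L)
        for _ in range(n_iter):
            W = sparse.diags(w)
            z = spsolve((W + penalty).tocsc(), w * y)   # weighted penalized fit
            w = p * (y > z) + (1 - p) * (y < z)         # down-weight points above baseline
        return z

    # Synthetic spectrum: two narrow emission peaks on a sloping continuum.
    x = np.linspace(0, 1, 500)
    continuum = 2.0 + 1.5 * x
    peaks = np.exp(-((x - 0.3) / 0.01) ** 2) + 0.7 * np.exp(-((x - 0.7) / 0.01) ** 2)
    spectrum = continuum + peaks
    corrected = spectrum - als_baseline(spectrum)
    ```

    The iteration alternates a smooth penalized fit with reweighting, so emission peaks are progressively excluded from the baseline estimate while the continuum is retained.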

  19. General design method for three-dimensional potential flow fields. 1: Theory

    NASA Technical Reports Server (NTRS)

    Stanitz, J. D.

    1980-01-01

    A general design method was developed for steady, three-dimensional, potential, incompressible or subsonic-compressible flow. In this design method, the flow field, including the shape of its boundary, was determined for arbitrarily specified, continuous distributions of velocity as a function of arc length along the boundary streamlines. The method applied to the design of both internal and external flow fields, including, in both cases, fields with planar symmetry. The analytic problems associated with stagnation points, closure of bodies in external flow fields, and prediction of turning angles in three-dimensional ducts were reviewed.
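    The stagnation-point and body-closure issues mentioned above already appear in the simplest potential-flow superposition. As a generic illustration (not the report's method), a uniform stream plus a 2-D point source gives a Rankine half-body whose stagnation point follows in closed form, and whose failure to close shows why closure demands zero net source strength; the stream and source strengths below are assumed values.

    ```python
    import math

    # Uniform stream U plus a 2-D point source of strength m: a Rankine half-body.
    U, m = 10.0, 5.0   # assumed free-stream speed and source strength

    # On the upstream axis the velocity is u = U + m / (2 pi x); u = 0 gives
    # the stagnation point of the dividing streamline.
    x_stag = -m / (2 * math.pi * U)

    # Downstream, the dividing streamline tends to a half-width h = m / (2 U):
    # the body never closes. Closure of a body in an external field requires
    # the net source strength inside it to vanish.
    h = m / (2 * U)
    print(f"stagnation point at x = {x_stag:.4f}, asymptotic half-width h = {h:.2f}")
    ```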

  20. PARAMO: A Parallel Predictive Modeling Platform for Healthcare Analytic Research using Electronic Health Records

    PubMed Central

    Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R.; Stewart, Walter F.; Malin, Bradley; Sun, Jimeng

    2014-01-01

    Objective Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: 1) cohort construction, 2) feature construction, 3) cross-validation, 4) feature selection, and 5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. Methods To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which 1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, 2) schedules the tasks in a topological ordering of the graph, and 3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. Results We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000-patient data set in 3 hours in parallel, compared to 9 days if running sequentially. Conclusion This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed up the research workflow and reuse of health information. This platform is only a first step and provides the foundation for our ultimate goal of building analytic pipelines that are specialized for health data researchers. PMID:24370496
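    The dependency-graph-plus-topological-scheduling pattern PARAMO describes can be sketched in a few lines with Python's standard library (a toy stand-in for the platform's Map-Reduce execution; the stage names mirror the pipeline above but are hypothetical).

    ```python
    import concurrent.futures
    from graphlib import TopologicalSorter

    # Hypothetical pipeline specification: each stage maps to the stages it
    # depends on, echoing cohort -> features -> CV -> per-model training.
    pipeline = {
        "cohort": [],
        "features": ["cohort"],
        "cv_split": ["features"],
        "model_svm": ["cv_split"],       # independent of model_logreg:
        "model_logreg": ["cv_split"],    # both can run in parallel
    }

    def run_stage(name):
        # Placeholder for real work (Map-Reduce jobs in PARAMO itself).
        return f"{name}: done"

    ts = TopologicalSorter(pipeline)
    ts.prepare()
    results = {}
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        while ts.is_active():
            ready = list(ts.get_ready())                 # all currently runnable stages
            futures = {pool.submit(run_stage, n): n for n in ready}
            for fut in concurrent.futures.as_completed(futures):
                name = futures[fut]
                results[name] = fut.result()
                ts.done(name)                            # unlocks dependent stages

    print(results)
    ```

    Scaling this idea out is exactly where the parallel gains come from: with hundreds of models sharing cohort and feature stages, independent leaves of the graph run concurrently instead of sequentially.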
