Sample records for validating analytical models

  1. Validation of the replica trick for simple models

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2018-04-01

    We discuss the replica analytic continuation using several simple models in order to prove mathematically the validity of the replica analysis, which is used in a wide range of fields related to large-scale complex systems. While replica analysis consists of two analytical techniques—the replica trick (or replica analytic continuation) and the thermodynamical limit (and/or order parameter expansion)—we focus our study on replica analytic continuation, which is the mathematical basis of the replica trick. We apply replica analysis to solve a variety of analytical models, and examine the properties of replica analytic continuation. Based on the positive results for these models, we propose that replica analytic continuation is a robust procedure in replica analysis.

  2. A Meta-Analytic Investigation of Fiedler's Contingency Model of Leadership Effectiveness.

    ERIC Educational Resources Information Center

    Strube, Michael J.; Garcia, Joseph E.

    According to Fiedler's Contingency Model of Leadership Effectiveness, group performance is a function of the leader-situation interaction. A review of past validations has found several problems associated with the model. Meta-analytic techniques were applied to the Contingency Model in order to assess the validation evidence quantitatively. The…

  3. Considerations regarding the validation of chromatographic mass spectrometric methods for the quantification of endogenous substances in forensics.

    PubMed

    Hess, Cornelius; Sydow, Konrad; Kueting, Theresa; Kraemer, Michael; Maas, Alexandra

    2018-02-01

    The requirement for correct evaluation of forensic toxicological results in daily routine work and scientific studies is reliable analytical data based on validated methods. Validation of a method gives the analyst tools to estimate the efficacy and reliability of the analytical method. Without validation, data might be contested in court and lead to unjustified legal consequences for a defendant. Therefore, new analytical methods to be used in forensic toxicology require careful method development and validation of the final method. To date, there have been no publications on the validation of chromatographic mass spectrometric methods for the detection of endogenous substances, although endogenous analytes can be important in forensic toxicology (alcohol consumption markers, congener alcohols, gamma-hydroxybutyric acid, human insulin and C-peptide, creatinine, postmortem clinical parameters). For these analytes, conventional validation instructions cannot be followed completely. In this paper, important practical considerations in analytical method validation for endogenous substances are discussed, which may serve as guidance for scientists wishing to develop and validate analytical methods for analytes produced naturally in the human body. In particular, the validation parameters calibration model, analytical limits, accuracy (bias and precision), matrix effects, and recovery have to be approached differently. The highest attention should be paid to selectivity experiments. Copyright © 2017 Elsevier B.V. All rights reserved.
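
Because a true blank matrix does not exist for endogenous analytes, calibration is often handled by techniques such as standard addition. As a hedged illustration of that general idea (the paper's own recommendations are not reproduced here, and all numbers are invented), a minimal sketch:

```python
# Standard-addition calibration sketch: estimate an endogenous analyte
# concentration when no analyte-free matrix exists.
# All data are illustrative, not taken from the paper.

def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

# Spiked concentrations added to aliquots of the same sample (ng/mL)
spikes = [0.0, 10.0, 20.0, 40.0]
# Instrument response (arbitrary units), linear in total concentration
signal = [12.0, 22.0, 32.0, 52.0]

slope, intercept = fit_line(spikes, signal)
# Extrapolating the fitted line back to zero signal gives the
# endogenous concentration already present in the sample:
endogenous = intercept / slope
print(round(endogenous, 1))  # 12.0 ng/mL with these made-up data
```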

  4. Analytical procedure validation and the quality by design paradigm.

    PubMed

    Rozet, Eric; Lebrun, Pierre; Michiels, Jean-François; Sondag, Perceval; Scherder, Tara; Boulanger, Bruno

    2015-01-01

    Since the adoption of the ICH Q8 document concerning the development of pharmaceutical processes following a quality by design (QbD) approach, there have been many discussions on the opportunity for analytical procedure development to follow a similar approach. While the development and optimization of analytical procedures following QbD principles have been widely discussed and described, the place of analytical procedure validation in this framework has not been clarified. This article aims to show that analytical procedure validation is fully integrated into the QbD paradigm and is an essential step in developing analytical procedures that are effectively fit for purpose. Adequate statistical methodologies also have their role to play, such as design of experiments, statistical modeling, and probabilistic statements. The outcome of analytical procedure validation is also an analytical procedure design space, from which a control strategy can be set.
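
The "probabilistic statements" mentioned above can be illustrated with a minimal Monte Carlo sketch: given an assumed bias and precision for a procedure, estimate the probability that a future measurement error falls within an acceptance limit. All numbers are illustrative assumptions, not values from the article:

```python
# Sketch of a fitness-for-purpose probabilistic statement:
# estimate P(|measurement error| <= acceptance limit) for a procedure
# with a given bias and intermediate precision. Numbers are invented.
import random

random.seed(1)

bias = 1.0        # systematic error (percent)
sd = 2.0          # intermediate precision (percent)
acceptance = 5.0  # acceptance limit lambda (percent)

n = 100_000
hits = sum(abs(random.gauss(bias, sd)) <= acceptance for _ in range(n))
prob = hits / n
print(prob)  # close to 0.98 for these settings
```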

  5. An analytic performance model of disk arrays and its application

    NASA Technical Reports Server (NTRS)

    Lee, Edward K.; Katz, Randy H.

    1991-01-01

    As disk arrays become widely used, tools for understanding and analyzing their performance become increasingly important. In particular, performance models can be invaluable in both configuring and designing disk arrays. Accurate analytic performance models are desirable over other types of models because they can be quickly evaluated, are applicable under a wide range of system and workload parameters, and can be manipulated by a range of mathematical techniques. Unfortunately, analytical performance models of disk arrays are difficult to formulate due to the presence of queuing and fork-join synchronization; a disk array request is broken up into independent disk requests which must all complete to satisfy the original request. We develop, validate, and apply an analytic performance model for disk arrays. We derive simple equations for approximating their utilization, response time, and throughput. We then validate the analytic model via simulation and investigate the accuracy of each approximation used in deriving the analytical model. Finally, we apply the analytical model to derive an equation for the optimal unit of data striping in disk arrays.
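
The fork-join effect described above can be sketched generically: a striped request finishes only when the slowest of its N disk sub-requests finishes. Assuming i.i.d. exponential service times (an assumption for illustration, not the paper's model), the expected maximum is the mean service time times the N-th harmonic number, which a small simulation can confirm:

```python
# Fork-join illustration: a striped request completes only when all N
# sub-requests finish. For i.i.d. exponential disk service times the
# expected maximum is mean * H_N (harmonic number), so striping wider
# raises per-request latency even as it raises parallelism.
# Generic sketch, not the paper's queueing model.
import random

random.seed(0)

def mean_fork_join(n_disks, mean_service, trials=50_000):
    total = 0.0
    for _ in range(trials):
        total += max(random.expovariate(1.0 / mean_service)
                     for _ in range(n_disks))
    return total / trials

n, mean_service = 4, 10.0
harmonic = sum(1.0 / k for k in range(1, n + 1))  # H_4 = 2.0833...
mc = mean_fork_join(n, mean_service)
print(round(mc, 1))                         # Monte Carlo estimate
print(round(mean_service * harmonic, 1))    # analytic value: 20.8
```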

  6. Multiple piezo-patch energy harvesters integrated to a thin plate with AC-DC conversion: analytical modeling and numerical validation

    NASA Astrophysics Data System (ADS)

    Aghakhani, Amirreza; Basdogan, Ipek; Erturk, Alper

    2016-04-01

    Plate-like components are widely used in numerous automotive, marine, and aerospace applications where they can be employed as host structures for vibration-based energy harvesting. Piezoelectric patch harvesters can be easily attached to these structures to convert vibrational energy into electrical energy. Power output investigations of these harvesters require accurate models for energy harvesting performance evaluation and optimization. Equivalent circuit modeling of cantilever-based vibration energy harvesters for estimation of electrical response has been proposed in recent years. However, equivalent circuit formulation and analytical modeling of multiple piezo-patch energy harvesters integrated to thin plates including nonlinear circuits has not been studied. In this study, an equivalent circuit model of multiple parallel piezoelectric patch harvesters together with a resistive load is built in the electronic circuit simulation software SPICE, and voltage frequency response functions (FRFs) are validated using the analytical distributed-parameter model. An analytical formulation of the piezoelectric patches in parallel configuration for the DC voltage output is derived while the patches are connected to a standard AC-DC circuit. The analytic model is based on the equivalent load impedance approach for the piezoelectric capacitance and AC-DC circuit elements. The analytic results are validated numerically via SPICE simulations. Finally, DC power outputs of the harvesters are computed and compared with the peak power amplitudes in the AC output case.

  7. Determining passive cooling limits in CPV using an analytical thermal model

    NASA Astrophysics Data System (ADS)

    Gualdi, Federico; Arenas, Osvaldo; Vossier, Alexis; Dollet, Alain; Aimez, Vincent; Arès, Richard

    2013-09-01

    We propose an original thermal analytical model aiming to predict the practical limits of passive cooling systems for high concentration photovoltaic modules. The analytical model is described and validated by comparison with a commercial 3D finite element model. The limiting performances of flat plate cooling systems in natural convection are then derived and discussed.
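
For a rough sense of what such a passive-cooling limit calculation involves, the sketch below solves the fixed point ΔT = Q/(h(ΔT)·A) using a simplified textbook natural-convection correlation for a vertical plate in air; the correlation choice and every number are assumptions for illustration, not the authors' model:

```python
# Rough passive-cooling sketch: fixed-point solve of
#   dT = Q / (h(dT) * A)
# with the simplified textbook air correlation for a vertical plate,
#   h = 1.42 * (dT / L)**0.25  [W/m^2.K].
# Correlation choice and all numbers are illustrative assumptions.

def plate_temp_rise(q_watts, area_m2, length_m, iters=50):
    dt = 10.0  # initial guess for the temperature rise, K
    for _ in range(iters):
        h = 1.42 * (dt / length_m) ** 0.25  # convection coefficient
        dt = q_watts / (h * area_m2)        # contraction: converges
    return dt

# 20 W of waste heat rejected from one face of a 0.2 m x 0.2 m plate
rise = plate_temp_rise(20.0, 0.04, 0.2)
print(round(rise, 1))  # steady temperature rise above ambient, K
```

With these assumed values the rise is large, which is the point the paper's model makes quantitatively: flat-plate natural convection imposes a hard limit on passively cooled concentration levels.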

  8. The use of analytical models in human-computer interface design

    NASA Technical Reports Server (NTRS)

    Gugerty, Leo

    1991-01-01

    Some of the many analytical models in human-computer interface design that are currently being developed are described. The usefulness of analytical models for human-computer interface design is evaluated. Can the use of analytical models be recommended to interface designers? The answer, based on the empirical research summarized here, is: not at this time. There are too many unanswered questions concerning the validity of models and their ability to meet the practical needs of design organizations.

  9. Mechanisms of chemical vapor generation by aqueous tetrahydridoborate. Recent developments toward the definition of a more general reaction model

    NASA Astrophysics Data System (ADS)

    D'Ulivo, Alessandro

    2016-05-01

    A reaction model describing the reactivity of metal and semimetal species with aqueous tetrahydridoborate (THB) has been drawn up taking into account the mechanism of chemical vapor generation (CVG) of hydrides, recent evidence on the mechanism of interference and the formation of byproducts in arsane generation, and other evidence from the synthesis of nanoparticles and the catalytic hydrolysis of THB by metal nanoparticles. The new "non-analytical" reaction model is of more general validity than the previously described "analytical" reaction model for CVG. The non-analytical model is valid for the reaction of a single analyte with THB and for conditions approaching those typically encountered in the synthesis of nanoparticles and macroprecipitates. It reduces to the previously proposed analytical model under conditions typically employed in CVG for trace analysis (analyte below the μM level, borane/analyte ≫ 10³ mol/mol, no interference). The non-analytical reaction model is not able to explain all the interference effects observed in CVG, which can be achieved only by assuming interaction among the species of the reaction pathways of different analytical substrates. The reunification of CVG, the synthesis of nanoparticles by aqueous THB, and the catalytic hydrolysis of THB within a common framework contributes to the rationalization of the complex reactivity of aqueous THB with metal and semimetal species.

  10. Validation of urban freeway models.

    DOT National Transportation Integrated Search

    2015-01-01

    This report describes the methodology, data, conclusions, and enhanced models regarding the validation of two sets of models developed in the Strategic Highway Research Program 2 (SHRP 2) Reliability Project L03, Analytical Procedures for Determining...

  11. Verification of an Analytical Method for Measuring Crystal Nucleation Rates in Glasses from DTA Data

    NASA Technical Reports Server (NTRS)

    Ranasinghe, K. S.; Wei, P. F.; Kelton, K. F.; Ray, C. S.; Day, D. E.

    2004-01-01

    A recently proposed analytical differential thermal analysis (DTA) method for estimating the nucleation rates in glasses has been evaluated by comparing experimental data with numerically computed nucleation rates for a model lithium disilicate glass. The time- and temperature-dependent nucleation rates were predicted using the model and compared with those values from an analysis of numerically calculated DTA curves. The validity of the numerical approach was demonstrated earlier by a comparison with experimental data. The excellent agreement between the nucleation rates from the model calculations and from the computer-generated DTA data demonstrates the validity of the proposed analytical DTA method.

  12. Analytic Modeling of Pressurization and Cryogenic Propellant Conditions for Lunar Landing Vehicle

    NASA Technical Reports Server (NTRS)

    Corpening, Jeremy

    2010-01-01

    This slide presentation reviews the development, validation, and application of the model to the Lunar Landing Vehicle. The model, the Computational Propellant and Pressurization Program -- One Dimensional (CPPPO), is used here to model cryogenic propellant conditions for the Altair lunar lander. The validation of CPPPO was accomplished via comparison to an existing analytic model (ROCETS), a flight experiment, and ground experiments. The model was used on the Lunar Landing Vehicle to perform a parametric analysis of pressurant conditions and to examine the results of unequal tank pressurization and draining for multiple tank designs.

  13. Sample Size and Power Estimates for a Confirmatory Factor Analytic Model in Exercise and Sport: A Monte Carlo Approach

    ERIC Educational Resources Information Center

    Myers, Nicholas D.; Ahn, Soyeon; Jin, Ying

    2011-01-01

    Monte Carlo methods can be used in data analytic situations (e.g., validity studies) to make decisions about sample size and to estimate power. The purpose of using Monte Carlo methods in a validity study is to improve the methodological approach within a study where the primary focus is on construct validity issues and not on advancing…
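
The general Monte Carlo logic (simulate data under an assumed effect, count rejections) can be sketched outside the CFA context; here, as an assumed stand-in, the power of a simple two-sample z-test:

```python
# Generic Monte Carlo power sketch (not the article's CFA setup):
# estimate the power of a two-sample z-test with known sd = 1, for a
# standardized effect size d and per-group sample size n.
import random, math

random.seed(2)

def mc_power(d, n, crit_z=1.96, reps=20_000):
    rejections = 0
    for _ in range(reps):
        g0 = [random.gauss(0.0, 1.0) for _ in range(n)]   # control group
        g1 = [random.gauss(d, 1.0) for _ in range(n)]     # treated group
        z = (sum(g1) / n - sum(g0) / n) / math.sqrt(2.0 / n)
        rejections += abs(z) > crit_z
    return rejections / reps

power = mc_power(0.5, 64)
print(round(power, 2))  # near the analytic value of about 0.81
```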

  14. Experimental Validation of Lightning-Induced Electromagnetic (Indirect) Coupling to Short Monopole Antennas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crull, E W; Brown Jr., C G; Perkins, M P

    2008-07-30

    For short monopoles in this low-power case, it has been shown that a simple circuit model is capable of accurate predictions of the shape and magnitude of the antenna response to lightning-generated electric field coupling effects, provided that the elements of the circuit model have accurate values. Numerical EM simulation can be used to provide more accurate values for the circuit elements than the simple analytical formulas, since the analytical formulas are used outside their region of validity. However, even with the approximate analytical formulas the simple circuit model produces reasonable results, which would improve if more accurate analytical models were used. This report discusses the coupling analysis approaches taken to understand the interaction between a time-varying EM field and a short monopole antenna, within the context of lightning safety for nuclear weapons at DOE facilities. It describes the validation of a simple circuit model through a laboratory study undertaken to understand the indirect coupling of energy into a part and the resulting voltage. Results show that in this low-power case, the circuit model predicts peak voltages within approximately 32% using circuit component values obtained from analytical formulas, and within about 13% using circuit component values obtained from numerical EM simulation. We note that the analytical formulas are used outside their region of validity. First, the antenna is insulated rather than a bare wire, and there may be fringing field effects near the termination of the outer conductor that the formula does not take into account. Also, the effective height formula is for a monopole directly over a ground plane, while in the time-domain measurement setup the monopole is elevated above the ground plane by about 1.5 inches (refer to Figure 5).

  15. Method validation using weighted linear regression models for quantification of UV filters in water samples.

    PubMed

    da Silva, Claudia Pereira; Emídio, Elissandro Soares; de Marchi, Mary Rosa Rodrigues

    2015-01-01

    This paper describes the validation of a method consisting of solid-phase extraction followed by gas chromatography-tandem mass spectrometry for the analysis of the ultraviolet (UV) filters benzophenone-3, ethylhexyl salicylate, ethylhexyl methoxycinnamate and octocrylene. The method validation criteria included evaluation of selectivity, analytical curve, trueness, precision, limits of detection and limits of quantification. The non-weighted linear regression model has traditionally been used for calibration, but it is not necessarily the optimal model in all cases. Because the assumption of homoscedasticity was not met for the analytical data in this work, a weighted least squares linear regression was used for the calibration method. The evaluated analytical parameters were satisfactory for the analytes and showed recoveries at four fortification levels between 62% and 107%, with relative standard deviations less than 14%. The detection limits ranged from 7.6 to 24.1 ng L(-1). The proposed method was used to determine the amount of UV filters in water samples from water treatment plants in Araraquara and Jau in São Paulo, Brazil. Copyright © 2014 Elsevier B.V. All rights reserved.
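
A weighted least-squares calibration of the kind described can be sketched as follows; the 1/x² weighting and all data are illustrative assumptions, not the paper's values:

```python
# Weighted least-squares calibration sketch for heteroscedastic data,
# using 1/x^2 weights (a common choice when relative error is roughly
# constant; the paper's exact weighting scheme is not assumed here).
# All data are invented for illustration.

def wls_fit(x, y, w):
    """Weighted least-squares slope and intercept."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw   # weighted mean x
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw   # weighted mean y
    slope = (sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
             / sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)))
    return slope, my - slope * mx

conc = [10.0, 50.0, 100.0, 500.0, 1000.0]   # calibration levels, ng/L
resp = [0.11, 0.49, 1.02, 5.1, 9.8]         # detector response
weights = [1.0 / c ** 2 for c in conc]      # down-weight high levels

slope, intercept = wls_fit(conc, resp, weights)
unknown = (0.75 - intercept) / slope        # back-calculate a sample
print(round(slope, 4), round(unknown, 1))
```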

  16. Validation of urban freeway models. [supporting datasets

    DOT National Transportation Integrated Search

    2015-01-01

    The goal of the SHRP 2 Project L33 Validation of Urban Freeway Models was to assess and enhance the predictive travel time reliability models developed in the SHRP 2 Project L03, Analytic Procedures for Determining the Impacts of Reliability Mitigati...

  17. Modelling by partial least squares the relationship between the HPLC mobile phases and analytes on phenyl column.

    PubMed

    Markopoulou, Catherine K; Kouskoura, Maria G; Koundourellis, John E

    2011-06-01

    Twenty-five descriptors and 61 structurally different analytes have been used in a partial least squares (PLS) projections to latent structures technique in order to study their interaction mechanism chromatographically on a phenyl column. According to the model, 240 different retention times of the analytes, expressed as the Y variable (log k) at different % MeOH mobile-phase concentrations, have been correlated with their theoretically most important structural or molecular descriptors. The goodness-of-fit was estimated by the coefficient of multiple determination r(2) (0.919) and the root mean square error of estimation (RMSEE=0.1283), with a predictive ability (Q(2)) of 0.901. The model was further validated using cross-validation (CV), by 20 response permutations (r(2) (0.0, 0.0146), Q(2) (0.0, -0.136)), and by external prediction. The contribution of certain interaction mechanisms between the analytes, the mobile phase and the column, proportional or counterbalancing, is also studied. To evaluate the influence on Y of every variable in the PLS model, a VIP (variable importance in the projection) plot provides evidence that lipophilicity (expressed as Log D, Log P), polarizability, refractivity and the eluting power of the mobile phase are dominant in the retention mechanism on a phenyl column. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
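
The r² versus Q² distinction reported above can be illustrated on a much simpler model: r² measures fit to the training data, while Q² is computed from leave-one-out prediction errors (PRESS) and, for ordinary least squares, is never larger than r². A one-variable sketch with invented data, not a PLS implementation:

```python
# r^2 (fit) vs Q^2 (leave-one-out predictive ability) on a simple
# one-variable least-squares model with invented data.
import random

random.seed(3)

def ols(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return b, my - b * mx

x = [float(i) for i in range(10)]
y = [2.0 * xi + random.gauss(0.0, 0.5) for xi in x]

my = sum(y) / len(y)
sstot = sum((yi - my) ** 2 for yi in y)

b, a = ols(x, y)
ssres = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
r2 = 1.0 - ssres / sstot

press = 0.0                     # leave-one-out prediction error sum
for i in range(len(x)):
    xt = [xj for j, xj in enumerate(x) if j != i]
    yt = [yj for j, yj in enumerate(y) if j != i]
    bi, ai = ols(xt, yt)
    press += (y[i] - (ai + bi * x[i])) ** 2
q2 = 1.0 - press / sstot

print(round(r2, 3), round(q2, 3))  # Q^2 is the more honest figure
```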

  18. Geographic and temporal validity of prediction models: Different approaches were useful to examine model performance

    PubMed Central

    Austin, Peter C.; van Klaveren, David; Vergouwe, Yvonne; Nieboer, Daan; Lee, Douglas S.; Steyerberg, Ewout W.

    2017-01-01

    Objective: Validation of clinical prediction models traditionally refers to the assessment of model performance in new patients. We studied different approaches to geographic and temporal validation in the setting of multicenter data from two time periods. Study Design and Setting: We illustrated different analytic methods for validation using a sample of 14,857 patients hospitalized with heart failure at 90 hospitals in two distinct time periods. Bootstrap resampling was used to assess internal validity. Meta-analytic methods were used to assess geographic transportability. Each hospital was used once as a validation sample, with the remaining hospitals used for model derivation. Hospital-specific estimates of discrimination (c-statistic) and calibration (calibration intercepts and slopes) were pooled using random effects meta-analysis methods. I² statistics and prediction interval width quantified geographic transportability. Temporal transportability was assessed using patients from the earlier period for model derivation and patients from the later period for model validation. Results: Estimates of reproducibility, pooled hospital-specific performance, and temporal transportability were on average very similar, with c-statistics of 0.75. Between-hospital variation was moderate according to I² statistics and prediction intervals for c-statistics. Conclusion: This study illustrates how performance of prediction models can be assessed in settings with multicenter data at different time periods. PMID:27262237
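
The pooling step described (random-effects meta-analysis of hospital-specific c-statistics, with I² quantifying heterogeneity) can be sketched with the DerSimonian-Laird estimator; the c-statistics and variances below are invented, not the study's data:

```python
# DerSimonian-Laird random-effects pooling of hypothetical
# hospital-specific c-statistics, with the I^2 heterogeneity statistic.
# Values are invented for illustration.

c = [0.70, 0.76, 0.73, 0.80, 0.75]               # per-hospital c-statistics
var = [0.0004, 0.0005, 0.0004, 0.0006, 0.0005]   # their sampling variances

w = [1.0 / v for v in var]                       # fixed-effect weights
fixed = sum(wi * ci for wi, ci in zip(w, c)) / sum(w)
q = sum(wi * (ci - fixed) ** 2 for wi, ci in zip(w, c))  # Cochran's Q
df = len(c) - 1

# Between-hospital variance (tau^2) and I^2, both truncated at zero
tau2 = max(0.0, (q - df) / (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))
i2 = max(0.0, (q - df) / q) if q > 0 else 0.0

w_re = [1.0 / (v + tau2) for v in var]           # random-effects weights
pooled = sum(wi * ci for wi, ci in zip(w_re, c)) / sum(w_re)
print(round(pooled, 3), round(i2, 2))
```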

  19. Steady-state analytical model of suspended p-type 3C-SiC bridges under consideration of Joule heating

    NASA Astrophysics Data System (ADS)

    Balakrishnan, Vivekananthan; Dinh, Toan; Phan, Hoang-Phuong; Kozeki, Takahiro; Namazu, Takahiro; Viet Dao, Dzung; Nguyen, Nam-Trung

    2017-07-01

    This paper reports an analytical model and its validation for a released microscale heater made of 3C-SiC thin films. A model for the equivalent electrical and thermal parameters was developed for the two-layer multi-segment heat and electric conduction. The model is based on a 1D energy equation, which considers the temperature-dependent resistivity and allows for the prediction of voltage-current and power-current characteristics of the microheater. The steady-state analytical model was validated by experimental characterization. The results, in particular the nonlinearity caused by temperature dependency, are in good agreement. The low power consumption of the order of 0.18 mW at approximately 310 K indicates the potential use of the structure as thermal sensors in portable applications.
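
The nonlinearity caused by temperature-dependent resistivity has a simple lumped-parameter analogue: with R(T) = R0(1 + α(T − T0)) and a lumped thermal resistance Rth, the steady-state self-heating has a closed form. This is a generic sketch under assumed values, not the paper's two-layer multi-segment model:

```python
# Lumped self-heating sketch: with R(T) = R0*(1 + alpha*(T - T0)) and
# thermal resistance Rth, the steady-state balance dT = I^2 * R(T) * Rth
# solves in closed form to
#   dT = I^2*R0*Rth / (1 - alpha*I^2*R0*Rth),
# so the V-I curve bends superlinearly with current. Values are invented.

def vi_point(current, r0=1000.0, alpha=0.002, rth=5000.0):
    x = current ** 2 * r0 * rth          # "cold" temperature rise, K
    dt = x / (1.0 - alpha * x)           # self-consistent rise, K
    resistance = r0 * (1.0 + alpha * dt) # hot resistance, ohm
    return resistance * current, dt

v, dt = vi_point(2e-3)  # drive with 2 mA
print(round(v, 4), round(dt, 2))
```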

  20. Chemical Sensor Array Response Modeling Using Quantitative Structure-Activity Relationships Technique

    NASA Astrophysics Data System (ADS)

    Shevade, Abhijit V.; Ryan, Margaret A.; Homer, Margie L.; Zhou, Hanying; Manfreda, Allison M.; Lara, Liana M.; Yen, Shiao-Pin S.; Jewell, April D.; Manatt, Kenneth S.; Kisor, Adam K.

    We have developed a Quantitative Structure-Activity Relationships (QSAR) based approach to correlate the response of chemical sensors in an array with molecular descriptors. A novel molecular descriptor set has been developed; this set combines descriptors of sensing film-analyte interactions, representing sensor response, with a basic analyte descriptor set commonly used in QSAR studies. The descriptors are obtained using a combination of molecular modeling tools and empirical and semi-empirical Quantitative Structure-Property Relationships (QSPR) methods. The sensors under investigation are polymer-carbon sensing films which have been exposed to analyte vapors at parts-per-million (ppm) concentrations; response is measured as change in film resistance. Statistically validated QSAR models have been developed using Genetic Function Approximation (GFA) for a sensor array for a given training data set. The applicability of the sensor response models has been tested by using them to predict the sensor activities for test analytes not considered in the training set for the model development. The validated QSAR sensor response models show good predictive ability. The QSAR approach is a promising computational tool for sensing materials evaluation and selection. It can also be used to predict the response of an existing sensing film to new target analytes.

  1. Analytical validation of an explicit finite element model of a rolling element bearing with a localised line spall

    NASA Astrophysics Data System (ADS)

    Singh, Sarabjeet; Howard, Carl Q.; Hansen, Colin H.; Köpke, Uwe G.

    2018-03-01

    In this paper, the numerically modelled vibration response of a rolling element bearing with a localised outer raceway line spall is presented. The results were obtained from a finite element (FE) model of the defective bearing solved using an explicit dynamics FE software package, LS-DYNA. Time domain vibration signals of the bearing obtained directly from the FE modelling were processed further to estimate time-frequency and frequency domain results, such as the spectrogram and power spectrum, using standard signal processing techniques pertinent to the vibration-based monitoring of rolling element bearings. A logical approach to analysis of the numerically modelled results was developed with the aim of presenting the analytical validation of the modelled results. While the time and frequency domain analyses of the results show that the FE model generates accurate bearing kinematics and defect frequencies, the time-frequency analysis highlights the simulation of distinct low- and high-frequency characteristic vibration signals associated with the unloading and reloading of the rolling elements as they move in and out of the defect, respectively. Favourable agreement of the numerical and analytical results demonstrates the validation of the results from the explicit FE modelling of the bearing.
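
The defect frequencies referred to above follow from standard bearing kinematics; for example, the ball-pass frequencies for outer- and inner-raceway defects. The geometry values below are invented for illustration:

```python
# Standard bearing characteristic defect frequencies: the spectral lines
# a vibration analysis looks for. Formulas are the usual kinematic ones;
# the bearing geometry values are invented for illustration.
import math

def bpfo(n_balls, shaft_hz, d_ball, d_pitch, contact_deg=0.0):
    """Ball-pass frequency, outer race (defect on outer raceway)."""
    return (n_balls / 2.0) * shaft_hz * (
        1.0 - (d_ball / d_pitch) * math.cos(math.radians(contact_deg)))

def bpfi(n_balls, shaft_hz, d_ball, d_pitch, contact_deg=0.0):
    """Ball-pass frequency, inner race (defect on inner raceway)."""
    return (n_balls / 2.0) * shaft_hz * (
        1.0 + (d_ball / d_pitch) * math.cos(math.radians(contact_deg)))

# 9 rolling elements, 25 Hz shaft speed, 8 mm balls, 40 mm pitch circle
print(round(bpfo(9, 25.0, 8.0, 40.0), 1))  # 90.0 Hz
print(round(bpfi(9, 25.0, 8.0, 40.0), 1))  # 135.0 Hz
```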

  2. Factors Affecting Higher Order Thinking Skills of Students: A Meta-Analytic Structural Equation Modeling Study

    ERIC Educational Resources Information Center

    Budsankom, Prayoonsri; Sawangboon, Tatsirin; Damrongpanit, Suntorapot; Chuensirimongkol, Jariya

    2015-01-01

    The purpose of the research is to develop and identify the validity of factors affecting higher order thinking skills (HOTS) of students. The thinking skills can be divided into three types: analytical, critical, and creative thinking. This analysis is done by applying the meta-analytic structural equation modeling (MASEM) based on a database of…

  3. Model for Atmospheric Propagation of Spatially Combined Laser Beams

    DTIC Science & Technology

    2016-09-01

    thesis modeling tools is discussed. In Chapter 6, the thesis validated the model with analytical computations and simulation results from... using the propagation model. Based on both the analytical computation and WaveTrain results, the diffraction effects simulated in the propagation model are... NAVAL POSTGRADUATE SCHOOL, MONTEREY, CALIFORNIA. THESIS: MODEL FOR ATMOSPHERIC PROPAGATION OF SPATIALLY COMBINED LASER BEAMS, by Kum Leong Lee

  4. Class-modelling in food analytical chemistry: Development, sampling, optimisation and validation issues - A tutorial.

    PubMed

    Oliveri, Paolo

    2017-08-22

    Qualitative data modelling is a fundamental branch of pattern recognition, with many applications in analytical chemistry, and embraces two main families: discriminant and class-modelling methods. The first strategy is appropriate when at least two classes are meaningfully defined in the problem under study, while the second strategy is the right choice when the focus is on a single class. For this reason, class-modelling methods are also referred to as one-class classifiers. Although, in the food analytical field, most of the issues would be properly addressed by class-modelling strategies, the use of such techniques is rather limited and, in many cases, discriminant methods are forcedly used for one-class problems, introducing a bias in the outcomes. Key aspects related to the development, optimisation and validation of suitable class models for the characterisation of food products are critically analysed and discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
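
The one-class idea can be reduced to a minimal sketch: fit an acceptance region from the target class alone, then classify new samples as in or out. This is the simplest possible class model (a centroid-distance rule with a quantile threshold), not a method from the tutorial, and the data are invented:

```python
# Minimal class-modelling sketch: accept a sample if its distance to the
# target-class centroid is within a threshold fitted on that class only.
# Discriminant methods would instead need a second, competing class.
import math

def fit_class_model(samples, quantile=0.95):
    dim = len(samples[0])
    centroid = [sum(s[i] for s in samples) / len(samples)
                for i in range(dim)]
    dists = sorted(math.dist(s, centroid) for s in samples)
    threshold = dists[min(len(dists) - 1, int(quantile * len(dists)))]
    return centroid, threshold

def accepts(model, sample):
    centroid, threshold = model
    return math.dist(sample, centroid) <= threshold

# Invented "authentic product" measurements (two features)
target = [(1.0, 1.0), (1.1, 0.9), (0.9, 1.2), (1.05, 1.0), (0.95, 1.1)]
model = fit_class_model(target)
print(accepts(model, (1.0, 1.05)))  # True: falls inside the class model
print(accepts(model, (3.0, 0.0)))   # False: rejected as non-compliant
```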

  5. Anisotropic Multishell Analytical Modeling of an Intervertebral Disk Subjected to Axial Compression.

    PubMed

    Demers, Sébastien; Nadeau, Sylvie; Bouzid, Abdel-Hakim

    2016-04-01

    Studies on intervertebral disk (IVD) response to various loads and postures are essential to understand the disk's mechanical functions and to suggest preventive and corrective actions in the workplace. The experimental and finite-element (FE) approaches are well-suited for these studies, but validating their findings is difficult, partly due to the lack of alternative methods. Analytical modeling could allow methodological triangulation and help validation of FE models. This paper presents an analytical method based on thin-shell, beam-on-elastic-foundation and composite materials theories to evaluate the stresses in the anulus fibrosus (AF) of an axisymmetric disk composed of multiple thin lamellae. Large deformations of the soft tissues are accounted for using an iterative method, and the anisotropic material properties are derived from a published biaxial experiment. The results are compared to those obtained by FE modeling. The results demonstrate the capability of the analytical model to evaluate the stresses at any location of the simplified AF. It also demonstrates that anisotropy reduces stresses in the lamellae. This novel model is a preliminary step in developing valuable analytical models of IVDs, and represents a distinctive groundwork that is able to sustain future refinements. This paper suggests important features that may be included to improve model realism.

  6. Analytical model for screening potential CO2 repositories

    USGS Publications Warehouse

    Okwen, R.T.; Stewart, M.T.; Cunningham, J.A.

    2011-01-01

    Assessing potential repositories for geologic sequestration of carbon dioxide using numerical models can be complicated, costly, and time-consuming, especially when faced with the challenge of selecting a repository from a multitude of potential repositories. This paper presents a set of simple analytical equations (model), based on the work of previous researchers, that could be used to evaluate the suitability of candidate repositories for subsurface sequestration of carbon dioxide. We considered the injection of carbon dioxide at a constant rate into a confined saline aquifer via a fully perforated vertical injection well. The validity of the analytical model was assessed via comparison with the TOUGH2 numerical model. The metrics used in comparing the two models include (1) spatial variations in formation pressure and (2) vertically integrated brine saturation profile. The analytical model and TOUGH2 show excellent agreement in their results when similar input conditions and assumptions are applied in both. The analytical model neglects capillary pressure and the pressure dependence of fluid properties. However, simulations in TOUGH2 indicate that little error is introduced by these simplifications. Sensitivity studies indicate that the agreement between the analytical model and TOUGH2 depends strongly on (1) the residual brine saturation, (2) the difference in density between carbon dioxide and resident brine (buoyancy), and (3) the relationship between relative permeability and brine saturation. The results achieved suggest that the analytical model is valid when the relationship between relative permeability and brine saturation is linear or quasi-linear and when the irreducible saturation of brine is zero or very small. © 2011 Springer Science+Business Media B.V.
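
The flavor of such closed-form screening equations can be illustrated with the classical single-phase analogue: Theis-type radial pressure buildup from constant-rate injection into a confined aquifer, using the small-argument series for the well function W(u). Parameter values are invented, and a real CO2 screening model (as in the paper) also treats two-phase effects:

```python
# Single-phase screening analogue (Theis-type solution):
#   head rise = Q / (4*pi*T) * W(u),   u = r^2 * S / (4 * T * t),
# where W(u) = E1(u) is the exponential-integral well function,
# evaluated here with its convergent small-u series. All parameter
# values are invented; real CO2 screening also handles two-phase flow.
import math

def well_function(u, terms=30):
    """W(u) = -gamma - ln(u) + sum_{n>=1} (-1)^(n+1) u^n / (n * n!)."""
    total = -0.5772156649 - math.log(u)
    power = 1.0
    for n in range(1, terms + 1):
        power *= u
        total += ((-1) ** (n + 1)) * power / (n * math.factorial(n))
    return total

Q = 0.05            # injection rate, m^3/s
T_hyd = 1e-3        # transmissivity, m^2/s
S = 1e-4            # storativity, dimensionless
r, t = 100.0, 86400.0 * 30   # 100 m from the well, after 30 days

u = r * r * S / (4.0 * T_hyd * t)
head_rise = Q / (4.0 * math.pi * T_hyd) * well_function(u)
print(round(u, 6), round(head_rise, 1))  # pressure head rise in metres
```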

  7. Statistically Qualified Neuro-Analytic System and Method for Process Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    1998-11-04

    An apparatus and method for monitoring a process involves the development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.

  8. Acoustic-Structure Interaction in Rocket Engines: Validation Testing

    NASA Technical Reports Server (NTRS)

    Davis, R. Benjamin; Joji, Scott S.; Parks, Russel A.; Brown, Andrew M.

    2009-01-01

    While analyzing a rocket engine component, it is often necessary to account for any effects that adjacent fluids (e.g., liquid fuels or oxidizers) might have on the structural dynamics of the component. To better characterize the fully coupled fluid-structure system responses, an analytical approach that models the system as a coupled expansion of rigid wall acoustic modes and in vacuo structural modes has been proposed. The present work seeks to experimentally validate this approach. To experimentally observe well-coupled system modes, the test article and fluid cavities are designed such that the uncoupled structural frequencies are comparable to the uncoupled acoustic frequencies. The test measures the natural frequencies, mode shapes, and forced response of cylindrical test articles in contact with fluid-filled cylindrical and/or annular cavities. The test article is excited with a stinger and the fluid-loaded response is acquired using a laser Doppler vibrometer. The experimentally determined fluid-loaded natural frequencies are compared directly to the results of the analytical model. Due to the geometric configuration of the test article, the analytical model is found to be valid for natural modes with circumferential wave numbers greater than four. In the case of these modes, the natural frequencies predicted by the analytical model demonstrate excellent agreement with the experimentally determined natural frequencies.

  9. High Fidelity Modeling of Field-Reversed Configuration (FRC) Thrusters (Briefing Charts)

    DTIC Science & Technology

    2017-05-24

    Briefing charts by Martin, Sousa, and Tran (AFRL/RQRS); Distribution A, approved for public release. Topics covered include verification and validation of FRC thruster models: verification by reducing asymptotic models to analytical solutions that yield exact convergence tests, the caution that converged mathematics can still produce irrelevant solutions outside the regions where model assumptions are valid, and validation against a fluids example (Stokes flow).

  10. Validation of a common data model for active safety surveillance research

    PubMed Central

    Ryan, Patrick B; Reich, Christian G; Hartzema, Abraham G; Stang, Paul E

    2011-01-01

    Objective Systematic analysis of observational medical databases for active safety surveillance is hindered by the variation in data models and coding systems. Data analysts often find robust clinical data models difficult to understand and ill suited to support their analytic approaches. Further, some models do not facilitate the computations required for systematic analysis across many interventions and outcomes for large datasets. Translating the data from these idiosyncratic data models to a common data model (CDM) could facilitate both the analysts' understanding and the suitability for large-scale systematic analysis. In addition to facilitating analysis, a suitable CDM has to faithfully represent the source observational database. Before beginning to use the Observational Medical Outcomes Partnership (OMOP) CDM and a related dictionary of standardized terminologies for a study of large-scale systematic active safety surveillance, the authors validated the model's suitability for this use by example. Validation by example To validate the OMOP CDM, the model was instantiated into a relational database, data from 10 different observational healthcare databases were loaded into separate instances, a comprehensive array of analytic methods that operate on the data model was created, and these methods were executed against the databases to measure performance. Conclusion There was acceptable representation of the data from 10 observational databases in the OMOP CDM using the standardized terminologies selected, and a range of analytic methods was developed and executed with sufficient performance to be useful for active safety surveillance. PMID:22037893
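
    As a toy illustration of the instantiate-load-analyze workflow described above, the sketch below builds a deliberately simplified, hypothetical two-table data model in SQLite (the table and column names are illustrative only, not the actual OMOP CDM DDL) and runs one analytic query against it:

```python
import sqlite3

# A deliberately simplified, hypothetical two-table sketch of a common data
# model (NOT the real OMOP schema): persons and drug exposures keyed to them.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE person (person_id INTEGER PRIMARY KEY, year_of_birth INTEGER);
    CREATE TABLE drug_exposure (
        exposure_id INTEGER PRIMARY KEY,
        person_id   INTEGER REFERENCES person(person_id),
        concept_id  INTEGER,          -- standardized terminology code
        start_date  TEXT
    );
""")
con.execute("INSERT INTO person VALUES (1, 1970), (2, 1985)")
con.execute("INSERT INTO drug_exposure VALUES (10, 1, 1125315, '2010-01-01')")

# One analytic method, written once against the model, runs unchanged
# against any database instance loaded into it.
n = con.execute("SELECT COUNT(DISTINCT person_id) FROM drug_exposure").fetchone()[0]
print(n)
```

    The point of a common data model is exactly this separation: the query is written once against the model and then reused across every observational database loaded into it.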

  11. Testing Crites' Model of Career Maturity: A Hierarchical Strategy.

    ERIC Educational Resources Information Center

    Wallbrown, Fred H.; And Others

    1986-01-01

    Investigated the construct validity of Crites' model of career maturity and the Career Maturity Inventory (CMI). Results from a nationwide sample of adolescents, using hierarchical factor analytic methodology, indicated confirmatory support for the multidimensionality of Crites' model of career maturity and the construct validity of the CMI as a…

  12. Semi-physiologic model validation and bioequivalence trials simulation to select the best analyte for acetylsalicylic acid.

    PubMed

    Cuesta-Gragera, Ana; Navarro-Fontestad, Carmen; Mangas-Sanjuan, Victor; González-Álvarez, Isabel; García-Arieta, Alfredo; Trocóniz, Iñaki F; Casabó, Vicente G; Bermejo, Marival

    2015-07-10

    The objective of this paper is to apply a previously developed semi-physiologic pharmacokinetic model implemented in NONMEM to simulate bioequivalence (BE) trials of acetylsalicylic acid (ASA), in order to validate the model performance against ASA human experimental data. ASA is a drug with first-pass hepatic and intestinal metabolism following Michaelis-Menten kinetics that leads to the formation of two main metabolites in two generations (first- and second-generation metabolites). The first aim was to adapt the semi-physiologic model for ASA in NONMEM using ASA pharmacokinetic parameters from the literature, reflecting its sequential metabolism. The second aim was to validate this model by comparing the results obtained in NONMEM simulations with published experimental data at a dose of 1000 mg. The validated model was used to simulate bioequivalence trials at 3 dose schemes (100, 1000 and 3000 mg) and with 6 test formulations with decreasing in vivo dissolution rate constants versus the reference formulation (kD 8-0.25 h^-1). Finally, the third aim was to determine which analyte (parent drug, first-generation or second-generation metabolite) was more sensitive to changes in formulation performance. The validation results showed that the concentration-time curves obtained with the simulations closely reproduced the published experimental data, confirming model performance. The parent drug (ASA) proved to be the analyte most sensitive to the decrease in pharmaceutical quality, with the highest decrease in Cmax and AUC ratios between test and reference formulations. Copyright © 2015 Elsevier B.V. All rights reserved.
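
    The saturable first-pass metabolism central to this record follows Michaelis-Menten kinetics, dC/dt = -Vmax·C/(Km + C); a minimal numerical sketch (with illustrative parameter values, not those of the cited ASA model) looks like:

```python
# Euler integration of Michaelis-Menten elimination: dC/dt = -Vmax*C/(Km + C).
# Vmax, Km, and the initial concentration are illustrative values only.
def simulate_mm(c0, vmax, km, dt=0.01, t_end=10.0):
    c, t, profile = c0, 0.0, [(0.0, c0)]
    while t < t_end:
        c += dt * (-vmax * c / (km + c))  # explicit Euler step
        t += dt
        profile.append((t, c))
    return profile

profile = simulate_mm(c0=10.0, vmax=2.0, km=1.0)
# Concentration decays monotonically toward zero.
print(profile[-1][1] < profile[0][1])
```

    When C >> Km the elimination is nearly zero-order (rate ≈ Vmax); when C << Km it becomes first-order with rate constant Vmax/Km. That nonlinearity is what makes exposure metrics such as Cmax and AUC sensitive to dose and input rate.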

  13. Analysis of a virtual memory model for maintaining database views

    NASA Technical Reports Server (NTRS)

    Kinsley, Kathryn C.; Hughes, Charles E.

    1992-01-01

    This paper presents an analytical model for predicting the performance of a new support strategy for database views. This strategy, called the virtual method, is compared with traditional methods for supporting views. The analytical model's predictions of improved performance by the virtual method are then validated by comparing these results with those achieved in an experimental implementation.

  14. Configuration and validation of an analytical model predicting secondary neutron radiation in proton therapy using Monte Carlo simulations and experimental measurements.

    PubMed

    Farah, J; Bonfrate, A; De Marzi, L; De Oliveira, A; Delacroix, S; Martinetti, F; Trompier, F; Clairand, I

    2015-05-01

    This study focuses on the configuration and validation of an analytical model predicting leakage neutron doses in proton therapy. Using Monte Carlo (MC) calculations, a facility-specific analytical model was built to reproduce out-of-field neutron doses while separately accounting for the contribution of intra-nuclear cascade, evaporation, epithermal and thermal neutrons. This model was first trained to reproduce in-water neutron absorbed doses and in-air neutron ambient dose equivalents, H*(10), calculated using MCNPX. Its capacity in predicting out-of-field doses at any position not involved in the training phase was also checked. The model was next expanded to enable a full 3D mapping of H*(10) inside the treatment room, tested in a clinically relevant configuration and finally consolidated with experimental measurements. Following the literature approach, the work first proved that it is possible to build a facility-specific analytical model that efficiently reproduces in-water neutron doses and in-air H*(10) values with a maximum difference less than 25%. In addition, the analytical model succeeded in predicting out-of-field neutron doses in the lateral and vertical direction. Testing the analytical model in clinical configurations proved the need to separate the contribution of internal and external neutrons. The impact of modulation width on stray neutrons was found to be easily adjustable while beam collimation remains a challenging issue. Finally, the model performance agreed with experimental measurements with satisfactory results considering measurement and simulation uncertainties. Analytical models represent a promising solution that substitutes for time-consuming MC calculations when assessing doses to healthy organs. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  15. Statistically qualified neuro-analytic failure detection method and system

    DOEpatents

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    2002-03-02

    An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation and stochastic model modification. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.
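
    The sequential probability ratio test used to validate the SQNA model can be sketched generically as Wald's SPRT on model residuals; the mean shift, noise level, and error rates below are illustrative assumptions, not values from the patent:

```python
import math

def sprt(residuals, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Wald's SPRT: accumulate the log-likelihood ratio of H1 (Gaussian
    residuals with mean mu1) versus H0 (mean mu0) until a threshold is hit."""
    upper = math.log((1 - beta) / alpha)   # cross upward -> accept H1 (fault)
    lower = math.log(beta / (1 - alpha))   # cross downward -> accept H0 (normal)
    llr = 0.0
    for i, r in enumerate(residuals):
        llr += (mu1 - mu0) * (r - (mu0 + mu1) / 2.0) / sigma**2
        if llr >= upper:
            return "fault", i
        if llr <= lower:
            return "normal", i
    return "undecided", len(residuals) - 1

# Residuals centered on 1.0 should trigger the fault decision.
decision, step = sprt([1.0] * 20)
print(decision)
```

    The test accumulates evidence sample by sample and stops as soon as either threshold is crossed, which is what suits it to on-line monitoring: it reaches a decision with close to the minimum expected number of samples for the chosen error rates.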

  16. Experimental validation of an analytical kinetic model for edge-localized modes in JET-ITER-like wall

    NASA Astrophysics Data System (ADS)

    Guillemaut, C.; Metzger, C.; Moulton, D.; Heinola, K.; O’Mullane, M.; Balboa, I.; Boom, J.; Matthews, G. F.; Silburn, S.; Solano, E. R.; JET contributors

    2018-06-01

    The design and operation of future fusion devices relying on H-mode plasmas requires reliable modelling of edge-localized modes (ELMs) for precise prediction of divertor target conditions. An extensive experimental validation of simple analytical predictions of the time evolution of target plasma loads during ELMs has been carried out here in more than 70 JET-ITER-like wall H-mode experiments with a wide range of conditions. Comparisons of these analytical predictions with diagnostic measurements of target ion flux density, power density, impact energy and electron temperature during ELMs are presented in this paper and show excellent agreement. The analytical predictions tested here are made with the ‘free-streaming’ kinetic model (FSM) which describes ELMs as a quasi-neutral plasma bunch expanding along the magnetic field lines into the Scrape-Off Layer without collisions. Consequences of the FSM on energy reflection and deposition on divertor targets during ELMs are also discussed.

  17. Analytical Modeling of Groundwater Seepages to St. Lucie Estuary

    NASA Astrophysics Data System (ADS)

    Lee, J.; Yeh, G.; Hu, G.

    2008-12-01

    In this paper, six analytical models describing the hydraulic interaction of stream-aquifer systems were applied to the St. Lucie Estuary (SLE) river estuaries. These are analytical solutions for: (1) flow from a finite aquifer to a canal, (2) flow from an infinite aquifer to a canal, (3) the linearized Laplace system in a seepage surface, (4) wave propagation in the aquifer, (5) potential flow through stratified unconfined aquifers, and (6) flow through stratified confined aquifers. Input data for the analytical solutions were obtained from monitoring wells and river stages at seepage-meter sites. Four transects in the study area are available: Club Med, Harbour Ridge, Lutz/MacMillan, and Pendarvis Cove, located in the St. Lucie River. The analytical models were first calibrated with seepage meter measurements and then used to estimate groundwater discharges into the St. Lucie River. From this process, analytical relationships between the seepage rate and river stages and/or groundwater tables were established to predict the seasonal and monthly variation in groundwater seepage into the SLE. The seepage rate estimates of the analytical models agreed well with measured data in some cases but only fairly in others. This is not unexpected, because analytical solutions rest on inherently simplifying assumptions, which may be more valid for some cases than for others. From analytical calculations, it is possible to predict approximate seepage rates in the study domain when the assumptions underlying these analytical models are valid. The finite and infinite aquifer models and the linearized Laplace method perform well for the Pendarvis Cove and Lutz/MacMillan sites, but only fairly for the other two sites. The wave propagation model gave very good agreement in phase but only fair agreement in magnitude for all four sites. The stratified unconfined and confined aquifer models gave similarly good agreement with measurements at three sites but poor agreement at the Club Med site. None of the analytical models presented here can fit the data at this site. To give better estimates at all sites, numerical modeling that couples river hydraulics and groundwater flow, involving fewer simplifications and assumptions, may have to be adopted.

  18. Differential Validation of a Path Analytic Model of University Dropout.

    ERIC Educational Resources Information Center

    Winteler, Adolf

    Tinto's conceptual schema of college dropout forms the theoretical framework for the development of a model of university student dropout intention. This study validated Tinto's model in two different departments within a single university. Analyses were conducted on a sample of 684 college freshmen in the Education and Economics Department. A…

  19. Improved partition equilibrium model for predicting analyte response in electrospray ionization mass spectrometry.

    PubMed

    Du, Lihong; White, Robert L

    2009-02-01

    A previously proposed partition equilibrium model for quantitative prediction of analyte response in electrospray ionization mass spectrometry is modified to yield an improved linear relationship. Analyte mass spectrometer response is modeled by a competition mechanism between analyte and background electrolytes that is based on partition equilibrium considerations. The correlation between analyte response and solution composition is described by the linear model over a wide concentration range and the improved model is shown to be valid for a wide range of experimental conditions. The behavior of an analyte in a salt solution, which could not be explained by the original model, is correctly predicted. The ion suppression effects of 16:0 lysophosphatidylcholine (LPC) on analyte signals are attributed to a combination of competition for excess charge and reduction of total charge due to surface tension effects. In contrast to the complicated mathematical forms that comprise the original model, the simplified model described here can more easily be employed to predict analyte mass spectrometer responses for solutions containing multiple components. Copyright (c) 2008 John Wiley & Sons, Ltd.

  20. Analytical Performance Modeling and Validation of Intel’s Xeon Phi Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chunduri, Sudheer; Balaprakash, Prasanna; Morozov, Vitali

    Modeling the performance of scientific applications on emerging hardware plays a central role in achieving extreme-scale computing goals. Analytical models that capture the interaction between applications and hardware characteristics are attractive because even a reasonably accurate model can be useful for performance tuning before the hardware is made available. In this paper, we develop a hardware model for Intel’s second-generation Xeon Phi architecture code-named Knights Landing (KNL) for the SKOPE framework. We validate the KNL hardware model by projecting the performance of mini-benchmarks and application kernels. The results show that our KNL model can project the performance with prediction errors of 10% to 20%. The hardware model also provides informative recommendations for code transformations and tuning.

  1. Analytical Plan for Roman Glasses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strachan, Denis M.; Buck, Edgar C.; Mueller, Karl T.

    Roman glasses that have been in the sea or underground for about 1800 years can serve as the independent “experiment” that is needed for validation of codes and models that are used in performance assessment. Two sets of Roman-era glasses have been obtained for this purpose. One set comes from the sunken vessel the Iulia Felix; the second from recently excavated glasses from a Roman villa in Aquileia, Italy. The specimens contain glass artifacts and attached sediment or soil. In the case of the Iulia Felix glasses, quite a lot of analytical work has been completed at the University of Padova, but from an archaeological perspective. The glasses from Aquileia have not been so carefully analyzed, but they are similar to other Roman glasses. Both glass and sediment or soil need to be analyzed and are the subject of this analytical plan. The glasses need to be analyzed with the goal of validating the model used to describe glass dissolution. The sediment and soil need to be analyzed to determine the profile of elements released from the glass. This latter need represents a significant analytical challenge because of the trace quantities that need to be analyzed. Both pieces of information will yield important information useful in the validation of the glass dissolution model and the chemical transport code(s) used to determine the migration of elements once released from the glass. In this plan, we outline the analytical techniques that should be useful in obtaining the needed information and suggest a useful starting point for this analytical effort.

  2. Standards and Guidelines for Numerical Models for Tsunami Hazard Mitigation

    NASA Astrophysics Data System (ADS)

    Titov, V.; Gonzalez, F.; Kanoglu, U.; Yalciner, A.; Synolakis, C. E.

    2006-12-01

    An increased number of nations around the world need to develop tsunami mitigation plans, which invariably involve inundation maps for warning guidance and evacuation planning. There is the risk that inundation maps may be produced with older or untested methodology, as there are currently no standards for modeling tools. In the aftermath of the 2004 megatsunami, some models were used to model inundation for Cascadia events with results much larger than sediment records and existing state-of-the-art studies suggest, leading to confusion among emergency managers. Incorrectly assessing tsunami impact is hazardous, as recent events in 2006 in Tonga, Kythira, Greece and Central Java have suggested (Synolakis and Bernard, 2006). To calculate tsunami currents, forces and runup on coastal structures, and inundation of coastlines, one must calculate the evolution of the tsunami wave from the deep ocean to its target site, numerically. No matter what the numerical model, validation (the process of ensuring that the model solves the parent equations of motion accurately) and verification (the process of ensuring that the model used represents geophysical reality appropriately) are both essential. Validation ensures that the model performs well in a wide range of circumstances and is accomplished through comparison with analytical solutions. Verification ensures that the computational code performs well over a range of geophysical problems. A few analytic solutions have themselves been validated with laboratory data. Even fewer existing numerical models have been both validated with the analytical solutions and verified with both laboratory measurements and field measurements, thus establishing a gold standard for numerical codes for inundation mapping.
While there is in principle no absolute certainty that a numerical code that has performed well in all the benchmark tests will also produce correct inundation predictions with any given source motions, validated codes reduce the level of uncertainty in their results to the uncertainty in the geophysical initial conditions. Further, when coupled with real-time free-field tsunami measurements from tsunameters, validated codes are the only choice for realistic forecasting of inundation; the consequences of failure are too ghastly to take chances with numerical procedures that have not been validated. We discuss a ten-step process of benchmark tests for models used for inundation mapping. The associated methodology and algorithms have to first be validated with analytical solutions, then verified with laboratory measurements and field data. The models need to be published in the scientific literature in peer-reviewed journals indexed by ISI. While this process may appear onerous, it reflects our state of knowledge, and is the only defensible methodology when human lives are at stake. Synolakis, C.E., and Bernard, E.N., Tsunami science before and beyond Boxing Day 2004, Phil. Trans. R. Soc. A 364(1845), 2231-2263, 2005.

  3. Thermodynamics of Gas Turbine Cycles with Analytic Derivatives in OpenMDAO

    NASA Technical Reports Server (NTRS)

    Gray, Justin; Chin, Jeffrey; Hearn, Tristan; Hendricks, Eric; Lavelle, Thomas; Martins, Joaquim R. R. A.

    2016-01-01

    A new equilibrium thermodynamics analysis tool was built based on the CEA method using the OpenMDAO framework. The new tool provides forward and adjoint analytic derivatives for use with gradient-based optimization algorithms. The new tool was validated against the original CEA code to ensure an accurate analysis, and the analytic derivatives were validated against finite-difference approximations. Performance comparisons between analytic and finite-difference methods showed a significant speed advantage for the analytic methods. To further test the new analysis tool, a sample optimization was performed to find the optimal air-fuel equivalence ratio maximizing combustion temperature for a range of different pressures. Collectively, the results demonstrate the viability of the new tool to serve as the thermodynamic backbone for future work on a full propulsion modeling tool.
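
    Validating analytic derivatives against finite differences, as done for the CEA tool above, reduces to a relative-error check; a minimal sketch with a toy function standing in for the thermodynamic analysis:

```python
def f(x):
    return x**3 + 2.0 * x        # toy analysis function

def df_analytic(x):
    return 3.0 * x**2 + 2.0      # hand-derived (analytic) derivative

def df_central(x, h=1e-6):
    # central finite-difference approximation, truncation error O(h^2)
    return (f(x + h) - f(x - h)) / (2.0 * h)

x = 1.7
rel_err = abs(df_analytic(x) - df_central(x)) / abs(df_analytic(x))
print(rel_err < 1e-8)
```

    A relative error far below the finite-difference truncation bound is strong evidence that the analytic (or adjoint) derivative is implemented correctly; the analytic version is also much cheaper, since it needs no extra function evaluations per input.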

  4. Algebraic approach to small-world network models

    NASA Astrophysics Data System (ADS)

    Rudolph-Lilith, Michelle; Muller, Lyle E.

    2014-01-01

    We introduce an analytic model for directed Watts-Strogatz small-world graphs and deduce an algebraic expression of its defining adjacency matrix. The latter is then used to calculate the small-world digraph's asymmetry index and clustering coefficient in an analytically exact fashion, valid nonasymptotically for all graph sizes. The proposed approach is general and can be applied to all algebraically well-defined graph-theoretical measures, thus allowing for an analytical investigation of finite-size small-world graphs.
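
    The graph-theoretical quantities targeted by such analytic models can be spot-checked by brute force on small graphs. The sketch below computes the average clustering coefficient of an undirected ring lattice (the no-rewiring limit of a Watts-Strogatz graph; the record itself treats the directed case) directly from its adjacency matrix:

```python
def ring_lattice(n, k):
    """Undirected ring lattice: each node linked to k nearest neighbours per side."""
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        for d in range(1, k + 1):
            adj[i][(i + d) % n] = adj[(i + d) % n][i] = 1
    return adj

def avg_clustering(adj):
    """Average local clustering coefficient computed from the adjacency matrix."""
    n = len(adj)
    total = 0.0
    for i in range(n):
        nbrs = [j for j in range(n) if adj[i][j]]
        deg = len(nbrs)
        if deg < 2:
            continue
        links = sum(adj[u][v] for u in nbrs for v in nbrs if u < v)
        total += 2.0 * links / (deg * (deg - 1))
    return total / n

adj = ring_lattice(20, 2)
# Closed form for a k=2 ring lattice: 3(k-1)/(2(2k-1)) = 0.5
print(round(avg_clustering(adj), 3))
```

    Exact small-graph computations like this are the usual way to check nonasymptotic analytic expressions: the closed form 3(k-1)/(2(2k-1)) must match the brute-force value for every graph size, not just in the large-n limit.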

  5. On the analytical modeling of the nonlinear vibrations of pretensioned space structures

    NASA Technical Reports Server (NTRS)

    Housner, J. M.; Belvin, W. K.

    1983-01-01

    Pretensioned structures are receiving considerable attention as candidate large space structures. A typical example is a hoop-column antenna. The large number of preloaded members requires efficient analytical methods for concept validation and design. Validation through analysis is especially important since ground testing may be limited by gravity effects and structural size. The objective of the present investigation is to examine the analytical modeling of pretensioned members undergoing nonlinear vibrations. Two approximate nonlinear analyses are developed to model general structural arrangements which include beam-columns and pretensioned cables attached to a common nucleus, such as may occur at a joint of a pretensioned structure. Attention is given to structures undergoing nonlinear steady-state oscillations due to sinusoidal excitation forces. Three analyses, linear, quasi-linear, and nonlinear, are conducted and applied to study the response of a relatively simple cable-stiffened structure.

  6. Separation of very hydrophobic analytes by micellar electrokinetic chromatography IV. Modeling of the effective electrophoretic mobility from carbon number equivalents and octanol-water partition coefficients.

    PubMed

    Huhn, Carolin; Pyell, Ute

    2008-07-11

    It is investigated whether the relationships derived within a previously developed optimization scheme for separations in micellar electrokinetic chromatography can be used to model the effective electrophoretic mobilities of analytes strongly differing in their properties (polarity and type of interaction with the pseudostationary phase). The modeling is based on two parameter sets: (i) carbon number equivalents or octanol-water partition coefficients as analyte descriptors and (ii) four coefficients describing properties of the separation electrolyte (based on retention data for a homologous series of alkyl phenyl ketones used as reference analytes). The applicability of the proposed model is validated by comparing experimental and calculated effective electrophoretic mobilities. The results demonstrate that the model can effectively be used to predict the effective electrophoretic mobilities of neutral analytes from the determined carbon number equivalents or from octanol-water partition coefficients, provided that the solvation parameters of the analytes of interest are similar to those of the reference analytes.

  7. Experimental investigation and numerical simulation of 3He gas diffusion in simple geometries: implications for analytical models of 3He MR lung morphometry.

    PubMed

    Parra-Robles, J; Ajraoui, S; Deppe, M H; Parnell, S R; Wild, J M

    2010-06-01

    Models of lung acinar geometry have been proposed to analytically describe the diffusion of (3)He in the lung (as measured with pulsed gradient spin echo (PGSE) methods) as a possible means of characterizing lung microstructure from measurement of the (3)He ADC. In this work, major limitations of these analytical models are highlighted in simple diffusion-weighted experiments with (3)He in cylindrical models of known geometry. The findings are substantiated with numerical simulations based on the same geometry using a finite difference representation of the Bloch-Torrey equation. The validity of the existing "cylinder model" is discussed in terms of the physical diffusion regimes experienced, and the basic reliance of the cylinder model and other ADC-based approaches on Gaussian diffusion behaviour is highlighted. The results presented here demonstrate that the physical assumptions of the cylinder model are not valid for large diffusion gradient strengths (above approximately 15 mT/m), which are commonly used for (3)He ADC measurements in human lungs. © 2010 Elsevier Inc. All rights reserved.
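
    The Gaussian-diffusion assumption criticized in this record is what licenses the mono-exponential signal model S(b) = S0·exp(-b·ADC) behind ADC mapping; a two-point ADC estimate then follows directly (synthetic signals with illustrative units, not experimental data):

```python
import math

def adc_two_point(s1, s2, b1, b2):
    """ADC from two diffusion-weighted signals, assuming S = S0*exp(-b*ADC)."""
    return math.log(s1 / s2) / (b2 - b1)

# Synthetic signals generated with a known ADC (illustrative values/units).
true_adc = 0.2
s0 = 100.0
b1, b2 = 0.0, 1.6
s1 = s0 * math.exp(-b1 * true_adc)
s2 = s0 * math.exp(-b2 * true_adc)
print(abs(adc_two_point(s1, s2, b1, b2) - true_adc) < 1e-12)
```

    When diffusion is non-Gaussian, as the record argues happens at high gradient strengths, ln S(b) is no longer linear in b and the two-point estimate depends on which b-values are chosen.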

  8. Highlights of Transient Plume Impingement Model Validation and Applications

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael

    2011-01-01

    This paper describes highlights of an ongoing validation effort conducted to assess the viability of applying a set of analytic point source transient free molecule equations to model behavior ranging from molecular effusion to rocket plumes. The validation effort includes encouraging comparisons to both steady and transient studies involving experimental data and direct simulation Monte Carlo results. Finally, this model is applied to describe features of two exotic transient scenarios involving NASA Goddard Space Flight Center satellite programs.

  9. Analytical model for tilting proprotor aircraft dynamics, including blade torsion and coupled bending modes, and conversion mode operation

    NASA Technical Reports Server (NTRS)

    Johnson, W.

    1974-01-01

    An analytical model is developed for proprotor aircraft dynamics. The rotor model includes coupled flap-lag bending modes and blade torsion degrees of freedom. The rotor aerodynamic model is generally valid for high and low inflow, and for axial and nonaxial flight. For the rotor support, a cantilever wing is considered; incorporation of a more general support with this rotor model will be straightforward.

  10. Thermal conductivity of microporous layers: Analytical modeling and experimental validation

    NASA Astrophysics Data System (ADS)

    Andisheh-Tadbir, Mehdi; Kjeang, Erik; Bahrami, Majid

    2015-11-01

    A new compact relationship is developed for the thermal conductivity of the microporous layer (MPL) used in polymer electrolyte fuel cells as a function of pore size distribution, porosity, and compression pressure. The proposed model is successfully validated against experimental data obtained from a transient plane source thermal constants analyzer. The thermal conductivities of carbon paper samples with and without MPL were measured as a function of load (1-6 bars), and the MPL thermal conductivity was found to be between 0.13 and 0.17 W m⁻¹ K⁻¹. The proposed analytical model predicts the experimental thermal conductivities within 5%. A correlation generated from the analytical model was used in a multi-objective genetic algorithm to predict the pore size distribution and porosity for an MPL with optimized thermal conductivity and mass diffusivity. The results suggest that an optimized MPL, in terms of heat and mass transfer coefficients, has an average pore size of 122 nm and 63% porosity.

  11. A New Model for Temperature Jump at a Fluid-Solid Interface

    PubMed Central

    Shu, Jian-Jun; Teo, Ji Bin Melvin; Chan, Weng Kong

    2016-01-01

    The problem presented involves the development of a new analytical model for the general fluid-solid temperature jump. To the best of our knowledge, there are no analytical models that provide accurate predictions of the temperature jump for both gas and liquid systems. In this paper, a unified model for the fluid-solid temperature jump has been developed based on our adsorption model of the interfacial interactions. Results obtained from this model are validated against available results from the literature. PMID:27764230

  12. Analytical calculation of vibrations of electromagnetic origin in electrical machines

    NASA Astrophysics Data System (ADS)

    McCloskey, Alex; Arrasate, Xabier; Hernández, Xabier; Gómez, Iratxo; Almandoz, Gaizka

    2018-01-01

    Electrical motors are widely used and are often required to satisfy comfort specifications. Thus, vibration response estimates are necessary to reach optimum machine designs. This work presents an improved analytical model to calculate the vibration response of an electrical machine. The stator and windings are modelled as a double circular cylindrical shell. As the stator is a laminated structure, orthotropic properties are applied to it. The values of those material properties are calculated according to the characteristics of the motor and the known material properties taken from previous works. The model proposed takes into account the axial direction, so that length is considered, and also the contribution of the windings, which differs from one machine to another. These aspects make the model valuable for a wide range of electrical motor types. In order to validate the analytical calculation, natural frequencies are calculated and compared to those obtained by the Finite Element Method (FEM), giving relative errors below 10% for several circumferential and axial mode order combinations. The analytical vibration calculation is also validated against acceleration measurements on a real machine. The comparison shows good agreement for the proposed model, with the most important frequency components in the same order of magnitude. A simplified two-dimensional model is also applied, but the results obtained are not as satisfactory.

  13. Numerical investigation of band gaps in 3D printed cantilever-in-mass metamaterials

    NASA Astrophysics Data System (ADS)

    Qureshi, Awais; Li, Bing; Tan, K. T.

    2016-06-01

    In this research, the negative effective mass behavior of elastic/mechanical metamaterials is exhibited by a cantilever-in-mass structure as a proposed design for creating frequency stopping band gaps, based on local resonance of the internal structure. The mass-in-mass unit cell model is transformed into a cantilever-in-mass model using the Bernoulli-Euler beam theory. An analytical model of the cantilever-in-mass structure is derived, and the effects of geometrical dimensions and material parameters on the creation of frequency band gaps are examined. A two-dimensional finite element model is created to validate the analytical results, and excellent agreement is achieved. The analytical model establishes an easily tunable metamaterial design to realize wave attenuation based on locally resonant frequency. To demonstrate feasibility for 3D printing, the analytical model is employed to design and fabricate a 3D printable mechanical metamaterial. A three-dimensional numerical experiment is performed using COMSOL Multiphysics to validate the wave attenuation performance. Results show that the cantilever-in-mass metamaterial is capable of mitigating stress waves at the desired resonance frequency. Our study successfully presents the use of one constituent material to create a 3D printed cantilever-in-mass metamaterial with negative effective mass density for stress wave mitigation purposes.
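The negative-effective-mass mechanism behind such a band gap can be sketched with the textbook mass-in-mass expression (a simplification of the paper's cantilever-in-mass derivation; the masses and resonance frequency below are illustrative, not taken from the study):

```python
import math

def effective_mass(m_outer, m_inner, f, f_res):
    # Standard mass-in-mass effective mass (textbook form, not the
    # paper's cantilever-specific derivation):
    #   m_eff = m_outer + m_inner * f_res^2 / (f_res^2 - f^2)
    # m_eff < 0 inside the locally resonant band gap.
    return m_outer + m_inner * f_res**2 / (f_res**2 - f**2)

# The gap spans f_res to f_res * sqrt(1 + m_inner/m_outer).
m_outer, m_inner, f_res = 1.0, 0.5, 1000.0  # illustrative values
upper = f_res * math.sqrt(1.0 + m_inner / m_outer)
assert effective_mass(m_outer, m_inner, 1.1 * f_res, f_res) < 0   # inside gap
assert effective_mass(m_outer, m_inner, 1.01 * upper, f_res) > 0  # above gap
```

For the cantilever-in-mass design, the internal resonance frequency f_res would come from the Bernoulli-Euler beam analysis rather than a lumped spring.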

  14. Sharing the Data along with the Responsibility: Examining an Analytic Scale-Based Model for Assessing School Climate.

    ERIC Educational Resources Information Center

    Shindler, John; Taylor, Clint; Cadenas, Herminia; Jones, Albert

    This study was a pilot effort to examine the efficacy of an analytic trait scale school climate assessment instrument and democratic change system in two urban high schools. Pilot study results indicate that the instrument shows promising soundness in that it exhibited high levels of validity and reliability. In addition, the analytic trait format…

  15. Using meta-differential evolution to enhance a calculation of a continuous blood glucose level.

    PubMed

    Koutny, Tomas

    2016-09-01

    We developed a new model of glucose dynamics. The model calculates the blood glucose level as a function of transcapillary glucose transport. In previous studies, we validated the model with animal experiments, using an analytical method to determine the model parameters. In this study, we validate the model with subjects with type 1 diabetes, and we combine the analytical method with meta-differential evolution. To validate the model with human patients, we obtained a data set from a type 1 diabetes study coordinated by the Jaeb Center for Health Research. We calculated a continuous blood glucose level from the continuously measured interstitial fluid glucose level, using 6 different scenarios to ensure robust validation of the calculation. Over 96% of the calculated blood glucose levels fell within zones A and B of the Clarke Error Grid. No data set required any correction of the model parameters during the measurement period. We successfully verified the possibility of calculating a continuous blood glucose level for subjects with type 1 diabetes. This study signals a successful transition of our research from animal experiments to human patients. Researchers can test our model with their data on-line at https://diabetes.zcu.cz. Copyright © 2016 The Author. Published by Elsevier Ireland Ltd. All rights reserved.
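The parameter-fitting step can be sketched with a plain differential-evolution loop (the paper's meta-differential evolution additionally evolves the control parameters F and CR, which are fixed here; the fitting task below is a hypothetical linear model, not the glucose model):

```python
import random

def differential_evolution(loss, bounds, pop_size=20, gens=100, F=0.8, CR=0.9, seed=0):
    # Plain DE/rand/1/bin; "meta" variants would also evolve F and CR.
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            # Pick three distinct donors other than the current individual.
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [
                min(max(a[d] + F * (b[d] - c[d]), bounds[d][0]), bounds[d][1])
                if rng.random() < CR else pop[i][d]
                for d in range(dim)
            ]
            if loss(trial) < loss(pop[i]):  # greedy selection
                pop[i] = trial
    return min(pop, key=loss)

# Hypothetical fitting task: recover parameters of y = p0*x + p1.
data = [(x, 2.0 * x + 1.0) for x in range(10)]
loss = lambda p: sum((p[0] * x + p[1] - y) ** 2 for x, y in data)
best = differential_evolution(loss, [(-10, 10), (-10, 10)])
assert abs(best[0] - 2.0) < 0.1 and abs(best[1] - 1.0) < 0.1
```

In the study itself, the loss would compare blood glucose levels predicted from interstitial measurements against reference values.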

  16. Experimental, numerical, and analytical studies on the seismic response of steel-plate concrete (SC) composite shear walls

    NASA Astrophysics Data System (ADS)

    Epackachi, Siamak

    The seismic performance of rectangular steel-plate concrete (SC) composite shear walls is assessed for application to buildings and mission-critical infrastructure. The SC walls considered in this study were composed of two steel faceplates and infill concrete. The steel faceplates were connected together and to the infill concrete using tie rods and headed studs, respectively. The research focused on the in-plane behavior of flexure- and flexure-shear-critical SC walls. An experimental program was executed in the NEES laboratory at the University at Buffalo and was followed by numerical and analytical studies. In the experimental program, four large-size specimens were tested under displacement-controlled cyclic loading. The design variables considered in the testing program included wall thickness, reinforcement ratio, and slenderness ratio. The aspect ratio (height-to-length) of the four walls was 1.0. Each SC wall was installed on top of a re-usable foundation block. A bolted baseplate to RC foundation connection was used for all four walls. The walls were identified to be flexure- and flexure-shear critical. The progression of damage in the four walls was identical, namely, cracking and crushing of the infill concrete at the toes of the walls, outward buckling and yielding of the steel faceplates near the base of the wall, and tearing of the faceplates at their junctions with the baseplate. A robust finite element model was developed in LS-DYNA for nonlinear cyclic analysis of the flexure- and flexure-shear-critical SC walls. The DYNA model was validated using the results of the cyclic tests of the four SC walls. The validated and benchmarked models were then used to conduct a parametric study, which investigated the effects of wall aspect ratio, reinforcement ratio, wall thickness, and uniaxial concrete compressive strength on the in-plane response of SC walls. 
Simplified analytical models, suitable for preliminary analysis and design of SC walls, were developed, validated, and implemented in MATLAB. Analytical models were proposed for monotonic and cyclic simulations of the in-plane response of flexure- and flexure-shear-critical SC wall piers. The model for cyclic analysis was developed by modifying the Ibarra-Krawinkler Pinching (IKP) model. The analytical models were verified using the results of the parametric study and validated using the test data.

  17. L-shaped piezoelectric motor--part II: analytical modeling.

    PubMed

    Avirovik, Dragan; Karami, M Amin; Inman, Daniel; Priya, Shashank

    2012-01-01

    This paper develops an analytical model for an L-shaped piezoelectric motor. The motor structure has been described in detail in Part I of this study. The coupling of the bending vibration modes of the bimorphs results in an elliptical motion at the tip. The emphasis of this paper is on the development of a precise analytical model which can predict the dynamic behavior of the motor based on its geometry. The motor was first modeled mechanically to identify the natural frequencies and mode shapes of the structure. Next, an electromechanical model of the motor was developed to take into account the piezoelectric effect, and the dynamics of the L-shaped piezoelectric motor were obtained as a function of voltage and frequency. Finally, the analytical model was validated by comparing it with experimental results and the finite element method (FEM). © 2012 IEEE

  18. Model performance evaluation (validation and calibration) in model-based studies of therapeutic interventions for cardiovascular diseases : a review and suggested reporting framework.

    PubMed

    Haji Ali Afzali, Hossein; Gray, Jodi; Karnon, Jonathan

    2013-04-01

    Decision analytic models play an increasingly important role in the economic evaluation of health technologies. Given uncertainties around the assumptions used to develop such models, several guidelines have been published to identify and assess 'best practice' in the model development process, including general modelling approach (e.g., time horizon), model structure, input data and model performance evaluation. This paper focuses on model performance evaluation. In the absence of a sufficient level of detail around model performance evaluation, concerns regarding the accuracy of model outputs, and hence the credibility of such models, are frequently raised. Following a presentation of its components, the application and reporting of model performance evaluation are reviewed. Taking cardiovascular disease as an illustrative example, the review investigates the use of face validity, internal validity, external validity, and cross-model validity. As part of the performance evaluation process, model calibration is also discussed and its use in applied studies investigated. The review found that the application and reporting of model performance evaluation across 81 studies of treatment for cardiovascular disease was variable. Cross-model validation was reported in 55% of the reviewed studies, though the level of detail provided varied considerably. We found that very few studies documented other types of validity, and only 6% of the reviewed articles reported a calibration process. Considering the above findings, we propose a comprehensive model performance evaluation framework (checklist), informed by a review of best-practice guidelines. This framework provides a basis for more accurate and consistent documentation of model performance evaluation. This will improve the peer review process and the comparability of modelling studies. 
Recognising the fundamental role of decision analytic models in informing public funding decisions, the proposed framework should usefully inform guidelines for preparing submissions to reimbursement bodies.

  19. Semi-analytical Model for Estimating Absorption Coefficients of Optically Active Constituents in Coastal Waters

    NASA Astrophysics Data System (ADS)

    Wang, D.; Cui, Y.

    2015-12-01

    The objectives of this paper are to validate the applicability of a multi-band quasi-analytical algorithm (QAA) for retrieving absorption coefficients of optically active constituents in turbid coastal waters, and to further improve on it with a proposed semi-analytical model (SAA). The SAA derives ap(531) and ag(531) semi-analytically, whereas the QAA derives them from the empirical retrieval results for a(531) and a(551). The two models are calibrated and evaluated against datasets from 19 independent cruises on the West Florida Shelf in 1999-2003, provided by SeaBASS. The results indicate that the SAA model outperforms the QAA model in absorption retrieval: using the SAA model to retrieve absorption coefficients of optically active constituents on the West Florida Shelf reduces the random uncertainty of estimation by >23.05% relative to the QAA model. This study demonstrates the potential of the SAA model for estimating absorption coefficients of optically active constituents even in turbid coastal waters. Keywords: Remote sensing; Coastal Water; Absorption Coefficient; Semi-analytical Model

  20. A fast analytical undulator model for realistic high-energy FEL simulations

    NASA Astrophysics Data System (ADS)

    Tatchyn, R.; Cremer, T.

    1997-02-01

    A number of leading FEL simulation codes used for modeling gain in the ultralong undulators required for SASE saturation in the <100 Å range employ simplified analytical models both for field and error representations. Although it is recognized that both the practical and theoretical validity of such codes could be enhanced by incorporating realistic undulator field calculations, the computational cost of doing this can be prohibitive, especially for point-to-point integration of the equations of motion through each undulator period. In this paper we describe a simple analytical model suitable for modeling realistic permanent magnet (PM), hybrid/PM, and non-PM undulator structures, and discuss selected techniques for minimizing computation time.

  1. An electromechanical coupling model of a bending vibration type piezoelectric ultrasonic transducer.

    PubMed

    Zhang, Qiang; Shi, Shengjun; Chen, Weishan

    2016-03-01

    An electromechanical coupling model of a bending vibration type piezoelectric ultrasonic transducer is proposed. The transducer is a Langevin type transducer composed of an exponential horn, four groups of PZT ceramics and a back beam. The exponential horn can focus the vibration energy, and can efficiently enlarge vibration amplitude and velocity. A bending vibration model of the transducer is first constructed, and subsequently an electromechanical coupling model is constructed based on the vibration model. In order to obtain the most suitable excitation position of the PZT ceramics, the effective electromechanical coupling coefficient is optimized by means of the quadratic interpolation method. When the effective electromechanical coupling coefficient reaches its peak value of 42.59%, the optimal excitation position (L1=22.52 mm) is found. The FEM and experimental methods are used to validate the developed analytical model. Two FEM models are constructed (the center bolt is neglected in Group A but included in Group B) and separately compared with the analytical model and the experimental results. Four prototype transducers around the peak value are fabricated and tested to validate the analytical model. A scanning laser Doppler vibrometer is employed to test the bending vibration shape and resonance frequency. Finally, the electromechanical coupling coefficient is tested indirectly through an impedance analyzer. Comparisons of the analytical, FEM and experimental results are presented, and show good agreement. Copyright © 2015 Elsevier B.V. All rights reserved.
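The quadratic interpolation step used to locate the peak coupling coefficient can be sketched as follows (the parabola-vertex formula is standard; the coupling-coefficient curve below is a hypothetical stand-in that reproduces the reported peak, not the paper's model):

```python
def quadratic_vertex(x1, f1, x2, f2, x3, f3):
    # Vertex of the parabola through three sample points; for a sampled
    # unimodal function this estimates the location of its extremum.
    num = (x2 - x1) ** 2 * (f2 - f3) - (x2 - x3) ** 2 * (f2 - f1)
    den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
    return x2 - 0.5 * num / den

# Hypothetical coupling-coefficient curve (%) peaking at L1 = 22.52 mm:
k_eff = lambda L: 42.59 - 0.05 * (L - 22.52) ** 2
L_opt = quadratic_vertex(20.0, k_eff(20.0), 22.0, k_eff(22.0), 25.0, k_eff(25.0))
assert abs(L_opt - 22.52) < 1e-6  # exact for a quadratic objective
```

For a non-quadratic objective, the vertex estimate would be refined by re-sampling around L_opt and iterating.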

  2. Cross-stream diffusion under pressure-driven flow in microchannels with arbitrary aspect ratios: a phase diagram study using a three-dimensional analytical model

    PubMed Central

    Song, Hongjun; Wang, Yi; Pant, Kapil

    2011-01-01

    This article presents a three-dimensional analytical model to investigate cross-stream diffusion transport in rectangular microchannels with arbitrary aspect ratios under pressure-driven flow. The Fourier series solution to the three-dimensional convection–diffusion equation is obtained using a double integral transformation method and associated eigensystem calculation. A phase diagram derived from the dimensional analysis is presented to thoroughly interrogate the characteristics in various transport regimes and examine the validity of the model. The analytical model is verified against both experimental and numerical models in terms of the concentration profile, diffusion scaling law, and mixing efficiency with excellent agreement (with <0.5% relative error). Quantitative comparison against other prior analytical models in extensive parameter space is also performed, which demonstrates that the present model accommodates much broader transport regimes with significantly enhanced applicability. PMID:22247719

  3. Cross-stream diffusion under pressure-driven flow in microchannels with arbitrary aspect ratios: a phase diagram study using a three-dimensional analytical model.

    PubMed

    Song, Hongjun; Wang, Yi; Pant, Kapil

    2012-01-01

    This article presents a three-dimensional analytical model to investigate cross-stream diffusion transport in rectangular microchannels with arbitrary aspect ratios under pressure-driven flow. The Fourier series solution to the three-dimensional convection-diffusion equation is obtained using a double integral transformation method and associated eigensystem calculation. A phase diagram derived from the dimensional analysis is presented to thoroughly interrogate the characteristics in various transport regimes and examine the validity of the model. The analytical model is verified against both experimental and numerical models in terms of the concentration profile, diffusion scaling law, and mixing efficiency with excellent agreement (with <0.5% relative error). Quantitative comparison against other prior analytical models in extensive parameter space is also performed, which demonstrates that the present model accommodates much broader transport regimes with significantly enhanced applicability.
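A one-dimensional analogue of this cross-stream diffusion problem admits a short series-solution sketch (a plug-flow simplification of the paper's 3D model, in which time plays the role of residence time; the channel width and diffusivity below are illustrative):

```python
import math

def cross_stream_profile(y, t, W=100e-6, D=1e-9, n_terms=200):
    # Cosine-series solution for diffusion across a channel of width W
    # with no-flux walls, starting from c=1 on the left half (y < W/2)
    # and c=0 on the right half.
    c = 0.5  # mean concentration (the a0/2 term)
    for n in range(1, n_terms + 1):
        a_n = 2.0 / (n * math.pi) * math.sin(n * math.pi / 2.0)
        c += a_n * math.cos(n * math.pi * y / W) * math.exp(-D * (n * math.pi / W) ** 2 * t)
    return c

W = 100e-6
# At t=0 the profile is a step; at long times it relaxes to 0.5 everywhere.
assert cross_stream_profile(0.25 * W, 0.0) > 0.99
assert cross_stream_profile(0.75 * W, 0.0) < 0.01
assert abs(cross_stream_profile(0.25 * W, 10.0) - 0.5) < 1e-3
```

The papers' full 3D model additionally resolves the parabolic velocity profile across the channel cross-section, which this plug-flow sketch ignores.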

  4. On the performance of piezoelectric harvesters loaded by finite width impulses

    NASA Astrophysics Data System (ADS)

    Doria, A.; Medè, C.; Desideri, D.; Maschio, A.; Codecasa, L.; Moro, F.

    2018-02-01

    The response of cantilevered piezoelectric harvesters loaded by finite width impulses of base acceleration is studied analytically in the frequency domain in order to identify the parameters that influence the generated voltage. Experimental tests are then performed on harvesters loaded by hammer impacts. The latter are used to confirm the analytical results and to validate a linear finite element (FE) model of a unimorph harvester. The FE model is, in turn, used to extend the analytical results to more general harvesters (tapered, inverse tapered, triangular) and to more general impulses (heel strike in human gait). From the analytical and numerical results, design criteria for improving harvester performance are obtained.
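One way the finite pulse width enters the frequency-domain analysis can be sketched with the spectrum of a rectangular acceleration pulse (an assumed pulse shape chosen for illustration, not the paper's impulse model; the resonance frequency is hypothetical):

```python
import math

def rect_pulse_spectrum(f, T, a0=1.0):
    # Magnitude spectrum of a rectangular base-acceleration pulse of
    # amplitude a0 and width T: |A(f)| = a0*T*|sin(pi*f*T)/(pi*f*T)|.
    # As T grows, less energy falls at the harvester's resonance.
    x = math.pi * f * T
    return a0 * T if x == 0 else a0 * T * abs(math.sin(x) / x)

# A pulse much shorter than the resonance period excites the resonance
# nearly as strongly (per unit impulse) as an ideal delta impulse:
f_res = 100.0  # Hz, hypothetical harvester resonance
short, long_ = 1e-4, 5e-3  # pulse widths in seconds
assert rect_pulse_spectrum(f_res, short) / short > 0.99
assert rect_pulse_spectrum(f_res, long_) / long_ < 0.99
```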

  5. Experimental, Numerical and Analytical Characterization of Slosh Dynamics Applied to In-Space Propellant Storage, Management and Transfer

    NASA Technical Reports Server (NTRS)

    Storey, Jedediah M.; Kirk, Daniel; Gutierrez, Hector; Marsell, Brandon; Schallhorn, Paul; Lapilli, Gabriel D.

    2015-01-01

    Experimental and numerical results are presented from a new cryogenic fluid slosh program at the Florida Institute of Technology (FIT). Water and cryogenic liquid nitrogen are used in various ground-based tests with an approximately 30 cm diameter spherical tank to characterize damping, slosh mode frequencies, and slosh forces. The experimental results are compared to a computational fluid dynamics (CFD) model for validation. An analytical model is constructed from prior work for comparison. Good agreement is seen between experimental, numerical, and analytical results.

  6. Frequency Response Function Based Damage Identification for Aerospace Structures

    NASA Astrophysics Data System (ADS)

    Oliver, Joseph Acton

    Structural health monitoring technologies continue to be pursued for aerospace structures in the interests of increased safety and, when combined with health prognosis, efficiency in life-cycle management. The current dissertation develops and validates damage identification technology as a critical component for structural health monitoring of aerospace structures and, in particular, composite unmanned aerial vehicles. The primary innovation is a statistical least-squares damage identification algorithm based in concepts of parameter estimation and model update. The algorithm uses frequency response function based residual force vectors derived from distributed vibration measurements to update a structural finite element model through statistically weighted least-squares minimization, producing location and quantification of the damage, estimation uncertainty, and an updated model. Advantages compared to other approaches include robust applicability to systems which are heavily damped, large, and noisy, with a relatively low number of distributed measurement points compared to the number of analytical degrees-of-freedom of an associated analytical structural model (e.g., modal finite element model). Motivation, research objectives, and a dissertation summary are discussed in Chapter 1, followed by a literature review in Chapter 2. Chapter 3 gives background theory and the damage identification algorithm derivation, followed by a study of fundamental algorithm behavior on a two degree-of-freedom mass-spring system with generalized damping. Chapter 4 investigates the impact of noise, then successfully validates the algorithm against competing methods using an analytical eight degree-of-freedom mass-spring system with non-proportional structural damping. 
Chapter 5 extends use of the algorithm to finite element models, including solutions for numerical issues, approaches for modeling damping approximately in reduced coordinates, and analytical validation using a composite sandwich plate model. Chapter 6 presents the final extension to experimental systems-including methods for initial baseline correlation and data reduction-and validates the algorithm on an experimental composite plate with impact damage. The final chapter deviates from development and validation of the primary algorithm to discuss development of an experimental scaled-wing test bed as part of a collaborative effort for developing structural health monitoring and prognosis technology. The dissertation concludes with an overview of technical conclusions and recommendations for future work.

  7. Validation of chemistry models employed in a particle simulation method

    NASA Technical Reports Server (NTRS)

    Haas, Brian L.; Mcdonald, Jeffrey D.

    1991-01-01

    The chemistry models employed in a statistical particle simulation method, as implemented on the Intel iPSC/860 multiprocessor computer, are validated and applied. Chemical relaxation of five-species air in adiabatic gas reservoirs involves 34 simultaneous dissociation, recombination, and atomic-exchange reactions. The reaction rates employed in the analytic solutions are obtained from Arrhenius experimental correlations as functions of temperature for reservoirs in thermal equilibrium. Favorable agreement with the analytic solutions validates the simulation when applied to the relaxation of O2 toward equilibrium in reservoirs dominated by dissociation and recombination, respectively, and when applied to the relaxation of air in the temperature range 5000 to 30,000 K. A flow of O2 over a circular cylinder at high Mach number is simulated to demonstrate application of the method to multidimensional reactive flows.
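The Arrhenius-type correlations feeding the analytic solutions have a standard modified form; a minimal sketch (the pre-exponential factor, temperature exponent, and activation energy below are illustrative placeholders, not the paper's five-species air coefficients):

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def arrhenius_rate(T, A, n, Ea):
    # Modified Arrhenius correlation k(T) = A * T^n * exp(-Ea / (kB*T)),
    # the experimental form from which such analytic relaxation
    # solutions take their reaction rates.
    return A * T**n * math.exp(-Ea / (KB * T))

# A dissociation-type rate rises steeply with temperature (illustrative
# coefficients with an activation energy of order the O2 dissociation
# energy, ~8.2e-19 J):
k_5000 = arrhenius_rate(5000.0, 1e-10, -1.0, 8.2e-19)
k_30000 = arrhenius_rate(30000.0, 1e-10, -1.0, 8.2e-19)
assert k_30000 > k_5000
```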

  8. Analytical modeling and experimental validation of a magnetorheological mount

    NASA Astrophysics Data System (ADS)

    Nguyen, The; Ciocanel, Constantin; Elahinia, Mohammad

    2009-03-01

    Magnetorheological (MR) fluid has been increasingly researched and applied in vibration isolation devices. To date, the suspension system of several high performance vehicles has been equipped with MR fluid based dampers, and research is ongoing to develop MR fluid based mounts for engine and powertrain isolation. MR fluid based devices have received attention due to the MR fluid's capability to change its properties in the presence of a magnetic field. This characteristic places MR mounts in the class of semiactive isolators, making them a desirable substitute for passive hydraulic mounts. In this research, an analytical model of a mixed-mode MR mount was constructed. The magnetorheological mount employs flow (valve) mode and squeeze mode. Each mode is powered by an independent electromagnet, so one mode does not affect the operation of the other. The analytical model was used to predict the performance of the MR mount with different sets of parameters. Furthermore, in order to produce the actual prototype, the analytical model was used to identify the optimal geometry of the mount. The experimental phase of this research was carried out by fabricating and testing the actual MR mount. The manufactured mount was tested to evaluate the effectiveness of each mode individually and in combination. The experimental results were also used to validate the ability of the analytical model to predict the response of the MR mount. Based on the observed response of the mount a suitable controller can be designed for it. However, the control scheme is not addressed in this study.

  9. Airport Facility Queuing Model Validation

    DOT National Transportation Integrated Search

    1977-05-01

    Criteria are presented for selection of analytic models to represent waiting times due to queuing processes. An existing computer model by M.F. Neuts which assumes general nonparametric distributions of arrivals per unit time and service times for a ...

  10. Design and Analysis of a Preconcentrator for the ChemLab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    WONG,CHUNGNIN C.; FLEMMING,JEB H.; MANGINELL,RONALD P.

    2000-07-17

    Preconcentration is a critical analytical procedure when designing a microsystem for trace chemical detection, because it can purify a sample mixture and boost a small analyte concentration to a much higher level, allowing a better analysis. This paper describes the development of a micro-fabricated planar preconcentrator for the µChemLab™ at Sandia. To guide the design, an analytical model has been developed to predict the analyte transport, adsorption, and desorption processes in the preconcentrator. Experiments have also been conducted to analyze the adsorption and desorption process and to validate the model. This combined effort of modeling, simulation, and testing has led us to build a reliable, efficient preconcentrator with good performance.

  11. Electromagnetic Compatibility Testing Studies

    NASA Technical Reports Server (NTRS)

    Trost, Thomas F.; Mitra, Atindra K.

    1996-01-01

    This report discusses results on analytical models, and on the measurement and simulation of statistical properties, from a study of microwave reverberation (mode-stirred) chambers performed at Texas Tech University. Two analytical models of power transfer vs. frequency in a chamber, one for antenna-to-antenna transfer and the other for antenna-to-D-dot-sensor transfer, were experimentally validated in our chamber. Two examples are presented of the measurement and calculation of chamber Q, one for each of the models. Measurements of EM power density validate a theoretical probability distribution on and away from the chamber walls, and also yield a distribution with larger standard deviation at frequencies below the range of validity of the theory. Measurements of EM power density at pairs of points validate a theoretical spatial correlation function on the chamber walls, and also yield a correlation function with larger correlation length, R(sub corr), at frequencies below the range of validity of the theory. A numerical simulation employing a rectangular cavity with a moving wall shows agreement with the measurements. We determined that the lowest frequency at which the theoretical spatial correlation function is valid in our chamber is considerably higher than the lowest frequency recommended by current guidelines for utilizing reverberation chambers in EMC testing. Two suggestions are made for future studies related to EMC testing.
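The theoretical spatial correlation function referenced above is, for an ideal well-stirred field, the sinc form rho(r) = sin(kr)/(kr) (Hill's classic result; the 1 GHz test frequency below is chosen arbitrarily for illustration):

```python
import math

C0 = 299792458.0  # speed of light, m/s

def spatial_correlation(r, f):
    # Ideal-field spatial correlation for a well-stirred reverberation
    # chamber: rho(r) = sin(kr)/(kr), with k = 2*pi*f/c.
    kr = 2.0 * math.pi * f / C0 * r
    return 1.0 if kr == 0.0 else math.sin(kr) / kr

# The first zero of rho defines a correlation length of lambda/2:
f = 1e9  # 1 GHz, arbitrary illustration frequency
wavelength = C0 / f
assert spatial_correlation(0.0, f) == 1.0
assert abs(spatial_correlation(0.5 * wavelength, f)) < 1e-12
```

Measured correlation lengths exceeding lambda/2, as reported above at low frequencies, indicate departure from this ideal-field behavior.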

  12. Flexural testing on carbon fibre laminates taking into account their different behaviour under tension and compression

    NASA Astrophysics Data System (ADS)

    Serna Moreno, M. C.; Romero Gutierrez, A.; Martínez Vicente, J. L.

    2016-07-01

    An analytical model has been derived for describing the results of three-point bending tests on materials that behave differently under tension and compression. The shift of the neutral plane, the damage initiation mode, and its location have been defined. The validity of the equations has been reviewed by testing carbon fibre-reinforced polymers (CFRP), typically employed in weight-critical applications. Both unidirectional and cross-ply laminates have been studied. The initial failure mode depends directly on the span-to-thickness ratio of the beam. Therefore, specimens with different thicknesses have been analysed to examine damage initiation due to either the bending moment or the out-of-plane shear load. The damage initiation and evolution have been described experimentally by means of optical microscopy. The good agreement between the analytical estimations and the experimental results shows the validity of the analytical model presented.
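The neutral-plane shift for a material with different tension and compression moduli can be sketched for a rectangular section (standard bimodulus beam theory, not the paper's exact equations; the moduli values below are hypothetical):

```python
import math

def neutral_plane_depths(E_t, E_c, h):
    # For a rectangular bimodulus beam (modulus E_t in tension, E_c in
    # compression), axial-force equilibrium over the cross-section gives
    # E_t*h_t^2 = E_c*h_c^2, so the tension- and compression-zone
    # depths of a section of total height h are:
    h_t = h * math.sqrt(E_c) / (math.sqrt(E_t) + math.sqrt(E_c))
    h_c = h - h_t
    return h_t, h_c

# Equal moduli recover the midplane neutral axis; a stiffer compression
# side shifts the neutral plane toward the compressive face.
h_t, h_c = neutral_plane_depths(120e9, 120e9, 2e-3)
assert abs(h_t - 1e-3) < 1e-12
h_t, h_c = neutral_plane_depths(100e9, 150e9, 2e-3)
assert h_t > h_c
```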

  13. Validation of a BOTDR-based system for the detection of smuggling tunnels

    NASA Astrophysics Data System (ADS)

    Elkayam, Itai; Klar, Assaf; Linker, Raphael; Marshall, Alec M.

    2010-04-01

    Cross-border smuggling tunnels enable unmonitored movement of people, drugs and weapons and pose a very serious threat to homeland security. Recently, Klar and Linker (2009) [SPIE paper No. 731603] presented an analytical study of the feasibility of a Brillouin Optical Time Domain Reflectometry (BOTDR) based system for the detection of small-sized smuggling tunnels. The current study extends this work by validating the analytical models against real strain measurements in soil obtained from small-scale experiments in a geotechnical centrifuge. The soil strains were obtained using an image analysis method that tracked the displacement of discrete patches of soil through a sequence of digital images of the soil around the tunnel during the centrifuge test. The results of the present study are in agreement with those of a previous study which was based on synthetic signals generated using empirical and analytical models from the literature.

  14. Development and validation of a LC-MS/MS assay for quantitation of plasma citrulline for application to animal models of the acute radiation syndrome across multiple species.

    PubMed

    Jones, Jace W; Tudor, Gregory; Bennett, Alexander; Farese, Ann M; Moroni, Maria; Booth, Catherine; MacVittie, Thomas J; Kane, Maureen A

    2014-07-01

    The potential risk of a radiological catastrophe highlights the need for identifying and validating potential biomarkers that accurately predict radiation-induced organ damage. A key target organ that is acutely sensitive to the effects of irradiation is the gastrointestinal (GI) tract, referred to as the GI acute radiation syndrome (GI-ARS). Recently, citrulline has been identified as a potential circulating biomarker for radiation-induced GI damage. Prior to biologically validating citrulline as a biomarker for radiation-induced GI injury, there is the important task of developing and validating a quantitation assay for citrulline detection within the radiation animal models used for biomarker validation. Herein, we describe the analytical development and validation of citrulline detection using a liquid chromatography tandem mass spectrometry assay that incorporates stable-label isotope internal standards. Analytical validation for specificity, linearity, lower limit of quantitation, accuracy, intra- and interday precision, extraction recovery, matrix effects, and stability was performed under sample collection and storage conditions according to the Guidance for Industry, Bioanalytical Methods Validation issued by the US Food and Drug Administration. In addition, the method was biologically validated using plasma from well-characterized mouse, minipig, and nonhuman primate GI-ARS models. The results demonstrated that circulating citrulline can be confidently quantified from plasma. Additionally, circulating citrulline displayed a time-dependent response for radiological doses covering GI-ARS across multiple species.

  15. An analytical model of leakage neutron equivalent dose for passively-scattered proton radiotherapy and validation with measurements.

    PubMed

    Schneider, Christopher; Newhauser, Wayne; Farah, Jad

    2015-05-18

    Exposure to stray neutrons increases the risk of second cancer development after proton therapy. Previously reported analytical models of this exposure were difficult to configure and had not been investigated below 100 MeV proton energy. The purposes of this study were to test an analytical model of neutron equivalent dose per therapeutic absorbed dose (H/D) at 75 MeV and to improve the model by reducing the number of configuration parameters and making it continuous in proton energy from 100 to 250 MeV. To develop the analytical model, we used previously published H/D values in water from Monte Carlo simulations of a general-purpose beamline for proton energies from 100 to 250 MeV. We also configured and tested the model on in-air neutron equivalent doses measured for a 75 MeV ocular beamline. Predicted H/D values from the analytical model and Monte Carlo agreed well from 100 to 250 MeV (10% average difference). Predicted H/D values from the analytical model also agreed well with measurements at 75 MeV (15% average difference). The results indicate that analytical models can give fast, reliable calculations of neutron exposure after proton therapy. This ability is absent in treatment planning systems but vital to second cancer risk estimation.

  16. The space shuttle payload planning working groups. Volume 8: Earth and ocean physics

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The findings and recommendations of the Earth and Ocean Physics working group of the space shuttle payload planning activity are presented. The requirements for the space shuttle mission are defined as: (1) precision measurement for earth and ocean physics experiments, (2) development and demonstration of new and improved sensors and analytical techniques, (3) acquisition of surface truth data for evaluation of new measurement techniques, (4) conduct of critical experiments to validate geophysical phenomena and instrumental results, and (5) development and validation of analytical/experimental models for global ocean dynamics and solid earth dynamics/earthquake prediction. Tables of data are presented to show the flight schedule, estimated costs, and the mission model.

  17. Validation of an assay for quantification of alpha-amylase in saliva of sheep

    PubMed Central

    Fuentes-Rubio, Maria; Fuentes, Francisco; Otal, Julio; Quiles, Alberto; Hevia, María Luisa

    2016-01-01

    The objective of this study was to develop a time-resolved immunofluorometric assay (TR-IFMA) for quantification of salivary alpha-amylase in sheep. For that purpose, after the design of the assay, an analytical and a clinical validation were carried out. The analytical validation of the assay showed intra- and inter-assay coefficients of variation (CVs) of 6.1% and 10.57%, respectively, and an analytical limit of detection of 0.09 ng/mL. The assay also demonstrated a high level of accuracy, as determined by linearity under dilution. For clinical validation, a model of acute stress testing was conducted to determine whether expected significant changes in alpha-amylase were picked up by the newly developed assay. In that model, 11 sheep were immobilized and confronted with a sheepdog to induce stress. Saliva samples were obtained before stress induction and 15, 30, and 60 min afterwards. Salivary cortisol was measured as a reference of stress level. The results of TR-IFMA showed a significant increase (P < 0.01) in the concentration of alpha-amylase in saliva after stress induction. The assay developed in this study could be used to measure alpha-amylase in the saliva of sheep, and this enzyme could be a possible noninvasive biomarker of stress in sheep. PMID:27408332
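
    The intra- and inter-assay coefficients of variation reported above are the standard precision metric CV = SD / mean × 100 computed over replicate measurements. A quick sketch with invented replicate values:

```python
import statistics

# Illustrative replicate measurements (ng/mL) of salivary alpha-amylase
# from one sample run within a single assay; the values are made up.
replicates = [12.1, 11.4, 12.8, 11.9, 12.5]

mean = statistics.mean(replicates)
sd = statistics.stdev(replicates)   # sample standard deviation
cv_percent = 100 * sd / mean        # intra-assay CV, as quoted in the abstract
print(f"mean = {mean:.2f} ng/mL, CV = {cv_percent:.1f}%")
```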

  18. Sedimentary Geothermal Feasibility Study: October 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Augustine, Chad; Zerpa, Luis

    The objective of this project is to analyze the feasibility of commercial geothermal projects using numerical reservoir simulation, considering a sedimentary reservoir with low permeability that requires productivity enhancement. A commercial thermal reservoir simulator (STARS, from Computer Modeling Group, CMG) is used in this work for numerical modeling. In the first stage of this project (FY14), a hypothetical numerical reservoir model was developed and validated against an analytical solution. The following model parameters were considered to obtain an acceptable match between the numerical and analytical solutions: grid block size, time step, and reservoir areal dimensions, the latter related to boundary effects on the numerical solution. Systematic model runs showed that insufficient grid sizing generates numerical dispersion that causes the numerical model to underestimate the thermal breakthrough time compared to the analytic model. As grid sizing is decreased, the model results converge on a solution. Likewise, insufficient reservoir model area introduces boundary effects in the numerical solution that cause the model results to differ from the analytical solution.
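
    The grid-sizing effect described above can be reproduced in miniature: with a first-order upwind scheme, a coarse grid adds numerical diffusion that smears the advancing thermal front, so the outlet cools past a detection threshold sooner than on a fine grid. This is an illustrative 1-D sketch under simplified assumptions, not the CMG STARS setup:

```python
import numpy as np

def breakthrough_time(n_cells, L=100.0, v=1.0, threshold=0.9, t_end=150.0):
    """Advect a cold injection front through a hot 1-D reservoir with
    first-order upwind and return the time at which the outlet first
    cools below `threshold` (normalized temperature)."""
    dx = L / n_cells
    dt = 0.4 * dx / v                 # CFL-stable time step
    T = np.ones(n_cells)              # hot reservoir, normalized T = 1
    t = 0.0
    while t < t_end:
        T[1:] -= v * dt / dx * (T[1:] - T[:-1])   # upwind update
        T[0] = 0.0                    # cold injected fluid at the inlet
        t += dt
        if T[-1] < threshold:
            return t
    return None

# Coarser grid -> more numerical dispersion -> earlier (underestimated)
# thermal breakthrough; refining the grid converges toward the
# sharp-front answer t = L/v.
print(breakthrough_time(25), breakthrough_time(400))
```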

  19. Modeling and Validation of a Navy A6-Intruder Actively Controlled Landing Gear System

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Daugherty, Robert H.; Martinson, Veloria J.

    1999-01-01

    Concepts for long-range air travel are characterized by airframe designs with long, slender, relatively flexible fuselages. One aspect often overlooked is ground-induced vibration of these aircraft. This paper presents an analytical and experimental study of reducing ground-induced aircraft vibration loads by using actively controlled landing gear. A facility has been developed to test various active landing gear control concepts and their performance. The facility uses a Navy A6 Intruder landing gear fitted with an auxiliary hydraulic supply electronically controlled by servo valves. An analytical model of the gear is presented, including modifications to actuate the gear externally, and test data are used to validate the model. The control design is described and closed-loop test and analysis comparisons are presented.

  20. Infrared Imagery of Solid Rocket Exhaust Plumes

    NASA Technical Reports Server (NTRS)

    Moran, Robert P.; Houston, Janice D.

    2011-01-01

    The Ares I Scale Model Acoustic Test program consisted of a series of 18 solid rocket motor static firings, simulating the liftoff conditions of the Ares I five-segment Reusable Solid Rocket Motor Vehicle. Primary test objectives included acquiring acoustic and pressure data which will be used to validate analytical models for the prediction of Ares I liftoff acoustics and ignition overpressure environments. The test article consisted of a 5% scale Ares I vehicle and launch tower mounted on the Mobile Launch Pad. The testing also incorporated several Water Sound Suppression Systems. Infrared imagery was employed during the solid rocket testing to support the validation or improvement of analytical models, and to identify corollaries between rocket plume size or shape and the accompanying measured level of noise suppression obtained by water sound suppression systems.

  1. A carrier-based analytical theory for negative capacitance symmetric double-gate field effect transistors and its simulation verification

    NASA Astrophysics Data System (ADS)

    Jiang, Chunsheng; Liang, Renrong; Wang, Jing; Xu, Jun

    2015-09-01

    A carrier-based analytical drain current model for negative capacitance symmetric double-gate field effect transistors (NC-SDG FETs) is proposed by solving the differential equation of the carrier, the Pao-Sah current formulation, and the Landau-Khalatnikov equation. The carrier equation is derived from Poisson’s equation and the Boltzmann distribution law. According to the model, an amplified semiconductor surface potential and a steeper subthreshold slope could be obtained with suitable thicknesses of the ferroelectric film and insulator layer at room temperature. Results predicted by the analytical model agree well with those of the numerical simulation from a 2D simulator without any fitting parameters. The analytical model is valid for all operation regions and captures the transitions between them without any auxiliary variables or functions. This model can be used to explore the operating mechanisms of NC-SDG FETs and to optimize device performance.
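
    The negative-capacitance behaviour invoked above follows from the steady-state Landau-Khalatnikov relation: the field across the ferroelectric is E = 2αP + 4βP³ (truncating the sixth-order term), so for α < 0 the film's V(P) curve has a negative-slope region around P = 0, which is what amplifies the semiconductor surface potential. A sketch with invented coefficients, not the paper's device parameters:

```python
import numpy as np

# Steady-state Landau-Khalatnikov voltage across a ferroelectric film:
#   V_fe = t_fe * (2*a*P + 4*b*P**3),  a < 0 below the Curie temperature.
# Coefficients are illustrative magnitudes only.
a = -1.0e9          # m/F
b = 8.0e9           # m^5/(F C^2)
t_fe = 10e-9        # film thickness, m

P = np.linspace(-0.6, 0.6, 1201)          # polarization, C/m^2
V = t_fe * (2 * a * P + 4 * b * P**3)     # voltage across the film
dVdP = np.gradient(V, P)

# dV/dP < 0 is the negative-capacitance window, |P| < sqrt(-a / (6*b)).
print("NC region spans |P| <", P[dVdP < 0].max(), "C/m^2")
```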

  2. An analytical method for designing low noise helicopter transmissions

    NASA Technical Reports Server (NTRS)

    Bossler, R. B., Jr.; Bowes, M. A.; Royal, A. C.

    1978-01-01

    The development and experimental validation of a method for analytically modeling noise mechanisms in helicopter geared power transmission systems is described. This method can be used within the design process to predict interior noise levels and to investigate the noise-reducing potential of alternative transmission design details. Examples are discussed.

  3. Experimental Validation of the Transverse Shear Behavior of a Nomex Core for Sandwich Panels

    NASA Astrophysics Data System (ADS)

    Farooqi, M. I.; Nasir, M. A.; Ali, H. M.; Ali, Y.

    2017-05-01

    This work deals with determination of the transverse shear moduli of a Nomex® honeycomb core of sandwich panels. Their out-of-plane shear characteristics depend on the transverse shear moduli of the honeycomb core. These moduli were determined experimentally, numerically, and analytically. Numerical simulations were performed using a unit cell model, and three analytical approaches were applied. Analytical calculations showed that two of the approaches provided reasonable predictions for the transverse shear modulus compared with experimental results. However, the approach based upon the classical lamination theory showed large deviations from the experimental data. The numerical simulations showed a trend similar to that of the analytical models.

  4. A Criterion-Related Validation Study of the Army Core Leader Competency Model

    DTIC Science & Technology

    2007-04-01

    2004). Transformational and transactional leadership: A meta-analytic test of their relative validity. Journal of Applied Psychology, 89, 755-768... performance criteria in an attempt to adjust ratings for this influence. Leader survey materials were developed and pilot tested at Ft. Drum and Ft... psychological constructs in the behavioral science realm. Numerous theories, popular literature, websites, assessments, and competency models are

  5. Study on bending behaviour of nickel–titanium rotary endodontic instruments by analytical and numerical analyses

    PubMed Central

    Tsao, C C; Liou, J U; Wen, P H; Peng, C C; Liu, T S

    2013-01-01

    Aim: To develop analytical models and analyse the stress distribution and flexibility of nickel–titanium (NiTi) instruments subject to bending forces. Methodology: An analytical method was used to analyse the behaviour of NiTi instruments under bending forces. Two NiTi instruments (RaCe and Mani NRT) with different cross-sections and geometries were considered. Analytical results were derived from Euler–Bernoulli nonlinear differential equations that took into account the screw pitch variation of these NiTi instruments. In addition, nonlinear deformation analyses based on the analytical model and on the finite element method were carried out. Results: According to the analytical results, the maximum curvature of the instrument occurs near the instrument tip. The finite element analysis likewise placed the maximum von Mises stress near the instrument tip. Therefore, the proposed analytical model can be used to predict the position of maximum curvature in the instrument, where fracture may occur. The analytical and numerical results were compatible. Conclusion: The proposed analytical model, validated against numerical results in analysing the bending deformation of NiTi instruments, is useful in the design and analysis of instruments and effective in studying their flexibility. Compared with the finite element method, the analytical model deals conveniently and efficiently with the bending behaviour of rotary NiTi endodontic instruments. PMID:23173762

  6. Design and analysis of tubular permanent magnet linear generator for small-scale wave energy converter

    NASA Astrophysics Data System (ADS)

    Kim, Jeong-Man; Koo, Min-Mo; Jeong, Jae-Hoon; Hong, Keyyong; Cho, Il-Hyoung; Choi, Jang-Young

    2017-05-01

    This paper reports the design and analysis of a tubular permanent magnet linear generator (TPMLG) for a small-scale wave-energy converter (WEC). The analytical field computation is performed by applying a magnetic vector potential and a 2-D analytical model to determine design parameters. Based on the analytical solutions, parametric analysis is performed to meet the design specifications of the WEC. Then, 2-D finite element analysis (FEA) is employed to validate the analytical method. Finally, the experimental results confirm the predictions of the analytical and FEA methods under regular and irregular wave conditions.

  7. A Derivation of the Analytical Relationship between the Projected Albedo-Area Product of a Space Object and its Aggregate Photometric Measurements

    DTIC Science & Technology

    2013-09-01

    model, they are, for all intents and purposes, simply unit-less linear weights. Although this equation is technically valid for a Lambertian... modeled as a single flat facet, the same model cannot be assumed equally valid for the body. The body, after all, is a complex, three-dimensional... facet (termed the "body") and the solar tracking parts of the object as another facet (termed the solar panels). This comprises the two-facet model

  8. Pulsed plane wave analytic solutions for generic shapes and the validation of Maxwell's equations solvers

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Vastano, John A.; Lomax, Harvard

    1992-01-01

    Generic shapes are subjected to pulsed plane waves of arbitrary shape. The resulting scattered electromagnetic fields are determined analytically. These fields are then computed efficiently at field locations for which numerically determined EM fields are required. Of particular interest are the pulsed waveform shapes typically utilized by radar systems. The results can be used to validate the accuracy of finite difference time domain Maxwell's equations solvers. A two-dimensional solver which is second- and fourth-order accurate in space and fourth-order accurate in time is examined. Dielectric media properties are modeled by a ramping technique which simplifies the associated gridding of body shapes. The attributes of the ramping technique are evaluated by comparison with the analytic solutions.

  9. MetaKTSP: a meta-analytic top scoring pair method for robust cross-study validation of omics prediction analysis.

    PubMed

    Kim, SungHwan; Lin, Chien-Wei; Tseng, George C

    2016-07-01

    Supervised machine learning is widely applied to transcriptomic data to predict disease diagnosis, prognosis or survival. Robust and interpretable classifiers with high accuracy are usually favored for their clinical and translational potential. The top scoring pair (TSP) algorithm is an example that applies a simple rank-based algorithm to identify rank-altered gene pairs for classifier construction. Although many classification methods perform well in cross-validation of a single expression profile, the performance usually greatly reduces in cross-study validation (i.e. the prediction model is established in the training study and applied to an independent test study) for all machine learning methods, including TSP. The failure of cross-study validation has largely diminished the potential translational and clinical values of the models. The purpose of this article is to develop a meta-analytic top scoring pair (MetaKTSP) framework that combines multiple transcriptomic studies and generates a robust prediction model applicable to independent test studies. We proposed two frameworks, by averaging TSP scores or by combining P-values from individual studies, to select the top gene pairs for model construction. We applied the proposed methods in simulated data sets and three large-scale real applications in breast cancer, idiopathic pulmonary fibrosis and pan-cancer methylation. The result showed superior performance of cross-study validation accuracy and biomarker selection for the new meta-analytic framework. In conclusion, combining multiple omics data sets in the public domain increases robustness and accuracy of the classification model that will ultimately improve disease understanding and clinical treatment decisions to benefit patients. An R package MetaKTSP is available online (http://tsenglab.biostat.pitt.edu/software.htm). © The Author 2016. Published by Oxford University Press. All rights reserved.
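
    The core of TSP is a within-sample rank comparison: a pair's score is the difference between classes in the probability that gene i is expressed below gene j, and the "averaging TSP scores" flavor of the meta-analytic framework takes the mean of that score across studies. A minimal sketch on synthetic data (the `tsp_score`/`meta_tsp_score` names and toy arrays are ours, not the MetaKTSP package API):

```python
import numpy as np

def tsp_score(Xi, Xj, y):
    """TSP score for one gene pair: |P(Xi < Xj | y=0) - P(Xi < Xj | y=1)|.
    Xi, Xj: expression of the two genes across samples; y: 0/1 labels."""
    y = np.asarray(y)
    p0 = np.mean(Xi[y == 0] < Xj[y == 0])
    p1 = np.mean(Xi[y == 1] < Xj[y == 1])
    return abs(p0 - p1)

def meta_tsp_score(studies):
    """Average the pair score over studies (the mean-score variant; the
    P-value-combination variant is not shown)."""
    return np.mean([tsp_score(Xi, Xj, y) for (Xi, Xj, y) in studies])

# Toy example: the pair reverses ranking between classes in both studies.
rng = np.random.default_rng(0)
studies = []
for _ in range(2):
    y = np.repeat([0, 1], 20)
    Xi = np.where(y == 0, 1.0, 3.0) + rng.normal(0, 0.3, 40)
    Xj = np.where(y == 0, 3.0, 1.0) + rng.normal(0, 0.3, 40)
    studies.append((Xi, Xj, y))
print(meta_tsp_score(studies))  # close to 1 for a cleanly rank-flipped pair
```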

  10. The Measure of Adolescent Heterosocial Competence: Development and Initial Validation

    ERIC Educational Resources Information Center

    Grover, Rachel L.; Nangle, Douglas W.; Zeff, Karen R.

    2005-01-01

    We developed and began construct validation of the Measure of Adolescent Heterosocial Competence (MAHC), a self-report instrument assessing the ability to negotiate effectively a range of challenging other-sex social interactions. Development followed the Goldfried and D'Zurilla (1969) behavioral-analytic model for assessing competence.…

  11. An Analytical Approach to Salary Evaluation for Educational Personnel

    ERIC Educational Resources Information Center

    Bruno, James Edward

    1969-01-01

    In this study, a linear programming model for determining an 'optimal' salary schedule was derived and then applied to an educational salary structure. The validity of the model and the effectiveness of the approach were established.

  12. Multi-Evaporator Miniature Loop Heat Pipe for Small Spacecraft Thermal Control. Part 2; Validation Results

    NASA Technical Reports Server (NTRS)

    Ku, Jentung; Ottenstein, Laura; Douglas, Donya; Hoang, Triem

    2010-01-01

    Under NASA's New Millennium Program Space Technology 8 (ST 8) Project, Goddard Space Flight Center has conducted a Thermal Loop experiment to advance the maturity of the Thermal Loop technology from proof of concept to prototype demonstration in a relevant environment, i.e., from a technology readiness level (TRL) of 3 to a level of 6. The Thermal Loop is an advanced thermal control system consisting of a miniature loop heat pipe (MLHP) with multiple evaporators and multiple condensers designed for future small-system applications requiring low mass, low power, and compactness. The MLHP retains all features of state-of-the-art loop heat pipes (LHPs) and offers additional advantages to enhance the functionality, performance, versatility, and reliability of the system. An MLHP breadboard was built and tested in the laboratory and thermal vacuum environments for the TRL 4 and TRL 5 validations, respectively, and an MLHP proto-flight unit was built and tested in a thermal vacuum chamber for the TRL 6 validation. In addition, an analytical model was developed to simulate the steady state and transient behaviors of the MLHP during various validation tests. The MLHP demonstrated excellent performance during experimental tests and the analytical model predictions agreed very well with experimental data. All success criteria at various TRLs were met. Hence, the Thermal Loop technology has reached a TRL of 6. This paper presents the validation results, both experimental and analytical, of such a technology development effort.

  13. Analytic drain current model for III-V cylindrical nanowire transistors

    NASA Astrophysics Data System (ADS)

    Marin, E. G.; Ruiz, F. G.; Schmidt, V.; Godoy, A.; Riel, H.; Gámiz, F.

    2015-07-01

    An analytical model is proposed to determine the drain current of III-V cylindrical nanowires (NWs). The model uses the gradual channel approximation and takes into account the complete analytical solution of the Poisson and Schrödinger equations for the Γ-valley and for an arbitrary number of subbands. Fermi-Dirac statistics are considered to describe the 1D electron gas in the NWs, and the resulting recursive Fermi-Dirac integral of order -1/2 is successfully integrated under reasonable assumptions. The model has been validated against numerical simulations showing excellent agreement for different semiconductor materials, diameters up to 40 nm, gate overdrive biases up to 0.7 V, and densities of interface states up to 10^13 eV^-1 cm^-2.
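
    The Fermi-Dirac integral of order -1/2 mentioned above has no elementary closed form, but it is straightforward to evaluate numerically; the substitution x = t² removes the integrable endpoint singularity. A generic quadrature sketch (not the paper's analytical treatment of the recursive integral):

```python
import numpy as np

def fermi_dirac_m12(eta, t_max=8.0, n=8001):
    """F_{-1/2}(eta) = integral_0^inf x^(-1/2) / (1 + exp(x - eta)) dx,
    evaluated with x = t^2 (so the 1/sqrt(x) singularity vanishes) and
    plain trapezoidal quadrature."""
    t = np.linspace(0.0, t_max, n)
    f = 2.0 / (1.0 + np.exp(t**2 - eta))
    dt = t[1] - t[0]
    return dt * (f[0] / 2 + f[1:-1].sum() + f[-1] / 2)

# Non-degenerate limit check: for eta << 0, F_{-1/2}(eta) -> sqrt(pi)*e^eta.
eta = -5.0
print(fermi_dirac_m12(eta), np.sqrt(np.pi) * np.exp(eta))
```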

  14. Investigation of the short argon arc with hot anode. II. Analytical model

    NASA Astrophysics Data System (ADS)

    Khrabry, A.; Kaganovich, I. D.; Nemchinsky, V.; Khodak, A.

    2018-01-01

    A short atmospheric pressure argon arc is studied numerically and analytically. In a short arc with an inter-electrode gap of several millimeters, non-equilibrium effects in plasma play an important role in operation of the arc. High anode temperature leads to electron emission and intensive radiation from its surface. A complete, self-consistent analytical model of the whole arc comprising of models for near-electrode regions, arc column, and a model of heat transfer in cylindrical electrodes was developed. The model predicts the width of non-equilibrium layers and arc column, voltages and plasma profiles in these regions, and heat and ion fluxes to the electrodes. Parametric studies of the arc have been performed for a range of the arc current densities, inter-electrode gap widths, and gas pressures. The model was validated against experimental data and verified by comparison with numerical solution. Good agreement between the analytical model and simulations and reasonable agreement with experimental data were obtained.

  17. Computational Simulation of Acoustic Modes in Rocket Combustors

    NASA Technical Reports Server (NTRS)

    Harper, Brent (Technical Monitor); Merkle, C. L.; Sankaran, V.; Ellis, M.

    2004-01-01

    A combination of computational fluid dynamic analysis and analytical solutions is being used to characterize the dominant modes in liquid rocket engines in conjunction with laboratory experiments. The analytical solutions are based on simplified geometries and flow conditions and are used for careful validation of the numerical formulation. The validated computational model is then extended to realistic geometries and flow conditions to test the effects of various parameters on chamber modes, to guide and interpret companion laboratory experiments in simplified combustors, and to scale the measurements to engine operating conditions. In turn, the experiments are used to validate and improve the model. The present paper gives an overview of the numerical and analytical techniques along with comparisons illustrating the accuracy of the computations as a function of grid resolution. A representative parametric study of the effect of combustor mean flow Mach number and combustor aspect ratio on the chamber modes is then presented for both transverse and longitudinal modes. The results show that higher mean flow Mach numbers drive the modes to lower frequencies. Estimates of transverse wave mechanics in a high aspect ratio combustor are then contrasted with longitudinal modes in a long and narrow combustor to provide understanding of potential experimental simulations.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Safari, L., E-mail: laleh.safari@ist.ac.at; Department of Physics, University of Oulu, Box 3000, FI-90014 Oulu; Santos, J. P.

    Atomic form factors are widely used for the characterization of targets and specimens, from crystallography to biology. By using recent mathematical results, here we derive an analytical expression for the atomic form factor within the independent particle model constructed from nonrelativistic screened hydrogenic wave functions. The range of validity of this analytical expression is checked by comparing the analytically obtained form factors with the ones obtained within the Hartree-Fock method. As an example, we apply our analytical expression for the atomic form factor to evaluate the differential cross section for Rayleigh scattering off neutral atoms.
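
    For a single 1s electron, the screened-hydrogenic form factor has a textbook closed form, F(q) = [1 + (q·a/2)²]⁻² with a = a₀/Z_eff, normalized so F(0) = 1 per electron. A sketch of this building block (Z_eff is an illustrative screening charge, not a value from the paper):

```python
import numpy as np

# Closed-form atomic form factor of a 1s screened-hydrogenic electron:
#   F(q) = [1 + (q * a / 2)^2]^(-2),  a = a0 / Z_eff  (atomic units).
a0 = 1.0  # Bohr radius in atomic units

def form_factor_1s(q, z_eff=1.0):
    a = a0 / z_eff
    return (1.0 + (q * a / 2.0) ** 2) ** -2

q = np.linspace(0.0, 10.0, 6)
print(form_factor_1s(q))  # F(0) = 1; falls off as q^-4 at large q
```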

  19. Manufacturing data analytics using a virtual factory representation.

    PubMed

    Jain, Sanjay; Shao, Guodong; Shin, Seung-Jun

    2017-01-01

    Large manufacturers have been using simulation to support decision-making for design and production. However, with the advancement of technologies and the emergence of big data, simulation can be utilised to perform and support data analytics for associated performance gains. This requires not only significant model development expertise, but also huge data collection and analysis efforts. This paper presents an approach within the frameworks of Design Science Research Methodology and prototyping to address the challenge of increasing the use of modelling, simulation and data analytics in manufacturing via reduction of the development effort. Manufacturing simulation models are presented both as data analytics applications in their own right and as support for other data analytics applications, serving as data generators and as a tool for validation. The virtual factory concept is presented as the vehicle for manufacturing modelling and simulation. The virtual factory goes beyond traditional simulation models of factories by including multi-resolution modelling capabilities, thus allowing analysis at varying levels of detail. A path is proposed for implementation of the virtual factory concept that builds on developments in technologies and standards. A virtual machine prototype is provided as a demonstration of the use of a virtual representation for manufacturing data analytics.

  20. Analytic model for ultrasound energy receivers and their optimal electric loads II: Experimental validation

    NASA Astrophysics Data System (ADS)

    Gorostiaga, M.; Wapler, M. C.; Wallrabe, U.

    2017-10-01

    In this paper, we verify the two optimal electric load concepts based on the zero reflection condition and on the power maximization approach for ultrasound energy receivers. We test a high loss 1-3 composite transducer, and find that the measurements agree very well with the predictions of the analytic model for plate transducers that we have developed previously. Additionally, we also confirm that the power maximization and zero reflection loads are very different when the losses in the receiver are high. Finally, we compare the optimal load predictions by the KLM and the analytic models with frequency dependent attenuation to evaluate the influence of the viscosity.
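
    The gap between the two load concepts can be seen in a lumped electrical analogy (a sketch under simplified assumptions, not the paper's acoustic derivation): for a Thevenin source with complex internal impedance Z_s, the conjugate-matched load maximizes delivered power, while a load equal to Z_s itself, the reflection-free choice for a real line, delivers less whenever Z_s has a large reactive part, i.e., when the receiver is lossy and detuned:

```python
import numpy as np

V = 1.0            # Thevenin source amplitude (illustrative)
Zs = 50.0 - 30.0j  # source impedance with a large reactive part (ohms)

def delivered_power(ZL):
    """Average power into load ZL: 0.5*|V|^2 * Re(ZL) / |Zs + ZL|^2."""
    return 0.5 * abs(V) ** 2 * ZL.real / abs(Zs + ZL) ** 2

P_conjugate = delivered_power(np.conj(Zs))  # power-maximization load
P_identical = delivered_power(Zs)           # reflection-free-style load
print(P_conjugate, P_identical)             # conjugate match delivers more
```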

  1. Transient Thermal Model and Analysis of the Lunar Surface and Regolith for Cryogenic Fluid Storage

    NASA Technical Reports Server (NTRS)

    Christie, Robert J.; Plachta, David W.; Yasan, Mohammad M.

    2008-01-01

    A transient thermal model of the lunar surface and regolith was developed along with analytical techniques which will be used to evaluate the storage of cryogenic fluids at equatorial and polar landing sites. The model can provide lunar surface and subsurface temperatures as a function of latitude and time throughout the lunar cycle and season. It also accounts for the presence or absence of the undisturbed fluff layer on the lunar surface. The model was validated with Apollo 15 and Clementine data and shows good agreement with other analytical models.
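
    A transient subsurface temperature model of this kind reduces, in its simplest form, to 1-D heat conduction with a periodic surface boundary condition. A minimal explicit finite-difference sketch with illustrative property values (not the validated model's inputs):

```python
import numpy as np

alpha = 1.0e-8            # thermal diffusivity, m^2/s (regolith-like guess)
period = 29.5 * 86400.0   # lunar synodic day, s
depth, n = 1.0, 101       # 1 m column, 101 nodes
dz = depth / (n - 1)
dt = 0.4 * dz ** 2 / alpha            # below the explicit stability limit

T = np.full(n, 250.0)                 # initial temperature, K
mean, amp = 250.0, 100.0
t = 0.0
for _ in range(int(3 * period / dt)): # run three lunar days to settle
    T[0] = mean + amp * np.sin(2 * np.pi * t / period)   # surface BC
    T[1:-1] += alpha * dt / dz**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[-1] = T[-2]                     # insulated lower boundary
    t += dt

# The diurnal swing is damped with depth on the thermal skin depth scale.
print("skin depth ~", np.sqrt(alpha * period / np.pi), "m")
```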

  2. Extension of the Hugoniot and analytical release model of α-quartz to 0.2–3 TPa

    DOE PAGES

    Desjarlais, M. P.; Knudson, M. D.; Cochrane, K. R.

    2017-07-21

    In recent years, α-quartz has been used prolifically as an impedance matching standard in shock wave experiments in the multi-Mbar regime (1 Mbar = 100 GPa = 0.1 TPa). This is due to the fact that above ~90–100 GPa along the principal Hugoniot α-quartz becomes reflective, and thus, shock velocities can be measured to high precision using velocity interferometry. The Hugoniot and release of α-quartz have been studied extensively, enabling the development of an analytical release model for use in impedance matching. However, this analytical release model has only been validated over a range of 300–1200 GPa (0.3–1.2 TPa). Here, we extend this analytical model to 200–3000 GPa (0.2–3 TPa) through additional α-quartz Hugoniot and release measurements, as well as first-principles molecular dynamics calculations.
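
    Impedance matching with a quartz standard rests on the Rankine-Hugoniot momentum relation, P = ρ₀·U_s·u_p: a precisely measured shock velocity plus the standard's U_s(u_p) fit gives the pressure. The linear fit coefficients below are invented for illustration; the actual α-quartz standard uses a higher-order fit together with the release model discussed above:

```python
# Rankine-Hugoniot momentum relation used in impedance matching:
#   P = rho0 * Us * up  (pressure from the measured shock velocity Us).
rho0 = 2.65e3            # alpha-quartz initial density, kg/m^3
C0, S = 1.75e3, 1.7      # made-up linear Us = C0 + S*up coefficients (m/s)

def hugoniot_pressure(up):
    us = C0 + S * up     # shock velocity from the assumed linear fit
    return rho0 * us * up  # Pa

up = 8.0e3               # particle velocity, m/s
print(hugoniot_pressure(up) / 1e9, "GPa")
```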

  3. Design of permanent magnet eddy current brake for a small scaled electromagnetic launch model

    NASA Astrophysics Data System (ADS)

    Zhou, Shigui; Yu, Haitao; Hu, Minqiang; Huang, Lei

    2012-04-01

    A variable pole-pitch double-sided permanent magnet (PM) linear eddy current brake (LECB) is proposed for a small scaled electromagnetic launch model. A two-dimensional (2D) analytical steady state model is presented for the double-sided PM-LECB, and the expression for the braking force is derived. Based on the analytical model, the material and eddy current skin effect of the conducting plate are analyzed. Moreover, a variable pole-pitch double-sided PM-LECB is proposed for the effective braking of the moving plate. In addition, the braking force is predicted by finite element (FE) analysis, and the simulated results are in good agreement with the analytical model. Finally, a prototype is presented to test the braking profile for validation of the proposed design.

  4. Accurate quantification of PGE2 in the polyposis in rat colon (Pirc) model by surrogate analyte-based UPLC-MS/MS.

    PubMed

    Yun, Changhong; Dashwood, Wan-Mohaiza; Kwong, Lawrence N; Gao, Song; Yin, Taijun; Ling, Qinglan; Singh, Rashim; Dashwood, Roderick H; Hu, Ming

    2018-01-30

    An accurate and reliable UPLC-MS/MS method is reported for the quantification of endogenous prostaglandin E2 (PGE2) in rat colonic mucosa and polyps. This method adopted the "surrogate analyte plus authentic bio-matrix" approach, using two different stable isotopic labeled analogs - PGE2-d9 as the surrogate analyte and PGE2-d4 as the internal standard. A quantitative standard curve was constructed with the surrogate analyte in colonic mucosa homogenate, and the method was successfully validated with the authentic bio-matrix. Concentrations of endogenous PGE2 in both normal and inflammatory tissue homogenates were back-calculated based on the regression equation. Because there was no endogenous interference with the surrogate analyte determination, the specificity was particularly good. By using authentic bio-matrix for validation, the matrix effect and extraction recovery are identical for the quantitative standard curve and the actual samples - this notably increased the assay accuracy. The method is easy, fast, robust and reliable for colon PGE2 determination. This "surrogate analyte" approach was applied to measure PGE2, one of the strong biomarkers of colorectal cancer, in the mucosa and polyps of the Pirc model (an Apc-mutant rat kindred that models human FAP). A similar concept could be applied to endogenous biomarkers in other tissues. Copyright © 2017 Elsevier B.V. All rights reserved.
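
    The surrogate-analyte back-calculation step amounts to inverting a calibration line built from the isotope-labeled surrogate. A sketch with invented calibration points (the concentrations, area ratios, and helper name are illustrative, not the study's data):

```python
import numpy as np

# Standard curve built with PGE2-d9 (surrogate) spiked into authentic
# matrix, with PGE2-d4 as internal standard (IS).
conc_std = np.array([0.5, 1, 5, 10, 50, 100])               # ng/mL PGE2-d9
ratio_std = np.array([0.051, 0.10, 0.49, 1.02, 4.9, 10.1])  # area ratio d9/d4

slope, intercept = np.polyfit(conc_std, ratio_std, 1)       # linear calibration

def back_calc(peak_area_ratio):
    """Endogenous PGE2 (ng/mL) from its analyte/IS area ratio, assuming
    the surrogate and the endogenous analyte respond identically."""
    return (peak_area_ratio - intercept) / slope

print(back_calc(2.5))  # a sample with area ratio 2.5
```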

  5. Coherent model of L-band radar scattering by soybean plants: model development, validation and retrieval

    USDA-ARS?s Scientific Manuscript database

    An improved coherent branching model for L-band radar remote sensing of soybean is proposed by taking into account the correlated scattering among scatterers. The novel feature of the analytic coherent model consists of conditional probability functions to eliminate the overlapping effects of branc...

  6. Multi-analyte quantification in bioprocesses by Fourier-transform-infrared spectroscopy by partial least squares regression and multivariate curve resolution.

    PubMed

    Koch, Cosima; Posch, Andreas E; Goicoechea, Héctor C; Herwig, Christoph; Lendl, Bernhard

    2014-01-07

This paper presents the quantification of Penicillin V and its precursor phenoxyacetic acid, inline during Penicillium chrysogenum fermentations, by FTIR spectroscopy combined with partial least squares (PLS) regression and multivariate curve resolution-alternating least squares (MCR-ALS). First, the applicability of an attenuated total reflection FTIR fiber-optic probe was assessed offline by measuring standards of the analytes of interest and investigating matrix effects of the fermentation broth. Then measurements were performed inline during four fed-batch fermentations, with online HPLC determination of Penicillin V and phenoxyacetic acid as the reference analysis. PLS and MCR-ALS models were built using these data and validated by comparing single-analyte spectra with the selectivity ratio of the PLS models and the extracted spectral traces of the MCR-ALS models, respectively. The achieved root mean square errors of cross-validation for the PLS regressions were 0.22 g L(-1) for Penicillin V and 0.32 g L(-1) for phenoxyacetic acid, and the root mean square errors of prediction for MCR-ALS were 0.23 g L(-1) for Penicillin V and 0.15 g L(-1) for phenoxyacetic acid. A general workflow for building and assessing chemometric regression models for the quantification of multiple analytes in bioprocesses by FTIR spectroscopy is given. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
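The RMSECV figures quoted above come from cross-validation: each calibration sample is predicted by a model trained without it. A minimal leave-one-out sketch on synthetic data, using ridge-regularized least squares as a simple stand-in for the PLS step (sample counts, noise level, and the regularization weight are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 20 "spectra" (50 wavenumber points) whose absorbance
# depends linearly on one analyte concentration, plus measurement noise.
n, p = 20, 50
true_conc = rng.uniform(0.1, 2.0, n)            # g/L
pure_spectrum = rng.uniform(0.5, 1.5, p)
X = np.outer(true_conc, pure_spectrum) + 0.01 * rng.standard_normal((n, p))
y = true_conc

def rmsecv(X, y):
    """Leave-one-out root mean square error of cross-validation, with
    ridge-regularized least squares standing in for PLS regression."""
    errs = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        # Regularized normal equations: (X'X + aI) b = X'y
        A = X[mask].T @ X[mask] + 1e-3 * np.eye(X.shape[1])
        b = np.linalg.solve(A, X[mask].T @ y[mask])
        errs.append(y[i] - X[i] @ b)
    return float(np.sqrt(np.mean(np.square(errs))))

rmse = rmsecv(X, y)  # analogous to the g/L RMSECV values reported above
```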

  7. International Space Station Model Correlation Analysis

    NASA Technical Reports Server (NTRS)

    Laible, Michael R.; Fitzpatrick, Kristin; Hodge, Jennifer; Grygier, Michael

    2018-01-01

This paper summarizes the on-orbit structural dynamic data and the related modal analysis, model validation and correlation performed for the International Space Station (ISS) configuration ISS Stage ULF7, 2015 Dedicated Thruster Firing (DTF). The objective of this analysis is to validate and correlate the analytical models used to calculate the ISS internal dynamic loads and to compare the 2015 DTF with previous tests. For the ISS configurations under consideration, on-orbit dynamic measurements were collected using the three main ISS instrumentation systems: the Internal Wireless Instrumentation System (IWIS), the External Wireless Instrumentation System (EWIS), and the Structural Dynamic Measurement System (SDMS). The measurements were recorded during several nominal on-orbit DTF tests on August 18, 2015. Experimental modal analyses were performed on the measured data to extract modal parameters, including frequency, damping, and mode shape information. Correlations and comparisons between test and analytical frequencies and mode shapes were performed to assess the accuracy of the analytical models for the configurations under consideration. These mode shapes were also compared to earlier tests. Based on the frequency comparisons, the accuracy of the mathematical models is assessed and model refinement recommendations are given. In particular, results for the first fundamental mode are discussed, nonlinear results are shown, and accelerometer placement is assessed.

  8. Contact-coupled impact of slender rods: analysis and experimental validation

    PubMed Central

    Tibbitts, Ira B.; Kakarla, Deepika; Siskey, Stephanie; Ochoa, Jorge A.; Ong, Kevin L.; Brannon, Rebecca M.

    2013-01-01

    To validate models of contact mechanics in low speed structural impact, slender rods were impacted in a drop tower, and measurements of the contact and vibration were compared to analytical and finite element (FE) models. The contact area was recorded using a novel thin-film transfer technique, and the contact duration was measured using electrical continuity. Strain gages recorded the vibratory strain in one rod, and a laser Doppler vibrometer measured speed. The experiment was modeled analytically on a one-dimensional spatial domain using a quasi-static Hertzian contact law and a system of delay differential equations. The three-dimensional FE model used hexahedral elements, a penalty contact algorithm, and explicit time integration. A small submodel taken from the initial global FE model economically refined the analysis in the small contact region. Measured contact areas were within 6% of both models’ predictions, peak speeds within 2%, cyclic strains within 12 με (RMS value), and contact durations within 2 μs. The global FE model and the measurements revealed small disturbances, not predicted by the analytical model, believed to be caused by interactions of the non-planar stress wavefront with the rod’s ends. The accuracy of the predictions for this simple test, as well as the versatility of the diagnostic tools, validates the theoretical and computational models, corroborates instrument calibration, and establishes confidence that the same methods may be used in experimental and computational study of contact mechanics during impact of more complicated structures. Recommendations are made for applying the methods to a particular biomechanical problem: the edge-loading of a loose prosthetic hip joint which can lead to premature wear and prosthesis failure. PMID:24729630
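The quasi-static Hertzian contact law mentioned above is what closes the analytical rod model: it relates indentation depth to contact force. A sketch of that standard law (the material values and tip radius below are illustrative assumptions, not taken from the paper):

```python
import math

def hertz_force(delta, R, E1, nu1, E2, nu2):
    """Quasi-static Hertzian contact force (sphere-on-flat):
    F = (4/3) * E_star * sqrt(R) * delta^(3/2), where
    1/E_star = (1 - nu1^2)/E1 + (1 - nu2^2)/E2."""
    if delta <= 0:
        return 0.0
    e_star = 1.0 / ((1 - nu1 ** 2) / E1 + (1 - nu2 ** 2) / E2)
    return 4.0 / 3.0 * e_star * math.sqrt(R) * delta ** 1.5

def contact_radius(delta, R):
    """Hertzian contact radius a = sqrt(R * delta) for small indentation."""
    return math.sqrt(R * delta) if delta > 0 else 0.0

# Illustrative values (assumed): steel-on-steel contact, 10 mm effective
# tip radius, 1 micron indentation.
F = hertz_force(1e-6, 0.01, 210e9, 0.3, 210e9, 0.3)
a = contact_radius(1e-6, 0.01)
```

The 3/2-power law is what makes the coupled rod equations nonlinear, hence the delay-differential-equation treatment in the analytical model.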

  9. [Validation of an in-house method for the determination of zinc in serum: Meeting the requirements of ISO 17025].

    PubMed

    Llorente Ballesteros, M T; Navarro Serrano, I; López Colón, J L

    2015-01-01

    The aim of this report is to propose a scheme for validation of an analytical technique according to ISO 17025. According to ISO 17025, the fundamental parameters tested were: selectivity, calibration model, precision, accuracy, uncertainty of measurement, and analytical interference. A protocol has been developed that has been applied successfully to quantify zinc in serum by atomic absorption spectrometry. It is demonstrated that our method is selective, linear, accurate, and precise, making it suitable for use in routine diagnostics. Copyright © 2015 SECA. Published by Elsevier Espana. All rights reserved.

  10. Analytical model for the density distribution in the Io plasma torus

    NASA Technical Reports Server (NTRS)

    Mei, YI; Thorne, Richard M.; Bagenal, Fran

    1995-01-01

    An analytical model is developed for the diffusive equilibrium plasma density distribution in the Io plasma torus. The model has been employed successfully to follow the ray path of plasma waves in the multi-ion Jovian magnetosphere; it would also be valuable for other studies of the Io torus that require a smooth and continuous description of the plasma density and its gradients. Validity of the analytical treatment requires that the temperature of thermal electrons be much lower than the ion temperature and that superthermal electrons be much less abundant than the thermal electrons; these two conditions are satisfied in the warm outer region of the Io torus from L = 6 to L = 10. The analytical solutions agree well with exact numerical calculations for the most dense portion of the Io torus within 30 deg of the equator.

  11. A comparison of finite element and analytic models of acoustic scattering from rough poroelastic interfaces.

    PubMed

    Bonomo, Anthony L; Isakson, Marcia J; Chotiros, Nicholas P

    2015-04-01

    The finite element method is used to model acoustic scattering from rough poroelastic surfaces. Both monostatic and bistatic scattering strengths are calculated and compared with three analytic models: Perturbation theory, the Kirchhoff approximation, and the small-slope approximation. It is found that the small-slope approximation is in very close agreement with the finite element results for all cases studied and that perturbation theory and the Kirchhoff approximation can be considered valid in those instances where their predictions match those given by the small-slope approximation.

  12. Analytic uncertainty and sensitivity analysis of models with input correlations

    NASA Astrophysics Data System (ADS)

    Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu

    2018-03-01

Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that the input variables are independent of each other. However, correlated inputs often occur in practical applications. In the present paper, an analytic method is developed for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of the analytic method for general models. The method is also applied to the uncertainty and sensitivity analysis of a deterministic HIV model.
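For a linear model the effect of input correlations on output uncertainty is exact and easy to check: Var(Y) = aᵀ Σ a, where Σ is the input covariance matrix. A small sketch (coefficients and correlation chosen arbitrarily for illustration) comparing the correlated and independence-assumed variances, with a Monte Carlo check of the analytic result:

```python
import numpy as np

# Linear model Y = a1*X1 + a2*X2 with correlated inputs X1, X2.
a = np.array([2.0, -1.0])
var = np.array([1.0, 4.0])            # input variances
rho = 0.6                             # correlation between X1 and X2
cov = rho * np.sqrt(var[0] * var[1])
Sigma = np.array([[var[0], cov], [cov, var[1]]])

# Analytic output variance: Var(Y) = a^T Sigma a
var_corr = a @ Sigma @ a
# Same formula with the correlation ignored (diagonal covariance only)
var_indep = a @ np.diag(var) @ a

# Monte Carlo verification of the analytic value
rng = np.random.default_rng(1)
samples = rng.multivariate_normal([0.0, 0.0], Sigma, size=200_000)
var_mc = (samples @ a).var()
```

Here ignoring the correlation overestimates the variance (8.0 vs 3.2) because the negative coefficient partially cancels the positively correlated inputs; the sign of the error depends on the signs of the coefficients and the correlation.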

  13. The clinical effectiveness and cost-effectiveness of testing for cytochrome P450 polymorphisms in patients with schizophrenia treated with antipsychotics: a systematic review and economic evaluation.

    PubMed

    Fleeman, N; McLeod, C; Bagust, A; Beale, S; Boland, A; Dundar, Y; Jorgensen, A; Payne, K; Pirmohamed, M; Pushpakom, S; Walley, T; de Warren-Penny, P; Dickson, R

    2010-01-01

To determine whether testing for cytochrome P450 (CYP) polymorphisms in adults entering antipsychotic treatment for schizophrenia leads to improvement in outcomes, is useful in medical, personal or public health decision-making, and is a cost-effective use of health-care resources. The following electronic databases were searched for relevant published literature: Cochrane Controlled Trials Register, Cochrane Database of Systematic Reviews, Database of Abstracts of Reviews of Effectiveness, EMBASE, Health Technology Assessment database, ISI Web of Knowledge, MEDLINE, PsycINFO, NHS Economic Evaluation Database, Health Economic Evaluation Database, Cost-effectiveness Analysis (CEA) Registry and the Centre for Health Economics website. In addition, publicly available information on various genotyping tests was sought from the internet and advisory panel members. A systematic review of analytical validity, clinical validity and clinical utility of CYP testing was undertaken. Data were extracted into structured tables and narratively discussed, and meta-analysis was undertaken when possible. A review of economic evaluations of CYP testing in psychiatry and a review of economic models related to schizophrenia were also carried out. For analytical validity, 46 studies of a range of different genotyping tests for 11 different CYP polymorphisms (most commonly CYP2D6) were included. Sensitivity and specificity were high (99-100%). For clinical validity, 51 studies were found. In patients tested for CYP2D6, an association between genotype and tardive dyskinesia (including Abnormal Involuntary Movement Scale scores) was found. The only other significant finding linked the CYP2D6 genotype to parkinsonism. One small unpublished study met the inclusion criteria for clinical utility. One economic evaluation assessing the costs and benefits of CYP testing for prescribing antidepressants and 28 economic models of schizophrenia were identified; none was suitable for developing a model to examine the cost-effectiveness of CYP testing. Tests for determining genotypes appear to be accurate although not all aspects of analytical validity were reported. Given the absence of convincing evidence from clinical validity studies, the lack of clinical utility and economic studies, and the unsuitability of published schizophrenia models, no model was developed; instead key features and data requirements for economic modelling are presented. Recommendations for future research cover both aspects of research quality and data that will be required to inform the development of future economic models.

  14. Development and in-line validation of a Process Analytical Technology to facilitate the scale up of coating processes.

    PubMed

    Wirges, M; Funke, A; Serno, P; Knop, K; Kleinebudde, P

    2013-05-05

Incorporation of an active pharmaceutical ingredient (API) into the coating layer of film-coated tablets is a method mainly used to formulate fixed-dose combinations. Uniform and precise spray-coating of an API represents a substantial challenge, which can be overcome by applying Raman spectroscopy as a process analytical tool. In the pharmaceutical industry, Raman spectroscopy is still mainly used as a bench-top laboratory method and is usually not implemented in the production process. Concerning application in the production process, many scientific approaches stop at the feasibility-study stage and never make the step to production-scale process applications. The present work focuses on the scale-up of an active coating process, a step of highest importance during pharmaceutical development. Active coating experiments were performed at lab and production scale. Using partial least squares (PLS), a multivariate model was constructed by correlating in-line measured Raman spectral data with the coated amount of API. By transferring this model, implemented for a lab-scale process, to a production-scale process, the robustness of the analytical method and thus its applicability as a Process Analytical Technology (PAT) tool for correct endpoint determination in pharmaceutical manufacturing could be shown. Finally, the method was validated according to the European Medicines Agency (EMA) guideline with respect to the special requirements of the applied in-line model development strategy. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. The MCNP6 Analytic Criticality Benchmark Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-06-16

Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.
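The appeal of an analytical benchmark is that the exact answer is known in closed form, so a transport code's output can be checked to arbitrary precision. The simplest example of the genre is the one-group infinite-medium multiplication factor; the cross sections below are hypothetical, chosen so the exact answer is unity:

```python
def k_infinity(nu, sigma_f, sigma_a):
    """One-group infinite-medium multiplication factor:
    k_inf = nu * Sigma_f / Sigma_a (no leakage, one energy group)."""
    return nu * sigma_f / sigma_a

# Hypothetical macroscopic cross sections (per cm) making the medium
# exactly critical: nu * Sigma_f = 2.5 * 0.08 = 0.2 = Sigma_a.
k = k_infinity(nu=2.5, sigma_f=0.08, sigma_a=0.2)
```

A Monte Carlo code run on the same one-group data should reproduce this value to within its statistical uncertainty; that comparison is the verification step.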

  16. Contrasting analytical and data-driven frameworks for radiogenomic modeling of normal tissue toxicities in prostate cancer.

    PubMed

    Coates, James; Jeyaseelan, Asha K; Ybarra, Norma; David, Marc; Faria, Sergio; Souhami, Luis; Cury, Fabio; Duclos, Marie; El Naqa, Issam

    2015-04-01

We explore analytical and data-driven approaches to investigate the integration of genetic variations (single nucleotide polymorphisms [SNPs] and copy number variations [CNVs]) with dosimetric and clinical variables in modeling radiation-induced rectal bleeding (RB) and erectile dysfunction (ED) in prostate cancer patients. Sixty-two patients who underwent curative hypofractionated radiotherapy (66 Gy in 22 fractions) between 2002 and 2010 were retrospectively genotyped for CNV and SNP rs5489 in the xrcc1 DNA repair gene. Fifty-four patients had full dosimetric profiles. Two parallel modeling approaches were compared to assess the risk of severe RB (Grade⩾3) and ED (Grade⩾1): maximum-likelihood-estimated generalized Lyman-Kutcher-Burman (LKB) models and logistic regression. Statistical resampling based on cross-validation was used to evaluate model predictive power and generalizability to unseen data. Integration of the biological variables xrcc1 CNV and SNP improved the fit of both the analytical and data-driven RB and ED models. Cross-validation of the generalized LKB models yielded increases in classification performance of 27.4% for RB and 14.6% for ED when xrcc1 CNV and SNP were included, respectively. Biological variables added to logistic regression modeling improved classification performance over standard dosimetric models by 33.5% for RB and 21.2% for ED. As a proof of concept, we demonstrated that the combination of genetic and dosimetric variables can provide significant improvement in NTCP prediction using analytical and data-driven approaches. The improvement in prediction performance was more pronounced in the data-driven approaches. Moreover, we have shown that CNVs, in addition to SNPs, may be useful structural genetic variants in predicting radiation toxicities. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
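The generalized LKB model referenced above reduces a dose-volume histogram to a generalized equivalent uniform dose (gEUD) and maps it to a complication probability through a probit function. A sketch of that standard form (the toy DVH and the parameter values below are illustrative, not the paper's fitted values):

```python
import math

def geud(doses, volumes, n):
    """Generalized equivalent uniform dose for a dose-volume histogram:
    gEUD = (sum_i v_i * D_i^(1/n))^n, with v_i fractional volumes and
    n the volume-effect parameter."""
    total = sum(volumes)
    return sum(v / total * d ** (1.0 / n) for d, v in zip(doses, volumes)) ** n

def lkb_ntcp(doses, volumes, td50, m, n):
    """Lyman-Kutcher-Burman normal tissue complication probability:
    NTCP = Phi((gEUD - TD50) / (m * TD50)), Phi the standard normal CDF."""
    t = (geud(doses, volumes, n) - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Illustrative (not fitted) rectal parameters and a toy 4-bin DVH.
p = lkb_ntcp(doses=[20, 40, 60, 66], volumes=[0.4, 0.3, 0.2, 0.1],
             td50=76.9, m=0.13, n=0.09)
```

A sanity check of the form: a uniform dose equal to TD50 must give NTCP = 0.5 regardless of m and n.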

  17. Experimental and Analytical Study of Erosive Burning of Solid Propellants

    DTIC Science & Technology

    1981-06-01

Experimental and analytical modeling studies of the erosive burning of solid propellants were conducted at Atlantic Research. Approved for public release IAW AFR 190-12; distribution is unlimited. [Remainder of OCR text largely illegible; recoverable task items include:] 8. Extend the erosive burning model from flat-plate geometry to axisymmetric flow. 9. Validate the 2-D model of erosive burning by experimental...

  18. Analytical Solution for the Anisotropic Rabi Model: Effects of Counter-Rotating Terms

    NASA Astrophysics Data System (ADS)

    Zhang, Guofeng; Zhu, Hanjie

    2015-03-01

The anisotropic Rabi model, which was proposed recently, differs from the original Rabi model in that the rotating and counter-rotating terms are governed by two different coupling constants. This feature allows us to vary the counter-rotating interaction independently and explore its effects on some quantum properties. In this paper, we eliminate the counter-rotating terms approximately and obtain analytical energy spectra and wavefunctions. These analytical results agree well with numerical calculations over a wide range of parameters, including the ultrastrong coupling regime. In the weak counter-rotating coupling limit we find that the counter-rotating terms can be treated as shifts to the parameters of the Jaynes-Cummings model. This modification confirms the validity of the rotating-wave approximation under the assumptions of near-resonance and relatively weak coupling. Moreover, analytical expressions for several physical quantities are also derived, and the results show the breakdown of the U(1) symmetry and the deviation from the Jaynes-Cummings model.

  19. Analytical solution for the anisotropic Rabi model: effects of counter-rotating terms.

    PubMed

    Zhang, Guofeng; Zhu, Hanjie

    2015-03-04

The anisotropic Rabi model, which was proposed recently, differs from the original Rabi model in that the rotating and counter-rotating terms are governed by two different coupling constants. This feature allows us to vary the counter-rotating interaction independently and explore its effects on some quantum properties. In this paper, we eliminate the counter-rotating terms approximately and obtain analytical energy spectra and wavefunctions. These analytical results agree well with numerical calculations over a wide range of parameters, including the ultrastrong coupling regime. In the weak counter-rotating coupling limit we find that the counter-rotating terms can be treated as shifts to the parameters of the Jaynes-Cummings model. This modification confirms the validity of the rotating-wave approximation under the assumptions of near-resonance and relatively weak coupling. Moreover, analytical expressions for several physical quantities are also derived, and the results show the breakdown of the U(1) symmetry and the deviation from the Jaynes-Cummings model.

  20. Analytical Solution for the Anisotropic Rabi Model: Effects of Counter-Rotating Terms

    PubMed Central

    Zhang, Guofeng; Zhu, Hanjie

    2015-01-01

The anisotropic Rabi model, which was proposed recently, differs from the original Rabi model in that the rotating and counter-rotating terms are governed by two different coupling constants. This feature allows us to vary the counter-rotating interaction independently and explore its effects on some quantum properties. In this paper, we eliminate the counter-rotating terms approximately and obtain analytical energy spectra and wavefunctions. These analytical results agree well with numerical calculations over a wide range of parameters, including the ultrastrong coupling regime. In the weak counter-rotating coupling limit we find that the counter-rotating terms can be treated as shifts to the parameters of the Jaynes-Cummings model. This modification confirms the validity of the rotating-wave approximation under the assumptions of near-resonance and relatively weak coupling. Moreover, analytical expressions for several physical quantities are also derived, and the results show the breakdown of the U(1) symmetry and the deviation from the Jaynes-Cummings model. PMID:25736827

  1. Construct Validation of Analytic Rating Scales in a Speaking Assessment: Reporting a Score Profile and a Composite

    ERIC Educational Resources Information Center

    Sawaki, Yasuyo

    2007-01-01

    This is a construct validation study of a second language speaking assessment that reported a language profile based on analytic rating scales and a composite score. The study addressed three key issues: score dependability, convergent/discriminant validity of analytic rating scales and the weighting of analytic ratings in the composite score.…

  2. Modeling and simulation of a 2-DOF bidirectional electrothermal microactuator

    NASA Astrophysics Data System (ADS)

    Topaloglu, N.; Elbuken, C.; Nieva, P. M.; Yavuz, M.; Huissoon, J. P.

    2008-03-01

    In this paper we present the modeling and simulation of a 2 degree-of-freedom (DOF) bidirectional electrothermal actuator. The four arm microactuator was designed to move in both the horizontal and vertical axes. By tailoring the geometrical parameters of the design, the in-plane and out-of-plane motions were decoupled, resulting in enhanced mobility in both directions. The motion of the actuator was modeled analytically using an electro-thermo-mechanical analysis. To validate the analytical model, finite element simulations were performed using ANSYS. The microactuators were fabricated using PolyMUMPS process and experimental results show good agreement with both the analytical model and the simulations. We demonstrated that the 2-DOF bidirectional electrothermal actuator can achieve 3.7 μm in-plane and 13.3 μm out-of-plane deflections with an input voltage of 10 V.

  3. Fast analytical model of MZI micro-opto-mechanical pressure sensor

    NASA Astrophysics Data System (ADS)

    Rochus, V.; Jansen, R.; Goyvaerts, J.; Neutens, P.; O’Callaghan, J.; Rottenberg, X.

    2018-06-01

    This paper presents a fast analytical procedure in order to design a micro-opto-mechanical pressure sensor (MOMPS) taking into account the mechanical nonlinearity and the optical losses. A realistic model of the photonic MZI is proposed, strongly coupled to a nonlinear mechanical model of the membrane. Based on the membrane dimensions, the residual stress, the position of the waveguide, the optical wavelength and the phase variation due to the opto-mechanical coupling, we derive an analytical model which allows us to predict the response of the total system. The effect of the nonlinearity and the losses on the total performance are carefully studied and measurements on fabricated devices are used to validate the model. Finally, a design procedure is proposed in order to realize fast design of this new type of pressure sensor.

  4. Theoretical and experimental investigation of architected core materials incorporating negative stiffness elements

    NASA Astrophysics Data System (ADS)

    Chang, Chia-Ming; Keefe, Andrew; Carter, William B.; Henry, Christopher P.; McKnight, Geoff P.

    2014-04-01

    Structural assemblies incorporating negative stiffness elements have been shown to provide both tunable damping properties and simultaneous high stiffness and damping over prescribed displacement regions. In this paper we explore the design space for negative stiffness based assemblies using analytical modeling combined with finite element analysis. A simplified spring model demonstrates the effects of element stiffness, geometry, and preloads on the damping and stiffness performance. Simplified analytical models were validated for realistic structural implementations through finite element analysis. A series of complementary experiments was conducted to compare with modeling and determine the effects of each element on the system response. The measured damping performance follows the theoretical predictions obtained by analytical modeling. We applied these concepts to a novel sandwich core structure that exhibited combined stiffness and damping properties 8 times greater than existing foam core technologies.

  5. Probability of identification: a statistical model for the validation of qualitative botanical identification methods.

    PubMed

    LaBudde, Robert A; Harnly, James M

    2012-01-01

A qualitative botanical identification method (BIM) is an analytical procedure that returns a binary result (1 = Identified, 0 = Not Identified). A BIM may be used by a buyer, manufacturer, or regulator to determine whether a botanical material being tested is the same as the target (desired) material, or whether it contains excessive nontarget (undesirable) material. This report describes the development of validation studies for a BIM based on the proportion of replicates identified, or probability of identification (POI), as the basic observed statistic. The statistical procedures proposed for data analysis closely follow those of the probability of detection, and harmonize the statistical concepts and parameters between quantitative and qualitative method validation. Use of POI statistics also harmonizes statistical concepts across botanical, microbiological, toxin, and other analyte identification methods that produce binary results. The POI statistical model provides a tool for graphical representation of response curves for qualitative methods, reporting of descriptive statistics, and application of performance requirements. Single-collaborator and multicollaborative study examples are given.
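Since POI is simply the proportion of replicates returning "Identified", it is a binomial proportion, and interval estimates follow directly. A small sketch using the Wilson score interval (choosing this particular interval is an illustrative assumption, not a requirement of the report):

```python
import math

def poi(identified, replicates):
    """Probability of identification: the proportion of replicates
    returning the binary result 'Identified'."""
    return identified / replicates

def wilson_interval(identified, replicates, z=1.96):
    """Wilson score confidence interval (default 95%) for a binomial
    proportion such as POI; better behaved than the normal approximation
    at small replicate counts and proportions near 0 or 1."""
    p = identified / replicates
    denom = 1.0 + z * z / replicates
    center = (p + z * z / (2.0 * replicates)) / denom
    half = z * math.sqrt(p * (1.0 - p) / replicates
                         + z * z / (4.0 * replicates ** 2)) / denom
    return center - half, center + half

# Example: 11 of 12 replicates identified the target material.
lo, hi = wilson_interval(11, 12)
```

Plotting POI with such intervals against the nontarget fraction gives the response curves the report describes.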

  6. NASA Occupant Protection Standards Development

    NASA Technical Reports Server (NTRS)

    Somers, Jeffrey; Gernhardt, Michael; Lawrence, Charles

    2012-01-01

Historically, spacecraft landing systems have been tested with human volunteers, because analytical methods for estimating injury risk were insufficient. These tests were conducted with flight-like suits and seats to verify the safety of the landing systems. Currently, NASA uses the Brinkley Dynamic Response Index to estimate injury risk, although applying it to the NASA environment has drawbacks: (1) it does not indicate the severity or anatomical location of injury; (2) it is unclear whether the model applies to NASA applications. Because of these limitations, a new validated, analytical approach was desired. Leveraging the current state of the art in automotive and racing safety, a new approach was developed. The approach has several aspects: (1) define the acceptable level of injury risk by injury severity; (2) determine the appropriate human surrogate for testing and modeling; (3) mine existing human injury data to determine appropriate Injury Assessment Reference Values (IARVs); (4) rigorously validate the IARVs with sub-injurious human testing; (5) use the validated IARVs to update standards and vehicle requirements.

  7. Validating Analytical Methods

    ERIC Educational Resources Information Center

    Ember, Lois R.

    1977-01-01

    The procedures utilized by the Association of Official Analytical Chemists (AOAC) to develop, evaluate, and validate analytical methods for the analysis of chemical pollutants are detailed. Methods validated by AOAC are used by the EPA and FDA in their enforcement programs and are granted preferential treatment by the courts. (BT)

  8. Experimental validation of analytical models for a rapid determination of cycle parameters in thermoplastic injection molding

    NASA Astrophysics Data System (ADS)

    Pignon, Baptiste; Sobotka, Vincent; Boyard, Nicolas; Delaunay, Didier

    2017-10-01

Two different analytical models were presented to determine cycle parameters of the thermoplastic injection process. The aim of these models was to quickly provide a first set of data for mold temperature and cooling time. The first model is specific to amorphous polymers and the second one is dedicated to semi-crystalline polymers, taking the crystallization into account. In both cases, the contact between the polymer and the mold could be considered perfect or not (a thermal contact resistance was considered). Results from the models are compared with experimental data obtained with an instrumented mold for an acrylonitrile butadiene styrene (ABS) and a polypropylene (PP). Good agreement was obtained for mold temperature variation and for heat flux. In the case of the PP, the analytical crystallization times were compared with those given by a model coupling heat transfer and crystallization kinetics.

  9. Analytical study of the heat loss attenuation by clothing on thermal manikins under radiative heat loads.

    PubMed

    Den Hartog, Emiel A; Havenith, George

    2010-01-01

For wearers of protective clothing in radiation environments, no quantitative guidelines are available for the effect of a radiative heat load on heat exchange. Under the European Union funded project ThermProtect, an analytical effort was defined to address the issue of radiative heat load while wearing protective clothing. Because much information became available within the ThermProtect project from thermal manikin experiments in thermal radiation environments, these experimental data sets are used to verify the analytical approach. The analytical approach provided a good prediction of the heat loss in the manikin experiments; 95% of the variance was explained by the model. The model has not yet been validated at high radiative heat loads and neglects some physical properties of the radiation emissivity. Still, the analytical approach is pragmatic and may be useful for practical implementation in protective clothing standards for moderate thermal radiation environments.

  10. Accurate Biomass Estimation via Bayesian Adaptive Sampling

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Knuth, Kevin H.; Castle, Joseph P.; Lvov, Nikolay

    2005-01-01

    The following concepts were introduced: a) Bayesian adaptive sampling for solving biomass estimation; b) characterization of MISR Rahman model parameters conditioned upon MODIS landcover; c) a rigorous non-parametric Bayesian approach to analytic mixture model determination; d) a unique U.S. asset for science product validation and verification.

  11. Predictive analytics and child protection: constraints and opportunities.

    PubMed

    Russell, Jesse

    2015-08-01

    This paper considers how predictive analytics might inform, assist, and improve decision making in child protection. Predictive analytics reflects recent increases in data quantity and diversity, along with advances in computing technology. While the use of data and statistical modeling is not new to child protection decision making, its use is growing, and efforts to leverage predictive analytics for better decision making in child protection are increasing. Past experiences, constraints, and opportunities are reviewed. For predictive analytics to make the most impact on child protection practice and outcomes, it must embrace established criteria of validity, equity, reliability, and usefulness. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. A high-performance spatial database based approach for pathology imaging algorithm evaluation

    PubMed Central

    Wang, Fusheng; Kong, Jun; Gao, Jingjing; Cooper, Lee A.D.; Kurc, Tahsin; Zhou, Zhengwen; Adler, David; Vergara-Niedermayr, Cristobal; Katigbak, Bryan; Brat, Daniel J.; Saltz, Joel H.

    2013-01-01

    Background: Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. Context: The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. Aims: (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) Create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) Develop a set of queries to support data sampling and result comparisons; (4) Achieve high performance computation capacity via a parallel data management infrastructure, parallel data loading and spatial indexing optimizations in this infrastructure. Materials and Methods: We have considered two scenarios for algorithm evaluation: (1) algorithm comparison where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. 
The validated data were formatted based on the PAIS data model and loaded into a spatial database. To support efficient data loading, we have implemented a parallel data loading tool that takes advantage of multi-core CPUs to accelerate data injection. The spatial database manages both geometric shapes and image features or classifications, and enables spatial sampling, result comparison, and result aggregation through expressive structured query language (SQL) queries with spatial extensions. To provide scalable and efficient query support, we have employed a shared-nothing parallel database architecture, which distributes data homogeneously across multiple database partitions to take advantage of parallel computation power and implements spatial indexing to achieve high I/O throughput. Results: Our work proposes a high-performance parallel spatial database platform for algorithm validation and comparison. This platform was evaluated by storing, managing, and comparing analysis results from a set of brain tumor whole slide images. The tools we develop are open source and available for download. Conclusions: Pathology image algorithm validation and comparison are essential to iterative algorithm development and refinement. One critical component is the support for queries involving spatial predicates and comparisons. In our work, we develop an efficient data model and parallel database approach to model, normalize, manage and query large volumes of analytical image result data. Our experiments demonstrate that the data partitioning strategy and the grid-based indexing result in good data distribution across database nodes and reduce I/O overhead in spatial join queries through parallel retrieval of relevant data and quick subsetting of datasets. The set of tools in the framework provide a full pipeline to normalize, load, manage and query analytical results for algorithm evaluation. PMID:23599905
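    The core comparison in algorithm validation, overlap between algorithm-produced regions and human annotations, can be sketched in a few lines. This toy version rasterizes regions to pixel sets and computes the Jaccard index; it stands in for the spatial-join queries the platform runs on true polygon geometry, and all names are illustrative rather than part of PAIS:

    ```python
    def mask_from_boxes(boxes):
        """Rasterize axis-aligned boxes (x0, y0, x1, y1) into a set of pixels."""
        pixels = set()
        for x0, y0, x1, y1 in boxes:
            for x in range(x0, x1):
                for y in range(y0, y1):
                    pixels.add((x, y))
        return pixels

    def jaccard(region_a, region_b):
        """Area of overlap divided by area of union (1.0 = identical regions)."""
        inter = len(region_a & region_b)
        union = len(region_a | region_b)
        return inter / union if union else 1.0

    algorithm = mask_from_boxes([(0, 0, 10, 10)])    # 100 px region
    annotation = mask_from_boxes([(5, 0, 15, 10)])   # same size, half-shifted
    print(jaccard(algorithm, annotation))  # 50 / 150
    ```

    A spatial database computes the same ratio with polygon intersection and union operators, pushed down to indexed partitions instead of Python sets.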

  13. Distributed parameter modeling to prevent charge cancellation for discrete thickness piezoelectric energy harvester

    NASA Astrophysics Data System (ADS)

    Krishnasamy, M.; Qian, Feng; Zuo, Lei; Lenka, T. R.

    2018-03-01

    Charge cancellation due to the sign change of strain along a single continuous piezoelectric layer can remarkably degrade the performance of a cantilever-based harvester. In this paper, analytical models using distributed parameters are developed that avert, to some extent, charge cancellation in cantilever piezoelectric transducers by segmenting the piezoelectric layers at the strain nodes of the vibration mode of interest. In the first model (Model 1), the electrodes of the piezoelectric segments are connected in parallel with a single external resistive load, while in the second model (Model 2) each bimorph piezoelectric layer is connected in parallel to its own resistor, forming an independent circuit. The analytical expressions of the closed-form electromechanical coupling responses in the frequency domain under harmonic base excitation are derived based on the Euler-Bernoulli beam assumption for both models. The developed analytical models are validated against COMSOL and experimental results. The results demonstrate that the energy harvesting performance of the developed segmented-layer models is better than that of the traditional continuous-piezoelectric-layer model.
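    The strain node at which such electrode segmentation is done can be located numerically. Under the same Euler-Bernoulli clamped-free assumption, the sketch below bisects for the interior zero of the second mode's curvature (the strain changes sign there); it is a standalone illustration, not the authors' coupled electromechanical model:

    ```python
    from math import cosh, cos, sinh, sin

    LAMBDA2 = 4.69409  # 2nd eigenvalue of a clamped-free Euler-Bernoulli beam

    def curvature(xi, lam=LAMBDA2):
        """Mode-shape curvature (up to a constant) at normalized position xi = x/L."""
        sigma = (sinh(lam) - sin(lam)) / (cosh(lam) + cos(lam))
        z = lam * xi
        return cosh(z) + cos(z) - sigma * (sinh(z) + sin(z))

    def strain_node(lo=0.05, hi=0.9, tol=1e-10):
        """Bisect for the interior zero of the curvature (the strain node)."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if curvature(lo) * curvature(mid) <= 0:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    print(strain_node())  # ~0.216 of the beam length from the clamped end
    ```

    A continuous electrode spanning this point collects charges of opposite sign from the two sides; splitting the electrode there is what removes the cancellation.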

  14. Analytical solutions by squeezing to the anisotropic Rabi model in the nonperturbative deep-strong-coupling regime

    NASA Astrophysics Data System (ADS)

    Zhang, Yu-Yu; Chen, Xiang-You

    2017-12-01

    The unexplored nonperturbative deep-strong-coupling (npDSC) regime achieved in superconducting circuits is studied in the anisotropic Rabi model by the generalized squeezing rotating-wave approximation. Energy levels are evaluated analytically from the reformulated Hamiltonian and agree well with numerical ones over a wide range of coupling strengths. This improvement is ascribed to deformation effects in the displaced-squeezed state, captured by the squeezed momentum variance, which are omitted in previous displaced states. The atom population dynamics confirms the validity of our approach at npDSC strengths. Our approach offers the possibility to explore interesting phenomena analytically in the npDSC regime in qubit-oscillator experiments.

  15. Analytical modeling and analysis of magnetic field and torque for novel axial flux eddy current couplers with PM excitation

    NASA Astrophysics Data System (ADS)

    Li, Zhao; Wang, Dazhi; Zheng, Di; Yu, Linxin

    2017-10-01

    Rotational permanent magnet eddy current couplers are promising devices for torque and speed transmission without any mechanical contact. In this study, flux-concentration disk-type permanent magnet eddy current couplers with a double conductor rotor are investigated. Given the computational cost of the accurate three-dimensional finite element method, this paper proposes a mixed two-dimensional analytical modeling approach. Based on this approach, closed-form expressions of the magnetic field, eddy current, electromagnetic force, and torque for such devices are obtained. Finally, a three-dimensional finite element method is employed to validate the analytical results. In addition, a prototype was manufactured and tested to obtain the torque-speed characteristic.

  16. Low velocity impact analysis of composite laminated plates

    NASA Astrophysics Data System (ADS)

    Zheng, Daihua

    2007-12-01

    In the past few decades polymer composites have been utilized more in structures where high strength and light weight are major concerns, e.g., aircraft, high-speed boats and sports supplies. It is well known that they are susceptible to damage resulting from lateral impact by foreign objects, such as dropped tools, hail and debris thrown up from the runway. The impact response of the structures depends not only on the material properties but also on the dynamic behavior of the impacted structure. Although commercial software is capable of analyzing such impact processes, it often requires extensive expertise and rigorous training for design and analysis. Analytical models are useful as they allow parametric studies and provide a foundation for validating the numerical results from large-scale commercial software. Therefore, it is necessary to develop analytical or semi-analytical models to better understand the behaviors of composite structures under impact and their associated failure process. In this study, several analytical models are proposed in order to analyze the impact response of composite laminated plates. Based on Meyer's Power Law, a semi-analytical model is obtained for small mass impact response of infinite composite laminates by the method of asymptotic expansion. The original nonlinear second-order ordinary differential equation is transformed into two linear ordinary differential equations. This is achieved by neglecting high-order terms in the asymptotic expansion. As a result, the semi-analytical solution of the overall impact response can be applied to contact laws with varying coefficients. Then an analytical model accounting for permanent deformation based on an elasto-plastic contact law is proposed to obtain the closed-form solutions of the wave-controlled impact responses of composite laminates. 
The analytical model is also used to predict the threshold velocity for delamination onset by combining it with an existing quasi-static delamination criterion. The predictions are compared with experimental data and explicit finite element LS-DYNA simulations. The comparisons show reasonable agreement. Furthermore, an analytical model is developed to evaluate the combined effects of prestresses and permanent deformation based on the linearized elasto-plastic contact law and the Laplace Transform technique. It is demonstrated that prestresses do not have noticeable effects on the time history of contact force and strains, but they have significant consequences on the plate central displacement. For an impacted composite laminate in the presence of prestresses, the contact force increases with increasing impactor mass, laminate thickness, and interlaminar shear strength. The combined analytical and numerical investigations provide validated models for elastic and elasto-plastic impact analysis of composite structures and shed light on the design of impact-resistant composite systems.
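    The nonlinear contact equation at the heart of such impact models can be illustrated with the classic Hertz/Meyer form m·α̈ = −k·α^(3/2) for a rigid impactor. The sketch below integrates it numerically and checks the peak indentation against the closed-form energy balance α_max = (5mv²/4k)^(2/5); the mass, velocity, and stiffness are illustrative, not from the dissertation:

    ```python
    m = 0.01      # impactor mass, kg (illustrative)
    v0 = 3.0      # impact velocity, m/s
    k = 1.0e8     # Hertzian contact stiffness, N/m^1.5

    def simulate(dt=1e-8):
        """Integrate m*a'' = -k*a**1.5 (semi-implicit Euler) until rebound."""
        a, v, peak = 0.0, v0, 0.0
        while v > 0 or a > 0:
            acc = -k * a**1.5 / m
            v += acc * dt
            a += v * dt
            if a < 0:
                break
            peak = max(peak, a)
        return peak

    alpha_num = simulate()
    alpha_exact = (5 * m * v0**2 / (4 * k)) ** 0.4  # energy-balance closed form
    print(alpha_num, alpha_exact)
    ```

    The asymptotic-expansion approach in the abstract linearizes exactly this kind of nonlinear oscillator; the numerical check here confirms the closed-form peak to well under one percent.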

  17. Crew appliance computer program manual, volume 1

    NASA Technical Reports Server (NTRS)

    Russell, D. J.

    1975-01-01

    Trade studies of numerous appliance concepts for advanced spacecraft galley, personal hygiene, housekeeping, and other areas were made to determine which best satisfy the space shuttle orbiter and modular space station mission requirements. Analytical models were developed for selected appliance concepts not previously included in the G-189A Generalized Environmental/Thermal Control and Life Support Systems (ETCLSS) Computer Program subroutine library. The new appliance subroutines are given along with complete analytical model descriptions, solution methods, user's input instructions, and validation run results. The appliance components modeled were integrated with G-189A ETCLSS models for the shuttle orbiter and modular space station, and results from computer runs of these systems are presented.

  18. Monitoring by forward scatter radar techniques: an improved second-order analytical model

    NASA Astrophysics Data System (ADS)

    Falconi, Marta Tecla; Comite, Davide; Galli, Alessandro; Marzano, Frank S.; Pastina, Debora; Lombardo, Pierfrancesco

    2017-10-01

    In this work, a second-order phase approximation is introduced to provide an improved analytical model of the signal received in forward scatter radar systems. A typical configuration, with a rectangular metallic object illuminated while crossing the baseline in far- or near-field conditions, is considered. The improved second-order model is compared with a simplified one already proposed by the authors and based on a paraxial approximation. A phase error analysis is carried out to investigate the benefits and limitations of the second-order modeling. The results are validated against full-wave numerical simulations of the relevant scattering problem in a commercial tool.
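    The benefit of keeping one more expansion term can be seen in a toy path-length calculation. This is not the authors' formulation, only the generic expansion sqrt(d² + x²) ≈ d + x²/(2d) − x⁴/(8d³), with illustrative geometry and wavelength; phase error is the path-length error scaled by 2π/λ:

    ```python
    from math import sqrt, pi

    d = 10.0           # baseline distance, m (illustrative)
    x = 1.0            # cross-range offset of a scattering point, m
    wavelength = 0.03  # ~10 GHz radar (illustrative)

    exact = sqrt(d**2 + x**2)
    paraxial = d + x**2 / (2 * d)                    # first-order (paraxial) model
    second = paraxial - x**4 / (8 * d**3)            # improved second-order model

    def phase_err(approx):
        """Phase error in radians for a given path-length approximation."""
        return 2 * pi * abs(approx - exact) / wavelength

    print(phase_err(paraxial), phase_err(second))
    ```

    For this geometry the extra term cuts the phase error by roughly two orders of magnitude, which is the kind of gain a second-order forward-scatter model trades against its added complexity.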

  19. FEMFLOW3D; a finite-element program for the simulation of three-dimensional aquifers; version 1.0

    USGS Publications Warehouse

    Durbin, Timothy J.; Bond, Linda D.

    1998-01-01

    This document also includes model validation, source code, and example input and output files. Model validation was performed using four test problems. For each test problem, the results of a model simulation with FEMFLOW3D were compared with either an analytic solution or the results of an independent numerical approach. The source code, written in the ANSI X3.9-1978 FORTRAN standard, and the complete input and output of an example problem are listed in the appendixes.
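    The validation pattern described here, comparing a numerical model against an analytic solution, can be shown in miniature with 1-D head diffusion: an explicit finite-difference solution checked against the complementary-error-function solution for a step change in head. This is a generic sketch with illustrative parameters, not FEMFLOW3D itself:

    ```python
    from math import erfc, sqrt

    D = 1.0e-3           # hydraulic diffusivity, m^2/s (illustrative)
    L, N = 1.0, 101      # domain length, m, and number of grid points
    dx = L / (N - 1)
    dt = 0.2 * dx * dx / D   # stable explicit time step
    t_end = 20.0

    h = [0.0] * N
    h[0] = 1.0               # step change in head imposed at x = 0
    t = 0.0
    while t < t_end:
        new = h[:]
        for i in range(1, N - 1):
            new[i] = h[i] + D * dt / dx**2 * (h[i + 1] - 2 * h[i] + h[i - 1])
        h = new
        t += dt

    # Semi-infinite analytic solution at the same time
    analytic = [erfc(i * dx / (2 * sqrt(D * t))) for i in range(N)]
    max_err = max(abs(a - b) for a, b in zip(h[: N // 2], analytic[: N // 2]))
    print(max_err)
    ```

    Only the near half of the domain is compared so that the finite far boundary, absent from the semi-infinite analytic solution, does not contaminate the check.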

  20. Mathematical model of water transport in Bacon and alkaline matrix-type hydrogen-oxygen fuel cells

    NASA Technical Reports Server (NTRS)

    Prokopius, P. R.; Easter, R. W.

    1972-01-01

    Based on general mass continuity and diffusive transport equations, a mathematical model was developed that simulates the transport of water in Bacon and alkaline-matrix fuel cells. The derived model was validated by using it to analytically reproduce various Bacon and matrix-cell experimental water transport transients.

  1. MODEL CORRELATION STUDY OF A RETRACTABLE BOOM FOR A SOLAR SAIL SPACECRAFT

    NASA Technical Reports Server (NTRS)

    Adetona, O.; Keel, L. H.; Oakley, J. D.; Kappus, K.; Whorton, M. S.; Kim, Y. K.; Rakpczy, J. M.

    2005-01-01

    To realize design concepts, predict dynamic behavior, and develop appropriate control strategies for high-performance operation of a solar-sail spacecraft, we developed a simple analytical model that represents the dynamic behavior of spacecraft of various sizes. Since the motion of the vehicle is dominated by the retractable booms that support the structure, our study concentrates on developing and validating a dynamic model of a long retractable boom. Extensive tests with various configurations were conducted on the 30-meter, lightweight, retractable lattice boom at NASA MSFC, which is structurally and dynamically similar to the booms of a solar-sail spacecraft currently under construction. Experimental data were then compared with the corresponding response of the analytical model. Though mixed results were obtained, the analytical model emulates several key characteristics of the boom. The paper concludes with a detailed discussion of issues observed during the study.

  2. Piezoresistive Cantilever Performance—Part I: Analytical Model for Sensitivity

    PubMed Central

    Park, Sung-Jin; Doll, Joseph C.; Pruitt, Beth L.

    2010-01-01

    An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors. PMID:20336183
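    One plausible reading of the efficiency factor idea is a conductance-weighted average of the piezoresistance factor and the bending-stress moment arm over the diffused dopant profile. The sketch below uses that form with an illustrative Gaussian profile and an illustrative decay law for P(N); neither the profile nor the P(N) fit comes from the paper's TSUPREM4 lookup table:

    ```python
    from math import exp

    t = 2.0e-6        # cantilever thickness, m (illustrative)
    N0 = 1.0e19       # peak dopant concentration, cm^-3 (illustrative)
    sigma = 0.1e-6    # spread of the diffused profile, m (illustrative)

    def dopant(z):
        """Gaussian dopant profile after drive-in diffusion (illustrative)."""
        return N0 * exp(-0.5 * (z / sigma) ** 2)

    def pz_factor(n):
        """Illustrative piezoresistance factor P(N): decays at high doping."""
        return 1.0 / (1.0 + (n / 1.0e19) ** 0.8)

    def efficiency_factor(steps=2000):
        """Conductance-weighted average of P(N) times the bending-stress arm."""
        dz = t / steps
        num = den = 0.0
        for i in range(steps):
            z = (i + 0.5) * dz
            w = dopant(z) * dz                     # local conductance weight
            num += pz_factor(dopant(z)) * (1.0 - 2.0 * z / t) * w
            den += w
        return num / den

    print(efficiency_factor())
    ```

    Because the integral separates the process-dependent profile from the geometry-dependent stress arm, changing the anneal (the profile) and changing the cantilever thickness can indeed be explored independently, which is the practical benefit the abstract highlights.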

  4. Improved modeling of GaN HEMTs for predicting thermal and trapping-induced-kink effects

    NASA Astrophysics Data System (ADS)

    Jarndal, Anwar; Ghannouchi, Fadhel M.

    2016-09-01

    In this paper, an improved modeling approach is developed and validated for GaN high electron mobility transistors (HEMTs). The proposed analytical model accurately simulates the drain current and its inherent trapping and thermal effects. A genetic-algorithm-based procedure is developed to automatically find the fitting parameters of the model. The developed modeling technique is implemented on a packaged GaN-on-Si HEMT and validated by DC and small-/large-signal RF measurements. The model is also employed for designing and realizing a switch-mode inverse class-F power amplifier. The amplifier simulations showed very good agreement with RF large-signal measurements.
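    The genetic-algorithm extraction step can be illustrated with a miniature fit of a two-parameter saturating curve to synthetic data. This is not the authors' HEMT current model; the curve, parameter ranges, and GA settings are all illustrative:

    ```python
    import random

    random.seed(0)

    def model(x, a, b):
        """Simple saturating I-V-like curve: y = a * x / (b + x)."""
        return a * x / (b + x)

    xs = [0.5 * i for i in range(1, 11)]
    true_a, true_b = 2.0, 1.5
    ys = [model(x, true_a, true_b) for x in xs]   # synthetic "measurements"

    def loss(params):
        a, b = params
        return sum((model(x, a, b) - y) ** 2 for x, y in zip(xs, ys))

    def evolve(pop_size=40, generations=80, mut=0.1):
        """Tiny GA: truncation selection, uniform crossover, Gaussian mutation."""
        pop = [[random.uniform(0.1, 5.0), random.uniform(0.1, 5.0)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=loss)
            parents = pop[: pop_size // 4]        # elitist truncation
            children = []
            while len(children) < pop_size - len(parents):
                p1, p2 = random.sample(parents, 2)
                child = [random.choice(pair) + random.gauss(0.0, mut)
                         for pair in zip(p1, p2)]
                children.append(child)
            pop = parents + children
        return min(pop, key=loss)

    best = evolve()
    print(best, loss(best))
    ```

    Keeping the best quarter of each generation unmutated guarantees the fit never worsens, which matters when each loss evaluation is an expensive device simulation rather than a ten-point sum of squares.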

  5. Validation of Slosh Modeling Approach Using STAR-CCM+

    NASA Technical Reports Server (NTRS)

    Benson, David J.; Ng, Wanyi

    2018-01-01

    Without an adequate understanding of propellant slosh, the spacecraft attitude control system may be inadequate to control the spacecraft, or there may be an unexpected loss of science observation time due to higher slosh settling times. Computational fluid dynamics (CFD) is used to model propellant slosh. STAR-CCM+ is a commercially available CFD code. This paper seeks to validate the CFD modeling approach via a comparison between STAR-CCM+ liquid slosh modeling results and experimentally, empirically, and analytically derived results. The geometries examined are a bare right cylinder tank and a right cylinder with a single ring baffle.
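    For the bare right-cylinder tank, linear slosh theory gives a closed-form first lateral mode frequency that such CFD results are typically checked against: ω² = (g·λ₁₁/R)·tanh(λ₁₁·h/R), with λ₁₁ ≈ 1.8412 the first root of J₁′. The tank dimensions below are illustrative:

    ```python
    from math import tanh, sqrt, pi

    LAMBDA_11 = 1.8412  # first root of J1'(x): first lateral slosh mode

    def first_slosh_frequency_hz(radius_m, fill_height_m, g=9.81):
        """First lateral slosh frequency of a bare right-cylinder tank (linear theory)."""
        omega_sq = (g * LAMBDA_11 / radius_m) * tanh(LAMBDA_11 * fill_height_m / radius_m)
        return sqrt(omega_sq) / (2 * pi)

    print(first_slosh_frequency_hz(0.5, 0.5))  # illustrative 0.5 m tank, half fill
    ```

    The tanh factor saturates for deep fills, so the frequency becomes nearly independent of fill height once h exceeds about one tank radius.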

  6. Validation of a numerical method for interface-resolving simulation of multicomponent gas-liquid mass transfer and evaluation of multicomponent diffusion models

    NASA Astrophysics Data System (ADS)

    Woo, Mino; Wörner, Martin; Tischer, Steffen; Deutschmann, Olaf

    2018-03-01

    The multicomponent model and the effective diffusivity model are well established diffusion models for numerical simulation of single-phase flows consisting of several components, but have so far seldom been used for two-phase flows. In this paper, a specific numerical model for interfacial mass transfer by means of a continuous single-field concentration formulation is combined with the multicomponent model and the effective diffusivity model and is validated for multicomponent mass transfer. For this purpose, several test cases for one-dimensional physical or reactive mass transfer of ternary mixtures are considered. The numerical results are compared with analytical or numerical solutions of the Maxwell-Stefan equations and/or experimental data. The composition-dependent elements of the diffusivity matrix of the multicomponent and effective diffusivity model are found to differ substantially under non-dilute conditions. The species mole fraction or concentration profiles computed with both diffusion models are, however, very similar for all test cases and in good agreement with the analytical/numerical solutions or measurements. For practical computations, the effective diffusivity model is recommended due to its simplicity and lower computational costs.
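    A common form of the effective diffusivity model is the Wilke-type mixture average, which collapses the binary diffusivity matrix into one coefficient per species. The sketch below evaluates it for an illustrative ternary mixture (the values are not from the paper's test cases):

    ```python
    def effective_diffusivity(i, x, D):
        """Wilke-type effective diffusivity of species i in a multicomponent mixture:
        D_eff,i = (1 - x_i) / sum_{j != i} (x_j / D_ij).
        """
        denom = sum(x[j] / D[i][j] for j in range(len(x)) if j != i)
        return (1.0 - x[i]) / denom

    # Illustrative ternary gas mixture (binary diffusivities in m^2/s)
    x = [0.2, 0.3, 0.5]
    D = [[None,   2.0e-5, 1.0e-5],
         [2.0e-5, None,   3.0e-5],
         [1.0e-5, 3.0e-5, None]]

    print([effective_diffusivity(i, x, D) for i in range(3)])
    ```

    Because each D_eff,i depends on the local mole fractions, the coefficient varies through a concentration boundary layer, which is exactly the composition dependence the paper reports as substantial under non-dilute conditions.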

  7. New Coke, Rosetta Stones, and Functional Data Analysis: Recommendations for Developing and Validating New Measures of Depression

    ERIC Educational Resources Information Center

    Santor, Darcy A.

    2006-01-01

    In this article, the author outlines six recommendations that may guide the continued development and validation of measures of depression. These are (a) articulate and revise a formal theory of signs and symptoms; (b) differentiate complex theoretical goals from pragmatic evaluation needs; (c) invest heavily in new methods and analytic models;…

  8. Validating Analytical Protocols to Determine Selected Pesticides and PCBs Using Routine Samples.

    PubMed

    Pindado Jiménez, Oscar; García Alonso, Susana; Pérez Pastor, Rosa María

    2017-01-01

    This study aims at providing recommendations concerning the validation of analytical protocols by using routine samples. It is intended as a case study on how to validate analytical methods in different environmental matrices. In order to analyze the selected compounds (pesticides and polychlorinated biphenyls) in two different environmental matrices, the current work has performed and validated two analytical procedures by GC-MS. A description is given of the validation of the two protocols by the analysis of more than 30 samples of water and sediments collected over nine months. The present work also addresses the uncertainty associated with both analytical protocols. The uncertainty for the water samples was estimated through a conventional approach, whereas for the sediment matrices the estimation of proportional/constant bias is also included due to their inhomogeneity. Results for the sediment matrix are reliable, showing 25-35% analytical variability associated with intermediate conditions. The analytical methodology for the water matrix determines the selected compounds with acceptable recoveries, and the combined uncertainty ranges between 20 and 30%. Analysis of routine samples is rarely used to assess the trueness of novel analytical methods, and until now this methodology had not been applied to organochlorine compounds in environmental matrices.
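    The combined uncertainty quoted for the water matrix typically comes from a root-sum-of-squares of a precision component and a bias (recovery) component, GUM-style. The relative components below are illustrative, chosen only so the expanded result lands in the 20-30% range the abstract reports:

    ```python
    from math import sqrt

    def combined_uncertainty(u_precision, u_bias):
        """Root-sum-of-squares combination of precision and bias components."""
        return sqrt(u_precision**2 + u_bias**2)

    # Illustrative relative components for a water-matrix determination
    u_prec = 0.10   # intermediate precision, relative
    u_bias = 0.08   # bias component from recovery experiments, relative

    u_c = combined_uncertainty(u_prec, u_bias)
    U = 2.0 * u_c   # expanded uncertainty, coverage factor k = 2 (~95 %)
    print(u_c, U)
    ```

    For the sediment matrix the abstract adds proportional/constant bias terms estimated from routine samples, which simply enter the same quadrature sum as extra components.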

  9. Multi-Evaporator Miniature Loop Heat Pipe for Small Spacecraft Thermal Control. Part 1; New Technologies and Validation Approach

    NASA Technical Reports Server (NTRS)

    Ku, Jentung; Ottenstein, Laura; Douglas, Donya; Hoang, Triem

    2010-01-01

    Under NASA's New Millennium Program Space Technology 8 (ST 8) Project, four experiments (Thermal Loop, Dependable Microprocessor, SAILMAST, and UltraFlex) were conducted to advance the maturity of individual technologies from proof of concept to prototype demonstration in a relevant environment, i.e., from a technology readiness level (TRL) of 3 to a level of 6. This paper presents the new technologies and validation approach of the Thermal Loop experiment. The Thermal Loop is an advanced thermal control system consisting of a miniature loop heat pipe (MLHP) with multiple evaporators and multiple condensers designed for future small system applications requiring low mass, low power, and compactness. The MLHP retains all features of state-of-the-art loop heat pipes (LHPs) and offers additional advantages to enhance the functionality, performance, versatility, and reliability of the system. Details of the thermal loop concept, technical advances, benefits, objectives, level 1 requirements, and performance characteristics are described. Also included in the paper are descriptions of the test articles and mathematical modeling used for the technology validation. An MLHP breadboard was built and tested in the laboratory and thermal vacuum environments for TRL 4 and TRL 5 validations, and an MLHP proto-flight unit was built and tested in a thermal vacuum chamber for the TRL 6 validation. In addition, an analytical model was developed to simulate the steady-state and transient behaviors of the MLHP during various validation tests. Capabilities and limitations of the analytical model are also addressed.

  10. Analytical modelling of Halbach linear generator incorporating pole shifting and piece-wise spring for ocean wave energy harvesting

    NASA Astrophysics Data System (ADS)

    Tan, Yimin; Lin, Kejian; Zu, Jean W.

    2018-05-01

    The Halbach permanent magnet (PM) array has attracted tremendous research attention in the development of electromagnetic generators owing to its unique properties. This paper proposes a generalized analytical model for linear generators in which slotted-stator pole shifting and the implementation of a Halbach array are combined for the first time. Initially, the magnetization components of the Halbach array are determined using Fourier decomposition. Then, based on the magnetic scalar potential method, the magnetic field distribution is derived employing specially treated boundary conditions. FEM analysis has been conducted to verify the analytical model. A slotted linear PM generator with a Halbach array was constructed to validate the model and was further improved using piece-wise springs to trigger full-range reciprocating motion. A dynamic model was developed to characterize the dynamic behavior of the slider. This analytical method provides an effective tool for the development and optimization of Halbach PM generators. The experimental results indicate that piece-wise springs can be employed to improve generator performance under low excitation frequency.

  11. Pattern Storage, Bifurcations, and Groupwise Correlation Structure of an Exactly Solvable Asymmetric Neural Network Model.

    PubMed

    Fasoli, Diego; Cattani, Anna; Panzeri, Stefano

    2018-05-01

    Despite their biological plausibility, neural network models with asymmetric weights are rarely solved analytically, and closed-form solutions are available only in some limiting cases or in some mean-field approximations. We found exact analytical solutions of an asymmetric spin model of neural networks of arbitrary size without resorting to any approximation, and we comprehensively studied its dynamical and statistical properties. The network had discrete-time evolution equations and binary firing rates, and it could be driven by noise with any distribution. We found analytical expressions of the conditional and stationary joint probability distributions of the membrane potentials and the firing rates. By manipulating the conditional probability distribution of the firing rates, we extended to stochastic networks the associative learning rule previously introduced by Personnaz and coworkers. The new learning rule allowed the safe storage, in the presence of noise, of point and cyclic attractors, with useful implications for content-addressable memories. Furthermore, we studied the bifurcation structure of the network dynamics in the zero-noise limit. We analytically derived examples of the codimension 1 and codimension 2 bifurcation diagrams of the network, which describe how the neuronal dynamics changes with the external stimuli. This showed that the network may undergo transitions among multistable regimes, oscillatory behavior elicited by asymmetric synaptic connections, and various forms of spontaneous symmetry breaking. We also calculated analytically groupwise correlations of neural activity in the network in the stationary regime. This revealed neuronal regimes where, statistically, the membrane potentials and the firing rates are either synchronous or asynchronous. Our results are valid for networks with any number of neurons, although our equations can be realistically solved only for small networks.
For completeness, we also derived the network equations in the thermodynamic limit of infinite network size and we analytically studied their local bifurcations. All the analytical results were extensively validated by numerical simulations.
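    Pattern storage as a fixed point of a discrete-time binary network can be shown with a toy deterministic sketch (the zero-noise limit). The symmetric Hebbian outer-product rule below is a simpler stand-in for the Personnaz-type rule discussed in the abstract, used here only to illustrate the fixed-point idea:

    ```python
    def sign(v):
        return 1 if v >= 0 else -1

    def step(J, s):
        """One synchronous discrete-time update of binary rates s in {-1, +1}."""
        n = len(s)
        return [sign(sum(J[i][j] * s[j] for j in range(n))) for i in range(n)]

    pattern = [1, -1, 1, 1, -1, -1, 1, -1]
    n = len(pattern)
    # Outer-product (Hebbian) weights storing the single pattern
    J = [[pattern[i] * pattern[j] / n for j in range(n)] for i in range(n)]

    corrupted = pattern[:]
    corrupted[0] = -corrupted[0]   # flip one unit

    print(step(J, pattern) == pattern)      # stored pattern is a fixed point
    print(step(J, corrupted) == pattern)    # one-flip error is corrected
    ```

    With J·p = p·(pᵀp)/n = p, the sign nonlinearity leaves the pattern unchanged, and a single flipped unit still projects positively onto the pattern, so one update restores it; this is the content-addressable-memory behavior the learning rule generalizes to noisy, asymmetric networks.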

  12. Shielded-Twisted-Pair Cable Model for Chafe Fault Detection via Time-Domain Reflectometry

    NASA Technical Reports Server (NTRS)

    Schuet, Stefan R.; Timucin, Dogan A.; Wheeler, Kevin R.

    2012-01-01

    This report details the development, verification, and validation of an innovative physics-based model of electrical signal propagation through shielded-twisted-pair cable, which is commonly found on aircraft and offers an ideal proving ground for detection of small holes in a shield well before catastrophic damage occurs. The accuracy of this model is verified through numerical electromagnetic simulations using a commercially available software tool. The model is shown to be representative of more realistic (analytically intractable) cable configurations as well. A probabilistic framework is developed for validating the model accuracy with reflectometry data obtained from real aircraft-grade cables chafed in the laboratory.

  13. Experimental, Numerical, and Analytical Slosh Dynamics of Water and Liquid Nitrogen in a Spherical Tank

    NASA Technical Reports Server (NTRS)

    Storey, Jedediah Morse

    2016-01-01

    Understanding, predicting, and controlling fluid slosh dynamics is critical to safety and to improving the performance of space missions in which a significant percentage of the spacecraft's mass is liquid. Computational fluid dynamics simulations can be used to predict the dynamics of slosh, but these programs require extensive validation. Many experimental and numerical studies of water slosh have been conducted. However, slosh data for cryogenic liquids are lacking. Water and cryogenic liquid nitrogen are used in various ground-based tests with a spherical tank to characterize damping, slosh mode frequencies, and slosh forces. A single ring baffle is installed in the tank for some of the tests. Analytical models for slosh modes, slosh forces, and baffle damping are constructed based on prior work. Select experiments are simulated using commercial CFD software, and the numerical results are compared to the analytical and experimental results for the purposes of validation and methodology improvement.

  14. On Nomological Validity and Auxiliary Assumptions: The Importance of Simultaneously Testing Effects in Social Cognitive Theories Applied to Health Behavior and Some Guidelines

    PubMed Central

    Hagger, Martin S.; Gucciardi, Daniel F.; Chatzisarantis, Nikos L. D.

    2017-01-01

    Tests of social cognitive theories provide informative data on the factors that relate to health behavior, and the processes and mechanisms involved. In the present article, we contend that tests of social cognitive theories should adhere to the principles of nomological validity, defined as the degree to which predictions in a formal theoretical network are confirmed. We highlight the importance of nomological validity tests to ensure theory predictions can be disconfirmed through observation. We argue that researchers should be explicit on the conditions that lead to theory disconfirmation, and identify any auxiliary assumptions on which theory effects may be conditional. We contend that few researchers formally test the nomological validity of theories, or outline conditions that lead to model rejection and the auxiliary assumptions that may explain findings that run counter to hypotheses, raising potential for ‘falsification evasion.’ We present a brief analysis of studies (k = 122) testing four key social cognitive theories in health behavior to illustrate deficiencies in reporting theory tests and evaluations of nomological validity. Our analysis revealed that few articles report explicit statements suggesting that their findings support or reject the hypotheses of the theories tested, even when findings point to rejection. We illustrate the importance of explicit a priori specification of fundamental theory hypotheses and associated auxiliary assumptions, and identification of the conditions which would lead to rejection of theory predictions. We also demonstrate the value of confirmatory analytic techniques, meta-analytic structural equation modeling, and Bayesian analyses in providing robust converging evidence for nomological validity. We provide a set of guidelines for researchers on how to adopt and apply the nomological validity approach to testing health behavior models. PMID:29163307

  15. Emittance preservation in plasma-based accelerators with ion motion

    DOE PAGES

    Benedetti, C.; Schroeder, C. B.; Esarey, E.; ...

    2017-11-01

    In a plasma-accelerator-based linear collider, the density of matched, low-emittance, high-energy particle bunches required for collider applications can be orders of magnitude above the background ion density, leading to ion motion, perturbation of the focusing fields, and, hence, to beam emittance growth. By analyzing the response of the background ions to an ultrahigh density beam, analytical expressions, valid for nonrelativistic ion motion, are derived for the transverse wakefield and for the final (i.e., after saturation) bunch emittance. Analytical results are validated against numerical modeling. Initial beam distributions are derived that are equilibrium solutions, which require head-to-tail bunch shaping, enabling emittance preservation with ion motion.

  16. Structural Design Optimization of Doubly-Fed Induction Generators Using GeneratorSE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sethuraman, Latha; Fingersh, Lee J; Dykes, Katherine L

    2017-11-13

    A wind turbine with a larger rotor swept area can generate more electricity; however, this increases costs disproportionately for manufacturing, transportation, and installation. This poster presents analytical models for optimizing doubly-fed induction generators (DFIGs), with the objective of reducing the costs and mass of wind turbine drivetrains. The structural design for the induction machine includes models for the casing, stator, rotor, and high-speed shaft developed within the DFIG module in the National Renewable Energy Laboratory's wind turbine sizing tool, GeneratorSE. The mechanical integrity of the machine is verified by examining stresses, structural deflections, and modal properties. The optimization results are then validated using finite element analysis (FEA). The results suggest that our analytical model correlates with the FEA in some areas, such as radial deflection, differing by less than 20 percent, but that it requires further development for axial deflections, torsional deflections, and stress calculations.

  17. Verification and Validation of EnergyPlus Phase Change Material Model for Opaque Wall Assemblies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tabares-Velasco, P. C.; Christensen, C.; Bianchi, M.

    2012-08-01

    Phase change materials (PCMs) represent a technology that may reduce peak loads and HVAC energy consumption in buildings. A few building energy simulation programs have the capability to simulate PCMs, but their accuracy has not been completely tested. This study shows the procedure used to verify and validate the PCM model in EnergyPlus using a similar approach as dictated by ASHRAE Standard 140, which consists of analytical verification, comparative testing, and empirical validation. This process was valuable, as two bugs were identified and fixed in the PCM model, and version 7.1 of EnergyPlus will have a validated PCM model. Preliminary results using whole-building energy analysis show that careful analysis should be done when designing PCMs in homes, as their thermal performance depends on several variables such as PCM properties and location in the building envelope.

  18. Thermodynamic analysis and subscale modeling of space-based orbit transfer vehicle cryogenic propellant resupply

    NASA Technical Reports Server (NTRS)

    Defelice, David M.; Aydelott, John C.

    1987-01-01

    The resupply of cryogenic propellants is an enabling technology for space-based orbit transfer vehicles. As part of the NASA Lewis ongoing efforts in microgravity fluid management, thermodynamic analysis and subscale modeling techniques were developed to support an on-orbit test bed for cryogenic fluid management technologies. Analytical results have shown that subscale experimental modeling of liquid resupply can be used to validate analytical models when the appropriate target temperature is selected to relate the model to its prototype system. Further analyses were used to develop a thermodynamic model of the tank chilldown process which is required prior to the no-vent fill operation. These efforts were incorporated into two FORTRAN programs which were used to present preliminary analytical results.

  19. Predictive Big Data Analytics: A Study of Parkinson's Disease Using Large, Complex, Heterogeneous, Incongruent, Multi-Source and Incomplete Observations.

    PubMed

    Dinov, Ivo D; Heavner, Ben; Tang, Ming; Glusman, Gustavo; Chard, Kyle; Darcy, Mike; Madduri, Ravi; Pa, Judy; Spino, Cathie; Kesselman, Carl; Foster, Ian; Deutsch, Eric W; Price, Nathan D; Van Horn, John D; Ames, Joseph; Clark, Kristi; Hood, Leroy; Hampstead, Benjamin M; Dauer, William; Toga, Arthur W

    2016-01-01

    A unique archive of Big Data on Parkinson's Disease is collected, managed and disseminated by the Parkinson's Progression Markers Initiative (PPMI). The integration of such complex and heterogeneous Big Data from multiple sources offers unparalleled opportunities to study the early stages of prevalent neurodegenerative processes, track their progression and quickly identify the efficacies of alternative treatments. Many previous human and animal studies have examined the relationship of Parkinson's disease (PD) risk to trauma, genetics, environment, co-morbidities, or lifestyle. The defining characteristics of Big Data (large size, incongruency, incompleteness, complexity, multiplicity of scales, and heterogeneity of information-generating sources) all pose challenges to the classical techniques for data management, processing, visualization and interpretation. We propose, implement, test and validate complementary model-based and model-free approaches for PD classification and prediction. To explore PD risk using Big Data methodology, we jointly processed complex PPMI imaging, genetics, clinical and demographic data. Collective representation of the multi-source data facilitates the aggregation and harmonization of complex data elements. This enables joint modeling of the complete data, leading to the development of Big Data analytics, predictive synthesis, and statistical validation. Using heterogeneous PPMI data, we developed a comprehensive protocol for end-to-end data characterization, manipulation, processing, cleaning, analysis and validation. Specifically, we (i) introduce methods for rebalancing imbalanced cohorts, (ii) utilize a wide spectrum of classification methods to generate consistent and powerful phenotypic predictions, and (iii) generate reproducible machine-learning based classification that enables the reporting of model parameters and diagnostic forecasting based on new data. 
We evaluated several complementary model-based predictive approaches, which failed to generate accurate and reliable diagnostic predictions. However, the results of several machine-learning based classification methods indicated significant power to predict Parkinson's disease in the PPMI subjects (consistent accuracy, sensitivity, and specificity exceeding 96%, confirmed using statistical n-fold cross-validation). Clinical (e.g., Unified Parkinson's Disease Rating Scale (UPDRS) scores), demographic (e.g., age), genetics (e.g., rs34637584, chr12), and derived neuroimaging biomarker (e.g., cerebellum shape index) data all contributed to the predictive analytics and diagnostic forecasting. Model-free Big Data machine learning-based classification methods (e.g., adaptive boosting, support vector machines) can outperform model-based techniques in terms of predictive precision and reliability (e.g., forecasting patient diagnosis). We observed that statistical rebalancing of cohort sizes yields better discrimination of group differences, specifically for predictive analytics based on heterogeneous and incomplete PPMI data. UPDRS scores play a critical role in predicting diagnosis, which is expected based on the clinical definition of Parkinson's disease. Even without longitudinal UPDRS data, however, the accuracy of model-free machine learning based classification is over 80%. The methods, software and protocols developed here are openly shared and can be employed to study other neurodegenerative disorders (e.g., Alzheimer's, Huntington's, amyotrophic lateral sclerosis), as well as for other predictive Big Data analytics applications.
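    One of the protocol steps named above, rebalancing imbalanced cohorts, can be sketched with a minimal random-oversampling routine. The features and labels below are invented for illustration, and the actual PPMI pipeline is far richer:

```python
import random

def oversample_minority(samples, labels, seed=0):
    """Rebalance a cohort by randomly oversampling the smaller class(es)
    with replacement until all classes have equal counts."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(xs) for xs in by_class.values())
    bal_x, bal_y = [], []
    for y, xs in by_class.items():
        extra = [rng.choice(xs) for _ in range(target - len(xs))]
        for x in xs + extra:
            bal_x.append(x)
            bal_y.append(y)
    return bal_x, bal_y

X = [[0.1], [0.2], [0.3], [0.9]]            # toy feature vectors
y = ["control", "control", "control", "PD"]  # imbalanced 3:1 cohort
Xb, yb = oversample_minority(X, y)
# yb now contains 3 'control' and 3 'PD' labels
```

    After rebalancing, any downstream classifier sees equal class counts, which is the effect the study reports as yielding better discrimination of group differences.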

  20. Validation by simulation of a clinical trial model using the standardized mean and variance criteria.

    PubMed

    Abbas, Ismail; Rovira, Joan; Casanovas, Josep

    2006-12-01

    To develop and validate a model of a clinical trial that evaluates the changes in cholesterol level as a surrogate marker for lipodystrophy in HIV subjects under alternative antiretroviral regimes, i.e., treatment with protease inhibitors vs. a combination of nevirapine and other antiretroviral drugs. Five simulation models were developed based on different assumptions about treatment variability and the pattern of cholesterol reduction over time. The endpoints considered were the last recorded cholesterol level, the difference from baseline, the average difference from baseline, and the level evolution over time. Specific validation criteria based on a plus or minus 10% standardized distance in means and variances were used to compare the real and the simulated data. The validity criterion was met by all models for individual endpoints. However, only two models met the validity criterion when all endpoints were considered. The model based on the assumption that within-subject variability of cholesterol levels changes over time is the one that minimizes the validity criterion, with a standardized distance within plus or minus 1%. Simulation is a useful technique for calibration, estimation, and evaluation of models, which allows us to relax the often overly restrictive assumptions regarding parameters required by analytical approaches. The validity criterion can also be used to select the preferred model for design optimization, until additional data are obtained allowing an external validation of the model.
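    The abstract does not give the exact formula for the criterion. Assuming a common standardization (difference in means scaled by the pooled standard deviation, and relative difference in variances), a sketch of the ±10% check might look like:

```python
from statistics import mean, pvariance, pstdev

def standardized_distances(real, sim):
    """Assumed form of the criterion: difference in means standardized by
    the pooled SD of both samples, and relative difference in variances."""
    d_mean = (mean(sim) - mean(real)) / pstdev(real + sim)
    d_var = (pvariance(sim) - pvariance(real)) / pvariance(real)
    return d_mean, d_var

def is_valid(real, sim, tol=0.10):
    """Model passes if both standardized distances fall within +/- tol."""
    d_mean, d_var = standardized_distances(real, sim)
    return abs(d_mean) <= tol and abs(d_var) <= tol

# toy cholesterol endpoint values (mmol/L), invented for illustration
real = [5.2, 5.0, 4.8, 5.1, 4.9]
sim = [5.2, 5.0, 4.81, 5.09, 4.9]
```

    Each of the five simulation models would be scored this way on every endpoint, and the model minimizing these distances across all endpoints would be preferred.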

  1. Nonlinear dynamics of planetary gears using analytical and finite element models

    NASA Astrophysics Data System (ADS)

    Ambarisha, Vijaya Kumar; Parker, Robert G.

    2007-05-01

    Vibration-induced gear noise and dynamic loads remain key concerns in many transmission applications that use planetary gears. Tooth separations at large vibrations introduce nonlinearity in geared systems. The present work examines the complex, nonlinear dynamic behavior of spur planetary gears using two models: (i) a lumped-parameter model, and (ii) a finite element model. The two-dimensional (2D) lumped-parameter model represents the gears as lumped inertias, the gear meshes as nonlinear springs with tooth contact loss and periodically varying stiffness due to changing tooth contact conditions, and the supports as linear springs. The 2D finite element model is developed from a unique finite element-contact analysis solver specialized for gear dynamics. Mesh stiffness variation excitation, corner contact, and gear tooth contact loss are all intrinsically considered in the finite element analysis. The dynamics of planetary gears show a rich spectrum of nonlinear phenomena. Nonlinear jumps, chaotic motions, and period-doubling bifurcations occur when the mesh frequency or any of its higher harmonics are near a natural frequency of the system. Responses from the dynamic analysis using analytical and finite element models are successfully compared qualitatively and quantitatively. These comparisons validate the effectiveness of the lumped-parameter model to simulate the dynamics of planetary gears. Mesh phasing rules to suppress rotational and translational vibrations in planetary gears are valid even when nonlinearity from tooth contact loss occurs. These mesh phasing rules, however, are not valid in the chaotic and period-doubling regions.
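    The two nonlinear ingredients of the lumped-parameter model, periodically varying mesh stiffness and tooth contact loss, can be illustrated with a toy single-degree-of-freedom sketch. All parameters are hypothetical, and the paper's model has many more degrees of freedom:

```python
import math

def simulate_mesh(k_mean=1.0, k_amp=0.3, omega_mesh=0.5, zeta=0.02,
                  f_static=0.2, dt=1e-3, steps=60000):
    """Toy 1-DOF gear-mesh model: x'' + 2*zeta*x' + k(t)*g(x) = f_static,
    with periodically varying mesh stiffness k(t) and contact loss
    g(x) = max(x, 0): the mesh spring cannot pull when teeth separate.
    Integrated with semi-implicit Euler."""
    x, v = 0.0, 0.0
    xs = []
    for n in range(steps):
        t = n * dt
        k = k_mean + k_amp * math.cos(omega_mesh * t)
        contact = x if x > 0.0 else 0.0   # tooth separation => no mesh force
        a = f_static - 2.0 * zeta * v - k * contact
        v += a * dt
        x += v * dt
        xs.append(x)
    return xs

xs = simulate_mesh()
```

    Sweeping `omega_mesh` toward a natural frequency (or its harmonics) in such a model is how nonlinear jumps and period-doubling responses of the kind described above are typically exposed.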

  2. Analytical and numerical solution for wave reflection from a porous wave absorber

    NASA Astrophysics Data System (ADS)

    Magdalena, Ikha; Roque, Marian P.

    2018-03-01

    In this paper, wave reflection from a porous wave absorber is investigated theoretically and numerically. The equations that we used are based on a shallow-water-type model. The motion inside the absorber is modified by including a linearized friction term in the momentum equation and by introducing a filtered velocity. Here, an analytical solution for the wave reflection coefficient of a porous wave absorber over a flat bottom is derived. Numerically, we solve the equations using the finite volume method on a staggered grid. To validate our numerical model, the numerical reflection coefficient is compared against the analytical solution. Further, we implement our numerical scheme to study the evolution of surface waves passing through a porous absorber over varied bottom topography.
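    The numerical approach named in the abstract, a staggered-grid scheme for shallow-water-type equations, can be sketched in one dimension. This is a linearized sketch with closed walls that omits the absorber's friction term; all parameters are illustrative:

```python
import math

def shallow_water_step(eta, u, dx, dt, depth=1.0, g=9.81):
    """One step of the 1-D linear shallow water equations on a staggered
    grid: eta at cell centres, u at cell faces (closed walls at both ends).
        d(eta)/dt = -depth * du/dx,    du/dt = -g * d(eta)/dx
    """
    n = len(eta)
    # momentum update on interior faces; u[0] and u[n] stay 0 (walls)
    for i in range(1, n):
        u[i] -= g * dt / dx * (eta[i] - eta[i - 1])
    # continuity update at cell centres
    for i in range(n):
        eta[i] -= depth * dt / dx * (u[i + 1] - u[i])
    return eta, u

# Gaussian hump initial condition in a closed channel
n, dx, dt = 100, 1.0, 0.02
eta = [0.1 * math.exp(-((i - n / 2) * dx) ** 2 / 20.0) for i in range(n)]
u = [0.0] * (n + 1)
mass0 = sum(eta) * dx
for _ in range(500):
    eta, u = shallow_water_step(eta, u, dx, dt)
```

    The staggered placement makes the continuity update telescope, so total mass is conserved to round-off; adding a `-friction * u[i]` term to the face update inside the absorber region would mimic the damping mechanism the paper introduces.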

  3. Modeling of optical mirror and electromechanical behavior

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Lu, Chao; Liu, Zishun; Liu, Ai Q.; Zhang, Xu M.

    2001-10-01

    This paper presents finite element (FE) simulation and theoretical analysis of novel MEMS fiber-optic switches actuated by electrostatic attraction. FE simulations of the switches under static and dynamic loading are first carried out to reveal the mechanical characteristics: the minimum (critical) switching voltages, the natural frequencies, the mode shapes, and the response under different levels of electrostatic attraction load. To validate the FE simulation results, a theoretical (or analytical) model is then developed for one specific switch, i.e., Plate_40_104. Good agreement is found between the FE simulation and the analytical results. From both FE simulation and theoretical analysis, the critical switching voltage for Plate_40_104 is derived to be 238 V for a switching angle of 12 degrees. The critical switching-on and switching-off times are 431 microseconds and 67 microseconds, respectively. The present study not only develops good FE and analytical models, but also demonstrates step by step a method to simplify a real optical switch structure, with reference to the FE simulation results, for analytical purposes. With the FE and analytical models, it is easy to obtain any information about the mechanical behavior of the optical switches, which is helpful in yielding optimized designs.

  4. THz spectroscopy: An emerging technology for pharmaceutical development and pharmaceutical Process Analytical Technology (PAT) applications

    NASA Astrophysics Data System (ADS)

    Wu, Huiquan; Khan, Mansoor

    2012-08-01

    As an emerging technology, THz spectroscopy has gained increasing attention in the pharmaceutical area during the last decade. This attention is due to the fact that (1) it provides a promising alternative approach for in-depth understanding of both intermolecular interaction among pharmaceutical molecules and pharmaceutical product quality attributes; (2) it provides a promising alternative approach for enhanced process understanding of certain pharmaceutical manufacturing processes; and (3) it aligns with the FDA pharmaceutical quality initiatives, most notably the Process Analytical Technology (PAT) initiative. In this work, the current status and progress made so far on using THz spectroscopy for pharmaceutical development and pharmaceutical PAT applications are reviewed. In the spirit of demonstrating the utility of a first-principles modeling approach for addressing the model validation challenge and reducing unnecessary model validation "burden" to facilitate THz pharmaceutical PAT applications, two scientific case studies based on published THz spectroscopy measurement results are created and discussed. Furthermore, other technical challenges and opportunities associated with adopting THz spectroscopy as a pharmaceutical PAT tool are highlighted.

  5. Radiant Energy Measurements from a Scaled Jet Engine Axisymmetric Exhaust Nozzle for a Baseline Code Validation Case

    NASA Technical Reports Server (NTRS)

    Baumeister, Joseph F.

    1994-01-01

    A non-flowing, electrically heated test rig was developed to verify computer codes that calculate radiant energy propagation from nozzle geometries that represent aircraft propulsion nozzle systems. Since there are a variety of analysis tools used to evaluate thermal radiation propagation from partially enclosed nozzle surfaces, an experimental benchmark test case was developed for code comparison. This paper briefly describes the nozzle test rig and the developed analytical nozzle geometry used to compare the experimental and predicted thermal radiation results. A major objective of this effort was to make available the experimental results and the analytical model in a format to facilitate conversion to existing computer code formats. For code validation purposes this nozzle geometry represents one validation case for one set of analysis conditions. Since each computer code has advantages and disadvantages based on scope, requirements, and desired accuracy, the usefulness of this single nozzle baseline validation case can be limited for some code comparisons.

  6. Multi-analyte validation in heterogeneous solution by ELISA.

    PubMed

    Lakshmipriya, Thangavel; Gopinath, Subash C B; Hashim, Uda; Murugaiyah, Vikneswaran

    2017-12-01

    Enzyme Linked Immunosorbent Assay (ELISA) is a standard assay that has been used widely to validate the presence of an analyte in solution. With the advancement of ELISA, different strategies have been demonstrated, making it a suitable immunoassay for a wide range of analytes. Herein, we attempted to provide additional evidence with ELISA to show its suitability for multi-analyte detection. To demonstrate, three clinically relevant targets were chosen: the 16 kDa protein from Mycobacterium tuberculosis, human blood clotting Factor IXa, and the tumour marker Squamous Cell Carcinoma (SCC) antigen. Indeed, we adapted the routine steps of conventional ELISA to validate the occurrence of the analytes in both homogeneous and heterogeneous solutions. With the homogeneous and heterogeneous solutions, we could attain sensitivities of 2, 8, and 1 nM for the 16 kDa protein, FIXa, and SCC antigen, respectively. Further, the specific multi-analyte validations were evidenced with similar sensitivities in the presence of human serum. The ELISA in this study has proven its applicability for genuine multi-target validation in heterogeneous solution and can be followed for other target validations. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Validity of plant fiber length measurement : a review of fiber length measurement based on kenaf as a model

    Treesearch

    James S. Han; Theodore Mianowski; Yi-yu Lin

    1999-01-01

    The efficacy of fiber length measurement techniques such as digitizing, the Kajaani procedure, and NIH Image are compared in order to determine the optimal tool. Kenaf bast fibers, aspen, and red pine fibers were collected from different anatomical parts, and the fiber lengths were compared using various analytical tools. A statistical analysis on the validity of the...

  8. Analytic model for ultrasound energy receivers and their optimal electric loads

    NASA Astrophysics Data System (ADS)

    Gorostiaga, M.; Wapler, M. C.; Wallrabe, U.

    2017-08-01

    In this paper, we present an analytic model for ultrasound energy receivers based on plates resonating in thickness mode, which we have derived from the piezoelectric and wave equations and in which we have included dielectric, viscosity, and acoustic attenuation losses. Afterwards, we explore the optimal electric load predictions of the zero-reflection and power-maximization approaches present in the literature under different acoustic boundary conditions, and discuss their limitations. To validate our model, we compared our expressions with the numerically solved KLM model, with very good agreement. Finally, we discuss the differences between the zero-reflection and power-maximization optimal electric loads, which start to differ as losses in the receiver increase.
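    The circuit-level content of the power-maximization criterion contrasted in this abstract can be illustrated with a phasor sketch: for a Thévenin source with internal impedance Z_s, delivered power is maximized by the conjugate-matched load Z_L = Z_s*. This is generic circuit theory, not the receiver model itself, and the values are hypothetical:

```python
def delivered_power(V_s, Z_s, Z_L):
    """Average power delivered to load Z_L from a phasor source V_s
    with internal (Thevenin) impedance Z_s."""
    I = V_s / (Z_s + Z_L)          # load current phasor
    return 0.5 * abs(I) ** 2 * Z_L.real

V_s = 1.0                          # 1 V peak source
Z_s = 50 + 20j                     # hypothetical lossy source impedance

# power maximization: conjugate-matched load
p_match = delivered_power(V_s, Z_s, Z_s.conjugate())
# any other load delivers less
p_other = delivered_power(V_s, Z_s, 60 + 0j)
```

    In a lossless receiver the zero-reflection load coincides with this matched load; as the abstract notes, the two predictions separate as receiver losses grow.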

  9. Teaching Analytical Method Transfer through Developing and Validating Then Transferring Dissolution Testing Methods for Pharmaceuticals

    ERIC Educational Resources Information Center

    Kimaru, Irene; Koether, Marina; Chichester, Kimberly; Eaton, Lafayette

    2017-01-01

    Analytical method transfer (AMT) and dissolution testing are important topics required in industry that should be taught in analytical chemistry courses. Undergraduate students in senior level analytical chemistry laboratory courses at Kennesaw State University (KSU) and St. John Fisher College (SJFC) participated in development, validation, and…

  10. LOX/hydrocarbon rocket engine analytical design methodology development and validation. Volume 1: Executive summary and technical narrative

    NASA Technical Reports Server (NTRS)

    Pieper, Jerry L.; Walker, Richard E.

    1993-01-01

    During the past three decades, an enormous amount of resources was expended in the design and development of Liquid Oxygen/Hydrocarbon and Hydrogen (LOX/HC and LOX/H2) rocket engines. A significant portion of these resources was used to develop and demonstrate the performance and combustion stability of each new engine. During these efforts, many analytical and empirical models were developed that characterize design parameters and combustion processes that influence performance and stability. Many of these models are suitable as design tools, but they have not been assembled into an industry-wide usable analytical design methodology. The objective of this program was to assemble existing performance and combustion stability models into a usable methodology capable of producing high-performing and stable LOX/hydrocarbon and LOX/hydrogen propellant booster engines.

  11. Prediction of turning stability using receptance coupling

    NASA Astrophysics Data System (ADS)

    Jasiewicz, Marcin; Powałka, Bartosz

    2018-01-01

    This paper addresses the prediction of machining stability for the dynamic lathe-workpiece system using the receptance coupling method. The dynamic properties of the lathe components (the spindle and the tailstock) are assumed to be constant and can be determined experimentally from impact-test results. The variable element of the machine tool - holder - workpiece system is therefore the machined part, which can easily be modelled analytically. Receptance coupling enables a synthesis of the experimental (spindle, tailstock) and analytical (machined part) models, so impact testing of the entire system becomes unnecessary. The paper presents the methodology of synthesizing the analytical and experimental models, the evaluation of the stability lobes, and an experimental validation procedure involving both the determination of the dynamic properties of the system and cutting tests. Finally, the experimental verification results are presented and discussed.

  12. A two-dimensional analytical model of vapor intrusion involving vertical heterogeneity.

    PubMed

    Yao, Yijun; Verginelli, Iason; Suuberg, Eric M

    2017-05-01

    In this work, we present an analytical chlorinated vapor intrusion (CVI) model that can estimate source-to-indoor air concentration attenuation by simulating two-dimensional (2-D) vapor concentration profile in vertically heterogeneous soils overlying a homogenous vapor source. The analytical solution describing the 2-D soil gas transport was obtained by applying a modified Schwarz-Christoffel mapping method. A partial field validation showed that the developed model provides results (especially in terms of indoor emission rates) in line with the measured data from a case involving a building overlying a layered soil. In further testing, it was found that the new analytical model can very closely replicate the results of three-dimensional (3-D) numerical models at steady state in scenarios involving layered soils overlying homogenous groundwater sources. By contrast, by adopting a two-layer approach (capillary fringe and vadose zone) as employed in the EPA implementation of the Johnson and Ettinger model, the spatially and temporally averaged indoor concentrations in the case of groundwater sources can be higher than the ones estimated by the numerical model up to two orders of magnitude. In short, the model proposed in this work can represent an easy-to-use tool that can simulate the subsurface soil gas concentration in layered soils overlying a homogenous vapor source while keeping the simplicity of an analytical approach that requires much less computational effort.

  13. Dual metal gate tunneling field effect transistors based on MOSFETs: A 2-D analytical approach

    NASA Astrophysics Data System (ADS)

    Ramezani, Zeinab; Orouji, Ali A.

    2018-01-01

    A 2-D analytical drain current model of a novel Dual Metal Gate Tunneling Field Effect Transistor based on a MOSFET (DMG-TFET) is presented in this paper. The proposed tunneling FET is derived from a MOSFET structure by employing an additional electrode in the source region with an appropriate work function to induce holes in the N+ source region, hence making it act as a P+ source region. The electric field is derived and then used to obtain an expression for the drain current by analytically integrating the band-to-band tunneling generation rate in the tunneling region, based on the potential profile obtained by solving Poisson's equation. Through this model, the effects of the thin-film thickness and gate voltage on the potential and the electric field, and the effect of the thin-film thickness on the tunneling current, can be studied. To validate the model, the analytical results were compared with the SILVACO ATLAS device simulator, and good agreement was found.

  14. Modeling of phonon scattering in n-type nanowire transistors using one-shot analytic continuation technique

    NASA Astrophysics Data System (ADS)

    Bescond, Marc; Li, Changsheng; Mera, Hector; Cavassilas, Nicolas; Lannoo, Michel

    2013-10-01

    We present a one-shot current-conserving approach to model the influence of electron-phonon scattering in nano-transistors using the non-equilibrium Green's function formalism. The approach is based on the lowest order approximation (LOA) to the current and its simplest analytic continuation (LOA+AC). By means of a scaling argument, we show how both LOA and LOA+AC can be easily obtained from the first iteration of the usual self-consistent Born approximation (SCBA) algorithm. Both LOA and LOA+AC are then applied to model n-type silicon nanowire field-effect-transistors and are compared to SCBA current characteristics. In this system, the LOA fails to describe electron-phonon scattering, mainly because of the interactions with acoustic phonons at the band edges. In contrast, the LOA+AC still well approximates the SCBA current characteristics, thus demonstrating the power of analytic continuation techniques. The limits of validity of LOA+AC are also discussed, and more sophisticated and general analytic continuation techniques are suggested for more demanding cases.
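    The idea behind the simplest analytic continuation of a low-order series can be illustrated generically. This is a textbook geometric (Padé-like) resummation applied to an invented test function, not the paper's electron-phonon expressions:

```python
def truncated(a0, a1, a2, lam):
    """Second-order truncation of a perturbation series
    f(lam) ~ a0 + a1*lam + a2*lam**2 (the LOA-style low-order object)."""
    return a0 + a1 * lam + a2 * lam ** 2

def simplest_ac(a0, a1, a2, lam):
    """Simplest analytic continuation of the same two correction terms:
    resum them geometrically as a0 + a1*lam / (1 - (a2/a1)*lam),
    a [1/1] Pade-like form that can stay accurate where the plain
    truncation breaks down."""
    return a0 + a1 * lam / (1.0 - (a2 / a1) * lam)

# test function 1/(1+lam), whose series coefficients are 1, -1, +1, ...
exact = lambda lam: 1.0 / (1.0 + lam)
lam = 2.0   # far outside the series' radius of convergence
```

    For this geometric test function the resummed form reproduces the exact value even at `lam = 2.0`, where the truncation is badly wrong, which mirrors how LOA+AC can track the SCBA current where the bare LOA fails.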

  15. Analytical methodology for determination of helicopter IFR precision approach requirements. [pilot workload and acceptance level

    NASA Technical Reports Server (NTRS)

    Phatak, A. V.

    1980-01-01

    A systematic analytical approach to the determination of helicopter IFR precision approach requirements is formulated. The approach is based upon the hypothesis that pilot acceptance level or opinion rating of a given system is inversely related to the degree of pilot involvement in the control task. A nonlinear simulation of the helicopter approach-to-landing task, incorporating appropriate models for the UH-1H aircraft, the environmental disturbances, and the human pilot, was developed as a tool for evaluating the pilot acceptance hypothesis. The simulated pilot model is generic in nature and includes analytical representation of the human information acquisition, processing, and control strategies. Simulation analyses in the flight director mode indicate that the pilot model used is reasonable. Results of the simulation are used to identify candidate pilot workload metrics and to test the well-known performance-workload relationship. A pilot acceptance analytical methodology is formulated as a basis for further investigation, development, and validation.

  16. Analytical Formulation for Sizing and Estimating the Dimensions and Weight of Wind Turbine Hub and Drivetrain Components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Y.; Parsons, T.; King, R.

    This report summarizes the theory, verification, and validation of a new sizing tool for wind turbine drivetrain components, the Drivetrain Systems Engineering (DriveSE) tool. DriveSE calculates the dimensions and mass properties of the hub, main shaft, main bearing(s), gearbox, bedplate, transformer (if up-tower), and yaw system. The level of fidelity for each component varies depending on whether semiempirical parametric or physics-based models are used. The physics-based models have internal iteration schemes based on system constraints and design criteria. Every model is validated against available industry data or finite-element analysis. The verification and validation results show that the models reasonably capture the primary drivers for the sizing and design of major drivetrain components.

  17. Riemannian geometry of Hamiltonian chaos: hints for a general theory.

    PubMed

    Cerruti-Sola, Monica; Ciraolo, Guido; Franzosi, Roberto; Pettini, Marco

    2008-10-01

    We aim at assessing the validity limits of some simplifying hypotheses that, within a Riemannian geometric framework, have provided an explanation of the origin of Hamiltonian chaos and have made it possible to develop a method of analytically computing the largest Lyapunov exponent of Hamiltonian systems with many degrees of freedom. Therefore, numerical hypothesis testing has been performed for the Fermi-Pasta-Ulam beta model and for a chain of coupled rotators. These models, for which analytic computations of the largest Lyapunov exponents have been carried out in the mentioned Riemannian geometric framework, appear as paradigmatic examples to unveil the reason why the main hypothesis of quasi-isotropy of the mechanical manifolds sometimes breaks down. The breakdown is expected whenever the topology of the mechanical manifolds is nontrivial. This is an important step forward in view of developing a geometric theory of Hamiltonian chaos of general validity.

  18. Nonlinear modelling of high-speed catenary based on analytical expressions of cable and truss elements

    NASA Astrophysics Data System (ADS)

    Song, Yang; Liu, Zhigang; Wang, Hongrui; Lu, Xiaobing; Zhang, Jing

    2015-10-01

    Due to the intrinsic nonlinear characteristics and complex structure of the high-speed catenary system, a modelling method is proposed based on the analytical expressions of nonlinear cable and truss elements. The calculation procedure for solving the initial equilibrium state is developed based on the Newton-Raphson iteration method, from which the deformed configuration of the catenary system as well as the initial length of each wire can be calculated. The accuracy and validity of the computed initial equilibrium state are verified by comparison with the separate model method, the absolute nodal coordinate formulation and other methods in the previous literature. The proposed model is then combined with a lumped pantograph model, and a dynamic simulation procedure is proposed whose accuracy is guaranteed by multiple iterative calculations in each time step. The dynamic performance of the proposed model is validated by comparison with EN 50318, the results of finite element method software and a SIEMENS simulation report, respectively. Finally, the influence of the catenary design parameters (such as the reserved sag and pre-tension) on the dynamic performance is preliminarily analysed using the proposed model.
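The Newton-Raphson equilibrium iteration described above can be illustrated on the simplest possible shape-finding problem: solving for the catenary parameter a given a span and a cable length. This is a hypothetical one-variable analogue of the paper's multi-degree-of-freedom cable/truss iteration; the function name and numbers below are illustrative, not from the paper.

```python
import math

def solve_catenary_parameter(span, cable_len, a0=None, tol=1e-12, max_iter=100):
    """Newton-Raphson root find for the catenary parameter a in
    2*a*sinh(span/(2*a)) = cable_len (cable longer than the span)."""
    if cable_len <= span:
        raise ValueError("cable length must exceed the span")
    a = a0 if a0 is not None else span  # flat-ish initial guess
    for _ in range(max_iter):
        u = span / (2.0 * a)
        residual = 2.0 * a * math.sinh(u) - cable_len
        # d(residual)/da, using du/da = -span / (2*a**2)
        jac = 2.0 * math.sinh(u) - (span / a) * math.cosh(u)
        step = residual / jac
        a -= step
        if abs(step) < tol * abs(a):
            return a
    raise RuntimeError("Newton-Raphson did not converge")

# Demo: a 100 m span with 110 m of cable
a = solve_catenary_parameter(span=100.0, cable_len=110.0)
sag = a * (math.cosh(100.0 / (2.0 * a)) - 1.0)  # mid-span sag below supports
```

With these numbers the scalar iteration converges in about a dozen steps; the paper's actual procedure solves the coupled nonlinear element equations for the full catenary rather than this one-variable analogue.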

  19. Numerical modeling and analytical modeling of cryogenic carbon capture in a de-sublimating heat exchanger

    NASA Astrophysics Data System (ADS)

    Yu, Zhitao; Miller, Franklin; Pfotenhauer, John M.

    2017-12-01

    Both numerical and analytical models of the heat and mass transfer processes in a CO2/N2 mixture gas de-sublimating cross-flow finned duct heat exchanger system are developed to predict the heat transferred from the mixture gas to liquid nitrogen and the de-sublimating rate of CO2 in the mixture gas. The mixture gas outlet temperature, liquid nitrogen outlet temperature, CO2 mole fraction, temperature distribution and de-sublimating rate of CO2 through the whole heat exchanger were computed using both the numerical and analytical models. The numerical model is built using EES [1] (Engineering Equation Solver). Based on the simulations, a cross-flow finned duct heat exchanger can be designed and fabricated to validate the models. The performance of the heat exchanger is evaluated as a function of dimensionless variables, such as the ratio of the mass flow rate of liquid nitrogen to the mass flow rate of the inlet flue gas.

  20. Sample distribution in peak mode isotachophoresis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubin, Shimon; Schwartz, Ortal; Bercovici, Moran, E-mail: mberco@technion.ac.il

    We present an analytical study of peak mode isotachophoresis (ITP), and provide closed form solutions for sample distribution and electric field, as well as for leading-, trailing-, and counter-ion concentration profiles. Importantly, the solution we present is valid not only for the case of fully ionized species, but also for systems of weak electrolytes, which better represent real buffer systems, and for multivalent analytes such as proteins and DNA. The model reveals two major scales which govern the electric field and buffer distributions, and an additional length scale governing analyte distribution. Using well-controlled experiments and numerical simulations, we verify and validate the model and highlight its key merits as well as its limitations. We demonstrate the use of the model for determining the peak concentration of focused sample based on known buffer and analyte properties, and show it differs significantly from commonly used approximations based on the interface width alone. We further apply our model to studying reactions between multiple species having different effective mobilities yet co-focused at a single ITP interface. We find a closed form expression for an effective on-rate which depends on the reactant distributions, and derive the conditions for optimizing such reactions. Interestingly, the model reveals that the maximum reaction rate is not necessarily obtained when the concentration profiles of the reacting species perfectly overlap. In addition to the exact solutions, we derive throughout several closed form engineering approximations which are based on elementary functions and are simple to implement, yet maintain the interplay between the important scales. Both the exact and approximate solutions provide insight into sample focusing and can be used to design and optimize ITP-based assays.

  1. A two-dimensional analytical model and experimental validation of garter stitch knitted shape memory alloy actuator architecture

    NASA Astrophysics Data System (ADS)

    Abel, Julianna; Luntz, Jonathan; Brei, Diann

    2012-08-01

    Active knits are a unique architectural approach to meeting emerging smart structure needs for distributed high strain actuation with simultaneous force generation. This paper presents an analytical state-based model for predicting the actuation response of a shape memory alloy (SMA) garter knit textile. Garter knits generate significant contraction against moderate to large loads when heated, due to the continuous interlocked network of loops of SMA wire. For this knit architecture, the states of operation are defined on the basis of the thermal and mechanical loading of the textile, the resulting phase change of the SMA, and the load path followed to that state. Transitions between these operational states induce either stick or slip frictional forces depending upon the state and path, which affect the actuation response. A load-extension model of the textile is derived for each operational state using elastica theory and Euler-Bernoulli beam bending for the large deformations within a loop of wire based on the stress-strain behavior of the SMA material. This provides kinematic and kinetic relations which scale to form analytical transcendental expressions for the net actuation motion against an external load. This model was validated experimentally for an SMA garter knit textile over a range of applied forces with good correlation for both the load-extension behavior in each state as well as the net motion produced during the actuation cycle (250% recoverable strain and over 50% actuation). The two-dimensional analytical model of the garter stitch active knit provides the ability to predict the kinetic actuation performance, providing the basis for the design and synthesis of large stroke, large force distributed actuators that employ this novel architecture.

  2. Numerical evaluation of heating in the human head due to magnetic resonance imaging (MRI)

    NASA Astrophysics Data System (ADS)

    Nguyen, Uyen; Brown, Steve; Chang, Isaac; Krycia, Joe; Mirotznik, Mark S.

    2003-06-01

    In this paper we present a numerical model for evaluating tissue heating during magnetic resonance imaging (MRI). Our method, which included a detailed anatomical model of a human head, calculated both the electromagnetic power deposition and the associated temperature elevations during an MRI head examination. Numerical studies were conducted using a realistic birdcage coil excited at frequencies ranging from 63 MHz to 500 MHz. The model was validated both experimentally and analytically. The experimental validation was performed at the MR test facility located at the FDA's Center for Devices and Radiological Health (CDRH).

  3. Validation of the SURE Program, phase 1

    NASA Technical Reports Server (NTRS)

    Dotson, Kelly J.

    1987-01-01

    Presented are the results of the first phase in the validation of the SURE (Semi-Markov Unreliability Range Evaluator) program. The SURE program gives lower and upper bounds on the death-state probabilities of a semi-Markov model. With these bounds, the reliability of a semi-Markov model of a fault-tolerant computer system can be analyzed. For the first phase of the validation, fifteen semi-Markov models were solved analytically for the exact death-state probabilities, and these solutions were compared to the corresponding bounds given by SURE. In every case, the SURE bounds covered the exact solution. The bounds, however, had a tendency to separate in cases where the recovery rate was slow or the fault arrival rate was fast.

  4. Filament winding cylinders. II - Validation of the process model

    NASA Technical Reports Server (NTRS)

    Calius, Emilio P.; Lee, Soo-Yong; Springer, George S.

    1990-01-01

    Analytical and experimental studies were performed to validate the model developed by Lee and Springer for simulating the manufacturing process of filament wound composite cylinders. First, results calculated by the Lee-Springer model were compared to results of the Calius-Springer thin cylinder model. Second, temperatures and strains calculated by the Lee-Springer model were compared to data. The data used in these comparisons were generated during the course of this investigation with cylinders made of Hercules IM-6G/HBRF-55 and Fiberite T-300/976 graphite-epoxy tows. Good agreement was found between the calculated and measured stresses and strains, indicating that the model is a useful representation of the winding and curing processes.

  5. Experimental and analytical investigation of a modified ring cusp NSTAR engine

    NASA Technical Reports Server (NTRS)

    Sengupta, Anita

    2005-01-01

    A series of experimental measurements on a modified laboratory NSTAR engine were used to validate a zero-dimensional analytical discharge performance model of a ring cusp ion thruster. The model predicts the discharge performance of a ring cusp NSTAR thruster as a function of the magnetic field configuration, thruster geometry, and throttle level. Analytical formalisms for electron and ion confinement are used to predict the ionization efficiency for a given thruster design. Explicit determination of discharge loss and volume-averaged plasma parameters is also obtained. The model was used to predict the performance of the nominal and modified three- and four-ring cusp 30-cm ion thruster configurations operating at the full power (2.3 kW) NSTAR throttle level. Experimental measurements of the modified engine configuration discharge loss compare well with the predicted value for propellant utilizations from 80 to 95%. The theory, as validated by experiment, indicates that increasing the magnetic strength of the minimum closed contour reduces Maxwellian electron diffusion and electrostatically confines the ion population, reducing subsequent loss to the anode wall. The theory also indicates that increasing the cusp strength and minimizing the cusp area improves primary electron confinement, increasing the probability of an ionization collision prior to loss at the cusp.

  6. Palm: Easing the Burden of Analytical Performance Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tallent, Nathan R.; Hoisie, Adolfy

    2014-06-01

    Analytical (predictive) application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult because they must be both accurate and concise. To ease the burden of performance modeling, we developed Palm, a modeling tool that combines top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. To express insight, Palm defines a source code modeling annotation language. By coordinating models and source code, Palm's models are `first-class' and reproducible. Unlike prior work, Palm formally links models, functions, and measurements. As a result, Palm (a) uses functions to either abstract or express complexity; (b) generates hierarchical models (representing an application's static and dynamic structure); and (c) automatically incorporates measurements to focus attention, represent constant behavior, and validate models. We discuss generating models for three different applications.

  7. On the nonlinear dynamics of trolling-mode AFM: Analytical solution using multiple time scales method

    NASA Astrophysics Data System (ADS)

    Sajjadi, Mohammadreza; Pishkenari, Hossein Nejat; Vossoughi, Gholamreza

    2018-06-01

    Trolling mode atomic force microscopy (TR-AFM) has resolved many imaging problems by a considerable reduction of the liquid-resonator interaction forces in liquid environments. The present study develops a nonlinear model of the meniscus force exerted on the nanoneedle of TR-AFM and presents an analytical solution to the distributed-parameter model of the TR-AFM resonator utilizing the multiple time scales (MTS) method. Based on the developed analytical solution, the frequency-response curves of the resonator operation in air and liquid (for different penetration lengths of the nanoneedle) are obtained. The closed-form analytical solution and the frequency-response curves are validated by comparison with both the finite element solution of the main partial differential equations and experimental observations. The effect of the excitation angle of the resonator on the horizontal oscillation of the probe tip and the effect of different parameters on the frequency response of the system are investigated.

  8. Turbomachinery noise

    NASA Astrophysics Data System (ADS)

    Groeneweg, John F.; Sofrin, Thomas G.; Rice, Edward J.; Gliebe, Phillip R.

    1991-08-01

    Summarized here are key advances in experimental techniques and theoretical applications which point the way to a broad understanding and control of turbomachinery noise. On the experimental side, the development of effective inflow control techniques makes it possible to conduct, in ground based facilities, definitive experiments in internally controlled blade row interactions. Results can now be valid indicators of flight behavior and can provide a firm base for comparison with analytical results. Inflow control coupled with detailed diagnostic tools such as blade pressure measurements can be used to uncover the more subtle mechanisms such as rotor strut interaction, which can set tone levels for some engine configurations. Initial mappings of rotor wake-vortex flow fields have provided a data base for a first generation semiempirical flow disturbance model. Laser velocimetry offers a nonintrusive method for validating and improving the model. Digital data systems and signal processing algorithms are bringing mode measurement closer to a working tool that can be frequently applied to a real machine such as a turbofan engine. On the analytical side, models of most of the links in the chain from turbomachine blade source to far field observation point have been formulated. Three dimensional lifting surface theory for blade rows, including source noncompactness and cascade effects, blade row transmission models incorporating mode and frequency scattering, and modal radiation calculations, including hybrid numerical-analytical approaches, are tools which await further application.

  9. Analytical model of tilted driver–pickup coils for eddy current nondestructive evaluation

    NASA Astrophysics Data System (ADS)

    Cao, Bing-Hua; Li, Chao; Fan, Meng-Bao; Ye, Bo; Tian, Gui-Yun

    2018-03-01

    A driver-pickup probe possesses better sensitivity and flexibility due to individual optimization of a coil. It is frequently observed in an eddy current (EC) array probe. In this work, a tilted non-coaxial driver-pickup probe above a multilayered conducting plate is analytically modeled with spatial transformation for eddy current nondestructive evaluation. Basically, the core of the formulation is to obtain the projection of magnetic vector potential (MVP) from the driver coil onto the vector along the tilted pickup coil, which is divided into two key steps. The first step is to make a projection of MVP along the pickup coil onto a horizontal plane, and the second one is to build the relationship between the projected MVP and the MVP along the driver coil. Afterwards, an analytical model for the case of a layered plate is established with the reflection and transmission theory of electromagnetic fields. The calculated values from the resulting model indicate good agreement with those from the finite element model (FEM) and experiments, which validates the developed analytical model. Project supported by the National Natural Science Foundation of China (Grant Nos. 61701500, 51677187, and 51465024).

  10. Applicability of a 1D Analytical Model for Pulse Thermography of Laterally Heterogeneous Semitransparent Materials

    NASA Astrophysics Data System (ADS)

    Bernegger, R.; Altenburg, S. J.; Röllig, M.; Maierhofer, C.

    2018-03-01

    Pulse thermography (PT) has proven to be a valuable non-destructive testing method to identify and quantify defects in fiber-reinforced polymers. To perform a quantitative defect characterization, the heat diffusion within the material as well as the material parameters must be known. The heterogeneous material structure of glass fiber-reinforced polymers (GFRP), as well as the semitransparency of the material to the optical excitation sources of PT, is still challenging. For homogeneous semitransparent materials, 1D analytical models describing the temperature distribution are available. Here, we present an analytical approach to model PT for laterally inhomogeneous semitransparent materials. We show the validity of the model by considering different configurations of the optical heating source, the IR camera, and the differently coated GFRP sample. The model accounts for the lateral inhomogeneity of the semitransparency through an additional absorption coefficient. It includes additional effects such as thermal losses at the sample surfaces, multilayer systems with thermal contact resistance, and a finite duration of the heating pulse. By using a sufficient complexity of the analytical model, similar values of the material parameters were found for all six investigated configurations by numerical fitting.

  11. Heuristic and analytic processes in reasoning: an event-related potential study of belief bias.

    PubMed

    Banks, Adrian P; Hope, Christopher

    2014-03-01

    Human reasoning involves both heuristic and analytic processes. This study of belief bias in relational reasoning investigated whether the two processes occur serially or in parallel. Participants evaluated the validity of problems in which the conclusions were either logically valid or invalid and either believable or unbelievable. Problems in which the conclusions presented a conflict between the logically valid response and the believable response elicited a more positive P3 than problems in which there was no conflict. This shows that P3 is influenced by the interaction of belief and logic rather than either of these factors on its own. These findings indicate that belief and logic influence reasoning at the same time, supporting models in which belief-based and logical evaluations occur in parallel but not theories in which belief-based heuristic evaluations precede logical analysis.

  12. The development of an integrated assessment instrument for measuring analytical thinking and science process skills

    NASA Astrophysics Data System (ADS)

    Irwanto, Rohaeti, Eli; LFX, Endang Widjajanti; Suyanta

    2017-05-01

    This research aims to develop an instrument and determine the characteristics of an integrated assessment instrument. This research uses the 4-D model, which includes define, design, develop, and disseminate. The primary product is validated by expert judgment, tested for readability by students, and assessed for feasibility by chemistry teachers. This research involved 246 students of grade XI of four senior high schools in Yogyakarta, Indonesia. Data collection techniques include interview, questionnaire, and test. Data collection instruments include an interview guideline, item validation sheet, users' response questionnaire, instrument readability questionnaire, and essay test. The results show that the integrated assessment instrument has an Aiken validity value of 0.95. Item reliability was 0.99 and person reliability was 0.69. Teachers' response to the integrated assessment instrument is very good. Therefore, the integrated assessment instrument is feasible for measuring students' analytical thinking and science process skills.
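The Aiken validity value reported above (0.95) comes from Aiken's V statistic, V = Σ(r − lo) / (n·(c − 1)), computed over n expert ratings on a c-category scale. A minimal sketch; the ratings below are invented for illustration, not the study's actual expert data:

```python
def aiken_v(ratings, lo=1, hi=5):
    """Aiken's content-validity coefficient for one item:
    V = sum(r - lo) / (n * (c - 1)), where c - 1 = hi - lo is the
    number of scale steps above the lowest category."""
    if not ratings:
        raise ValueError("need at least one expert rating")
    if any(r < lo or r > hi for r in ratings):
        raise ValueError("rating outside the scale")
    return sum(r - lo for r in ratings) / (len(ratings) * (hi - lo))

# Hypothetical panel of four experts rating an item on a 1-5 scale
v = aiken_v([4, 5, 5, 4])  # (3 + 4 + 4 + 3) / (4 * 4) = 0.875
```

V ranges from 0 (all experts at the scale floor) to 1 (all at the ceiling), so 0.95 indicates near-unanimous top ratings across the panel.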

  13. Analytical insight into "breathing" crack-induced acoustic nonlinearity with an application to quantitative evaluation of contact cracks.

    PubMed

    Wang, Kai; Liu, Menglong; Su, Zhongqing; Yuan, Shenfang; Fan, Zheng

    2018-08-01

    To characterize fatigue cracks, in the undersized stage in particular, preferably in a quantitative and precise manner, a two-dimensional (2D) analytical model is developed for interpreting the modulation mechanism of a "breathing" crack on guided ultrasonic waves (GUWs). In conjunction with a modal decomposition method and a variational principle-based algorithm, the model is capable of analytically depicting the propagating and evanescent waves induced owing to the interaction of probing GUWs with a "breathing" crack, and further extracting linear and nonlinear wave features (e.g., reflection, transmission, mode conversion and contact acoustic nonlinearity (CAN)). With the model, a quantitative correlation between CAN embodied in acquired GUWs and crack parameters (e.g., location and severity) is obtained, whereby a set of damage indices is proposed via which the severity of the crack can be evaluated quantitatively. The evaluation, in principle, does not entail a benchmarking process against baseline signals. As validation, the results obtained from the analytical model are compared with those from finite element simulation, showing good consistency. This has demonstrated accuracy of the developed analytical model in interpreting contact crack-induced CAN, and spotlighted its application to quantitative evaluation of fatigue damage. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Validation protocol of analytical procedures for quantification of drugs in polymeric systems for parenteral administration: dexamethasone phosphate disodium microparticles.

    PubMed

    Martín-Sabroso, Cristina; Tavares-Fernandes, Daniel Filipe; Espada-García, Juan Ignacio; Torres-Suárez, Ana Isabel

    2013-12-15

    In this work a protocol to validate analytical procedures for the quantification of drug substances formulated in polymeric systems, comprising both drug entrapped in the polymeric matrix (assay:content test) and drug released from the systems (assay:dissolution test), is developed. This protocol is applied to the validation of two isocratic HPLC analytical procedures for the analysis of dexamethasone phosphate disodium microparticles for parenteral administration. Preparation of authentic samples and artificially "spiked" and "unspiked" samples is described. Specificity (the ability to quantify dexamethasone phosphate disodium in the presence of constituents of the dissolution medium and other microparticle constituents), linearity, accuracy and precision are evaluated in the range from 10 to 50 μg mL(-1) for the assay:content test procedure and from 0.25 to 10 μg mL(-1) for the assay:dissolution test procedure. The robustness of the analytical method to extract drug from the microparticles is also assessed. The validation protocol developed allows us to conclude that both analytical methods are suitable for their intended purpose, but the lack of proportionality of the assay:dissolution analytical method should be taken into account. The validation protocol designed in this work could be applied to the validation of any analytical procedure for the quantification of drugs formulated in controlled release polymeric microparticles. Copyright © 2013 Elsevier B.V. All rights reserved.
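The linearity and accuracy experiments described above reduce, numerically, to fitting a least-squares calibration line over the working range and checking back-calculated recoveries against nominal concentrations. A minimal sketch under those assumptions; the concentration levels, peak areas, and helper names are illustrative, not taken from the validation protocol itself:

```python
def fit_calibration_line(conc, response):
    """Ordinary least-squares fit response = slope*conc + intercept,
    returning (slope, intercept, r_squared)."""
    n = len(conc)
    mx = sum(conc) / n
    my = sum(response) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, response))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2
                 for x, y in zip(conc, response))
    ss_tot = sum((y - my) ** 2 for y in response)
    return slope, intercept, 1.0 - ss_res / ss_tot

def percent_recoveries(conc, response, slope, intercept):
    """Back-calculated concentration as a percentage of nominal (accuracy/bias)."""
    return [100.0 * ((y - intercept) / slope) / x
            for x, y in zip(conc, response)]

# Illustrative five-level calibration over a 10-50 ug/mL assay range
levels = [10.0, 20.0, 30.0, 40.0, 50.0]
areas = [21.0, 41.0, 61.0, 81.0, 101.0]  # invented peak areas
slope, intercept, r2 = fit_calibration_line(levels, areas)
rec = percent_recoveries(levels, areas, slope, intercept)
```

In practice acceptance criteria (e.g. a minimum coefficient of determination and recovery limits around 100%) would be set in the protocol before the run.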

  15. MICROORGANISMS IN BIOSOLIDS: ANALYTICAL METHODS DEVELOPMENT, STANDARDIZATION, AND VALIDATION

    EPA Science Inventory

    The objective of this presentation is to discuss pathogens of concern in biosolids, the analytical techniques used to evaluate microorganisms in biosolids, and to discuss standardization and validation of analytical protocols for microbes within such a complex matrix. Implicatio...

  16. Unimolecular diffusion-mediated reactions with a nonrandom time-modulated absorbing barrier

    NASA Technical Reports Server (NTRS)

    Bashford, D.; Weaver, D. L.

    1986-01-01

    A diffusion-reaction model with time-dependent reactivity is formulated and applied to unimolecular reactions. The model is solved exactly numerically and approximately analytically for the unreacted fraction as a function of time. It is shown that the approximate analytical solution is valid even when the system is far from equilibrium, and when the reactivity probability is more complicated than a square-wave function of time. A discussion is also given of an approach to problems of this type using a stochastically fluctuating reactivity, and the first-passage time for a particular example is derived.
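For the square-wave reactivity case mentioned above, the unreacted fraction of the first-order rate law dN/dt = −k(t)·N has the closed form N(t) = exp(−∫₀ᵗ k(s) ds), which can be checked against direct numerical integration in the spirit of the exact-numerical versus approximate-analytical comparison. A sketch with illustrative parameters (not the paper's model of barrier fluctuations):

```python
import math

def square_wave_k(t, k_on=2.0, period=1.0, duty=0.5):
    """Square-wave reactivity: absorbing at rate k_on during the first
    duty*period of each cycle, non-absorbing otherwise."""
    return k_on if (t % period) < duty * period else 0.0

def unreacted_numeric(t_end, dt=1e-4):
    """March dN/dt = -k(t) N forward with exact exponential substeps."""
    n, t = 1.0, 0.0
    while t < t_end - 1e-12:
        h = min(dt, t_end - t)
        n *= math.exp(-square_wave_k(t) * h)
        t += h
    return n

def unreacted_analytic(t_end, k_on=2.0, period=1.0, duty=0.5):
    """Closed form N(t) = exp(-k_on * accumulated on-time up to t)."""
    cycles = int(t_end // period)
    leftover = t_end - cycles * period
    on_time = cycles * duty * period + min(leftover, duty * period)
    return math.exp(-k_on * on_time)
```

The two routines agree to the integration tolerance because the only numerical error comes from steps straddling the on/off switching instants.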

  17. Optimal design of piezoelectric transformers: a rational approach based on an analytical model and a deterministic global optimization.

    PubMed

    Pigache, Francois; Messine, Frédéric; Nogarede, Bertrand

    2007-07-01

    This paper deals with a deterministic and rational way to design piezoelectric transformers in radial mode. The proposed approach is based on the study of the inverse problem of design and on its reformulation as a mixed constrained global optimization problem. The methodology relies on the association of the analytical models for describing the corresponding optimization problem and on an exact global optimization software, named IBBA and developed by the second author to solve it. Numerical experiments are presented and compared in order to validate the proposed approach.

  18. Analytical prediction of the interior noise for cylindrical models of aircraft fuselages for prescribed exterior noise fields. Phase 1: Development and validation of preliminary analytical models

    NASA Technical Reports Server (NTRS)

    Pope, L. D.; Rennison, D. C.; Wilby, E. G.

    1980-01-01

    The basic theoretical work required to understand sound transmission into an enclosed space (that is, one enclosed by the transmitting structure) is developed for random pressure fields and for harmonic (tonal) excitation. The analysis is used to predict the noise reduction of an unpressurized unstiffened cylinder, and also the interior response of the cylinder given a tonal (plane wave) excitation. Predictions and measurements are compared and the transmission is analyzed. In addition, results for tonal (harmonic) mechanical excitation are considered.

  19. Measurement of Plastic Stress and Strain for Analytical Method Verification (MSFC Center Director's Discretionary Fund Project No. 93-08)

    NASA Technical Reports Server (NTRS)

    Price, J. M.; Steeve, B. E.; Swanson, G. R.

    1999-01-01

    The analytical prediction of stress, strain, and fatigue life at locations experiencing local plasticity is full of uncertainties. Much of this uncertainty arises from the material models and their use in the numerical techniques used to solve plasticity problems. Experimental measurements of actual plastic strains would allow the validity of these models and solutions to be tested. This memorandum describes how experimental plastic residual strain measurements were used to verify the results of a thermally induced plastic fatigue failure analysis of a space shuttle main engine fuel pump component.

  20. Remote measurements of water pollution with a lidar polarimeter

    NASA Technical Reports Server (NTRS)

    Sheives, T. C.; Rouse, J. W., Jr.; Mayo, W. T., Jr.

    1974-01-01

    This paper examines a dual polarization laser backscatter system as a method for remote measurements of certain water quality parameters. Analytical models describing the backscatter from turbid water and oil on turbid water are presented and compared with experimental data. Laser backscatter field measurements from natural waterways are presented and compared with simultaneous ground observations of the water quality parameters: turbidity, suspended solids, and transmittance. The results of this study show that the analytical models appear valid and that the sensor investigated is applicable to remote measurements of these water quality parameters and of oil spills on water.

  1. Analytical validation of a novel multiplex test for detection of advanced adenoma and colorectal cancer in symptomatic patients.

    PubMed

    Dillon, Roslyn; Croner, Lisa J; Bucci, John; Kairs, Stefanie N; You, Jia; Beasley, Sharon; Blimline, Mark; Carino, Rochele B; Chan, Vicky C; Cuevas, Danissa; Diggs, Jeff; Jennings, Megan; Levy, Jacob; Mina, Ginger; Yee, Alvin; Wilcox, Bruce

    2018-05-30

    Early detection of colorectal cancer (CRC) is key to reducing associated mortality. Despite the importance of early detection, approximately 40% of individuals in the United States between the ages of 50-75 have never been screened for CRC. The low compliance with colonoscopy and fecal-based screening may be addressed with a non-invasive alternative such as a blood-based test. We describe here the analytical validation of a multiplexed blood-based assay that measures the plasma concentrations of 15 proteins to assess advanced adenoma (AA) and CRC risk in symptomatic patients. The test was developed on an electrochemiluminescent immunoassay platform employing four multi-marker panels, to be implemented in the clinic as a laboratory developed test (LDT). Under the Clinical Laboratory Improvement Amendments (CLIA) and College of American Pathologists (CAP) regulations, a United States-based clinical laboratory utilizing an LDT must establish performance characteristics relating to analytical validity prior to releasing patient test results. This report describes a series of studies demonstrating the precision, accuracy, analytical sensitivity, and analytical specificity for each of the 15 assays, as required by CLIA/CAP. In addition, the report describes studies characterizing each of the assays' dynamic range, parallelism, tolerance to common interfering substances, spike recovery, and stability to sample freeze-thaw cycles. Upon completion of the analytical characterization, a clinical accuracy study was performed to evaluate concordance of AA and CRC classifier model calls using the analytical method intended for use in the clinic. Of 434 symptomatic patient samples tested, the percent agreement with original CRC and AA calls was 87% and 92% respectively. All studies followed CLSI guidelines and met the regulatory requirements for implementation of a new LDT. 
The results provide the analytical evidence to support the implementation of the novel multi-marker test as a clinical test for evaluating CRC and AA risk in symptomatic individuals. Copyright © 2018 Elsevier B.V. All rights reserved.
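
The concordance figures reported in the clinical accuracy study come down to a simple percent-agreement calculation. The sketch below (hypothetical calls, not the study's code or data) shows that computation for a set of classifier risk calls:

```python
# Illustrative sketch (not the authors' code): percent agreement between
# original classifier calls and calls reproduced with the clinical assay.
def percent_agreement(original_calls, repeat_calls):
    """Fraction of samples whose repeated call matches the original, as a percent."""
    if len(original_calls) != len(repeat_calls):
        raise ValueError("call lists must be the same length")
    matches = sum(a == b for a, b in zip(original_calls, repeat_calls))
    return 100.0 * matches / len(original_calls)

# Hypothetical example: 10 CRC risk calls, 9 reproduced on retest
orig = ["high", "low", "low", "high", "low", "low", "high", "low", "low", "low"]
rept = ["high", "low", "low", "low",  "low", "low", "high", "low", "low", "low"]
print(percent_agreement(orig, rept))  # 90.0
```

In the study itself the same calculation was applied per class (CRC and AA calls separately), yielding the 87% and 92% figures.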

  2. Fast and accurate focusing analysis of large photon sieve using pinhole ring diffraction model.

    PubMed

    Liu, Tao; Zhang, Xin; Wang, Lingjie; Wu, Yanxiong; Zhang, Jizhen; Qu, Hemeng

    2015-06-10

In this paper, we developed a pinhole ring diffraction model for the focusing analysis of a large photon sieve. Instead of analyzing individual pinholes, we discuss the focusing of all of the pinholes in a single ring. An explicit equation for the diffracted field of an individual pinhole ring is proposed. We investigated the validity range of this generalized model and analytically describe the sufficient conditions for its validity. A practical example and investigation reveal the high accuracy of the pinhole ring diffraction model. This simulation method can be used for fast and accurate focusing analysis of a large photon sieve.

  3. Analytical Modeling of Triple-Metal Hetero-Dielectric DG SON TFET

    NASA Astrophysics Data System (ADS)

    Mahajan, Aman; Dash, Dinesh Kumar; Banerjee, Pritha; Sarkar, Subir Kumar

    2018-02-01

In this paper, a 2-D analytical model of a triple-metal hetero-dielectric DG TFET is presented by combining the concepts of triple-material gate engineering and hetero-dielectric engineering. Three metals with different work functions are used as both the front- and back-gate electrodes to modulate the barriers at the source/channel and channel/drain interfaces. In addition, the front-gate dielectric consists of high-K HfO2 at the source end and low-K SiO2 at the drain side, whereas the back-gate dielectric is replaced by air to further improve the ON current of the device. The surface potential and electric field of the proposed device are formulated by solving the 2-D Poisson equation with Young's approximation. Based on this electric-field expression, the tunneling current is obtained using Kane's model. Several device parameters are varied to examine the behavior of the proposed device. The analytical model is validated against TCAD simulation results to prove its accuracy.
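
The tunneling-current step can be illustrated with Kane's band-to-band tunneling expression, G = A·E²/√Eg·exp(-B·Eg^(3/2)/E), which maps the analytical electric-field result onto a generation rate. The sketch below is a minimal illustration, not the paper's implementation; the Kane parameters A_K and B_K and the field values are placeholder magnitudes, not fitted device values:

```python
import math

# Hedged sketch of Kane's band-to-band tunneling model, the relation used to
# turn an analytical electric-field expression into a tunneling current.
# A_K and B_K are material-dependent Kane parameters; values are placeholders.
A_K = 4e14   # generation prefactor, placeholder magnitude
B_K = 1.9e7  # V/cm per eV^1.5, placeholder magnitude
E_G = 1.12   # band gap in eV (silicon value used for illustration)

def kane_generation(e_field):
    """Band-to-band tunneling generation rate G = A*E^2/sqrt(Eg)*exp(-B*Eg^1.5/E)."""
    return A_K * e_field**2 / math.sqrt(E_G) * math.exp(-B_K * E_G**1.5 / e_field)

# Generation rises steeply with field strength (hypothetical fields in V/cm):
for e in (5e5, 1e6, 2e6):
    print(f"E = {e:.0e} V/cm -> G = {kane_generation(e):.3e}")
```

The steep exponential dependence on the local field is why gate-engineering choices that sharpen the source-side field profile raise the ON current.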

  4. Analysis of the sound field in finite length infinite baffled cylindrical ducts with vibrating walls of finite impedance.

    PubMed

    Shao, Wei; Mechefske, Chris K

    2005-04-01

This paper describes an analytical model of finite cylindrical ducts with infinite flanges. This model is used to investigate the sound radiation characteristics of the gradient coil system of a magnetic resonance imaging (MRI) scanner. The sound field in the duct satisfies the boundary conditions both at the wall and at the open ends. The vibrating cylindrical wall of the duct is assumed to be the only sound source. Different acoustic conditions for the wall (rigid and absorptive) are used in the simulations. The wave reflection phenomenon at the open ends of the finite duct is described by a general radiation impedance. The analytical model is validated by comparison with its counterpart in a commercial code based on the boundary element method (BEM). The analytical model shows significant advantages over the BEM model, with better numerical efficiency and a direct relation between the design parameters and the sound field inside the duct.

  5. Population Spotting Using Big Data: Validating the Human Performance Concept of Operations Analytic Vision

    DTIC Science & Technology

    2017-01-01

AFRL-SA-WP-SR-2017-0001, "Population Spotting Using 'Big Data': Validating the Human Performance Concept of Operations Analytic Vision" (special report; only title and front-matter fragments are recoverable from the source record).

  6. An analytical poroelastic model for ultrasound elastography imaging of tumors

    NASA Astrophysics Data System (ADS)

    Tauhidul Islam, Md; Chaudhry, Anuj; Unnikrishnan, Ginu; Reddy, J. N.; Righetti, Raffaella

    2018-01-01

The mechanical behavior of biological tissues has been studied using a number of mechanical models. Due to their relatively high fluid content and mobility, many biological tissues have been modeled as poroelastic materials. Diseases such as cancers are known to alter the poroelastic response of a tissue. Tissue poroelastic properties such as compressibility, interstitial permeability and fluid pressure also play a key role in the assessment of cancer treatments and in improved therapies. At the present time, however, a limited number of poroelastic models for soft tissues are retrievable in the literature, and the ones available are not directly applicable to tumors, as they typically refer to uniform tissues. In this paper, we report an analytical poroelastic model for a non-uniform tissue under stress relaxation. Displacement, strain and fluid pressure fields in a cylindrical poroelastic sample containing a cylindrical inclusion during stress relaxation are computed. Finite element simulations are then used to validate the proposed theoretical model. Statistical analysis demonstrates that the proposed analytical model matches the finite element results with less than 0.5% error. The availability of the analytical model and solutions presented in this paper may be useful to estimate diagnostically relevant poroelastic parameters such as interstitial permeability and fluid pressure, and, in general, for a better interpretation of clinically relevant ultrasound elastography results.

  7. Energy transmission transformer for a wireless capsule endoscope: analysis of specific absorption rate and current density in biological tissue.

    PubMed

    Shiba, Kenji; Nagato, Tomohiro; Tsuji, Toshio; Koshiji, Kohji

    2008-07-01

This paper reports on the electromagnetic influence on biological tissue of a prototype energy transmission system for a wireless capsule endoscope. The specific absorption rate (SAR) and current density were analyzed with an electromagnetic simulator in a model consisting of a primary coil and a human trunk including the skin, fat, muscle, small intestine, backbone, and blood. First, the electric and magnetic field strengths were measured under the same conditions as the analytical model and compared to the analytical values to confirm the validity of the analysis. Then, SAR and current density were analyzed as functions of frequency and output power. The SAR was below the basic restrictions of the International Commission on Non-Ionizing Radiation Protection (ICNIRP). At the same time, the results for current density show that the influence on biological tissue was lowest in the 300-400 kHz range, indicating that it was possible to transmit energy safely up to 160 mW. In addition, we confirmed that the current density could be decreased by reducing the primary coil's current.

  8. Application of Multivariable Analysis and FTIR-ATR Spectroscopy to the Prediction of Properties in Campeche Honey

    PubMed Central

    Pat, Lucio; Ali, Bassam; Guerrero, Armando; Córdova, Atl V.; Garduza, José P.

    2016-01-01

Attenuated total reflectance-Fourier transform infrared spectrometry combined with a chemometrics model was used for the determination of physicochemical properties (pH, redox potential, free acidity, electrical conductivity, moisture, total soluble solids (TSS), ash, and HMF) in honey samples. The reference values of 189 honey samples of different botanical origin were determined using Association of Official Analytical Chemists (AOAC, 1990), Codex Alimentarius (2001), and International Honey Commission (2002) methods. Multivariate calibration models were built using partial least squares (PLS) for the measurands studied. The developed models were validated using cross-validation and external validation; several statistical parameters were obtained to determine the robustness of the calibration models: the optimum number of principal components (PCs), the standard error of cross-validation (SECV), the coefficient of determination of cross-validation (R2 cal), the standard error of validation (SEP), the coefficient of determination for external validation (R2 val), and the coefficient of variation (CV). The prediction accuracy for pH, redox potential, electrical conductivity, moisture, TSS, and ash was good, while for free acidity and HMF it was poor. The results demonstrate that attenuated total reflectance-Fourier transform infrared spectrometry is a valuable, rapid, and nondestructive tool for the quantification of physicochemical properties of honey. PMID:28070445
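
The validation statistics named in the abstract (SEP, R², CV) are simple functions of reference and predicted values. The sketch below (hypothetical moisture data, not the study's) shows how they are computed once a PLS model has produced predictions:

```python
import math

# Illustrative computation (not the paper's code) of calibration-validation
# statistics: standard error of prediction, coefficient of determination,
# and coefficient of variation. The data below are hypothetical.
def sep(reference, predicted):
    """Standard error of prediction (root mean squared prediction error)."""
    n = len(reference)
    return math.sqrt(sum((r - p) ** 2 for r, p in zip(reference, predicted)) / n)

def r_squared(reference, predicted):
    """Coefficient of determination of the predictions."""
    mean_r = sum(reference) / len(reference)
    ss_res = sum((r - p) ** 2 for r, p in zip(reference, predicted))
    ss_tot = sum((r - mean_r) ** 2 for r in reference)
    return 1.0 - ss_res / ss_tot

def cv_percent(reference, predicted):
    """Prediction error as a percent of the mean reference value."""
    mean_r = sum(reference) / len(reference)
    return 100.0 * sep(reference, predicted) / mean_r

# Hypothetical moisture values (%) for five honey samples:
ref  = [17.1, 18.4, 16.8, 19.0, 17.7]
pred = [17.3, 18.1, 16.9, 18.8, 17.9]
print(sep(ref, pred), r_squared(ref, pred), cv_percent(ref, pred))
```

The same formulas apply whether the predictions come from cross-validation (giving SECV and R² cal) or from an external validation set (giving SEP and R² val).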

  9. A conceptual model for generating and validating in-session clinical judgments

    PubMed Central

    Jacinto, Sofia B.; Lewis, Cara C.; Braga, João N.; Scott, Kelli

    2016-01-01

Objective Little attention has been paid to the nuanced and complex decisions made in the clinical session context and how these decisions influence therapy effectiveness. Despite decades of research on dual-processing systems, it remains unclear when and how intuitive and analytical reasoning influence the direction of the clinical session. Method This paper puts forth a testable conceptual model, guided by an interdisciplinary integration of the literature, that posits that the clinical session context moderates the use of intuitive versus analytical reasoning. Results A synthesis of studies examining professional best practices in clinical decision-making, empirical evidence from clinical judgment research, and applications of decision science theories indicates that intuitive and analytical reasoning may have profoundly different impacts on clinical practice and outcomes. Conclusions The proposed model is discussed with respect to its implications for clinical practice and future research. PMID:27088962

  10. Upon the reconstruction of accidents triggered by tire explosion. Analytical model and case study

    NASA Astrophysics Data System (ADS)

    Gaiginschi, L.; Agape, I.; Talif, S.

    2017-10-01

Accident reconstruction is important in the general context of increasing road traffic safety. In the casuistry of traffic accidents, those caused by tire explosions are critical because of the severity of their consequences, as they usually happen at high speeds. Consequently, knowledge of the running speed of the vehicle involved at the time of the tire explosion is essential to elucidate the circumstances of the accident. The paper presents an analytical model for the kinematics of a vehicle which, after the explosion of one of its tires, begins to skid, overturns and rolls. The model consists of two concurrent approaches built as applications of the momentum conservation and energy conservation principles, and allows determination of the initial speed of the vehicle involved by running the sequence of the road event backwards. The authors also cross-validate the two distinct analytical approaches by calibrating the calculation algorithms on a case study.
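
The energy-conservation leg of such a backwards reconstruction can be sketched in a few lines. The friction coefficients and distances below are hypothetical, and the published model also uses a momentum-conservation approach for cross-validation; this is only the basic kinetic-energy balance:

```python
import math

G = 9.81  # m/s^2, gravitational acceleration

# Hedged sketch (not the authors' algorithm): the kinetic energy at the tire
# explosion equals the energy dissipated while skidding plus the energy
# dissipated while overturning and rolling, each modeled here with an
# equivalent friction coefficient over a measured distance.
def initial_speed(mu_skid, d_skid, mu_roll, d_roll):
    """Speed (m/s) at the start of the skid, from 0.5*v^2 = g*(mu1*d1 + mu2*d2)."""
    return math.sqrt(2.0 * G * (mu_skid * d_skid + mu_roll * d_roll))

# Hypothetical case: 35 m skid (mu = 0.7) then 12 m of rolling (equivalent mu = 0.4)
v0 = initial_speed(0.7, 35.0, 0.4, 12.0)
print(f"{v0:.1f} m/s = {v0 * 3.6:.0f} km/h")
```

Running the event backwards in this way recovers the pre-explosion speed from marks measurable at the scene.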

  11. A methodology to enhance electromagnetic compatibility in joint military operations

    NASA Astrophysics Data System (ADS)

    Buckellew, William R.

    The development and validation of an improved methodology to identify, characterize, and prioritize potential joint EMI (electromagnetic interference) interactions and identify and develop solutions to reduce the effects of the interference are discussed. The methodology identifies potential EMI problems using results from field operations, historical data bases, and analytical modeling. Operational expertise, engineering analysis, and testing are used to characterize and prioritize the potential EMI problems. Results can be used to resolve potential EMI during the development and acquisition of new systems and to develop engineering fixes and operational workarounds for systems already employed. The analytic modeling portion of the methodology is a predictive process that uses progressive refinement of the analysis and the operational electronic environment to eliminate noninterfering equipment pairs, defer further analysis on pairs lacking operational significance, and resolve the remaining EMI problems. Tests are conducted on equipment pairs to ensure that the analytical models provide a realistic description of the predicted interference.

  12. Model of separation performance of bilinear gradients in scanning format counter-flow gradient electrofocusing techniques.

    PubMed

    Shameli, Seyed Mostafa; Glawdel, Tomasz; Ren, Carolyn L

    2015-03-01

Counter-flow gradient electrofocusing allows the simultaneous concentration and separation of analytes by generating a gradient in the total velocity of each analyte, which is the sum of its electrophoretic velocity and the bulk counter-flow velocity. In the scanning format, the bulk counter-flow velocity varies with time so that a number of analytes with large differences in electrophoretic mobility can be sequentially focused and passed by a single detection point. Studies have shown that nonlinear (such as bilinear) velocity gradients along the separation channel can improve both peak capacity and separation resolution simultaneously, which cannot be achieved with a single linear gradient. Developing an effective separation system based on the scanning counter-flow nonlinear gradient electrofocusing technique usually requires extensive experimental and numerical effort, which can be reduced significantly with the help of analytical models for design optimization and for guiding experimental studies. Therefore, this study focuses on developing an analytical model to evaluate the separation performance of scanning counter-flow bilinear gradient electrofocusing methods. In particular, this model allows a bilinear gradient and a scanning rate to be optimized for the desired separation performance. The results based on this model indicate that any bilinear gradient provides a higher separation resolution (up to 100%) compared to the linear case. This model is validated by numerical studies. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
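
The intuition behind the bilinear advantage can be shown with the standard gradient-focusing scaling, in which a focused peak's standard deviation goes as sigma = sqrt(D/g), where D is the diffusion coefficient and g the local slope of the total-velocity profile. The sketch below is an illustration of that scaling only, not the paper's model; all parameter values are hypothetical:

```python
import math

# Hedged sketch of why a steeper local velocity gradient sharpens focused
# peaks in counter-flow gradient focusing: sigma = sqrt(D / g) at the focal
# point. Parameter values below are hypothetical.
def peak_sigma(diffusion, gradient_slope):
    """Standard deviation (m) of a focused peak."""
    return math.sqrt(diffusion / gradient_slope)

def resolution(delta_x, sigma1, sigma2):
    """Separation resolution of two focused peaks a distance delta_x apart."""
    return delta_x / (2.0 * (sigma1 + sigma2))

D = 5e-10          # m^2/s, typical small-analyte diffusion coefficient
g_linear = 0.02    # 1/s, slope of a single linear gradient
g_steep = 0.08     # 1/s, steeper second segment of a bilinear gradient
dx = 2e-3          # m, distance between the two focal points

rs_linear = resolution(dx, peak_sigma(D, g_linear), peak_sigma(D, g_linear))
rs_bilinear = resolution(dx, peak_sigma(D, g_steep), peak_sigma(D, g_steep))
print(rs_linear, rs_bilinear)  # the 4x steeper segment doubles Rs
```

A bilinear profile can place a steep segment where the peaks focus while keeping a shallow segment elsewhere for peak capacity, which is why it can improve both metrics at once.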

  13. Aeroelastic loads and stability investigation of a full-scale hingeless rotor

    NASA Technical Reports Server (NTRS)

    Peterson, Randall L.; Johnson, Wayne

    1991-01-01

An analytical investigation was conducted to study the influence of various parameters on predicting the aeroelastic loads and stability of a full-scale hingeless rotor in hover and forward flight. The CAMRAD/JA (Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics, Johnson Aeronautics) analysis code is used to obtain the analytical predictions. Data are presented for rotor blade bending and torsional moments as well as inplane damping for rotor operation in hover at a constant rotor rotational speed of 425 rpm and thrust coefficients between 0.0 and 0.12. Experimental data are presented from a wind tunnel test. Validation of the rotor system structural model against experimental rotor blade loads data shows excellent correlation with analytical results. Using this analysis, the influence of different aerodynamic inflow models, the number of generalized blade and body degrees of freedom, and the control-system stiffness on predicted stability levels is shown. Forward-flight predictions of the BO-105 rotor system for 1-g thrust conditions at advance ratios of 0.0 to 0.35 are presented. The influence of different aerodynamic inflow models, dynamic inflow models, and shaft-angle variations on predicted stability levels is shown as a function of advance ratio.

  14. Control of Wheel/Rail Noise and Vibration

    DOT National Transportation Integrated Search

    1982-04-01

    An analytical model of the generation of wheel/rail noise has been developed and validated through an extensive series of field tests carried out at the Transportation Test Center using the State of the Art Car. A sensitivity analysis has been perfor...

  15. Analytical coupled-wave model for photonic crystal surface-emitting quantum cascade lasers.

    PubMed

    Wang, Zhixin; Liang, Yong; Yin, Xuefan; Peng, Chao; Hu, Weiwei; Faist, Jérôme

    2017-05-15

    An analytical coupled-wave model is developed for surface-emitting photonic-crystal quantum cascade lasers (PhC-QCLs). This model provides an accurate and efficient analysis of full three-dimensional device structure with large-area cavity size. Various laser properties of interest including the band structure, mode frequency, cavity loss, mode intensity profile, and far field pattern (FFP), as well as their dependence on PhC structures and cavity size, are investigated. Comparison with numerical simulations confirms the accuracy and validity of our model. The calculated FFP and polarization profile well explain the previously reported experimental results. In particular, we reveal the possibility of switching the lasing modes and generating single-lobed FFP by properly tuning PhC structures.

  16. Analytical and Empirical Modeling of Wear and Forces of CBN Tool in Hard Turning - A Review

    NASA Astrophysics Data System (ADS)

    Patel, Vallabh Dahyabhai; Gandhi, Anishkumar Hasmukhlal

    2017-08-01

Machining of steel with hardness above 45 HRC (Rockwell C hardness) is referred to as hard turning. There are numerous models which should be scrutinized and implemented to gain optimum performance in hard turning. Various models of hard turning with a cubic boron nitride tool have been reviewed, in an attempt to identify appropriate empirical and analytical models. Validation of the steady-state flank and crater wear model, Usui's wear model, forces from oblique cutting theory, the extended Lee and Shaffer force model, chip formation, and progressive flank wear are covered in this review paper. Effort has been made to understand the relationship between tool wear and tool forces under different cutting conditions and tool geometries, so that the appropriate model can be selected according to user requirements in hard turning.

  17. Introduction to Validation of Analytical Methods: Potentiometric Determination of CO[subscript 2]

    ERIC Educational Resources Information Center

    Hipólito-Nájera, A. Ricardo; Moya-Hernandez, M. Rosario; Gomez-Balderas, Rodolfo; Rojas-Hernandez, Alberto; Romero-Romo, Mario

    2017-01-01

    Validation of analytical methods is a fundamental subject for chemical analysts working in chemical industries. These methods are also relevant for pharmaceutical enterprises, biotechnology firms, analytical service laboratories, government departments, and regulatory agencies. Therefore, for undergraduate students enrolled in majors in the field…

  18. Dynamic modelling and experimental validation of three wheeled tilting vehicles

    NASA Astrophysics Data System (ADS)

    Amati, Nicola; Festini, Andrea; Pelizza, Luigi; Tonoli, Andrea

    2011-06-01

The present paper describes the study of the straight-running stability of a three-wheeled tilting vehicle for urban and sub-urban mobility. The analysis was carried out by developing a multibody model in the Matlab/Simulink SimMechanics environment. An Adams motorcycle model and an equivalent analytical model were developed for cross-validation and for highlighting the similarities with the lateral dynamics of motorcycles. Field tests were carried out to validate the model and identify some critical parameters, such as the damping of the steering system. The stability analysis demonstrates that the lateral dynamic motions are characterised by vibration modes similar to those of a motorcycle. Additionally, it shows that the wobble mode is significantly affected by the castor trail, whereas it is only slightly affected by the dynamics of the front suspension. For the present case study, the frame compliance also has no influence on the weave and wobble.

  19. Development of a solvent-free analytical method for paracetamol quantitative determination in Blood Brain Barrier in vitro model.

    PubMed

    Langlois, Marie-Hélène; Vekris, Antonios; Bousses, Christine; Mordelet, Elodie; Buhannic, Nathalie; Séguard, Céline; Couraud, Pierre-Olivier; Weksler, Babette B; Petry, Klaus G; Gaudin, Karen

    2015-04-15

A reversed-phase high-performance liquid chromatography/diode array detection method was developed and validated for paracetamol quantification in cell culture fluid from an in vitro Blood Brain Barrier model. The chromatographic method and sample preparation were developed using only aqueous solvents. The column was an XTerra RP18 (150 × 4.6 mm, 3.5 μm) with an XTerra RP18 guard column (20 × 4.6 mm, 3.5 μm) at 35 °C; the mobile phase was 100% formate buffer (20 mM, pH 4) at a flow rate of 1 mL/min. Detection was at 242 nm and the injection volume was 10 μL. Validation was performed using the accuracy profile approach. The analytical procedure was validated with acceptance limits of ±10% over a concentration range of 1 to 58 mg L(-1). The procedure was then used routinely to determine the paracetamol concentration in a blood brain barrier in vitro model. Application of the Unither paracetamol formulation in the Blood Brain Barrier model allowed the determination and comparison of the transcellular passage of paracetamol at 37 °C and 4 °C, which excludes paracellular or non-specific leakage. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Choice Defines Value: A Predictive Modeling Competition in Health Preference Research.

    PubMed

    Jakubczyk, Michał; Craig, Benjamin M; Barra, Mathias; Groothuis-Oudshoorn, Catharina G M; Hartman, John D; Huynh, Elisabeth; Ramos-Goñi, Juan M; Stolk, Elly A; Rand, Kim

    2018-02-01

To identify which specifications and approaches to model selection better predict health preferences, the International Academy of Health Preference Research (IAHPR) hosted a predictive modeling competition including 18 teams from around the world. In April 2016, an exploratory survey was fielded: 4074 US respondents completed 20 out of 1560 paired comparisons by choosing between two health descriptions (e.g., longer life span vs. better health). The exploratory data were distributed to all teams. By July, eight teams had submitted their predictions for 1600 additional pairs and described their analytical approach. After these predictions had been posted online, a confirmatory survey was fielded (4148 additional respondents). The victorious team, "Discreetly Charming Econometricians," led by Michał Jakubczyk, achieved the smallest χ2, 4391.54 (a predefined criterion). Its primary scientific findings were that different models performed better with different pairs, that the value of life span is not constant proportional, and that logit models have poor predictive validity in health valuation. The results demonstrated the diversity and potential of new analytical approaches in health preference research and highlighted the importance of predictive validity in health valuation. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  1. Using a matrix-analytical approach to synthesizing evidence solved incompatibility problem in the hierarchy of evidence.

    PubMed

    Walach, Harald; Loef, Martin

    2015-11-01

    The hierarchy of evidence presupposes linearity and additivity of effects, as well as commutativity of knowledge structures. It thereby implicitly assumes a classical theoretical model. This is an argumentative article that uses theoretical analysis based on pertinent literature and known facts to examine the standard view of methodology. We show that the assumptions of the hierarchical model are wrong. The knowledge structures gained by various types of studies are not sequentially indifferent, that is, do not commute. External validity and internal validity are at least partially incompatible concepts. Therefore, one needs a different theoretical structure, typical of quantum-type theories, to model this situation. The consequence of this situation is that the implicit assumptions of the hierarchical model are wrong, if generalized to the concept of evidence in total. The problem can be solved by using a matrix-analytical approach to synthesizing evidence. Here, research methods that produce different types of evidence that complement each other are synthesized to yield the full knowledge. We show by an example how this might work. We conclude that the hierarchical model should be complemented by a broader reasoning in methodology. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Harmonization of strategies for the validation of quantitative analytical procedures. A SFSTP proposal--Part I.

    PubMed

    Hubert, Ph; Nguyen-Huu, J-J; Boulanger, B; Chapuzet, E; Chiap, P; Cohen, N; Compagnon, P-A; Dewé, W; Feinberg, M; Lallier, M; Laurentie, M; Mercier, N; Muzard, G; Nivet, C; Valat, L

    2004-11-15

This paper is the first part of a summary report of a new commission of the Société Française des Sciences et Techniques Pharmaceutiques (SFSTP). The main objective of this commission was the harmonization of approaches for the validation of quantitative analytical procedures. Indeed, the principle of validating these procedures is today widespread in all domains of activity where measurements are made. Nevertheless, the simple question of whether an analytical procedure is acceptable for a given application remains incompletely resolved in several cases, despite the various regulations relating to good practices (GLP, GMP, ...) and other normative documents (ISO, ICH, FDA, ...). Many official documents describe the validation criteria to be tested, but they do not propose any experimental protocol and most often limit themselves to general concepts. For those reasons, two previous SFSTP commissions elaborated validation guides to concretely help industrial scientists in charge of drug development apply these regulatory recommendations. While these first two guides contributed widely to the use and progress of analytical validation, they nevertheless show weaknesses regarding the conclusions of the statistical tests performed and the decisions to be made with respect to the acceptance limits defined by the use of an analytical procedure. The present paper proposes to revisit the very bases of analytical validation in order to develop a harmonized approach, notably by distinguishing diagnosis rules from decision rules. The decision rule is based on the use of the accuracy profile and the notion of total error, and it simplifies the validation of an analytical procedure while controlling the risk associated with its usage. 
Thanks to this validation approach, it is possible to unambiguously demonstrate the fitness for purpose of a new method, as required by regulatory documents.
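
The accuracy-profile decision rule can be sketched numerically: at each concentration level, compute relative bias and intermediate-precision spread, then check whether an interval around the bias stays inside the acceptance limits. The sketch below is a simplification with hypothetical data; a rigorous profile uses a beta-expectation tolerance interval, for which the fixed factor k = 2 here is only a crude stand-in:

```python
import statistics

# Hedged sketch of an accuracy-profile decision rule. The k = 2 factor is a
# placeholder for the beta-expectation tolerance factor used in the SFSTP
# approach; validation data below are hypothetical.
def accuracy_profile(levels, acceptance_pct=10.0, k=2.0):
    """levels: {nominal_concentration: [measured values]} -> per-level accept/reject."""
    verdicts = {}
    for nominal, measured in levels.items():
        mean = statistics.mean(measured)
        bias_pct = 100.0 * (mean - nominal) / nominal          # systematic error
        rsd_pct = 100.0 * statistics.stdev(measured) / nominal  # random error
        low, high = bias_pct - k * rsd_pct, bias_pct + k * rsd_pct
        # total-error interval must sit inside the acceptance limits
        verdicts[nominal] = (-acceptance_pct <= low and high <= acceptance_pct)
    return verdicts

# Hypothetical validation measurements at three nominal levels:
data = {
    1.0:  [0.98, 1.02, 1.01, 0.99],
    10.0: [10.3, 9.8, 10.1, 10.2],
    50.0: [51.0, 49.2, 50.5, 49.8],
}
print(accuracy_profile(data))  # True at every level -> procedure accepted
```

Combining bias and precision into one total-error interval is what lets this rule answer the acceptability question directly, instead of testing bias and precision separately.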

  3. Experimentally validated mathematical model of analyte uptake by permeation passive samplers.

    PubMed

    Salim, F; Ioannidis, M; Górecki, T

    2017-11-15

    A mathematical model describing the sampling process in a permeation-based passive sampler was developed and evaluated numerically. The model was applied to the Waterloo Membrane Sampler (WMS), which employs a polydimethylsiloxane (PDMS) membrane as a permeation barrier, and an adsorbent as a receiving phase. Samplers of this kind are used for sampling volatile organic compounds (VOC) from air and soil gas. The model predicts the spatio-temporal variation of sorbed and free analyte concentrations within the sampler components (membrane, sorbent bed and dead volume), from which the uptake rate throughout the sampling process can be determined. A gradual decline in the uptake rate during the sampling process is predicted, which is more pronounced when sampling higher concentrations. Decline of the uptake rate can be attributed to diminishing analyte concentration gradient within the membrane, which results from resistance to mass transfer and the development of analyte concentration gradients within the sorbent bed. The effects of changing the sampler component dimensions on the rate of this decline in the uptake rate can be predicted from the model. Performance of the model was evaluated experimentally for sampling of toluene vapors under controlled conditions. The model predictions proved close to the experimental values. The model provides a valuable tool to predict changes in the uptake rate during sampling, to assign suitable exposure times at different analyte concentration levels, and to optimize the dimensions of the sampler in a manner that minimizes these changes during the sampling period.
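
The mechanism the abstract describes can be reproduced with a much simpler one-dimensional sketch: analyte diffuses through a membrane into a receiving volume, and as analyte accumulates there the concentration gradient across the membrane shrinks, so the uptake rate declines. The finite-difference toy model below is not the paper's model; geometry and constants are hypothetical, and sorption in the receiving phase is ignored:

```python
# Hedged finite-difference sketch (not the published WMS model) of uptake-rate
# decline in a permeation passive sampler. All parameter values are hypothetical.
D = 1e-10        # m^2/s, diffusion coefficient in the membrane
L = 1e-4         # m, membrane thickness
C_AIR = 1.0      # sampled-air concentration (arbitrary units)
V_RECV = 1e-3    # lumped receiving-phase capacity (volume per unit area basis)
N = 20           # grid points across the membrane
dx = L / (N - 1)
dt = 0.2 * dx * dx / D   # within the explicit-scheme stability limit

c = [0.0] * N            # concentration profile across the membrane
mass = 0.0               # analyte accumulated in the receiving phase
rates = []
steps_per_sample = 2000
for sample in range(5):
    for _ in range(steps_per_sample):
        c[0] = C_AIR                       # outer face: constant air concentration
        c[-1] = mass / V_RECV              # inner face: receiving-phase concentration
        flux = D * (c[-2] - c[-1]) / dx    # uptake rate per unit area
        mass += flux * dt
        c = [c[0]] + [c[i] + D * dt / dx**2 * (c[i+1] - 2*c[i] + c[i-1])
                      for i in range(1, N - 1)] + [c[-1]]
    rates.append(flux)

# Uptake rate declines monotonically as the receiving phase fills:
print(rates)
```

Even this crude version shows the abstract's key prediction: the decline is driven by the diminishing gradient within the membrane, and it is faster when the receiving side fills more quickly (i.e., at higher sampled concentrations or smaller sorbent capacity).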

  4. Cognitive Task Analysis of En Route Air Traffic Control: Model Extension and Validation.

    ERIC Educational Resources Information Center

    Redding, Richard E.; And Others

    Phase II of a project extended data collection and analytic procedures to develop a model of expertise and skill development for en route air traffic control (ATC). New data were collected by recording the Dynamic Simulator (DYSIM) performance of five experts with a work overload problem. Expert controllers were interviewed in depth for mental…

  5. Validation of a Four-Factor Model of Career Indecision

    ERIC Educational Resources Information Center

    Brown, Steven D.; Hacker, Jason; Abrams, Matthew; Carr, Andrea; Rector, Christopher; Lamp, Kristen; Telander, Kyle; Siena, Anne

    2012-01-01

    Two studies were designed to explore whether a meta-analytically derived four-factor model of career indecision (Brown & Rector, 2008) could be replicated at the primary and secondary data levels. In the first study, an initial pool of 167 items was written based on 35 different instruments whose scores had loaded saliently on at least one…

  6. The Internal Structure of Positive and Negative Affect: A Confirmatory Factor Analysis of the PANAS

    ERIC Educational Resources Information Center

    Tuccitto, Daniel E.; Giacobbi, Peter R., Jr.; Leite, Walter L.

    2010-01-01

    This study tested five confirmatory factor analytic (CFA) models of the Positive Affect Negative Affect Schedule (PANAS) to provide validity evidence based on its internal structure. A sample of 223 club sport athletes indicated their emotions during the past week. Results revealed that an orthogonal two-factor CFA model, specifying error…

  7. Validation of the OpCost logging cost model using contractor surveys

    Treesearch

    Conor K. Bell; Robert F. Keefe; Jeremy S. Fried

    2017-01-01

    OpCost is a harvest and fuel treatment operations cost model developed to function as both a standalone tool and an integrated component of the Bioregional Inventory Originated Simulation Under Management (BioSum) analytical framework for landscape-level analysis of forest management alternatives. OpCost is an updated implementation of the Fuel Reduction Cost Simulator...

  8. An Illumination- and Temperature-Dependent Analytical Model for Copper Indium Gallium Diselenide (CIGS) Solar Cells

    DOE PAGES

    Sun, Xingshu; Silverman, Timothy; Garris, Rebekah; ...

    2016-07-18

In this study, we present a physics-based analytical model for copper indium gallium diselenide (CIGS) solar cells that describes the illumination- and temperature-dependent current-voltage (I-V) characteristics and accounts for the statistical shunt variation of each cell. The model is derived by solving the drift-diffusion transport equation so that its parameters are physical and, therefore, can be obtained from independent characterization experiments. The model is validated against CIGS I-V characteristics as a function of temperature and illumination intensity. This physics-based model can be integrated into a large-scale simulation framework to optimize the performance of solar modules, as well as predict the long-term output yields of photovoltaic farms under different environmental conditions.
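
The kind of illumination- and temperature-dependent I-V relation such physics-based models reduce to can be sketched with a single-diode equation, where the photocurrent scales with irradiance and the saturation current carries the temperature dependence. This is a generic illustration, not the published CIGS model; all parameter values are placeholders, not fitted values:

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

# Hedged single-diode sketch of an illumination- and temperature-dependent I-V
# characteristic. Parameter defaults are placeholders, not CIGS fit results.
def cell_current(v, suns, t_kelvin, j_ph_1sun=0.035, j0_300k=1e-11,
                 e_a=1.15, n_ideal=1.5):
    """Current density (A/cm^2) at voltage v, irradiance in suns, temperature in K."""
    j_ph = j_ph_1sun * suns  # photocurrent scales with irradiance
    # saturation current grows with temperature via an activation energy e_a (eV)
    j0 = j0_300k * math.exp(-e_a / K_B * (1.0 / t_kelvin - 1.0 / 300.0))
    return j_ph - j0 * (math.exp(v / (n_ideal * K_B * t_kelvin)) - 1.0)

# Short-circuit current doubles when irradiance doubles:
print(cell_current(0.0, 1.0, 300.0), cell_current(0.0, 2.0, 300.0))
```

In the published approach, the corresponding parameters are derived from the drift-diffusion solution rather than fitted, which is what makes them obtainable from independent characterization experiments.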

  9. Predictive Big Data Analytics: A Study of Parkinson’s Disease Using Large, Complex, Heterogeneous, Incongruent, Multi-Source and Incomplete Observations

    PubMed Central

    Dinov, Ivo D.; Heavner, Ben; Tang, Ming; Glusman, Gustavo; Chard, Kyle; Darcy, Mike; Madduri, Ravi; Pa, Judy; Spino, Cathie; Kesselman, Carl; Foster, Ian; Deutsch, Eric W.; Price, Nathan D.; Van Horn, John D.; Ames, Joseph; Clark, Kristi; Hood, Leroy; Hampstead, Benjamin M.; Dauer, William; Toga, Arthur W.

    2016-01-01

    Background A unique archive of Big Data on Parkinson’s Disease is collected, managed and disseminated by the Parkinson’s Progression Markers Initiative (PPMI). The integration of such complex and heterogeneous Big Data from multiple sources offers unparalleled opportunities to study the early stages of prevalent neurodegenerative processes, track their progression and quickly identify the efficacies of alternative treatments. Many previous human and animal studies have examined the relationship of Parkinson’s disease (PD) risk to trauma, genetics, environment, co-morbidities, or life style. The defining characteristics of Big Data–large size, incongruency, incompleteness, complexity, multiplicity of scales, and heterogeneity of information-generating sources–all pose challenges to the classical techniques for data management, processing, visualization and interpretation. We propose, implement, test and validate complementary model-based and model-free approaches for PD classification and prediction. To explore PD risk using Big Data methodology, we jointly processed complex PPMI imaging, genetics, clinical and demographic data. Methods and Findings Collective representation of the multi-source data facilitates the aggregation and harmonization of complex data elements. This enables joint modeling of the complete data, leading to the development of Big Data analytics, predictive synthesis, and statistical validation. Using heterogeneous PPMI data, we developed a comprehensive protocol for end-to-end data characterization, manipulation, processing, cleaning, analysis and validation. Specifically, we (i) introduce methods for rebalancing imbalanced cohorts, (ii) utilize a wide spectrum of classification methods to generate consistent and powerful phenotypic predictions, and (iii) generate reproducible machine-learning based classification that enables the reporting of model parameters and diagnostic forecasting based on new data. 
We evaluated several complementary model-based predictive approaches, which failed to generate accurate and reliable diagnostic predictions. However, the results of several machine-learning based classification methods indicated significant power to predict Parkinson’s disease in the PPMI subjects (consistent accuracy, sensitivity, and specificity exceeding 96%, confirmed using statistical n-fold cross-validation). Clinical (e.g., Unified Parkinson's Disease Rating Scale (UPDRS) scores), demographic (e.g., age), genetics (e.g., rs34637584, chr12), and derived neuroimaging biomarker (e.g., cerebellum shape index) data all contributed to the predictive analytics and diagnostic forecasting. Conclusions Model-free Big Data machine learning-based classification methods (e.g., adaptive boosting, support vector machines) can outperform model-based techniques in terms of predictive precision and reliability (e.g., forecasting patient diagnosis). We observed that statistical rebalancing of cohort sizes yields better discrimination of group differences, specifically for predictive analytics based on heterogeneous and incomplete PPMI data. UPDRS scores play a critical role in predicting diagnosis, which is expected based on the clinical definition of Parkinson’s disease. Even without longitudinal UPDRS data, however, the accuracy of model-free machine learning based classification is over 80%. The methods, software and protocols developed here are openly shared and can be employed to study other neurodegenerative disorders (e.g., Alzheimer’s, Huntington’s, amyotrophic lateral sclerosis), as well as for other predictive Big Data analytics applications. PMID:27494614
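The reported accuracies rest on statistical n-fold cross-validation. A minimal, generic sketch of that resampling scheme follows; the toy majority-class scorer is a placeholder standing in for the study's actual classifiers, not the PPMI pipeline:

```python
import random

def n_fold_cross_validate(samples, labels, train_and_score, n=5, seed=0):
    """Shuffle once, split into n folds, train on n-1 folds, score on the rest."""
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::n] for i in range(n)]  # round-robin fold assignment
    scores = []
    for fold in folds:
        held = set(fold)
        tr = [i for i in idx if i not in held]
        scores.append(train_and_score(
            [samples[i] for i in tr], [labels[i] for i in tr],
            [samples[i] for i in fold], [labels[i] for i in fold]))
    return sum(scores) / n

def majority_baseline(x_train, y_train, x_test, y_test):
    """Toy 'classifier': predict the majority training label; return accuracy."""
    majority = max(set(y_train), key=y_train.count)
    return sum(1 for y in y_test if y == majority) / len(y_test)
```

Averaging the per-fold scores, rather than pooling predictions, matches the usual reporting convention for n-fold estimates.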

  10. An analytic solution for numerical modeling validation in electromagnetics: the resistive sphere

    NASA Astrophysics Data System (ADS)

    Swidinsky, Andrei; Liu, Lifei

    2017-11-01

    We derive the electromagnetic response of a resistive sphere to an electric dipole source buried in a conductive whole space. The solution consists of an infinite series of spherical Bessel functions and associated Legendre polynomials, and follows the well-studied problem of a conductive sphere buried in a resistive whole space in the presence of a magnetic dipole. Our result is particularly useful for controlled-source electromagnetic problems using a grounded electric dipole transmitter and can be used to check numerical methods of calculating the response of resistive targets (such as finite difference, finite volume, finite element and integral equation). While we elect to focus on the resistive sphere in our examples, the expressions in this paper are completely general and allow for arbitrary source frequency, sphere radius, transmitter position, receiver position and sphere/host conductivity contrast so that conductive target responses can also be checked. Commonly used mesh validation techniques consist of comparisons against other numerical codes, but such solutions may not always be reliable or readily available. Alternatively, the response of simple 1-D models can be tested against well-known whole space, half-space and layered earth solutions, but such an approach is inadequate for validating models with curved surfaces. We demonstrate that our theoretical results can be used as a complementary validation tool by comparing analytic electric fields to those calculated through a finite-element analysis; the software implementation of this infinite series solution is made available for direct and immediate application.
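The series solution above is built from spherical Bessel functions. A brief sketch of generating j_n(x) by the standard upward recurrence (a textbook identity, not code from the paper; upward recurrence loses accuracy for orders much larger than x, where downward recurrence is preferred in production codes):

```python
import math

def spherical_jn(n_max, x):
    """Spherical Bessel functions j_0 .. j_n_max at x > 0 via the upward
    recurrence j_{n+1}(x) = ((2n+1)/x) * j_n(x) - j_{n-1}(x)."""
    j = [math.sin(x) / x,                         # j_0
         math.sin(x) / x**2 - math.cos(x) / x]    # j_1
    for n in range(1, n_max):
        j.append((2 * n + 1) / x * j[n] - j[n - 1])
    return j[:n_max + 1]
```

For example, `spherical_jn(2, 1.0)` returns j_0(1), j_1(1), j_2(1).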

  11. The Importance of Method Selection in Determining Product Integrity for Nutrition Research

    PubMed Central

    Mudge, Elizabeth M; Brown, Paula N

    2016-01-01

    The American Herbal Products Association estimates that there are as many as 3000 plant species in commerce. The FDA estimates that there are about 85,000 dietary supplement products in the marketplace. The pace of product innovation far exceeds that of analytical methods development and validation, with new ingredients, matrixes, and combinations resulting in an analytical community that has been unable to keep up. This has led to a lack of validated analytical methods for dietary supplements and to inappropriate method selection where methods do exist. Only after rigorous validation procedures to ensure that methods are fit for purpose should they be used in a routine setting to verify product authenticity and quality. By following systematic procedures and establishing performance requirements for analytical methods before method development and validation, methods can be developed that are both valid and fit for purpose. This review summarizes advances in method selection, development, and validation regarding herbal supplement analysis and provides several documented examples of inappropriate method selection and application. PMID:26980823

  12. The Importance of Method Selection in Determining Product Integrity for Nutrition Research.

    PubMed

    Mudge, Elizabeth M; Betz, Joseph M; Brown, Paula N

    2016-03-01

    The American Herbal Products Association estimates that there are as many as 3000 plant species in commerce. The FDA estimates that there are about 85,000 dietary supplement products in the marketplace. The pace of product innovation far exceeds that of analytical methods development and validation, with new ingredients, matrixes, and combinations resulting in an analytical community that has been unable to keep up. This has led to a lack of validated analytical methods for dietary supplements and to inappropriate method selection where methods do exist. Only after rigorous validation procedures to ensure that methods are fit for purpose should they be used in a routine setting to verify product authenticity and quality. By following systematic procedures and establishing performance requirements for analytical methods before method development and validation, methods can be developed that are both valid and fit for purpose. This review summarizes advances in method selection, development, and validation regarding herbal supplement analysis and provides several documented examples of inappropriate method selection and application. © 2016 American Society for Nutrition.

  13. Rational quality assessment procedure for less-investigated herbal medicines: Case of a Congolese antimalarial drug with an analytical report.

    PubMed

    Tshitenge, Dieudonné Tshitenge; Ioset, Karine Ndjoko; Lami, José Nzunzu; Ndelo-di-Phanzu, Josaphat; Mufusama, Jean-Pierre Koy Sita; Bringmann, Gerhard

    2016-04-01

    Herbal medicines are the most globally used type of medical drugs. Their high cultural acceptability is due to the experienced safety and efficiency over centuries of use. Many of them are still phytochemically less-investigated, and are used without standardization or quality control. Choosing SIROP KILMA, an authorized Congolese antimalarial phytomedicine, as a model case, our study describes an interdisciplinary approach for a rational quality assessment of herbal drugs in general. It combines an authentication step of the herbal remedy prior to any fingerprinting, the isolation of the major constituents, the development and validation of an HPLC-DAD analytical method with internal markers, and the application of the method to several batches of the herbal medicine (here KILMA) thus permitting the establishment of a quantitative fingerprint. From the constitutive plants of KILMA, acteoside, isoacteoside, stachannin A, and pectolinarigenin-7-O-glucoside were isolated, and acteoside was used as the prime marker for the validation of an analytical method. This study contributes to the efforts of the WHO for the establishment of standards enabling the analytical evaluation of herbal materials. Moreover, the paper describes the first phytochemical and analytical report on a marketed Congolese phytomedicine. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. The German cervical cancer screening model: development and validation of a decision-analytic model for cervical cancer screening in Germany.

    PubMed

    Siebert, Uwe; Sroczynski, Gaby; Hillemanns, Peter; Engel, Jutta; Stabenow, Roland; Stegmaier, Christa; Voigt, Kerstin; Gibis, Bernhard; Hölzel, Dieter; Goldie, Sue J

    2006-04-01

    We sought to develop and validate a decision-analytic model for the natural history of cervical cancer for the German health care context and to apply it to cervical cancer screening. We developed a Markov model for the natural history of cervical cancer and cervical cancer screening in the German health care context. The model reflects current German practice standards for screening, diagnostic follow-up and treatment regarding cervical cancer and its precursors. Data for disease progression and cervical cancer survival were obtained from the literature and German cancer registries. Accuracy of Papanicolaou (Pap) testing was based on meta-analyses. We performed internal and external model validation using observed epidemiological data for unscreened women from different German cancer registries. The model predicts life expectancy, incidence of detected cervical cancer cases, lifetime cervical cancer risks and mortality. The model predicted a lifetime cervical cancer risk of 3.0% and a lifetime cervical cancer mortality of 1.0%, with a peak cancer incidence of 84/100,000 at age 51 years. These results were similar to observed data from German cancer registries, German literature data and results from other international models. Based on our model, annual Pap screening could prevent 98.7% of diagnosed cancer cases and 99.6% of deaths due to cervical cancer in women completely adherent to screening and compliant to treatment. Extending the screening interval from 1 year to 2, 3 or 5 years resulted in reduced screening effectiveness. This model provides a tool for evaluating the long-term effectiveness of different cervical cancer screening tests and strategies.
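The natural-history component described above is a Markov state-transition model. A minimal cohort-simulation sketch follows; the three states and transition probabilities are hypothetical placeholders, not the calibrated parameters of the German model:

```python
# Hypothetical 3-state yearly Markov cohort model: well -> disease -> dead.
# Transition probabilities are illustrative placeholders, not calibrated data.
STATES = ("well", "disease", "dead")
P = {
    "well":    {"well": 0.97, "disease": 0.02, "dead": 0.01},
    "disease": {"well": 0.00, "disease": 0.90, "dead": 0.10},
    "dead":    {"well": 0.00, "disease": 0.00, "dead": 1.00},
}

def run_cohort(cohort, cycles):
    """Propagate state-occupancy fractions through `cycles` yearly transitions."""
    for _ in range(cycles):
        cohort = {s: sum(cohort[f] * P[f][s] for f in STATES) for s in STATES}
    return cohort

# Follow a cohort that starts entirely in the "well" state for 10 years.
final = run_cohort({"well": 1.0, "disease": 0.0, "dead": 0.0}, 10)
```

Because each row of `P` sums to one, total occupancy is conserved across cycles; real screening models additionally attach costs, utilities, and screening-dependent transition rates to each state.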

  15. Simultaneous targeted analysis of trimethylamine-N-oxide, choline, betaine, and carnitine by high performance liquid chromatography tandem mass spectrometry.

    PubMed

    Liu, Jia; Zhao, Mingming; Zhou, Juntuo; Liu, Changjie; Zheng, Lemin; Yin, Yuxin

    2016-11-01

    Trimethylamine-N-oxide (TMAO) is a metabolite generated from choline, betaine and carnitine in a gut microbiota-dependent way. This molecule is associated with development of atherosclerosis and cardiovascular events. A sensitive liquid chromatography electrospray ionization tandem mass spectrometry (LC-ESI-MS/MS) method has been developed and validated for the simultaneous determination of TMAO-related molecules including TMAO, betaine, choline, and carnitine in mouse plasma. Analytes are extracted after protein precipitation by methanol and subjected to LC-ESI-MS/MS without preliminary derivatization. Separation of analytes was achieved on an amide column with acetonitrile-water as the mobile phase. This method has been fully validated in this study in terms of selectivity, linearity, sensitivity, precision, accuracy, and carryover effect, and the stability of the analyte under various conditions has been confirmed. The developed method has been successfully applied to plasma samples of our mouse model. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Analysis of earing behaviour in deep drawing of ASS 304 at elevated temperature

    NASA Astrophysics Data System (ADS)

    Gupta, Amit Kumar; Deole, Aditya; Kotkunde, Nitin; Singh, Swadesh Kumar; Jella, Gangadhar

    2016-08-01

    Earing tendency in a deep-drawn cup of circular blanks is one of the most prominent characteristics observed due to anisotropy in a metal sheet. Such formation of an uneven rim is mainly due to dissimilarity in yield stress as well as in the Lankford parameter (r-value) in different orientations. In this paper, an analytical function coupled with different yield functions, viz. Hill 1948, Barlat 1989 and Barlat Yld 2000-2d, has been used to provide an approximation of the earing profile. In order to validate the results, material parameters for the yield functions and hardening rule have been calibrated for ASS 304 at 250°C and a deep drawing experiment was conducted to measure the earing profile. The predicted earing profiles based on analytical results have been validated using the experimental earing profile. Based on this analysis, Barlat Yld 2000-2d has been observed to be a well-suited yield model for deep drawing of ASS 304, which also confirms the reliability of the analytical function for earing profile estimation.

  17. Wetting boundary condition for the color-gradient lattice Boltzmann method: Validation with analytical and experimental data

    NASA Astrophysics Data System (ADS)

    Akai, Takashi; Bijeljic, Branko; Blunt, Martin J.

    2018-06-01

    In the color gradient lattice Boltzmann model (CG-LBM), a fictitious-density wetting boundary condition has been widely used because of its ease of implementation. However, as we show, this may lead to inaccurate results in some cases. In this paper, a new scheme for the wetting boundary condition is proposed which can handle complicated 3D geometries. The validity of our method for static problems is demonstrated by comparing the simulated results to analytical solutions in 2D and 3D geometries with curved boundaries. Then, capillary rise simulations are performed to study dynamic problems where the three-phase contact line moves. The results are compared to experimental results in the literature (Heshmati and Piri, 2014). If a constant contact angle is assumed, the simulations agree with the analytical solution based on the Lucas-Washburn equation. However, to match the experiments, we need to implement a dynamic contact angle that varies with the flow rate.
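The dynamic benchmark cited above is the Lucas-Washburn equation; its classical closed form, l(t) = sqrt(γ r cos θ t / (2 μ)), can be sketched as follows (the fluid and tube parameters are illustrative, not the values of the cited experiment):

```python
import math

def lucas_washburn_length(t, gamma, radius, theta_deg, mu):
    """Imbibition length l(t) = sqrt(gamma * r * cos(theta) * t / (2 * mu)).
    Valid for viscous-dominated capillary rise, neglecting gravity and inertia."""
    cos_t = math.cos(math.radians(theta_deg))
    return math.sqrt(gamma * radius * cos_t * t / (2.0 * mu))

# Illustrative values: water-like fluid (gamma = 72 mN/m, mu = 1 mPa.s)
# in a 50-micron-radius tube with a 30-degree contact angle.
l_1s = lucas_washburn_length(1.0, gamma=0.072, radius=50e-6,
                             theta_deg=30.0, mu=1.0e-3)
```

The square-root time dependence is the signature to check against simulation: quadrupling t should exactly double l(t).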

  18. An accelerated photo-magnetic imaging reconstruction algorithm based on an analytical forward solution and a fast Jacobian assembly method

    NASA Astrophysics Data System (ADS)

    Nouizi, F.; Erkol, H.; Luk, A.; Marks, M.; Unlu, M. B.; Gulsen, G.

    2016-10-01

    We previously introduced photo-magnetic imaging (PMI), an imaging technique that illuminates the medium under investigation with near-infrared light and measures the induced temperature increase using magnetic resonance thermometry (MRT). Using a multiphysics solver combining photon migration and heat diffusion, PMI models the spatiotemporal distribution of temperature variation and recovers high resolution optical absorption images using these temperature maps. In this paper, we present a new fast non-iterative reconstruction algorithm for PMI. This new algorithm uses analytic methods during the resolution of the forward problem and the assembly of the sensitivity matrix. We validate our new analytic-based algorithm against the first-generation finite element method (FEM) based reconstruction algorithm previously developed by our team. The validation is performed using first synthetic data and afterwards real MRT-measured temperature maps. Our new method accelerates the reconstruction process 30-fold when compared to a single iteration of the FEM-based algorithm.

  19. A Simulation Study of Threats to Validity in Quasi-Experimental Designs: Interrelationship between Design, Measurement, and Analysis.

    PubMed

    Holgado-Tello, Fco P; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana; Pérez-Gil, José A

    2016-01-01

    The Campbellian tradition provides a conceptual framework to assess threats to validity. On the other hand, different models of causal analysis have been developed to control estimation biases in different research designs. However, the link between design features, measurement issues, and concrete impact estimation analyses is weak. In order to provide an empirical solution to this problem, we use Structural Equation Modeling (SEM) as a first approximation to operationalize the analytical implications of threats to validity in quasi-experimental designs. Based on the analogies established between the Classical Test Theory (CTT) and causal analysis, we describe an empirical study based on SEM in which range restriction and statistical power have been simulated in two different models: (1) A multistate model in the control condition (pre-test); and (2) A single-trait-multistate model in the control condition (post-test), adding a new mediator latent exogenous (independent) variable that represents a threat to validity. Results show, empirically, how the differences between both the models could be partially or totally attributed to these threats. Therefore, SEM provides a useful tool to analyze the influence of potential threats to validity.

  20. A Simulation Study of Threats to Validity in Quasi-Experimental Designs: Interrelationship between Design, Measurement, and Analysis

    PubMed Central

    Holgado-Tello, Fco. P.; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana; Pérez-Gil, José A.

    2016-01-01

    The Campbellian tradition provides a conceptual framework to assess threats to validity. On the other hand, different models of causal analysis have been developed to control estimation biases in different research designs. However, the link between design features, measurement issues, and concrete impact estimation analyses is weak. In order to provide an empirical solution to this problem, we use Structural Equation Modeling (SEM) as a first approximation to operationalize the analytical implications of threats to validity in quasi-experimental designs. Based on the analogies established between the Classical Test Theory (CTT) and causal analysis, we describe an empirical study based on SEM in which range restriction and statistical power have been simulated in two different models: (1) A multistate model in the control condition (pre-test); and (2) A single-trait-multistate model in the control condition (post-test), adding a new mediator latent exogenous (independent) variable that represents a threat to validity. Results show, empirically, how the differences between both the models could be partially or totally attributed to these threats. Therefore, SEM provides a useful tool to analyze the influence of potential threats to validity. PMID:27378991

  1. Radiated Sound Power from a Curved Honeycomb Panel

    NASA Technical Reports Server (NTRS)

    Robinson, Jay H.; Buehrle, Ralph D.; Klos, Jacob; Grosveld, Ferdinand W.

    2003-01-01

    The validation of finite element and boundary element models for the vibro-acoustic response of a curved honeycomb core composite aircraft panel has been completed. The finite element and boundary element models were previously validated separately. This validation process was hampered significantly by the method in which the panel was installed in the test facility. The fixture used was made primarily of fiberboard, and the panel was held in a groove in the fiberboard by a compression fitting made of plastic tubing. The validated model is intended to be used to evaluate noise reduction concepts on both an experimental and an analytic basis simultaneously. An initial parametric study of the influence of core thickness on the radiated sound power from this panel was subsequently conducted using this numerical model. This study was significantly influenced by the presence of strong boundary condition effects but indicated that the radiated sound power from this panel was insensitive to core thickness, primarily due to the offsetting effects of added mass and added stiffness in the frequency range investigated.

  2. Generalized model of electromigration with 1:1 (analyte:selector) complexation stoichiometry: part II. Application to dual systems and experimental verification.

    PubMed

    Müllerová, Ludmila; Dubský, Pavel; Gaš, Bohuslav

    2015-03-06

    Interactions among analyte forms that undergo simultaneous dissociation/protonation and complexation with multiple selectors take the shape of a highly interconnected multi-equilibrium scheme. This makes it difficult to express the effective mobility of the analyte in these systems, which are often encountered in electrophoretic separations, unless a generalized model is introduced. In the first part of this series, we presented the theory of electromigration of a multivalent weakly acidic/basic/amphoteric analyte undergoing complexation with a mixture of an arbitrary number of selectors. In this work we demonstrate the validity of this concept experimentally. The theory leads to three useful perspectives, each of which is closely related to the one originally formulated for simpler systems. If pH, IS and the selector mixture composition are all kept constant, the system is treated as if only a single analyte form interacted with a single selector. If the pH changes at constant IS and mixture composition, the already well-established models of a weakly acidic/basic analyte interacting with a single selector can be employed. Varying the mixture composition at constant IS and pH leads to a situation where virtually a single analyte form interacts with a mixture of selectors. We show how to switch between the three perspectives in practice and confirm that they can be employed interchangeably according to the specific needs by measurements performed in single- and dual-selector systems at a pH where the analyte is fully dissociated, partly dissociated or fully protonated. A weak monoprotic analyte (R-flurbiprofen) and two selectors (native β-cyclodextrin and monovalent positively charged 6-monodeoxy-6-monoamino-β-cyclodextrin) serve as a model system. Copyright © 2015 Elsevier B.V. All rights reserved.
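In the simplest limiting case referenced above (a single fully dissociated analyte form and a single selector), the effective mobility reduces to the classical 1:1 complexation expression, a concentration-weighted average of the free and complexed mobilities. A minimal sketch (the mobility and binding values below are illustrative, not the paper's data):

```python
def effective_mobility(mu_free, mu_complex, K, c_selector):
    """Classical 1:1 complexation model:
    mu_eff = (mu_free + mu_complex * K * c) / (1 + K * c),
    where K is the binding constant and c the free selector concentration."""
    Kc = K * c_selector
    return (mu_free + mu_complex * Kc) / (1.0 + Kc)
```

The two limits are the usual sanity checks: at c = 0 the analyte migrates with its free mobility, and at saturating selector concentration the mobility approaches that of the complex.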

  3. An analytic current-voltage model for quasi-ballistic III-nitride high electron mobility transistors

    NASA Astrophysics Data System (ADS)

    Li, Kexin; Rakheja, Shaloo

    2018-05-01

    We present an analytic model to describe the DC current-voltage (I-V) relationship in scaled III-nitride high electron mobility transistors (HEMTs) in which transport within the channel is quasi-ballistic in nature. Following Landauer's transport theory and charge calculation based on two-dimensional electrostatics that incorporates negative momenta states from the drain terminal, an analytic expression for current as a function of terminal voltages is developed. The model interprets the non-linearity of access regions in non-self-aligned HEMTs. Effects of Joule heating with temperature-dependent thermal conductivity are incorporated in the model in a self-consistent manner. With a total of 26 input parameters, the analytic model offers reduced empiricism compared to existing GaN HEMT models. To verify the model, experimental I-V data of InAlN/GaN with InGaN back-barrier HEMTs with channel lengths of 42 and 105 nm are considered. Additionally, the model is validated against numerical I-V data obtained from DC hydrodynamic simulations of an unintentionally doped AlGaN-on-GaN HEMT with 50-nm gate length. The model is also verified against pulsed I-V measurements of a 150-nm T-gate GaN HEMT. Excellent agreement between the model and experimental and numerical results for output current, transconductance, and output conductance is demonstrated over a broad range of bias and temperature conditions.

  4. An analytic model for accurate spring constant calibration of rectangular atomic force microscope cantilevers.

    PubMed

    Li, Rui; Ye, Hongfei; Zhang, Weisheng; Ma, Guojun; Su, Yewang

    2015-10-29

    Spring constant calibration of the atomic force microscope (AFM) cantilever is of fundamental importance for quantifying the force between the AFM cantilever tip and the sample. The calibration within the framework of thin plate theory undoubtedly has a higher accuracy and broader scope than that within the well-established beam theory. However, thin plate theory-based accurate analytic determination of the constant has been perceived as an extremely difficult issue. In this paper, we implement the thin plate theory-based analytic modeling for the static behavior of rectangular AFM cantilevers, which reveals that the three-dimensional effect and Poisson effect play important roles in accurate determination of the spring constants. A quantitative scaling law is found that the normalized spring constant depends only on the Poisson's ratio, normalized dimension and normalized load coordinate. Both the literature and our refined finite element model validate the present results. The developed model is expected to serve as the benchmark for accurate calibration of rectangular AFM cantilevers.
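For contrast with the plate-theory result derived in the paper, the well-established beam-theory normal spring constant of a rectangular cantilever loaded at its free end is k = E w t³ / (4 L³). A sketch of that baseline formula (the cantilever dimensions are illustrative):

```python
def beam_spring_constant(E, width, thickness, length):
    """Euler-Bernoulli beam theory: k = E * w * t^3 / (4 * L^3)
    for a rectangular cantilever with end load (SI units -> N/m)."""
    return E * width * thickness**3 / (4.0 * length**3)

# Illustrative silicon cantilever: E = 169 GPa, 30 um wide, 2 um thick, 200 um long.
k = beam_spring_constant(169e9, 30e-6, 2e-6, 200e-6)
```

The strong t³/L³ dependence is why small dimensional uncertainties dominate calibration error, and why the plate-theory corrections (three-dimensional and Poisson effects) discussed in the abstract matter for accuracy.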

  5. Pressurization of cryogens - A review of current technology and its applicability to low-gravity conditions

    NASA Technical Reports Server (NTRS)

    Van Dresar, N. T.

    1992-01-01

    A review of technology, history, and current status for pressurized expulsion of cryogenic tankage is presented. Use of tank pressurization to expel cryogenic fluid will continue to be studied for future spacecraft applications over a range of operating conditions in the low-gravity environment. The review examines experimental test results and analytical model development for quiescent and agitated conditions in normal-gravity followed by a discussion of pressurization and expulsion in low-gravity. Validated, 1-D, finite difference codes exist for the prediction of pressurant mass requirements within the range of quiescent normal-gravity test data. To date, the effects of liquid sloshing have been characterized by tests in normal-gravity, but analytical models capable of predicting pressurant gas requirements remain unavailable. Efforts to develop multidimensional modeling capabilities in both normal and low-gravity have recently occurred. Low-gravity cryogenic fluid transfer experiments are needed to obtain low-gravity pressurized expulsion data. This data is required to guide analytical model development and to verify code performance.

  6. Pressurization of cryogens: A review of current technology and its applicability to low-gravity conditions

    NASA Technical Reports Server (NTRS)

    Vandresar, N. T.

    1992-01-01

    A review of technology, history, and current status for pressurized expulsion of cryogenic tankage is presented. Use of tank pressurization to expel cryogenic fluids will continue to be studied for future spacecraft applications over a range of operating conditions in the low-gravity environment. The review examines experimental test results and analytical model development for quiescent and agitated conditions in normal-gravity, followed by a discussion of pressurization and expulsion in low-gravity. Validated, 1-D, finite difference codes exist for the prediction of pressurant mass requirements within the range of quiescent normal-gravity test data. To date, the effects of liquid sloshing have been characterized by tests in normal-gravity, but analytical models capable of predicting pressurant gas requirements remain unavailable. Efforts to develop multidimensional modeling capabilities in both normal and low-gravity have recently occurred. Low-gravity cryogenic fluid transfer experiments are needed to obtain low-gravity pressurized expulsion data. This data is required to guide analytical model development and to verify code performance.

  7. 3-D Inhomogeneous Radiative Transfer Model using a Planar-stratified Forward RT Model and Horizontal Perturbation Series

    NASA Astrophysics Data System (ADS)

    Zhang, K.; Gasiewski, A. J.

    2017-12-01

    A horizontally inhomogeneous unified microwave radiative transfer (HI-UMRT) model based upon a nonspherical hydrometeor scattering model is being developed at the University of Colorado at Boulder to facilitate forward radiative simulations for 3-dimensionally inhomogeneous clouds in severe weather. The HI-UMRT 3-D analytical solution is based on incorporating a planar-stratified 1-D UMRT algorithm within a horizontally inhomogeneous iterative perturbation scheme. Single-scattering parameters are computed using the Discrete Dipole Scattering (DDSCAT v7.3) program for hundreds of carefully selected nonspherical complex frozen hydrometeors from the NASA/GSFC DDSCAT database. The required analytic factorization symmetry of the transition matrix in the normalized RT equation was proven analytically and validated numerically using the DDSCAT-based full Stokes matrix of randomly oriented hydrometeors. The HI-UMRT model thus inherits the properties of unconditional numerical stability, efficiency, and accuracy from the UMRT algorithm and provides a practical 3-D two-Stokes parameter radiance solution with Jacobian to be used within microwave retrievals and data assimilation schemes. In addition, a fast forward radar reflectivity operator with Jacobian based on DDSCAT backscatter efficiency computed for large hydrometeors is incorporated into the HI-UMRT model to provide applicability to active radar sensors. The HI-UMRT will be validated strategically at two levels: 1) intercomparison of brightness temperature (Tb) results with those of several 1-D and 3-D RT models, including UMRT, CRTM and Monte Carlo models, and 2) intercomparison of Tb with observed data from combined passive and active spaceborne sensors (e.g. GPM GMI and DPR). The precise expression for determining the required number of 3-D iterations to achieve an error bound on the perturbation solution will be developed to facilitate the numerical verification of the HI-UMRT code complexity and computation performance.

  8. Working towards accreditation by the International Standards Organization 15189 Standard: how to validate an in-house developed method: an example of lead determination in whole blood by electrothermal atomic absorption spectrometry.

    PubMed

    Garcia Hejl, Carine; Ramirez, Jose Manuel; Vest, Philippe; Chianea, Denis; Renard, Christophe

    2014-09-01

    Laboratories working towards accreditation by the International Standards Organization (ISO) 15189 standard are required to demonstrate the validity of their analytical methods. The differing guidelines set by various accreditation organizations make it difficult to provide objective evidence that an in-house method is fit for the intended purpose, and the required performance characteristics tests and acceptance criteria are not always detailed. The laboratory must therefore choose the most suitable validation protocol and set the acceptance criteria. We propose a validation protocol to evaluate the performance of an in-house method. As an example, we validated the process for the detection and quantification of lead in whole blood by electrothermal atomic absorption spectrometry (ETAAS). The fundamental parameters tested were selectivity, calibration model, precision, accuracy (and uncertainty of measurement), contamination, sample stability, reference interval, and analytical interference. We have developed a protocol that has been applied successfully to quantify lead in whole blood by ETAAS. In particular, our method is selective, linear, accurate, and precise, making it suitable for use in routine diagnostics.

  9. QSRR modeling for the chromatographic retention behavior of some β-lactam antibiotics using forward and firefly variable selection algorithms coupled with multiple linear regression.

    PubMed

    Fouad, Marwa A; Tolba, Enas H; El-Shal, Manal A; El Kerdawy, Ahmed M

    2018-05-11

    The justified continuous emergence of new β-lactam antibiotics provokes the need for suitable analytical methods that accelerate and facilitate their analysis. A face-centered central composite experimental design was adopted, using different levels of phosphate buffer pH and acetonitrile percentage at zero time and after 15 min in a gradient program, to obtain the optimum chromatographic conditions for the elution of 31 β-lactam antibiotics. Retention factors were used as the target property to build two QSRR models, utilizing the conventional forward selection and the advanced nature-inspired firefly algorithms for descriptor selection, coupled with multiple linear regression. The obtained models showed high performance in both internal and external validation, indicating their robustness and predictive ability. The Williams-Hotelling test and Student's t-test showed no statistically significant difference between the models' results. Y-randomization validation showed that the obtained models arise from significant correlation between the selected molecular descriptors and the analytes' chromatographic retention. These results indicate that the generated FS-MLR and FFA-MLR models are of comparable quality at both the training and validation levels. They also give comparable information about the molecular features that influence the retention behavior of β-lactams under the current chromatographic conditions. We conclude that, in some cases, a simple conventional feature selection algorithm can generate models as robust and predictive as those generated using advanced ones. Copyright © 2018 Elsevier B.V. All rights reserved.
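The forward-selection half of such a workflow can be sketched in a few dozen lines. The following is a minimal, self-contained illustration, not the authors' code: the descriptor names (logP, PSA, MW) and all numeric values are invented, and each candidate model is fitted by ordinary least squares via the normal equations.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def ols_r2(X, y):
    """Fit y = b0 + b.x by ordinary least squares; return R^2."""
    rows = [[1.0] + list(r) for r in X]
    n, p = len(rows), len(rows[0])
    XtX = [[sum(rows[i][a] * rows[i][c] for i in range(n)) for c in range(p)]
           for a in range(p)]
    Xty = [sum(rows[i][a] * y[i] for i in range(n)) for a in range(p)]
    beta = solve(XtX, Xty)
    yhat = [sum(bc * v for bc, v in zip(beta, r)) for r in rows]
    ybar = sum(y) / n
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

def forward_select(descriptors, y, max_terms=2):
    """Greedily add the descriptor that most improves R^2; stop when none helps."""
    chosen, remaining, best_r2 = [], list(descriptors), 0.0
    while remaining and len(chosen) < max_terms:
        scored = []
        for name in remaining:
            cols = chosen + [name]
            X = [[descriptors[c][i] for c in cols] for i in range(len(y))]
            scored.append((ols_r2(X, y), name))
        r2, name = max(scored)
        if r2 <= best_r2 + 1e-9:
            break
        best_r2, chosen = r2, chosen + [name]
        remaining.remove(name)
    return chosen, best_r2

# Invented descriptor table; retention factors depend only on logP here
descriptors = {
    "logP": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
    "PSA":  [60.0, 55.0, 70.0, 65.0, 80.0, 75.0],
    "MW":   [300.0, 320.0, 310.0, 350.0, 340.0, 360.0],
}
k_ret = [0.2 + 0.5 * v for v in descriptors["logP"]]
chosen, r2 = forward_select(descriptors, k_ret)
```

Because the toy retention factors are an exact linear function of logP, the selector picks logP alone and stops once further descriptors no longer raise R².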

  10. Surrogate analyte approach for quantitation of endogenous NAD(+) in human acidified blood samples using liquid chromatography coupled with electrospray ionization tandem mass spectrometry.

    PubMed

    Liu, Liling; Cui, Zhiyi; Deng, Yuzhong; Dean, Brian; Hop, Cornelis E C A; Liang, Xiaorong

    2016-02-01

    A high-performance liquid chromatography tandem mass spectrometry (LC-MS/MS) assay for the quantitative determination of NAD(+) in human whole blood using a surrogate analyte approach was developed and validated. Human whole blood was acidified using 0.5N perchloric acid at a ratio of 1:3 (v:v, blood:perchloric acid) during sample collection. 25μL of acidified blood was extracted using a protein precipitation method, and the resulting extracts were analyzed using reversed-phase chromatography and positive electrospray ionization mass spectrometry. (13)C5-NAD(+) was used as the surrogate analyte for the authentic analyte, NAD(+). The standard curve, ranging from 0.250 to 25.0μg/mL of (13)C5-NAD(+) in acidified human blood, was fitted to a 1/x(2) weighted linear regression model. The LC-MS/MS response between the surrogate analyte and the authentic analyte at the same concentration was obtained before and after the batch run. This response factor was not applied when determining the NAD(+) concentration from the (13)C5-NAD(+) standard curve, since the percent difference was less than 5%. The precision and accuracy of the LC-MS/MS assay based on the five analytical QC levels were well within the acceptance criteria of both the FDA and EMA guidances for bioanalytical method validation. The average extraction recovery of (13)C5-NAD(+) was 94.6% across the curve range. The matrix factor was 0.99 for both high and low QC samples, indicating minimal ion suppression or enhancement. The validated assay was used to measure the baseline level of NAD(+) in 29 male and 21 female human subjects, and to study the circadian variation of the endogenous level of NAD(+) in 10 human subjects. Copyright © 2015 Elsevier B.V. All rights reserved.
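A 1/x²-weighted linear calibration of the kind described has a simple closed form. The sketch below is an illustrative toy, not the validated assay: the standard concentrations, responses, and QC response are invented, chosen only to span the 0.250-25.0 μg/mL range quoted above.

```python
def weighted_linear_fit(x, y, w):
    """Closed-form weighted least squares for y = a + b*x; returns (a, b)."""
    sw = sum(w)
    swx = sum(wi * xi for wi, xi in zip(w, x))
    swy = sum(wi * yi for wi, yi in zip(w, y))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    b = (sw * swxy - swx * swy) / (sw * swxx - swx ** 2)
    a = (swy - b * swx) / sw
    return a, b

# Invented standards with idealized responses following resp = 0.1 + 2.0 * conc
conc = [0.250, 0.500, 1.00, 2.50, 5.00, 10.0, 25.0]   # ug/mL
resp = [0.1 + 2.0 * c for c in conc]                  # peak-area ratios
weights = [1.0 / c ** 2 for c in conc]                # the 1/x^2 weighting

a, b = weighted_linear_fit(conc, resp, weights)
qc_conc = (5.15 - a) / b    # back-calculate an unknown from its response
```

The 1/x² weighting gives the low standards the same relative influence as the high ones, which is why it is the common choice when the calibration range spans two orders of magnitude.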

  11. The role of decision analytic modeling in the health economic assessment of spinal intervention.

    PubMed

    Edwards, Natalie C; Skelly, Andrea C; Ziewacz, John E; Cahill, Kevin; McGirt, Matthew J

    2014-10-15

    Narrative review. To review the common tenets, strengths, and weaknesses of decision modeling for health economic assessment, and to review the use of decision modeling in the spine literature to date. For the majority of spinal interventions, well-designed prospective, randomized, pragmatic cost-effectiveness studies that address the specific decision-in-need are lacking. Decision analytic modeling allows for the estimation of cost-effectiveness based on the data available to date. Given the rising demand for proven value in spine care, the use of decision analytic modeling by clinicians and policy makers is rapidly increasing. This narrative review discusses the general components of decision analytic models and how they are populated, along with the trade-offs entailed; makes recommendations for how users of spine intervention decision models might appraise them; and presents an overview of published spine economic models. A proper, integrated, clinical and economic critical appraisal is necessary in evaluating the strength of evidence provided by a modeling evaluation. As with clinical research, no option for collecting health economic or value data is without limitations and flaws. There is substantial heterogeneity across the 20 spine intervention health economic modeling studies summarized here with respect to study design, models used, reporting, and general quality, and evidence for populating spine intervention models is sparse. Results mostly showed that interventions were cost-effective based on a $100,000 per quality-adjusted life-year threshold. Spine care providers, as partners with their health economic colleagues, have unique clinical expertise and perspectives that are critical for interpreting the strengths and weaknesses of health economic models. Health economic models must be critically appraised for both clinical validity and economic quality before altering health care policy, payment strategies, or patient care decisions. Level of Evidence: 4.

  12. Validation of a Scalable Solar Sailcraft

    NASA Technical Reports Server (NTRS)

    Murphy, D. M.

    2006-01-01

    The NASA In-Space Propulsion (ISP) program sponsored intensive solar sail technology and systems design, development, and hardware demonstration activities over the past 3 years. Efforts to validate a scalable solar sail system by functional demonstration in relevant environments, together with test-analysis correlation activities, have recently been completed successfully. A review of the program is presented, with descriptions of the design, results of testing, and analytical model validation of component and assembly functional, strength, stiffness, shape, and dynamic behavior. The scaled performance of the validated system is projected to demonstrate applicability to flight demonstration and to important NASA roadmap missions.

  13. The legal and ethical concerns that arise from using complex predictive analytics in health care.

    PubMed

    Cohen, I Glenn; Amarasingham, Ruben; Shah, Anand; Xie, Bin; Lo, Bernard

    2014-07-01

    Predictive analytics, or the use of electronic algorithms to forecast future events in real time, makes it possible to harness the power of big data to improve the health of patients and lower the cost of health care. However, this opportunity raises policy, ethical, and legal challenges. In this article we analyze the major challenges to implementing predictive analytics in health care settings and make broad recommendations for overcoming challenges raised in the four phases of the life cycle of a predictive analytics model: acquiring data to build the model, building and validating it, testing it in real-world settings, and disseminating and using it more broadly. For instance, we recommend that model developers implement governance structures that include patients and other stakeholders starting in the earliest phases of development. In addition, developers should be allowed to use already collected patient data without explicit consent, provided that they comply with federal regulations regarding research on human subjects and the privacy of health information. Project HOPE—The People-to-People Health Foundation, Inc.

  14. Particle contamination effects in EUVL: enhanced theory for the analytical determination of critical particle sizes

    NASA Astrophysics Data System (ADS)

    Brandstetter, Gerd; Govindjee, Sanjay

    2012-03-01

    Existing analytical and numerical methodologies are discussed and then extended in order to calculate critical contamination-particle sizes that will result in deleterious effects during EUVL e-chucking, given an error budget on the image-placement error (IPE). The enhanced analytical models include a gap-dependent clamping-pressure formulation, the consideration of a general material law for realistic particle crushing, and the influence of frictional contact. We present a discussion of the defects of the classical de-coupled modeling approach, in which particle crushing and mask/chuck indentation are separated from the global computation of mask bending. To repair this defect we present a new analytic approach based on an exact Hankel transform method that allows a fully coupled solution, capturing the contribution of the mask indentation to the IPE (estimated IPE increase of 20%). A fully coupled finite element model is used to validate the analytical models and to further investigate the impact of a mask back-side CrN layer. The models are applied to existing experimental data with good agreement. For a standard material combination, a given IPE tolerance of 1 nm and a 15 kPa closing pressure, we derive bounds for single particles of cylindrical shape (radius × height < 44 μm) and spherical shape (diameter < 12 μm).

  15. One-dimensional model and solutions for creeping gas flows in the approximation of uniform pressure

    NASA Astrophysics Data System (ADS)

    Vedernikov, A.; Balapanov, D.

    2016-11-01

    A model, along with analytical and numerical solutions, is presented to describe a wide variety of one-dimensional slow flows of compressible heat-conductive fluids. The model is based on the approximation of uniform pressure, valid for flows in which the sound propagation time is much shorter than the duration of any meaningful density variation in the system. The energy balance is described by the heat equation, which is solved independently. This approach enables an explicit solution for the fluid velocity to be obtained. Interfacial and volumetric heat and mass sources, as well as boundary motion, are considered as possible sources of density variation in the fluid. A set of particular cases is analyzed for different motion sources in planar, axial, and central symmetries in the quasi-stationary limit of heat conduction (i.e., for large Fourier number). The analytical solutions are in excellent agreement with the corresponding numerical solutions of the full system of Navier-Stokes equations. This work deals with an ideal gas, but the approach is also valid for other equations of state.

  16. A buoyancy-based fiber Bragg grating tilt sensor

    NASA Astrophysics Data System (ADS)

    Maheshwari, Muneesh; Yang, Yaowen; Chaturvedi, Tanmay

    2017-04-01

    In this paper, a novel design of fiber Bragg grating (FBG) tilt sensor is proposed. This tilt sensor exhibits high angle sensitivity and resolution. It works on the principle of the force of buoyancy in a liquid and has certain advantages over other tilt sensor designs; for example, the temperature effect can easily be compensated by using an unbonded, or free, FBG. An analytical model is established that correlates the Bragg wavelength (λB) with the angle of inclination. This model is then validated by experiment, and the experimental and analytical results are found to be in good agreement.
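For context, the standard FBG strain-to-wavelength relation that any such model builds on fits in a few lines. The tilt-to-strain link below is a made-up linear placeholder, not the paper's buoyancy-based relation; the Bragg wavelength, photo-elastic coefficient value, and gain are likewise illustrative.

```python
import math

P_E = 0.22   # typical effective photo-elastic coefficient of silica fiber

def bragg_shift_nm(lambda_b_nm, strain):
    """Standard FBG relation: shift = lambda_B * (1 - p_e) * strain."""
    return lambda_b_nm * (1.0 - P_E) * strain

def strain_from_tilt(angle_rad, gain=1.0e-4):
    """Hypothetical placeholder: strain assumed proportional to sin(tilt)."""
    return gain * math.sin(angle_rad)

shift_10deg = bragg_shift_nm(1550.0, strain_from_tilt(math.radians(10.0)))
shift_30deg = bragg_shift_nm(1550.0, strain_from_tilt(math.radians(30.0)))
```

Any real calibration would replace `strain_from_tilt` with the sensor's actual force-balance model and measured gain.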

  17. Electronic cooling design and test validation

    NASA Astrophysics Data System (ADS)

    Murtha, W. B.

    1983-07-01

    An analytical computer model has been used to design a counterflow air-cooled heat exchanger according to the cooling, structural and geometric requirements of a U.S. Navy shipboard electronics cabinet, emphasizing high reliability performance through the maintenance of electronic component junction temperatures lower than 110 C. Environmental testing of the design obtained has verified that the analytical predictions were conservative. Model correlation to the test data furnishes an upgraded capability for the evaluation of tactical effects, and has established a two-orders of magnitude growth potential for increased electronics capabilities through enhanced heat dissipation. Electronics cabinets of this type are destined for use with Vertical Launching System-type combatant vessel magazines.
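The 110 C junction-temperature target mentioned above amounts, at its simplest, to a series thermal-resistance budget from coolant air to component junction. A back-of-envelope sketch with invented resistances and power, not the actual heat-exchanger model:

```python
def junction_temp(t_air_c, power_w, r_hx, r_case, r_junction):
    """Junction temperature via a series thermal-resistance chain.

    t_air_c: coolant air temperature (C); power_w: dissipated power (W);
    r_*: thermal resistances (C/W) for heat exchanger, case, and junction.
    """
    return t_air_c + power_w * (r_hx + r_case + r_junction)

# Invented illustrative values for one component path
tj = junction_temp(t_air_c=40.0, power_w=15.0,
                   r_hx=1.2, r_case=0.8, r_junction=2.0)
meets_target = tj < 110.0   # the reliability target cited in the abstract
```

Halving any resistance in the chain directly buys either margin on the 110 C limit or headroom for more dissipated power, which is the "growth potential" trade described above.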

  18. A new model for fluid velocity slip on a solid surface.

    PubMed

    Shu, Jian-Jun; Teo, Ji Bin Melvin; Chan, Weng Kong

    2016-10-12

    A general adsorption model is developed to describe the interactions between near-wall fluid molecules and solid surfaces. This model serves as a framework for the theoretical modelling of boundary slip phenomena. Based on this adsorption model, a new general model for the slip velocity of fluids on solid surfaces is introduced. The slip boundary condition at a fluid-solid interface has hitherto been considered separately for gases and liquids. In this paper, we show that the slip velocity in both gases and liquids may originate from dynamical adsorption processes at the interface. A unified analytical model that is valid for both gas-solid and liquid-solid slip boundary conditions is proposed based on surface science theory. The corroboration with the experimental data extracted from the literature shows that the proposed model provides an improved prediction compared to existing analytical models for gases at higher shear rates and close agreement for liquid-solid interfaces in general.

  19. IT vendor selection model by using structural equation model & analytical hierarchy process

    NASA Astrophysics Data System (ADS)

    Maitra, Sarit; Dominic, P. D. D.

    2012-11-01

    Selecting and evaluating the right vendors is imperative for an organization's competitiveness in the global marketplace. Improper selection and evaluation of potential vendors can impair an organization's supply chain performance. Numerous studies have demonstrated that firms consider multiple criteria when selecting key vendors. This research develops a new hybrid model for the vendor selection process to support better decision making. The proposed model provides a suitable tool for assisting decision makers and managers in making the right decisions and selecting the most suitable vendor. This paper proposes a hybrid model based on the Structural Equation Model (SEM) and the Analytical Hierarchy Process (AHP) for long-term strategic vendor selection problems. The five-step framework of the model was designed after a thorough literature study. The proposed hybrid model will be applied to a real-life case study to assess its effectiveness. In addition, the what-if analysis technique will be used for model validation.
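The AHP step of such a hybrid model reduces to extracting priority weights from a pairwise comparison matrix. A minimal sketch with invented criteria and judgments: power iteration approximates the principal eigenvector, and Saaty's consistency ratio checks that the judgments are coherent enough to use.

```python
def ahp_weights(M, iters=200):
    """Priority weights as the principal eigenvector of M, by power iteration."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [vi / s for vi in v]       # renormalize so weights sum to 1
    return w

def consistency_ratio(M, w):
    """Saaty consistency ratio CR = CI / RI (acceptable when CR < 0.1)."""
    n = len(M)
    lam = sum(sum(M[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    random_index = {3: 0.58, 4: 0.90, 5: 1.12}   # Saaty's tabulated RI values
    return ci / random_index[n]

# Invented pairwise judgments (Saaty 1-9 scale) for cost, quality, delivery
M = [[1.0,       3.0,       5.0],
     [1.0 / 3.0, 1.0,       3.0],
     [1.0 / 5.0, 1.0 / 3.0, 1.0]]
w = ahp_weights(M)
cr = consistency_ratio(M, w)
```

For these judgments, cost dominates, followed by quality and then delivery, and the consistency ratio falls below the usual 0.1 threshold.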

  20. Developing an analytical tool for evaluating EMS system design changes and their impact on cardiac arrest outcomes: combining geographic information systems with register data on survival rates

    PubMed Central

    2013-01-01

    Background Out-of-hospital cardiac arrest (OHCA) is a frequent and acute medical condition that requires immediate care. We estimate survival rates from OHCA in the area of Stockholm by developing an analytical tool for evaluating Emergency Medical Services (EMS) system design changes. The study is also an attempt to validate the proposed model used to generate the outcome measures for the study. Methods and results This was done by combining a geographic information systems (GIS) simulation of driving times with register data on survival rates. The emergency resources comprised ambulance alone and ambulance plus fire services. The simulation model predicted a baseline survival rate of 3.9 per cent; reducing the ambulance response time by one minute increased survival to 4.6 per cent. Adding the fire services as first responders (dual dispatch) increased survival to 6.2 per cent from the baseline level. The model predictions were validated using empirical data. Conclusion We have presented an analytical tool that can easily be generalized to other regions or countries. The model can be used to predict outcomes of cardiac arrest prior to investment in EMS design changes that affect the alarm process, e.g., (1) static changes such as trimming the emergency call handling time, or (2) dynamic changes such as the location of emergency resources or which resources should carry a defibrillator. PMID:23415045
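The core calculation such a tool performs can be caricatured in a few lines: push simulated response times through an assumed survival-versus-response-time curve and compare scenarios. Every number below is invented; the real model calibrates this curve against register data.

```python
import math

def survival_prob(response_min, s0=0.45, decay=0.25):
    """Assumed exponential decline of OHCA survival with response time (min)."""
    return s0 * math.exp(-decay * response_min)

def mean_survival(response_times):
    """Average predicted survival over a set of simulated incidents."""
    return sum(survival_prob(t) for t in response_times) / len(response_times)

# Invented GIS-simulated driving times (minutes) to incident locations
times = [4.0, 6.0, 8.0, 10.0, 12.0]
baseline = mean_survival(times)
faster = mean_survival([t - 1.0 for t in times])   # one-minute improvement
```

Comparing `baseline` and `faster` is the analogue of the 3.9 versus 4.6 per cent comparison in the abstract; a dual-dispatch scenario would instead replace `times` with the minimum of ambulance and fire-service driving times per incident.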

  1. Analytical and experimental comparisons of electromechanical vibration response of a piezoelectric bimorph beam for power harvesting

    NASA Astrophysics Data System (ADS)

    Lumentut, M. F.; Howard, I. M.

    2013-03-01

    Power harvesters that extract energy from vibrating systems via piezoelectric transduction show strong potential for powering smart wireless sensor devices in applications of health condition monitoring of rotating machinery and structures. This paper presents an analytical method for modelling an electromechanical piezoelectric bimorph beam with tip mass under two input base transverse and longitudinal excitations. The Euler-Bernoulli beam equations were used to model the piezoelectric bimorph beam. The polarity-electric field of the piezoelectric element is excited by the strain field caused by base input excitation, resulting in electrical charge. The governing electromechanical dynamic equations were derived analytically using the weak form of the Hamiltonian principle to obtain the constitutive equations. Three constitutive electromechanical dynamic equations based on independent coefficients of virtual displacement vectors were formulated and then further modelled using the normalised Ritz eigenfunction series. The electromechanical formulations include both the series and parallel connections of the piezoelectric bimorph. The multi-mode frequency response functions (FRFs) under varying electrical load resistance were formulated using Laplace transformation for the multi-input mechanical vibrations to provide the multi-output dynamic displacement, velocity, voltage, current and power. The experimental and theoretical validations reduced for the single mode system were shown to provide reasonable predictions. The model results from polar base excitation for off-axis input motions were validated with experimental results showing the change to the electrical power frequency response amplitude as a function of excitation angle, with relevance for practical implementation.

  2. Developing semi-analytical solution for multiple-zone transient storage model with spatially non-uniform storage

    NASA Astrophysics Data System (ADS)

    Deng, Baoqing; Si, Yinbing; Wang, Jia

    2017-12-01

    Transient storage may vary along a stream due to stream hydraulic conditions and the characteristics of the storage zones. Analytical solutions of transient storage models in the literature have not covered spatially non-uniform storage. A novel integral transform strategy is presented that simultaneously performs integral transforms on the concentrations in the stream and in the storage zones, using a single set of eigenfunctions derived from the advection-diffusion equation of the stream. The semi-analytical solution of the multiple-zone transient storage model with spatially non-uniform storage is obtained by applying the generalized integral transform technique to all partial differential equations in the model. The derived semi-analytical solution is validated against field data from the literature, with good agreement between the computed and field data. Some illustrative examples are formulated to demonstrate applications of the present solution. It is shown that solute transport can be greatly affected by variation of the mass exchange coefficient and the ratio of cross-sectional areas. When the ratio of cross-sectional areas is large or the mass exchange coefficient is small, more reaches are recommended for calibrating the parameters.

  3. Transition to synchrony in degree-frequency correlated Sakaguchi-Kuramoto model

    NASA Astrophysics Data System (ADS)

    Kundu, Prosenjit; Khanra, Pitambar; Hens, Chittaranjan; Pal, Pinaki

    2017-11-01

    We investigate the transition to synchrony in the degree-frequency correlated Sakaguchi-Kuramoto (SK) model on complex networks, both analytically and numerically. We analytically derive self-consistent equations for the group angular velocity and the order parameter for the model in the thermodynamic limit. Using the self-consistent equations we investigate the transition to synchronization in the SK model on uncorrelated scale-free (SF) and Erdős-Rényi (ER) networks in detail. Depending on the degree-distribution exponent (γ) of the SF networks and the phase-frustration parameter, the population passes from a first-order transition [explosive synchronization (ES)] to a second-order transition and vice versa. In ER networks the transition is always second order, irrespective of the value of the phase-lag parameter. We observe that the critical coupling strength for the onset of synchronization is decreased by the phase-frustration parameter in the case of SF networks, whereas in ER networks the phase frustration delays the onset of synchronization. Extensive numerical simulations using SF and ER networks are performed to validate the analytical results. An analytical expression for the critical coupling strength for the onset of synchronization is also derived from the self-consistent equations by considering the vanishing order parameter limit.
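The SK dynamics described above are straightforward to integrate numerically. The following is a small illustrative sketch only: mean-field (all-to-all) coupling on a handful of oscillators, with natural frequencies tied to assigned "degrees" to mimic the degree-frequency correlation; all parameter values are invented.

```python
import cmath
import math

def simulate_sk(K, alpha, k, steps=3000, dt=0.005):
    """Euler-integrate theta_i' = omega_i + (K/N) sum_j sin(theta_j - theta_i - alpha)."""
    n = len(k)
    kbar = sum(k) / n
    omega = [ki - kbar for ki in k]          # degree-frequency correlation
    theta = [2.0 * math.pi * i / n for i in range(n)]   # splay initial state
    for _ in range(steps):
        theta = [theta[i] + dt * (omega[i] + (K / n) * sum(
                     math.sin(theta[j] - theta[i] - alpha) for j in range(n)))
                 for i in range(n)]
    # Kuramoto order parameter r = |(1/N) sum_j exp(i theta_j)|
    return abs(sum(cmath.exp(1j * t) for t in theta)) / n

degrees = [2, 3, 3, 4, 4, 5]                 # toy degree assignments
r_locked = simulate_sk(K=5.0, alpha=0.2, k=degrees)
r_drift = simulate_sk(K=0.2, alpha=0.2, k=degrees)
```

Above the critical coupling the order parameter settles near 1 (phase locking); well below it the oscillators drift and r stays small and fluctuating. On an actual SF or ER network the mean-field sum would be replaced by a sum over each node's neighbors.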

  4. In-to-Out Body Antenna-Independent Path Loss Model for Multilayered Tissues and Heterogeneous Medium

    PubMed Central

    Kurup, Divya; Vermeeren, Günter; Tanghe, Emmeric; Joseph, Wout; Martens, Luc

    2015-01-01

    In this paper, we investigate multilayered lossy and heterogeneous media for wireless body area networks (WBAN) to develop a simple, fast and efficient analytical in-to-out body path loss (PL) model at 2.45 GHz and, thus, avoid time-consuming simulations. The PL model is an antenna-independent model and is validated with simulations in layered medium, as well as in a 3D human model using electromagnetic solvers. PMID:25551483

  5. An analytical model for light backscattering by coccoliths and coccospheres of Emiliania huxleyi.

    PubMed

    Fournier, Georges; Neukermans, Griet

    2017-06-26

    We present an analytical model for light backscattering by coccoliths and coccolithophores of the marine calcifying phytoplankter Emiliania huxleyi. The model is based on the separation of the effects of diffraction, refraction, and reflection on scattering, a valid assumption for particle sizes typical of coccoliths and coccolithophores. Our model results match closely with results from an exact scattering code that uses complex particle geometry and our model also mimics well abrupt transitions in scattering magnitude. Finally, we apply our model to predict changes in the spectral backscattering coefficient during an Emiliania huxleyi bloom with results that closely match in situ measurements. Because our model captures the key features that control the light backscattering process, it can be generalized to coccoliths and coccolithophores of different morphologies which can be obtained from size-calibrated electron microphotographs. Matlab codes of this model are provided as supplementary material.

  6. Analytic Modeling of Pressurization and Cryogenic Propellant

    NASA Technical Reports Server (NTRS)

    Corpening, Jeremy H.

    2010-01-01

    An analytic model for pressurization and cryogenic propellant conditions during all mission phases of any liquid rocket based vehicle has been developed and validated. The model assumes the propellant tanks to be divided into five nodes and also implements an empirical correlation for liquid stratification if desired. The five nodes include a tank wall node exposed to ullage gas, an ullage gas node, a saturated propellant vapor node at the liquid-vapor interface, a liquid node, and a tank wall node exposed to liquid. The conservation equations of mass and energy are then applied across all the node boundaries and, with the use of perfect gas assumptions, explicit solutions for ullage and liquid conditions are derived. All fluid properties are updated in real time using NIST REFPROP. Further, mass transfer at the liquid-vapor interface is included in the form of evaporation, bulk boiling of liquid propellant, and condensation, given the appropriate conditions for each. Model validation has proven highly successful against previous analytic models and various Saturn-era test data, and reasonably successful against more recent LH2 tank self-pressurization ground test data. Finally, this model has been applied to numerous design iterations for the Altair Lunar Lander, Ares V Core Stage, and Ares V Earth Departure Stage in order to characterize helium and autogenous pressurant requirements, propellant lost to evaporation and thermodynamic venting to maintain propellant conditions, and non-uniform tank draining in configurations utilizing multiple LH2 or LO2 propellant tanks. In conclusion, this model provides an accurate and efficient means of analyzing multiple design configurations for any cryogenic propellant tank in launch, low-acceleration coast, or in-space maneuvering and supplies the user with pressurization requirements, unusable propellants from evaporation and liquid stratification, and general ullage gas, liquid, and tank wall conditions as functions of time.
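A drastically reduced single-node version of the ullage bookkeeping above: ideal-gas helium at an assumed uniform temperature, with the pressurant mass computed from the ideal gas law as the ullage volume grows during drain. All numbers are invented; the five-node model adds wall, interface, and liquid energy balances that this sketch omits.

```python
R_HE = 2077.0   # J/(kg*K), specific gas constant of helium

def helium_mass(p_pa, v_m3, t_k):
    """Ideal-gas mass occupying the ullage: m = P V / (R T)."""
    return p_pa * v_m3 / (R_HE * t_k)

p_tank = 230.0e3                 # Pa, regulated ullage pressure (invented)
t_ullage = 250.0                 # K, assumed-uniform ullage temperature
v_initial, v_final = 1.0, 8.0    # m^3, ullage volume before/after drain

m_initial = helium_mass(p_tank, v_initial, t_ullage)
m_required = helium_mass(p_tank, v_final, t_ullage) - m_initial
```

Because pressure and temperature are held fixed here, the pressurant requirement simply scales with the drained volume; the real model relaxes both assumptions through the node energy balances.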

  7. Comparison of modeled backscatter with SAR data at P-band

    NASA Technical Reports Server (NTRS)

    Wang, Yong; Davis, Frank W.; Melack, John M.

    1992-01-01

    In recent years several analytical models have been developed to predict microwave scattering by trees and forest canopies. These models contribute to the understanding of radar backscatter over forested regions to the extent that they capture the basic interactions between microwave radiation and tree canopies, understories, and ground layers as functions of incidence angle, wavelength, and polarization. The Santa Barbara microwave backscatter model for woodland (i.e., discontinuous tree canopies) combines a single-tree backscatter model and a gap probability model. Comparison of model predictions with synthetic aperture radar (SAR) data at L-band (lambda = 0.235 m) is promising, but much work is still needed to test the validity of model predictions at other wavelengths. Here, the validity of the model predictions at P-band (lambda = 0.68 m) was tested for woodland stands at our Mt. Shasta test site.

  8. In-orbit evaluation of the control system/structural mode interactions of the OSO-8 spacecraft

    NASA Technical Reports Server (NTRS)

    Slafer, L. I.

    1979-01-01

    The Orbiting Solar Observatory-8 experienced severe structural-mode/control-loop interaction problems during spacecraft development. Extensive analytical studies, using the hybrid coordinate modeling approach, and comprehensive ground testing were carried out in order to achieve the system's precision pointing performance requirements. A recent series of flight tests was conducted in which a wide-bandwidth, high-resolution telemetry system was utilized to evaluate the on-orbit flexible dynamics characteristics of the vehicle along with the control system performance. The paper describes the results of these tests, reviewing the basic design problem, the analytical approach taken, the ground test philosophy, and the on-orbit testing. Data from the tests were used to determine the primary mode frequency, damping, and servo coupling dynamics for the on-orbit condition. Additionally, the test results have verified analytically predicted differences between the on-orbit and ground test environments and have led to a validation of both the analytical modeling and the servo design techniques used during development of the control system.

  9. Tidally induced residual current over the Malin Sea continental slope

    NASA Astrophysics Data System (ADS)

    Stashchuk, Nataliya; Vlasenko, Vasiliy; Hosegood, Phil; Nimmo-Smith, W. Alex M.

    2017-05-01

    Tidally induced residual currents generated over shelf-slope topography are investigated analytically and numerically using the Massachusetts Institute of Technology general circulation model. Observational support for the presence of such a slope current was recorded over the Malin Sea continental slope during the 88th cruise of the RRS James Cook in July 2013. A simple analytical formula developed here in the framework of time-averaged shallow water equations has been validated against a fully nonlinear nonhydrostatic numerical solution. Good agreement between the analytical and numerical solutions is found for a wide range of input parameters of the tidal flow and bottom topography. In application to the Malin Shelf area, both the numerical model and the analytical solution predicted a northward-moving current confined to the slope, with its core located above the 400 m isobath and with vertically averaged maximum velocities of up to 8 cm s-1, consistent with the in-situ data recorded at three moorings and along cross-slope transects.

  10. Correlating N2 and CH4 adsorption on microporous carbon using a new analytical model

    USGS Publications Warehouse

    Sun, Jielun; Chen, S.; Rood, M.J.; Rostam-Abadi, M.

    1998-01-01

    A new pore size distribution (PSD) model is developed to readily describe PSDs of microporous materials with an analytical expression. Results from this model can be used to calculate the corresponding adsorption isotherm to compare the calculated isotherm to the experimental isotherm. This aspect of the model provides another check on the validity of the model's results. The model is developed on the basis of a 3-D adsorption isotherm equation that is derived from statistical mechanical principles. Least-squares error minimization is used to solve the PSD without any preassumed distribution function. In comparison with several well-accepted analytical methods from the literature, this 3-D model offers a relatively realistic PSD description for select reference materials, including activated-carbon fibers. N2 and CH4 adsorption is correlated using the 3-D model for commercial carbons BPL and AX-21. Predicted CH4 adsorption isotherms at 296 K based on N2 adsorption at 77 K are in reasonable agreement with experimental CH4 isotherms. Use of the model is also described for characterizing PSDs of tire-derived activated carbons and coal-derived activated carbons for air-quality control applications.
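The inversion idea above, representing the measured isotherm as a non-negative combination of single-pore isotherms and solving for the weights without a preassumed distribution, can be sketched with projected gradient descent. The kernel below is a crude saturating placeholder, not the paper's statistical-mechanical 3-D kernel, and all data are synthetic.

```python
def kernel(p, w_pore):
    """Toy single-pore uptake: smaller pores fill at lower relative pressure."""
    return p / (p + 0.05 * w_pore)

def fit_psd(pressures, uptake, pore_widths, steps=5000, lr=0.01):
    """Non-negative pore-volume weights via projected gradient descent."""
    n, m = len(pore_widths), len(pressures)
    A = [[kernel(p, w) for w in pore_widths] for p in pressures]

    def ssr(x):  # sum of squared residuals between model and 'measured' uptake
        return sum((sum(A[i][j] * x[j] for j in range(n)) - uptake[i]) ** 2
                   for i in range(m))

    x = [1.0 / n] * n
    ssr_start = ssr(x)
    for _ in range(steps):
        resid = [sum(A[i][j] * x[j] for j in range(n)) - uptake[i]
                 for i in range(m)]
        grad = [2.0 * sum(A[i][j] * resid[i] for i in range(m))
                for j in range(n)]
        # Gradient step, then clip to the non-negative orthant
        x = [max(0.0, xj - lr * g) for xj, g in zip(x, grad)]
    return x, ssr_start, ssr(x)

# Synthetic 'measured' isotherm generated from a known pore-volume mix
pressures = [0.01, 0.05, 0.1, 0.2, 0.4, 0.8]
pore_widths = [0.5, 1.0, 2.0]                  # nm, hypothetical bins
true_w = [0.7, 0.3, 0.0]
uptake = [sum(kernel(p, w) * t for w, t in zip(pore_widths, true_w))
          for p in pressures]
weights, ssr_before, ssr_after = fit_psd(pressures, uptake, pore_widths)
```

Recovering the PSD and then regenerating the isotherm from it, as the abstract describes, gives the same self-consistency check: the refit residual should be small when the kernel is adequate.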

  11. Optic nerve signals in a neuromorphic chip I: Outer and inner retina models.

    PubMed

    Zaghloul, Kareem A; Boahen, Kwabena

    2004-04-01

    We present a novel model for the mammalian retina and analyze its behavior. Our outer retina model performs bandpass spatiotemporal filtering. It comprises two reciprocally connected resistive grids that model the cone and horizontal cell syncytia. We show analytically that its sensitivity is proportional to the ratio of the space constants of the two grids, while its half-max response is set by the local average intensity. Thus, this outer retina model realizes luminance adaptation. Our inner retina model performs high-pass temporal filtering. It features slow negative feedback whose strength is modulated by a locally computed measure of temporal contrast, modeling two kinds of amacrine cells: one narrow-field, the other wide-field. We show analytically that, when the input is spectrally pure, the corner frequency tracks the input frequency, but when the input is broadband, the corner frequency is proportional to contrast. Thus, this inner retina model realizes temporal frequency adaptation as well as contrast gain control. We also present CMOS circuit designs for our retina model in this paper. Experimental measurements from the fabricated chip, and validation of our analytical results, are presented in the companion paper [Zaghloul and Boahen (2004)].

  12. Predictive QSAR modeling workflow, model applicability domains, and virtual screening.

    PubMed

    Tropsha, Alexander; Golbraikh, Alexander

    2007-01-01

    Quantitative Structure-Activity Relationship (QSAR) modeling has traditionally been applied as an evaluative approach, i.e., with the focus on developing retrospective and explanatory models of existing data. Model extrapolation was considered, if at all, only in a hypothetical sense, in terms of potential modifications of known biologically active chemicals that could improve compounds' activity. This critical review re-examines the strategy and the output of modern QSAR modeling approaches. We provide examples and arguments suggesting that current methodologies may afford robust and validated models capable of accurate prediction of compound properties for molecules not included in the training sets. We discuss a data-analytical modeling workflow developed in our laboratory that incorporates modules for combinatorial QSAR model development (i.e., using all possible binary combinations of available descriptor sets and statistical data modeling techniques), rigorous model validation, and virtual screening of available chemical databases to identify novel biologically active compounds. Our approach places particular emphasis on model validation as well as the need to define model applicability domains in the chemistry space. We present examples of studies where the application of rigorously validated QSAR models to virtual screening identified computational hits that were confirmed by subsequent experimental investigations. The emerging focus of QSAR modeling on target property forecasting brings it forward as a predictive, as opposed to evaluative, modeling approach.

  13. Implementation of a state-to-state analytical framework for the calculation of expansion tube flow properties

    NASA Astrophysics Data System (ADS)

    James, C. M.; Gildfind, D. E.; Lewis, S. W.; Morgan, R. G.; Zander, F.

    2018-03-01

    Expansion tubes are an important type of test facility for the study of planetary entry flow-fields, being the only type of impulse facility capable of simulating the aerothermodynamics of superorbital planetary entry conditions from 10 to 20 km/s. However, the complex flow processes involved in expansion tube operation make it difficult to fully characterise flow conditions, with two-dimensional full facility computational fluid dynamics simulations often requiring tens or hundreds of thousands of computational hours to complete. In an attempt to simplify this problem and provide a rapid flow condition prediction tool, this paper presents a validated and comprehensive analytical framework for the simulation of an expansion tube facility. It identifies central flow processes and models them from state to state through the facility using established compressible and isentropic flow relations, and equilibrium and frozen chemistry. How the model simulates each section of an expansion tube is discussed, as well as how the model can be used to simulate situations where flow conditions diverge from ideal theory. The model is then validated against experimental data from the X2 expansion tube at the University of Queensland.
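    The "established compressible flow relations" that such a state-to-state framework chains together can be illustrated with the ideal-gas normal-shock jump relations; this is the frozen-chemistry, calorically perfect gas case only, whereas the actual facility model also handles equilibrium chemistry:

```python
def normal_shock(M1, gamma=1.4):
    """Ideal-gas normal-shock jump relations: returns the pressure ratio
    p2/p1, density ratio rho2/rho1, and post-shock Mach number M2 for an
    upstream Mach number M1 (calorically perfect gas, frozen chemistry)."""
    p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (M1 ** 2 - 1.0)
    rho_ratio = ((gamma + 1.0) * M1 ** 2) / ((gamma - 1.0) * M1 ** 2 + 2.0)
    M2 = (((gamma - 1.0) * M1 ** 2 + 2.0)
          / (2.0 * gamma * M1 ** 2 - (gamma - 1.0))) ** 0.5
    return p_ratio, rho_ratio, M2

p2_p1, r2_r1, M2 = normal_shock(2.0)   # shock at Mach 2 in air
```

    A state-to-state code applies blocks like this repeatedly, section by section, together with isentropic expansion relations, which is why it runs in seconds rather than the thousands of CPU hours of a full-facility CFD simulation.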

  14. Liquid Oxygen/Liquid Methane Integrated Propulsion System Test Bed

    NASA Technical Reports Server (NTRS)

    Flynn, Howard; Lusby, Brian; Villemarette, Mark

    2011-01-01

    In support of NASA's Propulsion and Cryogenic Advanced Development (PCAD) project, a liquid oxygen (LO2)/liquid methane (LCH4) Integrated Propulsion System Test Bed (IPSTB) was designed and advanced to the Critical Design Review (CDR) stage at the Johnson Space Center. The IPSTB's primary objectives are to study LO2/LCH4 propulsion system steady-state and transient performance and operational characteristics, and to validate fluid and thermal models of a LO2/LCH4 propulsion system for use in future flight design work. Two-phase thermal and dynamic fluid flow models of the IPSTB were built to predict the system performance characteristics under a variety of operating modes and to aid in the overall system design work. While at ambient temperature and simulated altitude conditions at the White Sands Test Facility, the IPSTB and its approximately 600 channels of system instrumentation would be operated to perform a variety of integrated main engine and reaction control engine hot fire tests. The pressure, temperature, and flow rate data collected during this testing would then be used to validate the analytical models of the IPSTB's thermal and dynamic fluid flow performance. An overview of the IPSTB design and analytical model development will be presented.

  15. Modeling ventilation time in forage tower silos.

    PubMed

    Bahloul, A; Chavez, M; Reggio, M; Roberge, B; Goyer, N

    2012-10-01

    The fermentation process in forage tower silos produces a significant amount of gases, which can easily reach dangerous concentrations and constitute a hazard for silo operators. To maintain a non-toxic environment, silo ventilation is applied. Literature reviews show that the fermentation gases reach high concentrations in the headspace of a silo and flow down the silo from the chute door to the feed room. In this article, a detailed parametric analysis of forced ventilation scenarios built via numerical simulation was performed. The methodology is based on the solution of the Navier-Stokes equations, coupled with transport equations for the gas concentrations. Validation was achieved by comparing the numerical results with experimental data obtained from a scale model silo using the tracer gas testing method for O2 and CO2 concentrations. Good agreement was found between the experimental and numerical results. The set of numerical simulations made it possible to establish a simple analytical model to predict the minimum time required to ventilate a silo to make it safe to enter. This ventilation time takes into account the headspace above the forage, the airflow rate, and the initial concentrations of O2 and CO2. The final analytical model was validated with available results from the literature.
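    A minimal version of such an analytical ventilation-time estimate, assuming a perfectly mixed headspace at a constant airflow rate (an illustrative textbook dilution model, not the paper's CFD-derived correlation), follows the standard dilution equation t = (V/Q) ln((C0 − Cin)/(Csafe − Cin)):

```python
import math

def ventilation_time(headspace_m3, airflow_m3_per_min, c0, c_safe, c_supply=0.0):
    """Minimum time (minutes) to dilute a contaminant from concentration c0
    down to c_safe, assuming a perfectly mixed headspace of volume
    headspace_m3 ventilated at airflow_m3_per_min with supply air at
    concentration c_supply. Illustrative well-mixed model only."""
    if c0 <= c_safe:
        return 0.0
    return (headspace_m3 / airflow_m3_per_min) * math.log(
        (c0 - c_supply) / (c_safe - c_supply))

# Example: 30 m^3 headspace, 10 m^3/min fan, dilute CO2 from 5% to 0.5%
t = ventilation_time(30.0, 10.0, 0.05, 0.005)
```

    The paper's model plays the same role (time as a function of headspace, airflow rate, and initial O2/CO2 concentrations) but is calibrated against the scale-model tracer-gas experiments rather than assuming perfect mixing.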

  16. Optical eye simulator for laser dazzle events.

    PubMed

    Coelho, João M P; Freitas, José; Williamson, Craig A

    2016-03-20

    An optical simulator of the human eye and its application to laser dazzle events are presented. The simulator combines optical design software (ZEMAX) with a scientific programming language (MATLAB) and allows the user to implement and analyze a dazzle scenario using practical, real-world parameters. Contrary to conventional analytical glare analysis, this work uses ray tracing and the scattering model and parameters for each optical element of the eye. The theoretical background of each such element is presented in relation to the model. The overall simulator's calibration, validation, and performance analysis are achieved by comparison with a simpler model based upon CIE disability glare data. Results demonstrate that this kind of advanced optical eye simulation can be used to represent laser dazzle and has the potential to extend the range of applicability of analytical models.
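    For context, the conventional analytical glare treatment the simulator is compared against can be sketched with the classical Stiles-Holladay approximation to CIE disability glare (the full CIE general disability glare equation adds age and eye-pigmentation terms):

```python
def veiling_luminance(e_glare_lux, theta_deg):
    """Equivalent veiling luminance (cd/m^2) produced by a glare source
    giving illuminance e_glare_lux at the eye, theta_deg degrees off the
    line of sight, per the classical Stiles-Holladay approximation
    L_v = 10 * E / theta^2 (valid roughly for 1 to 30 degrees)."""
    return 10.0 * e_glare_lux / theta_deg ** 2

lv = veiling_luminance(100.0, 2.0)   # 100 lx glare source, 2 deg off-axis
```

    Ray tracing with per-element scattering, as in the paper, replaces this single empirical formula with a physical model of where the scattered light actually lands on the retina.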

  17. Prediction of In-hospital Mortality in Emergency Department Patients With Sepsis: A Local Big Data-Driven, Machine Learning Approach.

    PubMed

    Taylor, R Andrew; Pare, Joseph R; Venkatesh, Arjun K; Mowafi, Hani; Melnick, Edward R; Fleischman, William; Hall, M Kennedy

    2016-03-01

    Predictive analytics in emergency care has mostly been limited to the use of clinical decision rules (CDRs) in the form of simple heuristics and scoring systems. In the development of CDRs, limitations in analytic methods and concerns with usability have generally constrained models to a preselected small set of variables judged to be clinically relevant and to rules that are easily calculated. Furthermore, CDRs frequently suffer from questions of generalizability, take years to develop, and lack the ability to be updated as new information becomes available. Newer analytic and machine learning techniques capable of harnessing the large number of variables that are already available through electronic health records (EHRs) may better predict patient outcomes and facilitate automation and deployment within clinical decision support systems. In this proof-of-concept study, a local, big data-driven, machine learning approach is compared to existing CDRs and traditional analytic methods using the prediction of sepsis in-hospital mortality as the use case. This was a retrospective study of adult ED visits admitted to the hospital meeting criteria for sepsis from October 2013 to October 2014. Sepsis was defined as meeting criteria for systemic inflammatory response syndrome with an infectious admitting diagnosis in the ED. ED visits were randomly partitioned into an 80%/20% split for training and validation. A random forest model (machine learning approach) was constructed using over 500 clinical variables from data available within the EHRs of four hospitals to predict in-hospital mortality. The machine learning prediction model was then compared to a classification and regression tree (CART) model, logistic regression model, and previously developed prediction tools on the validation data set using area under the receiver operating characteristic curve (AUC) and chi-square statistics. There were 5,278 visits among 4,676 unique patients who met criteria for sepsis. 
Of the 4,222 patients in the training group, 210 (5.0%) died during hospitalization, and of the 1,056 patients in the validation group, 50 (4.7%) died during hospitalization. The AUCs with 95% confidence intervals (CIs) for the different models were as follows: random forest model, 0.86 (95% CI = 0.82 to 0.90); CART model, 0.69 (95% CI = 0.62 to 0.77); logistic regression model, 0.76 (95% CI = 0.69 to 0.82); CURB-65, 0.73 (95% CI = 0.67 to 0.80); MEDS, 0.71 (95% CI = 0.63 to 0.77); and mREMS, 0.72 (95% CI = 0.65 to 0.79). The random forest model AUC was statistically different from all other models (p ≤ 0.003 for all comparisons). In this proof-of-concept study, a local big data-driven, machine learning approach outperformed existing CDRs as well as traditional analytic techniques for predicting in-hospital mortality of ED patients with sepsis. Future research should prospectively evaluate the effectiveness of this approach and whether it translates into improved clinical outcomes for high-risk sepsis patients. The methods developed serve as an example of a new model for predictive analytics in emergency care that can be automated, applied to other clinical outcomes of interest, and deployed in EHRs to enable locally relevant clinical predictions. © 2015 by the Society for Academic Emergency Medicine.
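    The AUC values compared above are, in principle, computable from any model's validation-set scores; a minimal rank-based (Mann-Whitney) AUC computation, illustrative of the metric itself rather than of the study's pipeline, looks like this:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney identity: the
    probability that a randomly chosen positive case outranks a randomly
    chosen negative case, counting ties as one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy validation set: 1 = died in hospital, scores are predicted risk.
labels = [1, 1, 0, 0, 0]
scores = [0.9, 0.4, 0.4, 0.2, 0.1]
a = auc(labels, scores)
```

    An AUC of 0.86, as reported for the random forest, means a randomly selected patient who died was assigned a higher risk score than a randomly selected survivor 86% of the time.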

  19. The lunar libration: comparisons between various models - a model fitted to LLR observations

    NASA Astrophysics Data System (ADS)

    Chapront, J.; Francou, G.

    2005-09-01

    We consider 4 libration models: 3 numerical models built by JPL (ephemerides for the libration in DE245, DE403, and DE405) and an analytical model improved with numerical complements fitted to recent LLR observations. The analytical solution uses 3 angular variables (ρ1, ρ2, τ) which represent the deviations with respect to Cassini's laws. After having referred the models to a unique reference frame, we study the differences between the models, which depend on the gravitational and tidal parameters of the Moon, as well as the amplitudes and frequencies of the free librations. It appears that the differences vary widely depending on the above quantities. They correspond to a few meters' displacement on the lunar surface, whereas LLR distances are precise at the centimeter level. Taking advantage of the lunar libration theory built by Moons (1984) and improved by Chapront et al. (1999), we are able to establish 4 solutions and to represent their differences by Fourier series after a numerical substitution of the gravitational constants and free libration parameters. The results are confirmed by frequency analyses performed separately. Using DE245 as a basic reference ephemeris, we approximate the differences between the analytical and numerical models with Poisson series. The analytical solution, improved with numerical complements in the form of Poisson series, is valid over several centuries with an internal precision better than 5 centimeters.

  20. Analytical and numerical study of the electro-osmotic annular flow of viscoelastic fluids.

    PubMed

    Ferrás, L L; Afonso, A M; Alves, M A; Nóbrega, J M; Pinho, F T

    2014-04-15

    In this work we present semi-analytical solutions for the electro-osmotic annular flow of viscoelastic fluids modeled by the Linear and Exponential PTT models. The viscoelastic fluid flows in the axial direction between two concentric cylinders under the combined influences of electrokinetic and pressure forcings. The analysis invokes the Debye-Hückel approximation and includes the limit case of pure electro-osmotic flow. The solution is valid for both no slip and slip velocity at the walls and the chosen slip boundary condition is the linear Navier slip velocity model. The combined effects of fluid rheology, electro-osmotic and pressure gradient forcings on the fluid velocity distribution are also discussed. Copyright © 2013 Elsevier Inc. All rights reserved.
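    The structure of such Debye-Hückel electro-osmotic velocity profiles is easiest to see in the Newtonian planar (slit) limit with no slip, a deliberate simplification of the paper's annular viscoelastic (PTT) solutions:

```python
import math

def eof_velocity(y, H, kappa, u_hs):
    """Electro-osmotic velocity at distance y from the centerline of a slit
    of half-width H, for a Newtonian fluid under the Debye-Hueckel
    approximation with no wall slip:
        u(y) = u_hs * (1 - cosh(kappa*y) / cosh(kappa*H)),
    where kappa is the inverse Debye length and u_hs the Helmholtz-
    Smoluchowski plug velocity. Newtonian planar sketch only; the paper
    solves the annular PTT case with Navier slip."""
    return u_hs * (1.0 - math.cosh(kappa * y) / math.cosh(kappa * H))

H, kappa, u_hs = 1e-4, 1e6, 1e-3   # 100 um half-width, 1/um, 1 mm/s
u_wall = eof_velocity(H, H, kappa, u_hs)     # zero at the no-slip wall
u_core = eof_velocity(0.0, H, kappa, u_hs)   # plug flow in the core
```

    Because kappa*H is large here, the velocity rises from zero at the wall to the plug value within a thin Debye layer, which is the characteristic electro-osmotic profile the viscoelastic solutions generalize.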

  1. An analytic formula for the supercluster mass function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Seunghwan; Lee, Jounghun, E-mail: slim@astro.umass.edu, E-mail: jounghun@astro.snu.ac.kr

    2014-03-01

    We present an analytic formula for the supercluster mass function, which is constructed by modifying the extended Zel'dovich model for the halo mass function. The formula has two characteristic parameters whose best-fit values are determined by fitting to the numerical results from N-body simulations for the standard ΛCDM cosmology. The parameters are found to be independent of redshift and robust against variation of the key cosmological parameters. Under the assumption that the same formula for the supercluster mass function is valid for non-standard cosmological models, we show that the relative abundance of rich superclusters should be a powerful indicator of any deviation of the real universe from the prediction of the standard ΛCDM model.

  2. Multi-variate joint PDF for non-Gaussianities: exact formulation and generic approximations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verde, Licia; Jimenez, Raul; Alvarez-Gaume, Luis

    2013-06-01

    We provide an exact expression for the multi-variate joint probability distribution function of non-Gaussian fields primordially arising from local transformations of a Gaussian field. This kind of non-Gaussianity is generated in many models of inflation. We apply our expression to non-Gaussianity estimation from Cosmic Microwave Background maps and to the halo mass function, where we obtain analytical expressions. We also provide analytic approximations and their range of validity. For the Cosmic Microwave Background we give a fast way to compute the PDF, valid up to more than 7σ for f_NL values (both true and sampled) not ruled out by current observations, which consists of expressing the PDF as a combination of the bispectrum and trispectrum of the temperature maps. The resulting expression is valid for any kind of non-Gaussianity and is not limited to the local type. The above results may serve as the basis for a fully Bayesian analysis of the non-Gaussianity parameter.

  3. The NIH analytical methods and reference materials program for dietary supplements.

    PubMed

    Betz, Joseph M; Fisher, Kenneth D; Saldanha, Leila G; Coates, Paul M

    2007-09-01

    Quality of botanical products is a great uncertainty that consumers, clinicians, regulators, and researchers face. Definitions of quality abound, and include specifications for sanitation, adventitious agents (pesticides, metals, weeds), and content of natural chemicals. Because dietary supplements (DS) are often complex mixtures, they pose analytical challenges and method validation may be difficult. In response to product quality concerns and the need for validated and publicly available methods for DS analysis, the US Congress directed the Office of Dietary Supplements (ODS) at the National Institutes of Health (NIH) to accelerate an ongoing methods validation process, and the Dietary Supplements Methods and Reference Materials Program was created. The program was constructed from stakeholder input and incorporates several federal procurement and granting mechanisms in a coordinated and interlocking framework. The framework facilitates validation of analytical methods, analytical standards, and reference materials.

  4. Dynamic imaging model and parameter optimization for a star tracker.

    PubMed

    Yan, Jinyun; Jiang, Jie; Zhang, Guangjun

    2016-03-21

    Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters.
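    The equivalence described above, a moving star spot imaged over the exposure being the static PSF integrated along the motion segment, can be illustrated with a discrete 1-D sketch (a numerical stand-in for the paper's analytical line-segment spread function, with made-up grid and motion parameters):

```python
import math

def smeared_spot(x0, v, t_exp, sigma, grid=41, steps=200):
    """Energy distribution of a star spot moving at v (px/s) during an
    exposure of t_exp seconds: the static Gaussian PSF of radius sigma
    accumulated at sub-exposure positions along the motion segment.
    Returns intensities on a 1-D pixel grid."""
    img = [0.0] * grid
    for k in range(steps):
        xc = x0 + v * t_exp * k / (steps - 1)   # spot center at sub-step k
        for i in range(grid):
            img[i] += math.exp(-((i - xc) ** 2) / (2.0 * sigma ** 2))
    return img

def centroid(img):
    """Intensity-weighted centroid of the 1-D energy distribution."""
    return sum(i * e for i, e in enumerate(img)) / sum(img)

# A spot starting at pixel 15 that smears 10 px during the exposure:
img = smeared_spot(x0=15.0, v=10.0, t_exp=1.0, sigma=1.5)
c = centroid(img)
```

    For a uniform smear the centroid sits at the midpoint of the segment; the analytical error expression in the paper quantifies how noise, exposure time, and spot velocity degrade this estimate.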

  5. Prediction of the chromatographic retention of acid-base compounds in pH buffered methanol-water mobile phases in gradient mode by a simplified model.

    PubMed

    Andrés, Axel; Rosés, Martí; Bosch, Elisabeth

    2015-03-13

    Retention of ionizable analytes under gradient elution depends on the pH of the mobile phase, the pKa of the analyte and their evolution along the programmed gradient. In previous work, a model depending on two fitting parameters was recommended because of its very favorable relationship between accuracy and required experimental work. It was developed using acetonitrile as the organic modifier and involves pKa modeling by means of equations that take into account the acidic functional group of the compound (carboxylic acid, protonated amine, etc.). In this work, the two-parameter predicting model is tested and validated using methanol as the organic modifier of the mobile phase and several compounds of higher pharmaceutical relevance and structural complexity as testing analytes. The results have been quite good overall, showing that the predicting model is applicable to a wide variety of acid-base compounds using mobile phases prepared with acetonitrile or methanol. Copyright © 2015 Elsevier B.V. All rights reserved.
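    For background, gradient retention prediction of this kind generally rests on integrating the fundamental gradient-elution equation with a retention law k(φ); the sketch below uses the generic linear-solvent-strength law log10 k = log kw − S·φ, which is not the paper's two-parameter model for ionizable analytes (that model additionally tracks pH and pKa along the gradient):

```python
def gradient_retention_time(log_kw, S, phi0, slope, t0, dt=1e-4):
    """Numerically integrate the fundamental gradient-elution equation
    int_0^{t_R - t0} dt / k(phi(t)) = t0 for a linear gradient
    phi(t) = phi0 + slope*t and the linear-solvent-strength law
    log10 k = log_kw - S*phi. Dwell time is neglected. Returns t_R
    in the same time units as t0. Generic textbook sketch only."""
    t, acc = 0.0, 0.0
    while acc < t0:
        phi = phi0 + slope * t
        k = 10.0 ** (log_kw - S * phi)
        acc += dt / k
        t += dt
    return t0 + t

# Hypothetical analyte: log kw = 2, S = 4, gradient 10% + 1%/min, t0 = 1 min
tr = gradient_retention_time(log_kw=2.0, S=4.0, phi0=0.1, slope=0.01, t0=1.0)
```

    Models for ionizable analytes replace the fixed k(φ) law with one in which the effective charge state, and hence retention, shifts as mobile-phase pH and analyte pKa evolve along the gradient.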

  6. Analyzing data from open enrollment groups: current considerations and future directions.

    PubMed

    Morgan-Lopez, Antonio A; Fals-Stewart, William

    2008-07-01

    Difficulties in modeling turnover in treatment-group membership have been cited as one of the major impediments to ecological validity of substance abuse and alcoholism treatment research. In this review, our primary foci are on (a) the discussion of approaches that draw on state-of-the-science analytic methods for modeling open-enrollment group data and (b) highlighting emerging issues that are critical to this relatively new area of methodological research (e.g., quantifying membership change, modeling "holiday" effects, and modeling membership change among group members and leaders). Continuing refinement of new modeling tools to address these analytic complexities may ultimately lead to the development of more federally funded open-enrollment trials. These developments may also facilitate the building of a "community-friendly" treatment research portfolio for funding agencies that support substance abuse and alcoholism treatment research.

  7. Development of a child head analytical dynamic model considering cranial nonuniform thickness and curvature - Applying to children aged 0-1 years old.

    PubMed

    Li, Zhigang; Ji, Cheng; Wang, Lishu

    2018-07-01

    Although analytical models have been used to quickly predict head response under impact conditions, existing models generally treat the head as a regular shell of uniform thickness, which cannot account for the actual head geometry with varied cranial thickness and curvature at different locations. The objective of this study is to develop and validate an analytical model incorporating actual cranial thickness and curvature for children aged 0-1 years old and to investigate their effects on child head dynamic responses at different head locations. To develop the new analytical model, the child head was simplified into an irregular fluid-filled shell with non-uniform thickness, and the cranial thickness and curvature at different locations were automatically obtained from CT scans using a procedure developed in this study. The implicit equation of maximum impact force was derived as a function of the elastic modulus, thickness, and radius of curvature of the cranium. The proposed analytical model is compared with cadaver test data for children aged 0-1 years old and is shown to be accurate in predicting head injury metrics. According to this model, obvious differences in injury metrics were observed among subjects of the same age but with different cranial thickness and curvature, and the injury metrics at the forehead location are significantly higher than those at other locations due to its larger thickness. The proposed model shows good biofidelity and can be used to quickly predict the dynamic response at any head location for children younger than 1 year old. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. A three-step approach for the derivation and validation of high-performing predictive models using an operational dataset: congestive heart failure readmission case study.

    PubMed

    AbdelRahman, Samir E; Zhang, Mingyuan; Bray, Bruce E; Kawamoto, Kensaku

    2014-05-27

    The aim of this study was to propose an analytical approach to develop high-performing predictive models for congestive heart failure (CHF) readmission using an operational dataset with incomplete records and changing data over time. Our analytical approach involves three steps: pre-processing, systematic model development, and risk factor analysis. For pre-processing, variables that were absent in >50% of records were removed. Moreover, the dataset was divided into a validation dataset and derivation datasets which were separated into three temporal subsets based on changes to the data over time. For systematic model development, using the different temporal datasets and the remaining explanatory variables, the models were developed by combining the use of various (i) statistical analyses to explore the relationships between the validation and the derivation datasets; (ii) adjustment methods for handling missing values; (iii) classifiers; (iv) feature selection methods; and (v) discretization methods. We then selected the best derivation dataset and the models with the highest predictive performance. For risk factor analysis, factors in the highest-performing predictive models were analyzed and ranked using (i) statistical analyses of the best derivation dataset, (ii) feature rankers, and (iii) a newly developed algorithm to categorize risk factors as being strong, regular, or weak. The analysis dataset consisted of 2,787 CHF hospitalizations at University of Utah Health Care from January 2003 to June 2013. In this study, we used the complete-case analysis and mean-based imputation adjustment methods; the wrapper subset feature selection method; and four ranking strategies based on information gain, gain ratio, symmetrical uncertainty, and wrapper subset feature evaluators. 
The best-performing models resulted from the use of a complete-case analysis derivation dataset combined with the Class-Attribute Contingency Coefficient discretization method and a voting classifier which averaged the results of multi-nominal logistic regression and voting feature intervals classifiers. Of 42 final model risk factors, discharge disposition, discretized age, and indicators of anemia were the most significant. This model achieved a c-statistic of 86.8%. The proposed three-step analytical approach enhanced predictive model performance for CHF readmissions. It could potentially be leveraged to improve predictive model performance in other areas of clinical medicine.

  9. Modeling and Analysis of Structural Dynamics for a One-Tenth Scale Model NGST Sunshield

    NASA Technical Reports Server (NTRS)

    Johnston, John; Lienard, Sebastien; Brodeur, Steve (Technical Monitor)

    2001-01-01

    New modeling and analysis techniques have been developed for predicting the dynamic behavior of the Next Generation Space Telescope (NGST) sunshield. The sunshield consists of multiple layers of pretensioned, thin-film membranes supported by deployable booms. Modeling the structural dynamic behavior of the sunshield is a challenging aspect of the problem due to the effects of membrane wrinkling. A finite element model of the sunshield was developed using an approximate engineering approach, the cable network method, to account for membrane wrinkling effects. Ground testing of a one-tenth scale model of the NGST sunshield was carried out to provide data for validating the analytical model. A series of analyses was performed to predict the behavior of the sunshield under the ground test conditions. Modal analyses were performed to predict the frequencies and mode shapes of the test article, and transient response analyses were completed to simulate impulse excitation tests. Comparison was made between analytical predictions and test measurements for the dynamic behavior of the sunshield. In general, the results show good agreement, with the analytical model correctly predicting the approximate frequencies and mode shapes of the significant structural modes.

  10. Rate decline curves analysis of multiple-fractured horizontal wells in heterogeneous reservoirs

    NASA Astrophysics Data System (ADS)

    Wang, Jiahang; Wang, Xiaodong; Dong, Wenxiu

    2017-10-01

    In heterogeneous reservoirs with multiple-fractured horizontal wells (MFHWs), the fluid flow around fracture tips behaves like non-linear flow due to the high-density network of artificial hydraulic fractures. Moreover, the production behaviors of different artificial hydraulic fractures also differ. A rigorous semi-analytical model for MFHWs in heterogeneous reservoirs is presented by combining source functions with the boundary element method. The model is first validated against both an analytical model and a simulation model. Then new Blasingame type curves are established. Finally, the effects of critical parameters on the rate decline characteristics of MFHWs are discussed. The results show that heterogeneity has a significant influence on the rate decline characteristics of MFHWs; the parameters related to the MFHWs, such as fracture conductivity and length, can also affect the rate characteristics of MFHWs. One novelty of this model is that it considers the elliptical flow around artificial hydraulic fracture tips; therefore, our model can be used to predict rate performance more accurately for MFHWs in heterogeneous reservoirs. The other novelty is the ability to model the different production behaviors at different fracture stages. Compared to numerical and analytic methods, this model not only reduces extensive computing but also shows high accuracy.

  11. Analytical modeling of conformal mantle cloaks for cylindrical objects using sub-wavelength printed and slotted arrays

    NASA Astrophysics Data System (ADS)

    Padooru, Yashwanth R.; Yakovlev, Alexander B.; Chen, Pai-Yen; Alù, Andrea

    2012-08-01

    Following the idea of "cloaking by a surface" [A. Alù, Phys. Rev. B 80, 245115 (2009); P. Y. Chen and A. Alù, Phys. Rev. B 84, 205110 (2011)], we present a rigorous analytical model applicable to mantle cloaking of cylindrical objects using 1D and 2D sub-wavelength conformal frequency selective surface (FSS) elements. The model is based on Lorenz-Mie scattering theory which utilizes the two-sided impedance boundary conditions at the interface of the sub-wavelength elements. The FSS arrays considered in this work are composed of 1D horizontal and vertical metallic strips and 2D printed (patches, Jerusalem crosses, and cross dipoles) and slotted structures (meshes, slot-Jerusalem crosses, and slot-cross dipoles). It is shown that the analytical grid-impedance expressions derived for the planar arrays of sub-wavelength elements may be successfully used to model and tailor the surface reactance of cylindrical conformal mantle cloaks. By properly tailoring the surface reactance of the cloak, the total scattering from the cylinder can be significantly reduced, thus rendering the object invisible over the range of frequencies of interest (i.e., at microwaves and far-infrared). The results obtained using our analytical model for mantle cloaks are validated against full-wave numerical simulations.

  12. Construct Meaning in Multilevel Settings

    ERIC Educational Resources Information Center

    Stapleton, Laura M.; Yang, Ji Seung; Hancock, Gregory R.

    2016-01-01

    We present types of constructs, individual- and cluster-level, and their confirmatory factor analytic validation models when data are from individuals nested within clusters. When a construct is theoretically individual level, spurious construct-irrelevant dependency in the data may appear to signal cluster-level dependency; in such cases,…

  13. Thermal gravitational separation of ternary mixture n-dodecane/isobutylbenzene/tetralin components in a porous medium

    NASA Astrophysics Data System (ADS)

    Larabi, Mohamed Aziz; Mutschler, Dimitri; Mojtabi, Abdelkader

    2016-06-01

    Our present work focuses on the coupling between thermal diffusion and convection in order to improve the thermal gravitational separation of mixture components. The separation phenomenon was studied in a porous medium contained in vertical columns. We performed analytical and numerical simulations to corroborate the experimental measurements of the thermal diffusion coefficients of the ternary mixture n-dodecane, isobutylbenzene, and tetralin obtained in microgravity on the International Space Station. Our approach corroborates the existing data published in the literature. The authors show that it is possible to quantify and to optimize the species separation for ternary mixtures. The authors checked, for ternary mixtures, the validity of the "forgotten effect hypothesis" established for binary mixtures by Furry, Jones, and Onsager. Two complete and different analytical resolution methods were used in order to describe the separation in terms of Lewis numbers, the separation ratios, the cross-diffusion coefficients, and the Rayleigh number. The analytical model is based on the parallel flow approximation. In order to validate this model, a numerical simulation was performed using the finite element method. From our new approach to vertical separation columns, new relations for mass fraction gradients and the optimal Rayleigh number for each component of the ternary mixture were obtained.

  14. Verification and Validation of Autonomy Software at NASA

    NASA Technical Reports Server (NTRS)

    Pecheur, Charles

    2000-01-01

    Autonomous software holds the promise of new operation possibilities, easier design and development, and lower operating costs. However, as those systems close control loops and arbitrate resources on board with specialized reasoning, the range of possible situations becomes very large and uncontrollable from the outside, making conventional scenario-based testing very inefficient. Analytic verification and validation (V&V) techniques, and model checking in particular, can provide significant help for designing autonomous systems in a more efficient and reliable manner, by providing better coverage and allowing early error detection. This article discusses the general issue of V&V of autonomy software, with an emphasis on model-based autonomy, model-checking techniques, and concrete experiments at NASA.

  16. Aeroservoelastic wind-tunnel investigations using the Active Flexible Wing Model: Status and recent accomplishments

    NASA Technical Reports Server (NTRS)

    Noll, Thomas E.; Perry, Boyd, III; Tiffany, Sherwood H.; Cole, Stanley R.; Buttrill, Carey S.; Adams, William M., Jr.; Houck, Jacob A.; Srinathkumar, S.; Mukhopadhyay, Vivek; Pototzky, Anthony S.

    1989-01-01

    The status of the joint NASA/Rockwell Active Flexible Wing Wind-Tunnel Test Program is described. The objectives are to develop and validate the analysis, design, and test methodologies required to apply multifunction active control technology for improving aircraft performance and stability. Major tasks include designing digital multi-input/multi-output flutter-suppression and rolling-maneuver-load alleviation concepts for a flexible full-span wind-tunnel model, obtaining an experimental database for the basic model and each control concept, and providing comparisons between experimental and analytical results to validate the methodologies. The opportunity is provided to improve real-time simulation techniques and to gain practical experience with digital control law implementation procedures.

  17. Physical models for the normal YORP and diurnal Yarkovsky effects

    NASA Astrophysics Data System (ADS)

    Golubov, O.; Kravets, Y.; Krugly, Yu. N.; Scheeres, D. J.

    2016-06-01

    We propose an analytic model for the normal Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) and diurnal Yarkovsky effects experienced by a convex asteroid. Both the YORP torque and the Yarkovsky force are expressed as integrals of a universal function over the surface of an asteroid. Although in general this function can only be calculated numerically from the solution of the heat conductivity equation, approximate solutions can be obtained in quadratures for important limiting cases. We consider three such simplified models: Rubincam's approximation (zero heat conductivity), low thermal inertia limit (including the next order correction and thus valid for small heat conductivity), and high thermal inertia limit (valid for large heat conductivity). All three simplified models are compared with the exact solution.

  18. Nonlinear Analyte Concentration Gradients for One-Step Kinetic Analysis Employing Optical Microring Resonators

    PubMed Central

    Marty, Michael T.; Kuhnline Sloan, Courtney D.; Bailey, Ryan C.; Sligar, Stephen G.

    2012-01-01

    Conventional methods to probe the binding kinetics of macromolecules at biosensor surfaces employ a stepwise titration of analyte concentrations and measure the association and dissociation to the immobilized ligand at each concentration level. It has previously been shown that kinetic rates can be measured in a single step by monitoring binding as the analyte concentration increases over time in a linear gradient. We report here the application of nonlinear analyte concentration gradients for determining kinetic rates and equilibrium binding affinities in a single experiment. A versatile nonlinear gradient maker is presented, which is easily applied to microfluidic systems. Simulations validate that accurate kinetic rates can be extracted for a wide range of association and dissociation rates, gradient slopes and curvatures, and with models for mass transport. The nonlinear analyte gradient method is demonstrated with a silicon photonic microring resonator platform to measure prostate specific antigen-antibody binding kinetics. PMID:22686186
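    The kinetics described above follow the standard 1:1 Langmuir binding model, dB/dt = ka·C(t)·(Bmax − B) − kd·B, with C(t) swept as a nonlinear gradient instead of being held constant. A stdlib sketch under assumed, hypothetical rate constants and a quadratic gradient shape (not the authors' microring data or code):

```python
def simulate_binding(ka, kd, bmax, conc, t_end, dt=1e-3):
    """Forward-Euler integration of dB/dt = ka*C(t)*(Bmax - B) - kd*B
    for a time-varying analyte concentration C(t)."""
    b, t, out = 0.0, 0.0, []
    while t < t_end:
        b += dt * (ka * conc(t) * (bmax - b) - kd * b)
        t += dt
        out.append((t, b))
    return out

# Hypothetical values: ka [1/(M*s)], kd [1/s], and a quadratic
# concentration ramp from 0 to cmax [M] over T seconds.
ka, kd, bmax, cmax, T = 1e5, 1e-3, 1.0, 1e-4, 50.0
trace = simulate_binding(ka, kd, bmax, lambda t: cmax * (t / T) ** 2, T)
```

Fitting ka and kd to such a trace in one sweep, rather than titrating fixed concentrations, is the single-step idea the abstract describes.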

  20. Validating and determining the weight of items used for evaluating clinical governance implementation based on analytic hierarchy process model.

    PubMed

    Hooshmand, Elaheh; Tourani, Sogand; Ravaghi, Hamid; Vafaee Najar, Ali; Meraji, Marziye; Ebrahimipour, Hossein

    2015-04-08

    The purpose of implementing a system such as Clinical Governance (CG) is to integrate, establish and globalize distinct policies in order to improve quality through increasing professional knowledge and the accountability of healthcare professionals toward providing clinical excellence. Since CG is related to change, and change requires money and time, CG implementation has to be focused on priority areas that are in more dire need of change. The purpose of the present study was to validate and determine the significance of items used for evaluating CG implementation. The present study was descriptive-quantitative in method and design. Items used for evaluating CG implementation were first validated by the Delphi method and then compared with one another and ranked based on the Analytic Hierarchy Process (AHP) model. The items that were validated for evaluating CG implementation in Iran include performance evaluation, training and development, personnel motivation, clinical audit, clinical effectiveness, risk management, resource allocation, policies and strategies, external audit, information system management, research and development, CG structure, implementation prerequisites, the management of patients' non-medical needs, complaints and patients' participation in the treatment process. The most important items based on their degree of significance were training and development, performance evaluation, and risk management. The least important items included the management of patients' non-medical needs, patients' participation in the treatment process and research and development. The fundamental requirements of CG implementation included having an effective policy at the national level, avoiding perfectionism, using the expertise and potentials of the entire country and the coordination of this model with other models of quality improvement such as accreditation and patient safety. © 2015 by Kerman University of Medical Sciences.
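    In the standard AHP procedure the items are compared pairwise on a reciprocal scale and the priority weights are the principal eigenvector of the comparison matrix. A minimal stdlib sketch with a hypothetical 3-item matrix (the values below are illustrative, not the study's judgments):

```python
def ahp_weights(M, iters=100):
    """Priority weights of a pairwise-comparison matrix via power
    iteration toward the principal eigenvector (standard AHP)."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    return w

# Hypothetical comparisons: training & development vs. performance
# evaluation vs. risk management (reciprocal matrix).
M = [[1,     2,   3],
     [1 / 2, 1,   2],
     [1 / 3, 1 / 2, 1]]
w = ahp_weights(M)

# Saaty's consistency index CI = (lambda_max - n) / (n - 1).
n = 3
v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
lam_max = sum(v[i] / w[i] for i in range(n)) / n
ci = (lam_max - n) / (n - 1)
```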

  1. Equivalent circuit modeling of a piezo-patch energy harvester on a thin plate with AC-DC conversion

    NASA Astrophysics Data System (ADS)

    Bayik, B.; Aghakhani, A.; Basdogan, I.; Erturk, A.

    2016-05-01

    As an alternative to beam-like structures, piezoelectric patch-based energy harvesters attached to thin plates can be readily integrated to plate-like structures in automotive, marine, and aerospace applications, in order to directly exploit structural vibration modes of the host system without mass loading and volumetric occupancy of cantilever attachments. In this paper, a multi-mode equivalent circuit model of a piezo-patch energy harvester integrated to a thin plate is developed and coupled with a standard AC-DC conversion circuit. Equivalent circuit parameters are obtained in two different ways: (1) from the modal analysis solution of a distributed-parameter analytical model and (2) from the finite-element numerical model of the harvester by accounting for two-way coupling. After the analytical modeling effort, multi-mode equivalent circuit representation of the harvester is obtained via electronic circuit simulation software SPICE. Using the SPICE software, electromechanical response of the piezoelectric energy harvester connected to linear and nonlinear circuit elements are computed. Simulation results are validated for the standard AC-AC and AC-DC configurations. For the AC input-AC output problem, voltage frequency response functions are calculated for various resistive loads, and they show excellent agreement with modal analysis-based analytical closed-form solution and with the finite-element model. For the standard ideal AC input-DC output case, a full-wave rectifier and a smoothing capacitor are added to the harvester circuit for conversion of the AC voltage to a stable DC voltage, which is also validated against an existing solution by treating the single-mode plate dynamics as a single-degree-of-freedom system.

  2. Validation of the enthalpy method by means of analytical solution

    NASA Astrophysics Data System (ADS)

    Kleiner, Thomas; Rückamp, Martin; Bondzio, Johannes; Humbert, Angelika

    2014-05-01

    Numerical simulations have moved in recent years from describing the cold-temperate transition surface (CTS) explicitly towards an enthalpy description, which avoids incorporating a singular surface inside the model (Aschwanden et al., 2012). In enthalpy methods the CTS is represented as a level set of the enthalpy state variable. This method has several numerical and practical advantages (e.g. representation of the full energy by one scalar field, no restriction on the topology and shape of the CTS). The proposed method is rather new in glaciology and, to our knowledge, not verified and validated against analytical solutions. Unfortunately, we still lack analytical solutions for sufficiently complex thermo-mechanically coupled polythermal ice flow. However, we present two experiments to test the implementation of the enthalpy equation and the corresponding boundary conditions. The first experiment tests in particular the functionality of the boundary condition scheme and the corresponding basal melt rate calculation. Depending on the different thermal situations that occur at the base, the numerical code may have to switch to another boundary type (from Neumann to Dirichlet or vice versa). The main idea of this set-up is to test reversibility during transients: a formerly cold ice body that runs through a warmer period, with an associated build-up of a liquid water layer at the base, must be able to return to its initial steady state. Since we impose several assumptions on the experiment design, analytical solutions can be formulated for different quantities during distinct stages of the simulation. The second experiment tests the positioning of the internal CTS in a parallel-sided polythermal slab. We compare our simulation results to the analytical solution proposed by Greve and Blatter (2009). Results from three different ice flow models (COMIce, ISSM, TIMFD3) are presented.
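    The core of the enthalpy method is the invertible mapping between enthalpy and the (temperature, water content) pair, with the CTS falling on the level set E = E_pmp. A sketch of that mapping with typical (assumed) material constants, not taken from the paper:

```python
# Typical ice constants (assumed values for illustration).
C_ICE = 2009.0      # specific heat capacity of ice [J kg-1 K-1]
L_FUSION = 3.34e5   # latent heat of fusion [J kg-1]
T_REF = 223.15      # reference temperature [K]

def enthalpy(T, omega, T_pmp):
    """Map temperature T [K] and water fraction omega [-] to enthalpy.
    Ice is cold (omega = 0) below the pressure-melting point T_pmp and
    temperate (T = T_pmp) above; the CTS sits at E = E_pmp."""
    E_pmp = C_ICE * (T_pmp - T_REF)
    if omega == 0.0 and T < T_pmp:
        return C_ICE * (T - T_REF)
    return E_pmp + L_FUSION * omega

def temperature_water(E, T_pmp):
    """Inverse mapping: diagnose (T, omega) from enthalpy E."""
    E_pmp = C_ICE * (T_pmp - T_REF)
    if E < E_pmp:                              # cold ice
        return E / C_ICE + T_REF, 0.0
    return T_pmp, (E - E_pmp) / L_FUSION       # temperate ice
```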

  3. One-dimensional model of interacting-step fluctuations on vicinal surfaces: Analytical formulas and kinetic Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Patrone, Paul N.; Einstein, T. L.; Margetis, Dionisios

    2010-12-01

    We study analytically and numerically a one-dimensional model of interacting line defects (steps) fluctuating on a vicinal crystal. Our goal is to formulate and validate analytical techniques for approximately solving systems of coupled nonlinear stochastic differential equations (SDEs) governing fluctuations in surface motion. In our analytical approach, the starting point is the Burton-Cabrera-Frank (BCF) model by which step motion is driven by diffusion of adsorbed atoms on terraces and atom attachment-detachment at steps. The step energy accounts for entropic and nearest-neighbor elastic-dipole interactions. By including Gaussian white noise to the equations of motion for terrace widths, we formulate large systems of SDEs under different choices of diffusion coefficients for the noise. We simplify this description via (i) perturbation theory and linearization of the step interactions and, alternatively, (ii) a mean-field (MF) approximation whereby widths of adjacent terraces are replaced by a self-consistent field but nonlinearities in step interactions are retained. We derive simplified formulas for the time-dependent terrace-width distribution (TWD) and its steady-state limit. Our MF analytical predictions for the TWD compare favorably with kinetic Monte Carlo simulations under the addition of a suitably conservative white noise in the BCF equations.
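    A toy analogue of such coupled SDEs can be integrated with the Euler-Maruyama scheme: steps on a ring repel their nearest neighbours with an inverse-cube (elastic-dipole-like) force and receive Gaussian white noise. The sketch below is illustrative only; the parameters are arbitrary and the model is far simpler than the BCF system in the paper:

```python
import math
import random

def simulate_steps(n=20, w0=1.0, A=0.1, D=0.01, dt=1e-4, steps=2000, seed=1):
    """Euler-Maruyama integration of fluctuating steps on a ring with
    repulsive 1/w^3 nearest-neighbour forces plus white noise.
    Returns the final terrace widths."""
    rng = random.Random(seed)
    x = [i * w0 for i in range(n)]   # step positions
    L = n * w0                       # ring circumference
    for _ in range(steps):
        # Synchronous update: widths from the current configuration.
        w = [(x[(i + 1) % n] - x[i]) % L for i in range(n)]
        for i in range(n):
            # Left terrace pushes right, right terrace pushes left.
            f = A * (1.0 / w[i - 1] ** 3 - 1.0 / w[i] ** 3)
            x[i] += f * dt + math.sqrt(2 * D * dt) * rng.gauss(0, 1)
    return [(x[(i + 1) % n] - x[i]) % L for i in range(n)]
```

From an ensemble of such runs one would histogram the widths to estimate the terrace-width distribution that the paper's mean-field formulas approximate.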

  4. Analytical and Experimental Study to Improve Computer Models for Mixing and Dilution of Soluble Hazardous Chemicals.

    DTIC Science & Technology

    1982-08-01

    [Abstract not recoverable; the record contains only extraction fragments of front matter: figure and table titles (trajectory and concentration of various plumes; tank and cargo geometry assumed for discharge rate calculation using the HACS venting rate model; original and final test plans for validation of the continuous spill model) and nomenclature (turbulent diffusivities, water density, chemical density, and symbols used only in continuous-spill models for a steady river).]

  5. Two-Speed Gearbox Dynamic Simulation Predictions and Test Validation

    NASA Technical Reports Server (NTRS)

    Lewicki, David G.; DeSmidt, Hans; Smith, Edward C.; Bauman, Steven W.

    2010-01-01

    Dynamic simulations and experimental validation tests were performed on a two-stage, two-speed gearbox as part of the drive system research activities of the NASA Fundamental Aeronautics Subsonics Rotary Wing Project. The gearbox was driven by two electromagnetic motors and had two electromagnetic, multi-disk clutches to control output speed. A dynamic model of the system was created which included a direct current electric motor with proportional-integral-derivative (PID) speed control, a two-speed gearbox with dual electromagnetically actuated clutches, and an eddy current dynamometer. A six degree-of-freedom model of the gearbox accounted for the system torsional dynamics and included gear, clutch, shaft, and load inertias as well as shaft flexibilities and a dry clutch stick-slip friction model. Experimental validation tests were performed on the gearbox in the NASA Glenn gear noise test facility. Gearbox output speed and torque as well as drive motor speed and current were compared to those from the analytical predictions. The experiments correlate very well with the predictions, thus validating the dynamic simulation methodologies.
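    The speed-control element of such a simulation is a PID loop closed around the drive inertia. A stand-alone sketch of that idea, using a first-order rotor model J·dω/dt = τ − b·ω with assumed, illustrative gains and inertias (not the paper's six-degree-of-freedom gearbox model):

```python
def simulate_pid_speed(kp=5.0, ki=2.0, kd_gain=0.0, J=0.1, b=0.05,
                       target=100.0, dt=1e-3, t_end=5.0):
    """Forward-Euler simulation of PID speed control of a rotor:
    J*domega/dt = tau - b*omega, tau from the PID law."""
    omega, integ, prev_err, t = 0.0, 0.0, target, 0.0
    while t < t_end:
        err = target - omega
        integ += err * dt
        deriv = (err - prev_err) / dt
        tau = kp * err + ki * integ + kd_gain * deriv
        omega += dt * (tau - b * omega) / J
        prev_err = err
        t += dt
    return omega

# With these gains the speed settles close to the 100 rad/s target.
final_speed = simulate_pid_speed()
```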

  6. Analytical Modeling for Mechanical Strength Prediction with Raman Spectroscopy and Fractured Surface Morphology of Novel Coconut Shell Powder Reinforced: Epoxy Composites

    NASA Astrophysics Data System (ADS)

    Singh, Savita; Singh, Alok; Sharma, Sudhir Kumar

    2017-06-01

    In this paper, analytical modeling and prediction of the tensile and flexural strength of three-dimensional micro-scaled novel coconut shell powder (CSP) reinforced epoxy polymer composites are reported. The novel CSP has a specific mixing ratio of different coconut shell particle sizes. A comparison is made between the experimentally obtained strength and the modified Guth model. The result shows strong evidence against the validity of the modified Guth model for strength prediction. Consequently, a constitutive model equation, named the Singh model, has been developed to predict the tensile and flexural strength of this novel CSP reinforced epoxy composite. Moreover, the high-resolution Raman spectrum shows that the 40% CSP reinforced epoxy composite has a high dielectric constant, making it an alternative material for capacitors, whereas fractured surface morphology revealed strong bonding between the novel CSP and the epoxy polymer for applications as lightweight composite materials in engineering.
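    For context, the family of Guth-type filler models starts from the classic Guth-Gold expression, which scales a matrix property by the filler volume fraction phi. The sketch below shows only that classic form; the paper's "modified Guth" strength variant and the Singh model are not reproduced here:

```python
def guth_gold(matrix_property, phi):
    """Classic Guth-Gold reinforcement factor:
    P = P_matrix * (1 + 2.5*phi + 14.1*phi**2),
    shown to illustrate the form of this model family."""
    return matrix_property * (1 + 2.5 * phi + 14.1 * phi ** 2)
```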

  7. Fast Estimation of Strains for Cross-Beams Six-Axis Force/Torque Sensors by Mechanical Modeling

    PubMed Central

    Ma, Junqing; Song, Aiguo

    2013-01-01

    Strain distributions are crucial criteria of cross-beams six-axis force/torque sensors. The conventional method for calculating the criteria is to utilize Finite Element Analysis (FEA) to get numerical solutions. This paper aims to obtain analytical solutions of strains under the effect of external force/torque in each dimension. Generic mechanical models for cross-beams six-axis force/torque sensors are proposed, in which deformable cross elastic beams and compliant beams are modeled as quasi-static Timoshenko beams. A detailed description of model assumptions, model idealizations, application scope and model establishment is presented. The results are validated by both numerical FEA simulations and calibration experiments, and test results are found to be compatible with each other for a wide range of geometric properties. The proposed analytical solutions are demonstrated to be an accurate estimation algorithm with higher efficiency. PMID:23686144
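    The distinguishing feature of a Timoshenko beam model is the shear-deformation term added to the Euler-Bernoulli bending term. A minimal sketch of both the tip deflection and the root surface strain of a cantilever under an end load, with assumed dimensions for a small aluminium beam (illustrative values, not the sensor in the paper):

```python
def tip_deflection_timoshenko(F, L_b, E, I, G, A, kappa=5 / 6):
    """Cantilever tip deflection under end load F: Euler-Bernoulli
    bending term plus the shear term the Timoshenko model adds."""
    return F * L_b ** 3 / (3 * E * I) + F * L_b / (kappa * G * A)

def root_surface_strain(F, L_b, c, E, I):
    """Maximum bending strain at the fixed end, at distance c from the
    neutral axis (roughly where strain gauges would sit)."""
    return F * L_b * c / (E * I)

# Hypothetical 4 mm square aluminium beam, 30 mm long, 10 N tip load.
F, L_b, E, G = 10.0, 0.03, 70e9, 26e9
b_w = h = 0.004
I = b_w * h ** 3 / 12
A = b_w * h
delta = tip_deflection_timoshenko(F, L_b, E, I, G, A)
eps = root_surface_strain(F, L_b, h / 2, E, I)
```

For such a stubby beam the shear term is small but nonzero, which is exactly the correction the quasi-static Timoshenko idealization captures.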

  8. Model updating strategy for structures with localised nonlinearities using frequency response measurements

    NASA Astrophysics Data System (ADS)

    Wang, Xing; Hill, Thomas L.; Neild, Simon A.; Shaw, Alexander D.; Haddad Khodaparast, Hamed; Friswell, Michael I.

    2018-02-01

    This paper proposes a model updating strategy for localised nonlinear structures. It utilises an initial finite-element (FE) model of the structure and primary harmonic response data taken from low and high amplitude excitations. The underlying linear part of the FE model is first updated using low-amplitude test data with established techniques. Then, using this linear FE model, the nonlinear elements are localised, characterised, and quantified with primary harmonic response data measured under stepped-sine or swept-sine excitations. Finally, the resulting model is validated by comparing the analytical predictions with both the measured responses used in the updating and with additional test data. The proposed strategy is applied to a clamped beam with a nonlinear mechanism and good agreements between the analytical predictions and measured responses are achieved. Discussions on issues of damping estimation and dealing with data from amplitude-varying force input in the updating process are also provided.

  9. An experimental and analytical method for approximate determination of the tilt rotor research aircraft rotor/wing download

    NASA Technical Reports Server (NTRS)

    Jordon, D. E.; Patterson, W.; Sandlin, D. R.

    1985-01-01

    The XV-15 Tilt Rotor Research Aircraft download phenomenon was analyzed. This phenomenon is a direct result of the two rotor wakes impinging on the wing upper surface when the aircraft is in the hover configuration. For this study the analysis proceeded along two lines. The first was a method whereby results from actual hover tests of the XV-15 aircraft were combined with drag coefficient results from wind tunnel tests of a wing that was representative of the aircraft wing. Second, an analytical method was used that modeled the airflow caused by the two rotors. Formulas were developed in such a way that a computer program could be used to calculate the axial velocities; these velocities were then used in conjunction with the aforementioned wind tunnel drag coefficient results to produce download values. An attempt was made to validate the analytical results by modeling a model rotor system for which direct download values were determined.

  10. A combined analytical formulation and genetic algorithm to analyze the nonlinear damage responses of continuous fiber toughened composites

    NASA Astrophysics Data System (ADS)

    Jeon, Haemin; Yu, Jaesang; Lee, Hunsu; Kim, G. M.; Kim, Jae Woo; Jung, Yong Chae; Yang, Cheol-Min; Yang, B. J.

    2017-09-01

    Continuous fiber-reinforced composites are important materials with among the highest commercialization potential of existing advanced materials in the near future. Despite their wide use and value, their theoretical mechanisms have not been fully established due to the complexity of their compositions and their unrevealed failure mechanisms. This study proposes an effective three-dimensional damage modeling of a fibrous composite by combining analytical micromechanics and evolutionary computation. The interface characteristics, debonding damage, and micro-cracks are considered to be the most influential factors on the toughness and failure behaviors of composites, and a constitutive equation considering these factors was explicitly derived in accordance with the micromechanics-based ensemble volume averaged method. The optimal set of model parameters in the analytical model was found using modified evolutionary computation that accounts for human-induced error. The effectiveness of the proposed formulation was validated by comparing a series of numerical simulations with experimental data from available studies.

  11. Bayesian Monte Carlo and Maximum Likelihood Approach for ...

    EPA Pesticide Factsheets

    Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood estimation (BMCML), to calibrate a lake oxygen recovery model. We first derive an analytical solution of the differential equation governing lake-averaged oxygen dynamics as a function of time-variable wind speed. Statistical inferences on model parameters and predictive uncertainty are then drawn by Bayesian conditioning of the analytical solution on observed daily wind speed and oxygen concentration data obtained from an earlier study during two recovery periods on a eutrophic lake in upstate New York. The model is calibrated using oxygen recovery data for one year and the statistical inferences validated using recovery data for another year. Compared with an essentially two-step regression and optimization approach, the BMCML results are more comprehensive and performed relatively better in predicting the observed temporal dissolved oxygen (DO) levels in the lake. BMCML also produced calibration and validation results comparable with those obtained using the popular Markov Chain Monte Carlo (MCMC) technique and is computationally simpler and easier to implement than MCMC. Next, using the calibrated model, we derive an optimal relationship between the liquid film-transfer coefficient
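    The Bayesian Monte Carlo step amounts to drawing parameters from a prior, weighting each draw by its likelihood given the observations, and summarizing the weighted ensemble. A stdlib sketch with a deliberately simplified constant-rate recovery model dC/dt = k(Cs − C) and synthetic data (the wind-dependent model and the lake data are not reproduced here; all values are illustrative):

```python
import math
import random

def do_recovery(t, k, c0=2.0, cs=9.0):
    """Analytical solution of dC/dt = k*(Cs - C):
    C(t) = Cs + (C0 - Cs) * exp(-k*t)."""
    return cs + (c0 - cs) * math.exp(-k * t)

def bayesian_monte_carlo(obs, sigma=0.3, n=20000, seed=7):
    """Draw k from a uniform prior, weight each draw by its Gaussian
    likelihood against the observations, return the posterior-mean k."""
    rng = random.Random(seed)
    ks, wts = [], []
    for _ in range(n):
        k = rng.uniform(0.01, 1.0)
        loglik = sum(-0.5 * ((c - do_recovery(t, k)) / sigma) ** 2
                     for t, c in obs)
        ks.append(k)
        wts.append(math.exp(loglik))
    z = sum(wts)
    return sum(k * w for k, w in zip(ks, wts)) / z

# Synthetic "observations" generated with k = 0.25 (illustrative only).
obs = [(t, do_recovery(t, 0.25)) for t in range(0, 15, 2)]
k_hat = bayesian_monte_carlo(obs)
```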

  12. GeneratorSE: A Sizing Tool for Variable-Speed Wind Turbine Generators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sethuraman, Latha; Dykes, Katherine L

    This report documents a set of analytical models employed by the optimization algorithms within the GeneratorSE framework. The initial values and boundary conditions employed for the generation of the various designs and initial estimates for basic design dimensions, masses, and efficiency for the four different models of generators are presented and compared with empirical data collected from previous studies and some existing commercial turbines. These models include designs applicable for variable-speed, high-torque application featuring direct-drive synchronous generators and low-torque application featuring induction generators. In all of the four models presented, the main focus of optimization is electromagnetic design, with the exception of permanent-magnet and wire-wound synchronous generators, wherein the structural design is also optimized. Thermal design is accommodated in GeneratorSE as a secondary attribute by limiting the winding current densities to acceptable limits. A preliminary validation of electromagnetic design was carried out by comparing the optimized magnetic loading against those predicted by numerical simulation in FEMM4.2, a finite-element software for analyzing electromagnetic and thermal physics problems for electrical machines. For direct-drive synchronous generators, the analytical models for the structural design are validated by static structural analysis in ANSYS.

  13. Modeling of a ring rosen-type piezoelectric transformer by Hamilton's principle.

    PubMed

    Nadal, Clément; Pigache, Francois; Erhart, Jiří

    2015-04-01

    This paper deals with the analytical modeling of a ring Rosen-type piezoelectric transformer. The developed model is based on a Hamiltonian approach, enabling the main parameters and performance to be obtained for the first radial vibratory modes. The methodology is detailed, and the final results, both the input admittance and the electric potential distribution on the surface of the secondary part, are compared with numerical and experimental ones for discussion and validation.

  14. Towards a complete physically based forecast model for underwater noise related to impact pile driving.

    PubMed

    Fricke, Moritz B; Rolfes, Raimund

    2015-03-01

    An approach for the prediction of underwater noise caused by impact pile driving is described and validated based on in situ measurements. The model is divided into three sub-models. The first sub-model, based on the finite element method, is used to describe the vibration of the pile and the resulting acoustic radiation into the surrounding water and soil column. The mechanical excitation of the pile by the piling hammer is estimated by the second sub-model using an analytical approach which takes the large vertical dimension of the ram into account. The third sub-model is based on the split-step Padé solution of the parabolic equation and targets the long-range propagation up to 20 km. To provide realistic environmental properties for the validation, a geoacoustic model is derived from spatially averaged geological information about the investigation area. Although it can be concluded from the validation that the model and the underlying assumptions are appropriate, there are some deviations between modeled and measured results. Possible explanations for the observed errors are discussed.

  15. Measuring Students' Writing Ability on a Computer-Analytic Developmental Scale: An Exploratory Validity Study

    ERIC Educational Resources Information Center

    Burdick, Hal; Swartz, Carl W.; Stenner, A. Jackson; Fitzgerald, Jill; Burdick, Don; Hanlon, Sean T.

    2013-01-01

    The purpose of the study was to explore the validity of a novel computer-analytic developmental scale, the Writing Ability Developmental Scale. On the whole, collective results supported the validity of the scale. It was sensitive to writing ability differences across grades and sensitive to within-grade variability as compared to human-rated…

  16. Focus on Student Success: Components for Effective Summer Bridge Programs

    ERIC Educational Resources Information Center

    Gonzalez Quiroz, Alicia; Garza, Nora R.

    2018-01-01

    Using research focused on best practices, focus group information, and data analytics, the Title V: Focus on Student Success (FOSS) Grant created a model for the development, implementation, and evaluation of a summer bridge program. Results included increased academic performance indicators in first-year Hispanic college students. Validation for…

  17. Effect of low relative humidity on properties of structural lumber products

    Treesearch

    David W. Green; James W. Evans

    2003-01-01

    Wood used in industrial settings, and in some arid parts of the United States, may be subjected to very low relative humidity (RH). Analytical models available for predicting the effect of moisture content (MC) on the properties of solid-sawn lumber imply significant strength loss at very low MC. However, these models are generally valid only for MC above about 10%....

  18. Hybrid perturbation methods based on statistical time series models

    NASA Astrophysics Data System (ADS)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies: in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered; moreover, mathematical models of perturbations do not always reproduce the physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the dynamics missing from the previously integrated approximation. This combination improves the precision of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators formed by the combination of three different orders of approximation of an analytical theory and a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, a first-order analytical theory, and a second-order analytical theory, whereas the prediction technique is the same in all three cases, namely an additive Holt-Winters method.
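The additive Holt-Winters method named as the prediction component is a standard triple-exponential smoothing scheme. The sketch below is a generic implementation applied to an arbitrary series; the smoothing constants and season length are illustrative choices, not values from the paper.

```python
# Additive Holt-Winters smoothing: a minimal sketch of the kind of
# statistical time-series model used to capture dynamics missing from
# an analytical approximation. alpha, beta, gamma and the season
# length m are illustrative, not values from the paper.

def holt_winters_additive(y, m, alpha=0.3, beta=0.05, gamma=0.2, horizon=1):
    """Fit additive Holt-Winters to series y (season length m) and
    return forecasts for `horizon` steps ahead."""
    # Initialise level, trend, and seasonal components from the first cycles.
    level = sum(y[:m]) / m
    trend = (sum(y[m:2 * m]) - sum(y[:m])) / (m * m)
    season = [y[i] - level for i in range(m)]

    for t in range(len(y)):
        s = season[t % m]
        prev_level = level
        level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season[t % m] = gamma * (y[t] - level) + (1 - gamma) * s

    n = len(y)
    return [level + (h + 1) * trend + season[(n + h) % m]
            for h in range(horizon)]
```

Applied to a residual series with a clear period, the forecasts track both the trend and the seasonal oscillation, which is the role the prediction stage plays in the hybrid propagator.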

  19. Exact Local Correlations and Full Counting Statistics for Arbitrary States of the One-Dimensional Interacting Bose Gas

    NASA Astrophysics Data System (ADS)

    Bastianello, Alvise; Piroli, Lorenzo; Calabrese, Pasquale

    2018-05-01

    We derive exact analytic expressions for the n-body local correlations in the one-dimensional Bose gas with contact repulsive interactions (Lieb-Liniger model) in the thermodynamic limit. Our results are valid for arbitrary states of the model, including ground and thermal states, stationary states after a quantum quench, and nonequilibrium steady states arising in transport settings. Calculations for these states are explicitly presented and physical consequences are critically discussed. We also show that the n-body local correlations are directly related to the full counting statistics for the particle-number fluctuations in a short interval, for which we provide an explicit analytic result.

  20. Estimation of the curvature of the solid-liquid interface during Bridgman crystal growth

    NASA Astrophysics Data System (ADS)

    Barat, Catherine; Duffar, Thierry; Garandet, Jean-Paul

    1998-11-01

    An approximate solution for the solid/liquid interface curvature due to the crucible effect in crystal growth is derived from simple heat flux considerations. The numerical modelling of the problem carried out with the help of the finite element code FIDAP supports the predictions of our analytical expression and allows us to identify its range of validity. Experimental interface curvatures, measured in gallium antimonide samples grown by the vertical Bridgman method, are seen to compare satisfactorily to analytical and numerical results. Other literature data are also in fair agreement with the predictions of our models in the case where the amount of heat carried by the crucible is small compared to the overall heat flux.

  1. 2-D modeling and analysis of short-channel behavior of a front high-K gate stack triple-material gate SB SON MOSFET

    NASA Astrophysics Data System (ADS)

    Banerjee, Pritha; Kumari, Tripty; Sarkar, Subir Kumar

    2018-02-01

    This paper presents the 2-D analytical modeling of a front high-K gate stack triple-material gate Schottky Barrier Silicon-On-Nothing MOSFET. Using the two-dimensional Poisson's equation and the popular parabolic potential approximation, expressions for the surface potential and the electric field have been obtained. In addition, the response of the proposed device to aggressive downscaling, that is, its degree of immunity to the various short-channel effects, has also been investigated in this work. The analytical results have been validated against simulation results obtained using ATLAS, a two-dimensional device simulator from SILVACO.

  2. Buckling of a stiff thin film on an elastic graded compliant substrate.

    PubMed

    Chen, Zhou; Chen, Weiqiu; Song, Jizhou

    2017-12-01

    The buckling of a stiff film on a compliant substrate has attracted much attention due to its wide applications such as thin-film metrology, surface patterning and stretchable electronics. An analytical model is established for the buckling of a stiff thin film on a semi-infinite elastic graded compliant substrate subjected to in-plane compression. The critical compressive strain and buckling wavelength for the sinusoidal mode are obtained analytically for the case with the substrate modulus decaying exponentially. The rigorous finite element analysis (FEA) is performed to validate the analytical model and investigate the postbuckling behaviour of the system. The critical buckling strain for the period-doubling mode is obtained numerically. The influences of various material parameters on the results are investigated. These results are helpful to provide physical insights on the buckling of elastic graded substrate-supported thin film.
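For the limiting case of a homogeneous (non-graded) compliant half-space, the critical strain and wrinkle wavelength have well-known closed forms, which the graded-substrate model generalizes. The sketch below evaluates those classical formulas; the material values in the test are illustrative (roughly a stiff film on an elastomer), not parameters from the paper.

```python
# Classical sinusoidal wrinkling of a stiff film on a *homogeneous*
# compliant half-space, the limit the graded-substrate model reduces to.
# eps_c = (1/4)(3*Es_bar/Ef_bar)^(2/3),  lambda = 2*pi*h*(Ef_bar/(3*Es_bar))^(1/3),
# where E_bar = E/(1 - nu^2) is the plane-strain modulus.
import math

def plane_strain_modulus(E, nu):
    return E / (1.0 - nu ** 2)

def wrinkle_critical_strain(Ef, nuf, Es, nus):
    """Critical compressive strain for the sinusoidal wrinkling mode."""
    return 0.25 * (3.0 * plane_strain_modulus(Es, nus)
                   / plane_strain_modulus(Ef, nuf)) ** (2.0 / 3.0)

def wrinkle_wavelength(h, Ef, nuf, Es, nus):
    """Wrinkle wavelength for a film of thickness h."""
    return 2.0 * math.pi * h * (plane_strain_modulus(Ef, nuf)
                                / (3.0 * plane_strain_modulus(Es, nus))) ** (1.0 / 3.0)
```

For a silicon-like film (130 GPa) on a PDMS-like substrate (1.8 MPa), these formulas give a sub-millistrain critical strain and a wavelength of a few hundred film thicknesses, consistent with the stiff-film/soft-substrate regime the abstract describes.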

  3. Viscous damping and spring force in periodic perforated planar microstructures when the Reynolds’ equation cannot be applied

    PubMed Central

    Homentcovschi, Dorel; Miles, Ronald N.

    2010-01-01

    A model of squeeze-film behavior is developed based on Stokes’ equations for viscous, compressible isothermal flows. The flow domain is an axisymmetrical, unit cell approximation of a planar, periodic, perforated microstructure. The model is developed for cases when the lubrication approximation cannot be applied. The complex force generated by vibrations of the diaphragm driving the flow has two components: the damping force and the spring force. While for large frequencies the spring force dominates, at low (acoustical) frequencies the damping force is the most important part. The analytical approach developed here yields an explicit formula for both forces. In addition, using a finite element software package, the damping force is also obtained numerically. A comparison is made between the analytic result, numerical solution, and some experimental data found in the literature, which validates the analytic formula and provides compelling arguments about its value in designing microelectromechanical devices. PMID:20329828
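The split of the complex film reaction into a spring force and a damping force can be made concrete: for harmonic diaphragm motion x(t) = X·exp(iωt), a linear film response F = (k + iωc)·X yields the spring constant from the real part and the damping coefficient from the imaginary part. A minimal sketch, with illustrative numbers rather than values from the paper:

```python
# Decompose a complex squeeze-film force-per-displacement into its
# spring and damping parts. For x(t) = X exp(i w t) and a linear film
# reaction F = (k + i w c) X:
#   k = Re(F/X)          (spring constant)
#   c = Im(F/X) / omega  (damping coefficient)
# The numbers used in testing are illustrative, not from the paper.

def spring_and_damping(F_over_X, omega):
    """Split the complex force-per-displacement into (k, c)."""
    k = F_over_X.real
    c = F_over_X.imag / omega
    return k, c
```

This is the decomposition behind the abstract's observation that the spring part dominates at high frequency (the iωc term still grows, but relative magnitudes depend on the film physics packed into k(ω) and c(ω)).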

  4. Wavelength-modulated differential photothermal radiometry: Theory and experimental applications to glucose detection in water

    NASA Astrophysics Data System (ADS)

    Mandelis, Andreas; Guo, Xinxin

    2011-10-01

    A differential photothermal radiometry method, wavelength-modulated differential photothermal radiometry (WM-DPTR), has been developed theoretically and experimentally for noninvasive, noncontact biological analyte detection, such as blood glucose monitoring. WM-DPTR achieves analyte specificity and sensitivity by combining laser excitation from two out-of-phase modulated beams at wavelengths near the peak and the baseline of a prominent, isolated mid-IR analyte absorption band (here the carbon-oxygen-carbon bond in the pyran ring of the glucose molecule). A theoretical photothermal model of WM-DPTR signal generation and detection has been developed. Simulation results for water-glucose phantoms with glucose concentrations in the human blood range (0-300 mg/dl) demonstrated sensitivity and resolution high enough to meet wide clinical detection requirements. The model has also been validated by experimental data from the glucose-water system obtained using WM-DPTR.

  5. Theory and simulation of the dynamics, deformation, and breakup of a chain of superparamagnetic beads under a rotating magnetic field

    NASA Astrophysics Data System (ADS)

    Vázquez-Quesada, A.; Franke, T.; Ellero, M.

    2017-03-01

    In this work, an analytical model for the behavior of superparamagnetic chains under the effect of a rotating magnetic field is presented. It is postulated that the relevant mechanisms for describing the shape and breakup of the chains into smaller fragments are the induced dipole-dipole magnetic force on the external beads, their translational and rotational drag forces, and the tangential lubrication between particles. Under this assumption, the characteristic S-shape of the chain can be qualitatively understood. Furthermore, based on a straight chain approximation, a novel analytical expression for the critical frequency for the chain breakup is obtained. In order to validate the model, the analytical expressions are compared with full three-dimensional smoothed particle hydrodynamics simulations of magnetic beads showing excellent agreement. Comparison with previous theoretical results and experimental data is also reported.

  6. Buckling of a stiff thin film on an elastic graded compliant substrate

    NASA Astrophysics Data System (ADS)

    Chen, Zhou; Chen, Weiqiu; Song, Jizhou

    2017-12-01

    The buckling of a stiff film on a compliant substrate has attracted much attention due to its wide applications such as thin-film metrology, surface patterning and stretchable electronics. An analytical model is established for the buckling of a stiff thin film on a semi-infinite elastic graded compliant substrate subjected to in-plane compression. The critical compressive strain and buckling wavelength for the sinusoidal mode are obtained analytically for the case with the substrate modulus decaying exponentially. The rigorous finite element analysis (FEA) is performed to validate the analytical model and investigate the postbuckling behaviour of the system. The critical buckling strain for the period-doubling mode is obtained numerically. The influences of various material parameters on the results are investigated. These results are helpful to provide physical insights on the buckling of elastic graded substrate-supported thin film.

  7. Analysis procedures and subjective flight results of a simulator validation and cue fidelity experiment

    NASA Technical Reports Server (NTRS)

    Carr, Peter C.; Mckissick, Burnell T.

    1988-01-01

    A joint experiment to investigate simulator validation and cue fidelity was conducted by the Dryden Flight Research Facility of NASA Ames Research Center (Ames-Dryden) and NASA Langley Research Center. The primary objective was to validate the use of a closed-loop pilot-vehicle mathematical model as an analytical tool for optimizing the tradeoff between simulator fidelity requirements and simulator cost. The validation process includes comparing model predictions with simulation and flight test results to evaluate various hypotheses for differences in motion and visual cues and information transfer. A group of five pilots flew air-to-air tracking maneuvers in the Langley differential maneuvering simulator and visual motion simulator and in an F-14 aircraft at Ames-Dryden. The simulators used motion and visual cueing devices including a g-seat, a helmet loader, wide field-of-view horizon, and a motion base platform.

  8. Mechanics of additively manufactured porous biomaterials based on the rhombicuboctahedron unit cell.

    PubMed

    Hedayati, R; Sadighi, M; Mohammadi-Aghdam, M; Zadpoor, A A

    2016-01-01

    Thanks to recent developments in additive manufacturing techniques, it is now possible to fabricate porous biomaterials with arbitrarily complex micro-architectures. Micro-architectures of such biomaterials determine their physical and biological properties, meaning that one could potentially improve the performance of such biomaterials through rational design of micro-architecture. The relationship between the micro-architecture of porous biomaterials and their physical and biological properties has therefore received increasing attention recently. In this paper, we studied the mechanical properties of porous biomaterials made from a relatively unexplored unit cell, namely the rhombicuboctahedron. We derived analytical relationships that relate the micro-architecture of such porous biomaterials, i.e. the dimensions of the rhombicuboctahedron unit cell, to their elastic modulus, Poisson's ratio, and yield stress. Finite element models were also developed to validate the analytical solutions. Analytical and numerical results were compared with experimental data from one of our recent studies. It was found that analytical solutions and numerical results show very good agreement, particularly for smaller values of apparent density. The elastic moduli predicted by analytical and numerical models were in very good agreement with experimental observations too. While in excellent agreement with each other, analytical and numerical models somewhat over-predicted the yield stress of the porous structures as compared to experimental data. As the ratio of the vertical struts to the inclined struts, α, approaches zero or infinity, the rhombicuboctahedron unit cell approaches the octahedron (or truncated cube) and cube unit cells, respectively. For those limits, the analytical solutions presented here were found to approach the analytical solutions obtained for the octahedron, truncated cube, and cube unit cells, meaning that the presented solutions are generalizations of the analytical solutions obtained for several other types of porous biomaterials. Copyright © 2015 Elsevier Ltd. All rights reserved.
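Closed-form stiffness results for lattice unit cells like these are often summarized as a Gibson-Ashby power law, E/E_s = C·(ρ/ρ_s)^n. As a hedged illustration of how such a relation can be extracted from analytical or experimental (density, modulus) pairs, the sketch below fits C and n by log-log regression; the data points in the test are synthetic, not from the study.

```python
# Fit a Gibson-Ashby power law E/Es = C (rho/rho_s)^n to
# (relative density, relative modulus) pairs by least-squares
# regression in log-log space. The pairs fed to it here are
# made up for illustration, not data from the paper.
import math

def fit_gibson_ashby(rel_density, rel_modulus):
    """Return (C, n) from the log-log linear fit."""
    xs = [math.log(d) for d in rel_density]
    ys = [math.log(e) for e in rel_modulus]
    n_pts = len(xs)
    xbar = sum(xs) / n_pts
    ybar = sum(ys) / n_pts
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return math.exp(intercept), slope
```

An exponent n near 1 indicates stretch-dominated behavior and n near 2 bending-dominated behavior, which is one quick way to compare unit-cell topologies such as the cube, octahedron, and rhombicuboctahedron limits discussed above.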

  9. Residential Saudi load forecasting using analytical model and Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Al-Harbi, Ahmad Abdulaziz

    In recent years, load forecasting has become one of the main fields of study and research. Short Term Load Forecasting (STLF) is an important part of electrical power system operation and planning. This work investigates the applicability of two different approaches, Artificial Neural Networks (ANNs) and hybrid analytical models, to forecast residential load in the Kingdom of Saudi Arabia (KSA). Both techniques are based on modeling human behavior modes, which represent the impact of social, religious, and official occasions as well as environmental parameters. The analysis is carried out on residential areas in three regions of two countries exposed to distinct human activities and weather conditions. The collected data are for Al-Khubar and Yanbu industrial city in KSA, in addition to Seattle, USA, to show the validity of the proposed models applied to residential load. For each region, two models are proposed: the first forecasts next-hour load, while the second forecasts next-day load. Both models are analyzed using the two techniques. The ANN next-hour models yield very accurate results for all areas, while the hybrid analytical model achieves relatively reasonable results. For next-day load forecasting, the two approaches yield satisfactory results. Comparative studies were conducted to prove the effectiveness of the proposed models.
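As a rough stand-in for the next-hour forecasting models described above, the sketch below fits a linear autoregression on the k previous hourly loads by least squares; the lag order and the synthetic sinusoidal load profile are illustrative assumptions, not the paper's ANN or hybrid analytical model.

```python
# Minimal next-hour load forecaster: a linear autoregression on the
# k previous hourly loads, standing in for the study's ANN. The lag
# order and any data fed to it here are illustrative.
import numpy as np

def fit_ar(load, k):
    """Least-squares fit of load[t] ~ load[t-k..t-1] + bias."""
    # Row j holds load[j..j+k-1]; the target is load[j+k].
    X = np.column_stack([load[i:len(load) - k + i] for i in range(k)])
    X = np.column_stack([X, np.ones(len(X))])
    y = load[k:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict_next(load, coef):
    """One-step-ahead forecast from the last k observed loads."""
    k = len(coef) - 1
    return float(np.dot(load[-k:], coef[:k]) + coef[-1])
```

With a lag order spanning a full day (k = 24 for hourly data), such a model picks up the daily cycle directly from the lags, which is the same periodicity the paper encodes through its human-behavior modes.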

  10. Investigating Compaction by Intergranular Pressure Solution Using the Discrete Element Method

    NASA Astrophysics Data System (ADS)

    van den Ende, M. P. A.; Marketos, G.; Niemeijer, A. R.; Spiers, C. J.

    2018-01-01

    Intergranular pressure solution creep is an important deformation mechanism in the Earth's crust. The phenomenon has been frequently studied and several analytical models have been proposed that describe its constitutive behavior. These models require assumptions regarding the geometry of the aggregate and the grain size distribution in order to solve for the contact stresses and often neglect shear tractions. Furthermore, analytical models tend to overestimate experimental compaction rates at low porosities, an observation for which the underlying mechanisms remain to be elucidated. Here we present a conceptually simple, 3-D discrete element method (DEM) approach for simulating intergranular pressure solution creep that explicitly models individual grains, relaxing many of the assumptions that are required by analytical models. The DEM model is validated against experiments by direct comparison of macroscopic sample compaction rates. Furthermore, the sensitivity of the overall DEM compaction rate to the grain size and applied stress is tested. The effects of the interparticle friction and of a distributed grain size on macroscopic strain rates are subsequently investigated. Overall, we find that the DEM model is capable of reproducing realistic compaction behavior, and that the strain rates produced by the model are in good agreement with uniaxial compaction experiments. Characteristic features, such as the dependence of the strain rate on grain size and applied stress, as predicted by analytical models, are also observed in the simulations. DEM results show that interparticle friction and a distributed grain size affect the compaction rates by less than half an order of magnitude.

  11. Comparison and validation of point spread models for imaging in natural waters.

    PubMed

    Hou, Weilin; Gray, Deric J; Weidemann, Alan D; Arnone, Robert A

    2008-06-23

    It is known that scattering by particulates within natural waters is the main cause of the blur in underwater images. Underwater images can be better restored or enhanced with knowledge of the point spread function (PSF) of the water. This will extend the performance range as well as the information retrieval from underwater electro-optical systems, which is critical in many civilian and military applications, including target and especially mine detection, search and rescue, and diver visibility. A better understanding of the physical process involved also helps to predict system performance and simulate it accurately on demand. The present effort first reviews several PSF models and introduces a semi-analytical PSF expressed in terms of the optical properties of the medium: the scattering albedo, the mean scattering angle, and the optical range. The models under comparison include the empirical model of Duntley, a modified PSF model by Dolin et al., and the numerical integration of analytical forms from Wells as a benchmark of theoretical results. For experimental results, in addition to that of Duntley, we validate the above models against measured point spread functions by applying field-measured scattering properties in Monte Carlo simulations. Results from these comparisons suggest that the three parameters listed above are both sufficient and necessary to model PSFs. The simplified approach introduced also provides adequate accuracy and flexibility for imaging applications, as shown by examples of restored underwater images.
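Once a PSF model is validated, image restoration can proceed by standard frequency-domain deconvolution. The sketch below shows Wiener deconvolution with a known PSF; the noise-to-signal ratio `nsr` is an illustrative tuning constant, and the kernel used in testing is a simple box blur rather than an oceanic PSF.

```python
# Wiener deconvolution: restore an image blurred by a known PSF.
# W = conj(H) / (|H|^2 + nsr) regularizes the inverse filter so
# frequencies where H is weak are not amplified into noise.
# `nsr` is an illustrative tuning constant, not a value from the paper.
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    """Deconvolve a 2-D image given a PSF array of the same shape,
    with the kernel origin at index [0, 0] (circular convention)."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))
```

In practice `nsr` is chosen from the measured noise floor; a value too small re-amplifies noise, a value too large leaves residual blur, which is why an accurate PSF model matters more than aggressive inversion.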

  12. Analytical and numerical analysis of frictional damage in quasi brittle materials

    NASA Astrophysics Data System (ADS)

    Zhu, Q. Z.; Zhao, L. Y.; Shao, J. F.

    2016-07-01

    Frictional sliding and crack growth are two main dissipation processes in quasi brittle materials. The frictional sliding along closed cracks is the origin of macroscopic plastic deformation while the crack growth induces a material damage. The main difficulty of modeling is to consider the inherent coupling between these two processes. Various models and associated numerical algorithms have been proposed. But there are so far no analytical solutions even for simple loading paths for the validation of such algorithms. In this paper, we first present a micro-mechanical model taking into account the damage-friction coupling for a large class of quasi brittle materials. The model is formulated by combining a linear homogenization procedure with the Mori-Tanaka scheme and the irreversible thermodynamics framework. As an original contribution, a series of analytical solutions of stress-strain relations are developed for various loading paths. Based on the micro-mechanical model, two numerical integration algorithms are exploited. The first one involves a coupled friction/damage correction scheme, which is consistent with the coupling nature of the constitutive model. The second one contains a friction/damage decoupling scheme with two consecutive steps: the friction correction followed by the damage correction. With the analytical solutions as reference results, the two algorithms are assessed through a series of numerical tests. It is found that the decoupling correction scheme is efficient to guarantee a systematic numerical convergence.

  13. Method validation for chemical composition determination by electron microprobe with wavelength dispersive spectrometer

    NASA Astrophysics Data System (ADS)

    Herrera-Basurto, R.; Mercader-Trejo, F.; Muñoz-Madrigal, N.; Juárez-García, J. M.; Rodriguez-López, A.; Manzano-Ramírez, A.

    2016-07-01

    The main goal of method validation is to demonstrate that the method is suitable for its intended purpose. One advantage of analytical method validation is the level of confidence it provides in the measurement results reported to satisfy a specific objective. Elemental composition determination by wavelength dispersive spectrometer (WDS) microanalysis has been used across extremely wide areas, mainly in the field of materials science and in impurity determinations in geological, biological, and food samples. However, little information is reported about the validation of the applied methods. Herein, results of the in-house method validation for elemental composition determination by WDS are shown. SRM 482, a binary Cu-Au alloy of different compositions, was used during the validation protocol following the recommendations for method validation proposed by Eurachem. This paper can be taken as a reference for the evaluation of the validation parameters most frequently requested to obtain accreditation under the requirements of the ISO/IEC 17025 standard: selectivity, limit of detection, linear interval, sensitivity, precision, trueness and uncertainty. A model for uncertainty estimation was proposed including systematic and random errors. In addition, parameters evaluated during the validation process were also considered as part of the uncertainty model.

  14. Uncertainty estimates of purity measurements based on current information: toward a "live validation" of purity methods.

    PubMed

    Apostol, Izydor; Kelner, Drew; Jiang, Xinzhao Grace; Huang, Gang; Wypych, Jette; Zhang, Xin; Gastwirt, Jessica; Chen, Kenneth; Fodor, Szilan; Hapuarachchi, Suminda; Meriage, Dave; Ye, Frank; Poppe, Leszek; Szpankowski, Wojciech

    2012-12-01

    Our objective was to predict precision and other performance characteristics of chromatographic purity methods, which represent the most widely used form of analysis in the biopharmaceutical industry. We have conducted a comprehensive survey of purity methods, and show that all performance characteristics fall within narrow measurement ranges. This observation was used to develop a model called Uncertainty Based on Current Information (UBCI), which expresses these performance characteristics as a function of the signal and noise levels, hardware specifications, and software settings. We applied the UBCI model to assess the uncertainty of purity measurements, and compared the results to those from conventional qualification. We demonstrated that the UBCI model is suitable for dynamically assessing method performance characteristics based on information extracted from individual chromatograms. The model provides an opportunity for streamlining qualification and validation studies by implementing a "live validation" of test results, utilizing UBCI as a concurrent assessment of measurement uncertainty. Therefore, UBCI can potentially mitigate the challenges associated with laborious conventional method validation and facilitate the introduction of more advanced analytical technologies during the method lifecycle.

  15. Effect of primary and secondary parameters on analytical estimation of effective thermal conductivity of two phase materials using unit cell approach

    NASA Astrophysics Data System (ADS)

    S, Chidambara Raja; P, Karthikeyan; Kumaraswamidhas, L. A.; M, Ramu

    2018-05-01

    Most thermal design systems involve two-phase materials, and analysis of such systems requires a detailed understanding of the thermal characteristics of the two-phase material. This article aims to develop a geometry-dependent unit cell approach model that considers the effects of all primary parameters (conductivity ratio and concentration) and secondary parameters (geometry, contact resistance, natural convection, Knudsen and radiation) for the estimation of the effective thermal conductivity of two-phase materials. The analytical equations have been formulated based on an isotherm approach for 2-D and 3-D spatially periodic media. The developed models are validated against standard models and are suited to all kinds of operating conditions. The results show substantial improvement over the existing models and are in good agreement with the experimental data.
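For context, the classical bounds and mixing rules that unit-cell models are usually checked against have simple closed forms: the parallel (upper Wiener) bound, the series (lower Wiener) bound, and the Maxwell-Eucken relation for a dilute dispersed phase. The sketch below evaluates them; these are textbook formulas, not the model developed in the article.

```python
# Classical effective-conductivity estimates for a two-phase material:
# continuous phase k_cont, dispersed phase k_disp, dispersed-phase
# volume fraction phi. Any valid unit-cell prediction should fall
# between the series and parallel bounds.

def k_parallel(k_cont, k_disp, phi):
    """Upper (Wiener) bound: phases conduct in parallel."""
    return (1.0 - phi) * k_cont + phi * k_disp

def k_series(k_cont, k_disp, phi):
    """Lower (Wiener) bound: phases conduct in series."""
    return 1.0 / ((1.0 - phi) / k_cont + phi / k_disp)

def k_maxwell_eucken(k_cont, k_disp, phi):
    """Maxwell-Eucken relation for a dilute dispersed phase."""
    num = 2.0 * k_cont + k_disp - 2.0 * (k_cont - k_disp) * phi
    den = 2.0 * k_cont + k_disp + (k_cont - k_disp) * phi
    return k_cont * num / den
```

Checking a new unit-cell model against these limits (and against the Maxwell-Eucken prediction at low concentration) is a quick sanity test before comparing with experimental data.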

  16. Semi-analytical models of hydroelastic sloshing impact in tanks of liquefied natural gas vessels.

    PubMed

    Ten, I; Malenica, Š; Korobkin, A

    2011-07-28

    The present paper deals with methods for the evaluation of the hydroelastic interactions that appear during violent sloshing impacts inside the tanks of liquefied natural gas carriers. The complexity of both the fluid flow and the structural behaviour (containment system and ship structure) does not allow for a fully consistent direct approach according to the present state of the art. Several simplifications are thus necessary in order to isolate the most dominant physical aspects and to treat them properly. In this paper, semi-analytical modelling was chosen for the hydrodynamic part and finite-element modelling for the structural part. Depending on the impact type, different hydrodynamic models are proposed, and the basic principles of hydroelastic coupling are clearly described and validated with respect to the accuracy and convergence of the numerical results.

  17. Piloted Evaluation of a UH-60 Mixer Equivalent Turbulence Simulation Model

    NASA Technical Reports Server (NTRS)

    Lusardi, Jeff A.; Blanken, Chris L.; Tischeler, Mark B.

    2002-01-01

    A simulation study of a recently developed hover/low-speed Mixer Equivalent Turbulence Simulation (METS) model for the UH-60 Black Hawk helicopter was conducted in the NASA Ames Research Center Vertical Motion Simulator (VMS). The experiment was a continuation of previous work to develop a simple, but validated, turbulence model for hovering rotorcraft. To validate the METS model, two experienced test pilots replicated precision hover tasks that had been conducted in an instrumented UH-60 helicopter in turbulence. Objective simulation data were collected for comparison with flight test data, and subjective data were collected that included handling qualities ratings and pilot comments for increasing levels of turbulence. Analyses of the simulation results show good analytic agreement between the METS model and flight test data, with favorable pilot perception of the simulated turbulence. Precision hover tasks were also repeated using the more complex rotating-frame SORBET (Simulation Of Rotor Blade Element Turbulence) model to generate turbulence. Comparisons of the empirically derived METS model with the theoretical SORBET model show good agreement, providing validation of the more complex blade element method of simulating turbulence.

  18. Build-Up Approach to Updating the Mock Quiet Spike Beam Model

    NASA Technical Reports Server (NTRS)

    Herrera, Claudia Y.; Pak, Chan-gi

    2007-01-01

    When a new aircraft is designed or a modification is done to an existing aircraft, the aeroelastic properties of the aircraft should be examined to ensure the aircraft is flight worthy. Evaluating the aeroelastic properties of a new or modified aircraft can include performing a variety of analyses, such as modal and flutter analyses. In order to produce accurate results from these analyses, it is imperative to work with finite element models (FEM) that have been validated by or correlated to ground vibration test (GVT) data, Updating an analytical model using measured data is a challenge in the area of structural dynamics. The analytical model update process encompasses a series of optimizations that match analytical frequencies and mode shapes to the measured modal characteristics of structure. In the past, the method used to update a model to test data was "trial and error." This is an inefficient method - running a modal analysis, comparing the analytical results to the GVT data, manually modifying one or more structural parameters (mass, CG, inertia, area, etc.), rerunning the analysis, and comparing the new analytical modal characteristics to the GVT modal data. If the match is close enough (close enough defined by analyst's updating requirements), then the updating process is completed. If the match does not meet updating-requirements, then the parameters are changed again and the process is repeated. Clearly, this manual optimization process is highly inefficient for large FEM's and/or a large number of structural parameters. NASA Dryden Flight Research Center (DFRC) has developed, in-house, a Mode Matching Code that automates the above-mentioned optimization process, DFRC's in-house Mode Matching Code reads mode shapes and frequencies acquired from GVT to create the target model. It also reads the current analytical model, as we11 as the design variables and their upper and lower limits. 
It performs a modal analysis on this model and modifies it to create an updated model that has similar mode shapes and frequencies to those of the target model. The Mode Matching Code outputs frequencies and modal assurance criteria (MAC) values that allow for quantified comparison of the updated model versus the target model. A recent application of this code is the F-15B supersonic flight testing platform. NASA DFRC possesses a modified F-15B that is used as a test bed aircraft for supersonic flight experiments. Traditionally, the finite element model of the test article is generated. A GVT is done on the test article to validate and update its FEM. This FEM is then mated to the F-15B model, which was correlated to GVT data in fall of 2004. A GVT is conducted with the test article mated to the aircraft, and this mated F-15B/test article FEM is correlated to this final GVT.
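    The MAC comparison described in this record is a standard formula and can be sketched numerically. The following is a minimal illustration of the modal assurance criterion, not DFRC's Mode Matching Code; the mode shape vectors are invented:

```python
import numpy as np

def mac(phi_a, phi_e):
    """Modal assurance criterion between an analytical mode shape phi_a
    and an experimental (GVT) mode shape phi_e; 1.0 = fully correlated."""
    return np.abs(phi_a @ phi_e) ** 2 / ((phi_a @ phi_a) * (phi_e @ phi_e))

phi_a = np.array([1.0, 0.8, 0.3])        # hypothetical analytical shape
mac_same = mac(phi_a, 2.5 * phi_a)       # scaling is irrelevant -> 1.0
mac_diff = mac(phi_a, np.array([0.3, -0.9, 1.0]))
print(mac_same, mac_diff)
```

    In a model-update loop of the kind the record describes, an optimizer would adjust structural parameters until the MAC values between updated and target modes approach 1 and the frequency errors fall within the analyst's tolerance.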

  19. Branch and bound algorithm for accurate estimation of analytical isotropic bidirectional reflectance distribution function models.

    PubMed

    Yu, Chanki; Lee, Sang Wook

    2016-05-20

We present a reliable and accurate global optimization framework for estimating parameters of isotropic analytical bidirectional reflectance distribution function (BRDF) models. This approach is based on a branch and bound strategy with linear programming and interval analysis. Conventional local optimization is often very inefficient for BRDF estimation since its fitting quality is highly dependent on initial guesses due to the nonlinearity of analytical BRDF models. The algorithm presented in this paper employs L1-norm error minimization to estimate BRDF parameters in a globally optimal way and interval arithmetic to derive our feasibility problem and lower bounding function. Our method is developed for the Cook-Torrance model combined with several normal distribution functions, such as the Beckmann, Berry, and GGX functions. Experiments have been carried out to validate the presented method using 100 isotropic materials from the MERL BRDF database, and our experimental results demonstrate that the L1-norm minimization provides a more accurate and reliable solution than the L2-norm minimization.
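    The robustness of L1-norm fitting reported in this record can be illustrated on a toy one-parameter problem. This is not the paper's branch-and-bound algorithm; the data and model below are invented:

```python
import numpy as np

# Estimate a single scale parameter k of a model y = k * x from data
# containing one gross outlier, comparing L1- and L2-norm error
# minimization over a crude global grid search.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 * x                        # true k = 2
y[4] = 30.0                        # gross outlier

ks = np.linspace(0.5, 5.0, 901)    # grid over candidate k values
l1 = np.array([np.sum(np.abs(y - k * x)) for k in ks])
l2 = np.array([np.sum((y - k * x) ** 2) for k in ks])

k_l1 = ks[np.argmin(l1)]           # stays near the true k = 2
k_l2 = ks[np.argmin(l2)]           # dragged upward by the outlier
print(k_l1, k_l2)
```

    The L1 objective is far less sensitive to the single corrupted measurement, which is the qualitative behaviour the MERL experiments in the record report.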

  20. Analytical modeling of the temporal evolution of hot spot temperatures in silicon solar cells

    NASA Astrophysics Data System (ADS)

    Wasmer, Sven; Rajsrima, Narong; Geisemeyer, Ino; Fertig, Fabian; Greulich, Johannes Michael; Rein, Stefan

    2018-03-01

We present an approach to predict the equilibrium temperature of hot spots in crystalline silicon solar cells based on the analysis of their temporal evolution right after turning on a reverse bias. To this end, we derive an analytical expression for the time-dependent heat diffusion of a breakdown channel that is assumed to be cylindrical. We validate this by means of thermography imaging of hot spots right after turning on a reverse bias. The expression can be used to extract hot spot powers and radii from short-term measurements, targeting application in inline solar cell characterization. The extracted hot spot powers are validated against long-term dark lock-in thermography imaging. Using a look-up table of expected equilibrium temperatures determined by numerical and analytical simulations, we utilize the determined hot spot properties to predict the equilibrium temperatures of about 100 industrial aluminum back-surface field solar cells and achieve a high correlation coefficient of 0.86 and a mean absolute error of only 3.3 K.
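    The paper's cylindrical-channel expression is not reproduced in this abstract. As a hedged stand-in, the textbook continuous point-source solution for a semi-infinite solid (Carslaw and Jaeger) shows the same qualitative behaviour: a temperature rise that relaxes toward an equilibrium value set by the hot spot power. The material values below are rough figures for silicon, not the paper's parameters:

```python
import math

# Hedged illustration only: classic point-source heating of a semi-infinite
# solid, NOT the cylindrical-channel expression derived in the paper.
def delta_T(r, t, P=0.5, k=148.0, alpha=8.8e-5):
    """Temperature rise [K] at distance r [m], time t [s] after switch-on,
    for source power P [W], conductivity k [W/m/K], diffusivity alpha [m^2/s]."""
    return P / (2 * math.pi * k * r) * math.erfc(r / (2 * math.sqrt(alpha * t)))

r = 1e-3                                  # 1 mm from the hot spot
T_eq = 0.5 / (2 * math.pi * 148.0 * r)    # equilibrium temperature rise
for t in (0.01, 0.1, 1.0):
    print(t, round(delta_T(r, t) / T_eq, 3))  # fraction of equilibrium reached
```

    Fitting such a transient to short-term thermography frames is, in spirit, how a hot spot power and equilibrium temperature can be inferred without waiting for thermal steady state.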

  1. A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.

    PubMed

    Yang, Harry; Zhang, Jianchun

    2015-01-01

The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics such as accuracy and precision are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of the methods are not designed to protect against the risk of accepting unsuitable methods, thus having the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method. 
It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current methods. Analytical methods are often used to ensure safety, efficacy, and quality of medicinal products. According to government regulations and regulatory guidelines, these methods need to be validated through well-designed studies to minimize the risk of accepting unsuitable methods. This article describes a novel statistical test for analytical method validation, which provides better protection for the risk of accepting unsuitable analytical methods. © PDA, Inc. 2015.
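    A β-content tolerance-interval acceptance check of the kind compared in this record can be sketched as follows. This uses Howe's approximation for the two-sided tolerance factor; the acceptance limit, data, and seed are invented, and this is not the authors' code:

```python
import numpy as np
from scipy import stats

def beta_content_check(results, nominal, beta=0.90, conf=0.90, limit=15.0):
    """Accept the method if a beta-content tolerance interval for the
    relative error (%) lies entirely within +/- limit (total-error bound)."""
    err = 100.0 * (np.asarray(results) - nominal) / nominal
    n = err.size
    nu = n - 1
    z = stats.norm.ppf((1 + beta) / 2)
    chi2 = stats.chi2.ppf(1 - conf, nu)            # lower-tail quantile
    k = z * np.sqrt(nu * (1 + 1 / n) / chi2)       # Howe's tolerance factor
    s = err.std(ddof=1)
    lo, hi = err.mean() - k * s, err.mean() + k * s
    return (lo, hi), (-limit < lo and hi < limit)

rng = np.random.default_rng(0)
data = 100.0 + rng.normal(1.0, 2.0, size=12)       # small bias, low noise
(lo, hi), accepted = beta_content_check(data, 100.0)
print((round(lo, 2), round(hi, 2)), accepted)
```

    The generalized pivotal quantity test proposed in the paper replaces this closed-form tolerance factor with simulation-based pivotal inference, but the accept/reject structure against a total-error limit is the same.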

  2. Design and Optimization of AlN based RF MEMS Switches

    NASA Astrophysics Data System (ADS)

    Hasan Ziko, Mehadi; Koel, Ants

    2018-05-01

Radio frequency microelectromechanical system (RF MEMS) switch technology might have the potential to replace semiconductor technology in future communication systems, including communication satellites and wireless and mobile phones. This study explores the possibilities of RF MEMS switch design and optimization with aluminium nitride (AlN) thin film as the piezoelectric actuation material. Achieving low actuation voltage and high contact force with optimal geometry using the principle of the piezoelectric effect is the main motivation for this research. Analytical and numerical modelling of a single-beam RF MEMS switch is used to analyse the design parameters and optimize them for minimum actuation voltage and high contact force. An analytical model using isotropic AlN material properties is used to obtain the optimal parameters. The optimized device length, width, and thickness for the single-beam RF MEMS switch are 2000 µm, 500 µm, and 0.6 µm, respectively. With this optimal geometry, an actuation voltage of less than 2 V and a contact force of 100 µN are obtained by analytical analysis. Additionally, the single-beam RF MEMS switch is optimized and validated by comparing the analytical and finite element modelling (FEM) analysis.

  3. Joint Analysis of X-Ray and Sunyaev-Zel'Dovich Observations of Galaxy Clusters Using an Analytic Model of the Intracluster Medium

    NASA Technical Reports Server (NTRS)

    Hasler, Nicole; Bulbul, Esra; Bonamente, Massimiliano; Carlstrom, John E.; Culverhouse, Thomas L.; Gralla, Megan; Greer, Christopher; Lamb, James W.; Hawkins, David; Hennessy, Ryan; hide

    2012-01-01

    We perform a joint analysis of X-ray and Sunyaev-Zel'dovich effect data using an analytic model that describes the gas properties of galaxy clusters. The joint analysis allows the measurement of the cluster gas mass fraction profile and Hubble constant independent of cosmological parameters. Weak cosmological priors are used to calculate the overdensity radius within which the gas mass fractions are reported. Such an analysis can provide direct constraints on the evolution of the cluster gas mass fraction with redshift. We validate the model and the joint analysis on high signal-to-noise data from the Chandra X-ray Observatory and the Sunyaev-Zel'dovich Array for two clusters, A2631 and A2204.

  4. Postbuckling and Growth of Delaminations in Composite Plates Subjected to Axial Compression

    NASA Technical Reports Server (NTRS)

    Reeder, James R.; Chunchu, Prasad B.; Song, Kyongchan; Ambur, Damodar R.

    2002-01-01

    The postbuckling response and growth of circular delaminations in flat and curved plates are investigated as part of a study to identify the criticality of delamination locations through the laminate thickness. The experimental results from tests on delaminated plates are compared with finite element analysis results generated using shell models. The analytical prediction of delamination growth is obtained by assessing the strain energy release rate results from the finite element model and comparing them to a mixed-mode fracture toughness failure criterion. The analytical results for onset of delamination growth compare well with experimental results generated using a 3-dimensional displacement visualization system. The record of delamination progression measured in this study has resulted in a fully 3-dimensional test case with which progressive failure models can be validated.
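    The mixed-mode fracture toughness criterion mentioned in this record can take several forms; one common choice in delamination analysis is the Benzeggagh-Kenane (B-K) law, sketched here with invented material values, since the record does not specify its criterion or parameters:

```python
# Hedged sketch of a mixed-mode delamination growth check (B-K law).
# g1c, g2c: mode I / mode II fracture toughnesses [kJ/m^2]; eta: B-K
# exponent. All numeric values below are invented for illustration.
def bk_toughness(g2_over_g, g1c=0.2, g2c=0.8, eta=2.0):
    """Mixed-mode critical energy release rate Gc as a function of the
    mode-mixity ratio GII/G (0 = pure mode I, 1 = pure mode II)."""
    return g1c + (g2c - g1c) * g2_over_g ** eta

def growth_onset(g_total, g2_over_g):
    """Delamination growth is predicted when the computed strain energy
    release rate G reaches or exceeds the mixed-mode toughness Gc."""
    return g_total >= bk_toughness(g2_over_g)

# Same G = 0.3 kJ/m^2: critical in near-mode-I loading, safe in mode II.
print(growth_onset(0.3, 0.0), growth_onset(0.3, 1.0))
```

    In the workflow the record describes, the strain energy release rate distribution around the delamination front comes from the finite element model, and a check of this form predicts the onset of growth.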

  5. Analytical model of surface potential profiles and transfer characteristics for hetero stacked tunnel field-effect transistors

    NASA Astrophysics Data System (ADS)

    Xu, Hui Fang; Sun, Wen; Han, Xin Feng

    2018-06-01

    An analytical model of surface potential profiles and transfer characteristics for hetero stacked tunnel field-effect transistors (HS-TFETs) is presented for the first time, where hetero stacked materials are composed of two different bandgaps. The bandgap of the underlying layer is smaller than that of the upper layer. Under different device parameters (upper layer thickness, underlying layer thickness, and hetero stacked materials) and temperature, the validity of the model is demonstrated by the agreement of its results with the simulation results. Moreover, the results show that the HS-TFETs can obtain predominant performance with relatively slow changes of subthreshold swing (SS) over a wide drain current range, steep average subthreshold swing, high on-state current, and large on–off state current ratio.

  6. A semi-analytical bearing model considering outer race flexibility for model based bearing load monitoring

    NASA Astrophysics Data System (ADS)

    Kerst, Stijn; Shyrokau, Barys; Holweg, Edward

    2018-05-01

This paper proposes a novel semi-analytical bearing model addressing flexibility of the bearing outer race structure. It furthermore presents the application of this model in a bearing load condition monitoring approach. The bearing model is developed because current computationally low-cost bearing models, owing to their assumptions of rigidity, fail to provide an accurate description of the increasingly common flexible, size- and weight-optimized bearing designs. In the proposed bearing model, raceway flexibility is described by the use of static deformation shapes. The excitation of the deformation shapes is calculated based on the modelled rolling element loads and a Fourier series based compliance approximation. The resulting model is computationally low cost and provides an accurate description of the rolling element loads for flexible outer raceway structures. The latter is validated by a simulation-based comparison study with a well-established bearing simulation software tool. An experimental study finally shows the potential of the proposed model in a bearing load monitoring approach.

  7. Prognostic models for complete recovery in ischemic stroke: a systematic review and meta-analysis.

    PubMed

    Jampathong, Nampet; Laopaiboon, Malinee; Rattanakanokchai, Siwanon; Pattanittum, Porjai

    2018-03-09

    Prognostic models have been increasingly developed to predict complete recovery in ischemic stroke. However, questions arise about the performance characteristics of these models. The aim of this study was to systematically review and synthesize performance of existing prognostic models for complete recovery in ischemic stroke. We searched journal publications indexed in PUBMED, SCOPUS, CENTRAL, ISI Web of Science and OVID MEDLINE from inception until 4 December, 2017, for studies designed to develop and/or validate prognostic models for predicting complete recovery in ischemic stroke patients. Two reviewers independently examined titles and abstracts, and assessed whether each study met the pre-defined inclusion criteria and also independently extracted information about model development and performance. We evaluated validation of the models by medians of the area under the receiver operating characteristic curve (AUC) or c-statistic and calibration performance. We used a random-effects meta-analysis to pool AUC values. We included 10 studies with 23 models developed from elderly patients with a moderately severe ischemic stroke, mainly in three high income countries. Sample sizes for each study ranged from 75 to 4441. Logistic regression was the only analytical strategy used to develop the models. The number of various predictors varied from one to 11. Internal validation was performed in 12 models with a median AUC of 0.80 (95% CI 0.73 to 0.84). One model reported good calibration. Nine models reported external validation with a median AUC of 0.80 (95% CI 0.76 to 0.82). Four models showed good discrimination and calibration on external validation. The pooled AUC of the two validation models of the same developed model was 0.78 (95% CI 0.71 to 0.85). The performance of the 23 models found in the systematic review varied from fair to good in terms of internal and external validation. 
Further models should be developed with internal and external validation in low and middle income countries.
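    The random-effects pooling used in this review (for the two external validations of the same developed model) can be sketched with the DerSimonian-Laird estimator. The AUC values and standard errors below are invented, not those of the review:

```python
import numpy as np

# DerSimonian-Laird random-effects pooling of two AUC estimates.
auc = np.array([0.72, 0.85])
se = np.array([0.03, 0.04])                     # invented standard errors

w = 1 / se**2                                   # fixed-effect weights
mu_fe = np.sum(w * auc) / np.sum(w)
q = np.sum(w * (auc - mu_fe) ** 2)              # Cochran's Q
df = len(auc) - 1
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)                   # between-study variance
w_re = 1 / (se**2 + tau2)                       # random-effects weights
mu_re = np.sum(w_re * auc) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
lo, hi = mu_re - 1.96 * se_re, mu_re + 1.96 * se_re
print(round(mu_re, 3), (round(lo, 3), round(hi, 3)))
```

    With heterogeneous AUCs, tau2 is positive and the pooled confidence interval widens relative to a fixed-effect analysis, which is why the review's pooled interval (0.71 to 0.85) is wider than either individual validation's.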

  8. Modeling of parasitic current collection by solar arrays in low-earth orbit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, V.A.; Gardner, B.M.; Guidice, D.A.

    1996-11-01

In this paper we describe the development of a model of the electron current collected by solar arrays from the ionospheric plasma. This model will assist spacecraft designers in minimizing the impact of plasma interactions on spacecraft operations as they move to higher-voltage solar arrays. The model was developed by first examining in detail the physical processes of importance and then finding an analytic fit to the results over the parameter range of interest. The analytic model is validated by comparison with flight data from the Photovoltaic Array for Space Power Plus diagnostics (PASP Plus) flight experiment [D. A. Guidice, 34th Aerospace Sciences Meeting and Exhibit, Reno, NV, 1996, AIAA 96-0926 (American Institute of Aeronautics and Astronautics, Washington, DC, 1996)]. © 1996 American Institute of Physics.

  9. Development of a System Model for Non-Invasive Quantification of Bilirubin in Jaundice Patients

    NASA Astrophysics Data System (ADS)

    Alla, Suresh K.

Neonatal jaundice is a medical condition which occurs in newborns as a result of an imbalance between the production and elimination of bilirubin. Excess bilirubin in the blood stream diffuses into the surrounding tissue, leading to a yellowing of the skin. An optical system integrated with a signal processing system is used as a platform to noninvasively quantify bilirubin concentration through the measurement of diffuse skin reflectance. Initial studies have led to the generation of a clinical analytical model for neonatal jaundice which generates spectral reflectance data for jaundiced skin with varying levels of bilirubin concentration in the tissue. The spectral database built using the clinical analytical model is then used as a test database to validate the signal processing system in real time. This evaluation forms the basis for understanding the translation of this research to human trials. The clinical analytical model and signal processing system have been successfully validated on three spectral databases. The first spectral database is constructed using a porcine model as a surrogate for neonatal skin tissue. Samples of pig skin were soaked in bilirubin solutions of varying concentrations to simulate jaundiced skin conditions. The resulting skin samples were analyzed with our skin reflectance systems, producing bilirubin concentration values that show a high correlation (R2 = 0.94) to the concentration of the bilirubin solution in which each porcine tissue sample was soaked. The second spectral database is the spectral measurements collected on human volunteers to quantify the different chromophores and other physical properties of the tissue, such as hematocrit and hemoglobin. The third spectral database is the spectral data collected at different time periods from the moment a bruise is induced.

  10. Mixing of two co-directional Rayleigh surface waves in a nonlinear elastic material.

    PubMed

    Morlock, Merlin B; Kim, Jin-Yeon; Jacobs, Laurence J; Qu, Jianmin

    2015-01-01

    The mixing of two co-directional, initially monochromatic Rayleigh surface waves in an isotropic, homogeneous, and nonlinear elastic solid is investigated using analytical, finite element method, and experimental approaches. The analytical investigations show that while the horizontal velocity component can form a shock wave, the vertical velocity component can form a pulse independent of the specific ratios of the fundamental frequencies and amplitudes that are mixed. This analytical model is then used to simulate the development of the fundamentals, second harmonics, and the sum and difference frequency components over the propagation distance. The analytical model is further extended to include diffraction effects in the parabolic approximation. Finally, the frequency and amplitude ratios of the fundamentals are identified which provide maximum amplitudes of the second harmonics as well as of the sum and difference frequency components, to help guide effective material characterization; this approach should make it possible to measure the acoustic nonlinearity of a solid not only with the second harmonics, but also with the sum and difference frequency components. Results of the analytical investigations are then confirmed using the finite element method and the experimental feasibility of the proposed technique is validated for an aluminum specimen.
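    The generation of second harmonics and sum/difference frequency components described in this record can be reproduced with a minimal numerical experiment using a generic quadratic nonlinearity, not the paper's Rayleigh-wave model; the frequencies and nonlinearity coefficient are invented:

```python
import numpy as np

# Two mixed tones passed through a weak quadratic nonlinearity generate
# second harmonics (2*f1, 2*f2) and sum/difference components (f1 +/- f2),
# which an FFT of the distorted signal makes visible.
fs = 10000.0
t = np.arange(0, 1.0, 1 / fs)          # 1 s window -> 1 Hz bins, no leakage
f1, f2 = 200.0, 300.0
u = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
y = u + 0.05 * u**2                    # weak quadratic nonlinearity

spec = np.abs(np.fft.rfft(y)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
for f in (f1, f2, 2 * f1, 2 * f2, f2 - f1, f1 + f2):
    print(int(f), round(spec[np.argmin(np.abs(freqs - f))], 4))
```

    The amplitudes of the sum and difference components scale with the nonlinearity coefficient, which is the basis of the material characterization approach the record proposes.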

  11. Negations in syllogistic reasoning: evidence for a heuristic-analytic conflict.

    PubMed

    Stupple, Edward J N; Waterhouse, Eleanor F

    2009-08-01

    An experiment utilizing response time measures was conducted to test dominant processing strategies in syllogistic reasoning with the expanded quantifier set proposed by Roberts (2005). Through adding negations to existing quantifiers it is possible to change problem surface features without altering logical validity. Biases based on surface features such as atmosphere, matching, and the probability heuristics model (PHM; Chater & Oaksford, 1999; Wetherick & Gilhooly, 1995) would not be expected to show variance in response latencies, but participant responses should be highly sensitive to changes in the surface features of the quantifiers. In contrast, according to analytic accounts such as mental models theory and mental logic (e.g., Johnson-Laird & Byrne, 1991; Rips, 1994) participants should exhibit increased response times for negated premises, but not be overly impacted upon by the surface features of the conclusion. Data indicated that the dominant response strategy was based on a matching heuristic, but also provided evidence of a resource-demanding analytic procedure for dealing with double negatives. The authors propose that dual-process theories offer a stronger account of these data whereby participants employ competing heuristic and analytic strategies and fall back on a heuristic response when analytic processing fails.

  12. Analytical Verifications in Cryogenic Testing of NGST Advanced Mirror System Demonstrators

    NASA Technical Reports Server (NTRS)

    Cummings, Ramona; Levine, Marie; VanBuren, Dave; Kegley, Jeff; Green, Joseph; Hadaway, James; Presson, Joan; Cline, Todd; Stahl, H. Philip (Technical Monitor)

    2002-01-01

Ground based testing is a critical and costly part of component, assembly, and system verifications of large space telescopes. At such tests, however, with integral teamwork by planners, analysts, and test personnel, segments can be included to validate specific analytical parameters and algorithms at relatively low additional cost. This paper opens with the strategy of analytical verification segments added to vacuum cryogenic testing of Advanced Mirror System Demonstrator (AMSD) assemblies. These AMSD assemblies incorporate material and architecture concepts being considered in the Next Generation Space Telescope (NGST) design. The test segments for workmanship testing, cold survivability, and cold operation optical throughput are supplemented by segments for analytical verifications of specific structural, thermal, and optical parameters. Utilizing integrated modeling and separate materials testing, the paper continues with the support plan for analyses, data, and observation requirements during the AMSD testing, currently slated for late calendar year 2002 to mid calendar year 2003. The paper includes anomaly resolution as gleaned by the authors from similar analytical verification support of a previous large space telescope, then closes with a draft of plans for parameter extrapolations, to form a well-verified portion of the integrated modeling being done for NGST performance predictions.

  13. Dynamic Impact Tolerance of Shuttle RCC Leading Edge Panels Using LS-DYNA

    NASA Technical Reports Server (NTRS)

    Fasanella, Edwin L.; Jackson, Karen E.; Lyle, Karen H.; Jones, Lisa E.; Hardy, Robin C.; Spellman, Regina L.; Carney, Kelly S.; Melis, Matthew E.; Stockwell, Alan E.

    2005-01-01

    This paper describes a research program conducted to enable accurate prediction of the impact tolerance of the shuttle Orbiter leading-edge wing panels using physics-based codes such as LS-DYNA, a nonlinear, explicit transient dynamic finite element code. The shuttle leading-edge panels are constructed of Reinforced-Carbon-Carbon (RCC) composite material, which is used because of its thermal properties to protect the shuttle during reentry into the Earth's atmosphere. Accurate predictions of impact damage from insulating foam and other debris strikes that occur during launch required materials characterization of expected debris, including strain-rate effects. First, analytical models of individual foam and RCC materials were validated. Next, analytical models of foam cylinders impacting 6- in. x 6-in. RCC flat plates were developed and validated. LS-DYNA pre-test models of the RCC flat plate specimens established the impact velocity of the test for three damage levels: no-detectable damage, non-destructive evaluation (NDE) detectable damage, or visible damage such as a through crack or hole. Finally, the threshold of impact damage for RCC on representative Orbiter wing panels was predicted for both a small through crack and for NDE-detectable damage.

  14. Dynamic Impact Tolerance of Shuttle RCC Leading Edge Panels using LS-DYNA

    NASA Technical Reports Server (NTRS)

    Fasanella, Edwin; Jackson, Karen E.; Lyle, Karen H.; Jones, Lisa E.; Hardy, Robin C.; Spellman, Regina L.; Carney, Kelly S.; Melis, Matthew E.; Stockwell, Alan E.

    2008-01-01

This paper describes a research program conducted to enable accurate prediction of the impact tolerance of the shuttle Orbiter leading-edge wing panels using physics-based codes such as LS-DYNA, a nonlinear, explicit transient dynamic finite element code. The shuttle leading-edge panels are constructed of Reinforced-Carbon-Carbon (RCC) composite material, which is used because of its thermal properties to protect the shuttle during re-entry into the Earth's atmosphere. Accurate predictions of impact damage from insulating foam and other debris strikes that occur during launch required materials characterization of expected debris, including strain-rate effects. First, analytical models of individual foam and RCC materials were validated. Next, analytical models of individual foam cylinders impacting 6-in. x 6-in. RCC flat plates were developed and validated. LS-DYNA pre-test models of the RCC flat plate specimens established the impact velocity of the test for three damage levels: no-detectable damage, non-destructive evaluation (NDE) detectable damage, or visible damage such as a through crack or hole. Finally, the threshold of impact damage for RCC on representative Orbiter wing panels was predicted for both a small through crack and for NDE-detectable damage.

  15. SCS-CN based time-distributed sediment yield model

    NASA Astrophysics Data System (ADS)

    Tyagi, J. V.; Mishra, S. K.; Singh, Ranvir; Singh, V. P.

    2008-05-01

A sediment yield model is developed to estimate the temporal rates of sediment yield from rainfall events on natural watersheds. The model utilizes the SCS-CN based infiltration model for computation of rainfall-excess rate, and the SCS-CN-inspired proportionality concept for computation of sediment-excess. For computation of sedimentographs, the sediment-excess is routed to the watershed outlet using a single linear reservoir technique. Analytical development of the model shows that the ratio of the potential maximum erosion (A) to the potential maximum retention (S) of the SCS-CN method is constant for a watershed. The model is calibrated and validated on a number of events using the data of seven watersheds from India and the USA. Representative values of the A/S ratio computed for the watersheds from calibration are used for the validation of the model. The encouraging results of the proposed simple four-parameter model exhibit its potential in field application.
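    The SCS-CN rainfall-excess computation that this model builds on follows the standard curve-number relation. A minimal sketch of that relation only; the paper's sediment-excess proportionality and linear-reservoir routing steps are not included:

```python
# Standard SCS-CN direct-runoff relation:
#   Q = (P - Ia)^2 / (P - Ia + S)  for P > Ia, with Ia = 0.2 * S,
# where S is the potential maximum retention derived from the curve number.
def scs_cn_runoff(P_mm, CN):
    """Direct runoff Q [mm] for storm rainfall P [mm] and curve number CN."""
    S = 25400.0 / CN - 254.0      # potential maximum retention [mm]
    Ia = 0.2 * S                  # initial abstraction
    if P_mm <= Ia:
        return 0.0
    return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

print(round(scs_cn_runoff(100.0, 75), 1))  # runoff for a 100 mm storm, CN 75
```

    In the paper's formulation, an analogous proportionality with the A/S ratio converts this rainfall-excess into sediment-excess before routing.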

  16. STANDARDIZATION AND VALIDATION OF MICROBIOLOGICAL METHODS FOR EXAMINATION OF BIOSOLIDS

    EPA Science Inventory

    The objective of this presentation is to discuss pathogens of concern in biosolids, the analytical techniques used to evaluate microorganisms in biosolids, and to discuss standardization and validation of analytical protocols for microbes within a complex matrix. Implications of ...

  17. NCI-FDA Interagency Oncology Task Force Workshop Provides Guidance for Analytical Validation of Protein-based Multiplex Assays | Office of Cancer Clinical Proteomics Research

    Cancer.gov

    An NCI-FDA Interagency Oncology Task Force (IOTF) Molecular Diagnostics Workshop was held on October 30, 2008 in Cambridge, MA, to discuss requirements for analytical validation of protein-based multiplex technologies in the context of its intended use. This workshop developed through NCI's Clinical Proteomic Technologies for Cancer initiative and the FDA focused on technology-specific analytical validation processes to be addressed prior to use in clinical settings. In making this workshop unique, a case study approach was used to discuss issues related to

  18. Validation of GC and HPLC systems for residue studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, M.

    1995-12-01

For residue studies, GC and HPLC system performance must be validated prior to and during use. One excellent measure of system performance is the standard curve and associated chromatograms used to construct that curve. The standard curve is a model of system response to an analyte over a specific time period, and is prima facie evidence of system performance beginning at the autosampler and proceeding through the injector, column, detector, electronics, data-capture device, and printer/plotter. This tool measures the performance of the entire chromatographic system; its power negates most of the benefits associated with costly and time-consuming validation of individual system components. Other measures of instrument and method validation will be discussed, including quality control charts and experimental designs for method validation.
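    The standard-curve check described in this record amounts to fitting a response model to calibration standards and examining goodness of fit and back-calculated concentrations. A minimal sketch with invented calibration data:

```python
import numpy as np

# Fit a linear standard curve (detector response vs. concentration) and
# compute r^2 plus back-calculated concentration bias as simple system checks.
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])           # ng/mL (made up)
area = np.array([52.0, 101.0, 205.0, 498.0, 1003.0])  # peak areas (made up)

slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
ss_res = np.sum((area - pred) ** 2)
ss_tot = np.sum((area - area.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

back = (area - intercept) / slope        # back-calculated concentrations
bias_pct = 100 * (back - conc) / conc    # per-standard relative error
print(round(r2, 5), np.round(bias_pct, 2))
```

    A poor r^2 or a drifting back-calculated bias at any level flags a problem somewhere in the injector-to-integrator chain, which is the diagnostic power the record attributes to the standard curve.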

  19. Numerically calibrated model for propagation of a relativistic unmagnetized jet in dense media

    NASA Astrophysics Data System (ADS)

    Harrison, Richard; Gottlieb, Ore; Nakar, Ehud

    2018-06-01

Relativistic jets reside in high-energy astrophysical systems of all scales. Their interaction with the surrounding media is critical as it determines the jet evolution, observable signature, and feedback on the environment. During its motion, the interaction of the jet with the ambient media inflates a highly pressurized cocoon, which under certain conditions collimates the jet and strongly affects its propagation. Recently, Bromberg et al. derived a general simplified (semi-)analytic solution for the evolution of the jet and the cocoon in the case of an unmagnetized jet that propagates in a medium with a range of density profiles. In this work we use a large suite of 2D and 3D relativistic hydrodynamic simulations in order to test the validity and accuracy of this model. We discuss the similarities and differences between the analytic model and numerical simulations and also, to some extent, between 2D and 3D simulations. Our main finding is that although the analytic model is highly simplified, it properly predicts the evolution of the main ingredients of the jet-cocoon system, including its temporal evolution and the transition between various regimes (e.g. collimated to uncollimated). The analytic solution predicts a jet head velocity that is faster by a factor of about 3 compared to the simulations, as long as the head velocity is Newtonian. We use the results of the simulations to calibrate the analytic model which significantly increases its accuracy. We provide an applet that calculates semi-analytically the propagation of a jet in an arbitrary density profile defined by the user at http://www.astro.tau.ac.il/˜ore/propagation.html.

  20. Analytical Qualification of Aircraft Structures: Meeting of the Structures and Materials Panel of AGARD (70th) Held in Sorrento, Italy on 1-6 April 1990 (La Qualification Analytique des Structures d’Avion).

    DTIC Science & Technology

    1991-04-01

sources of preliminary evaluation are the software theory and validation documents. Examination of the theoretical basis and numerical algorithms, together...knowledge reflected in production models. However, it is a horror for him to see that production models are composed by stress people who insufficiently

  1. The analytical validation of the Oncotype DX Recurrence Score assay

    PubMed Central

    Baehner, Frederick L

    2016-01-01

    In vitro diagnostic multivariate index assays are highly complex molecular assays that can provide clinically actionable information regarding the underlying tumour biology and facilitate personalised treatment. These assays are only useful in clinical practice if all of the following are established: analytical validation (i.e., how accurately/reliably the assay measures the molecular characteristics), clinical validation (i.e., how consistently/accurately the test detects/predicts the outcomes of interest), and clinical utility (i.e., how likely the test is to significantly improve patient outcomes). In considering the use of these assays, clinicians often focus primarily on the clinical validity/utility; however, the analytical validity of an assay (e.g., its accuracy, reproducibility, and standardisation) should also be evaluated and carefully considered. This review focuses on the rigorous analytical validation and performance of the Oncotype DX® Breast Cancer Assay, which is performed at the Central Clinical Reference Laboratory of Genomic Health, Inc. The assay process includes tumour tissue enrichment (if needed), RNA extraction, gene expression quantitation (using a gene panel consisting of 16 cancer genes plus 5 reference genes and quantitative real-time RT-PCR), and an automated computer algorithm to produce a Recurrence Score® result (scale: 0–100). This review presents evidence showing that the Recurrence Score result reported for each patient falls within a tight clinically relevant confidence interval. Specifically, the review discusses how the development of the assay was designed to optimise assay performance, presents data supporting its analytical validity, and describes the quality control and assurance programmes that ensure optimal test performance over time. PMID:27729940

  2. The analytical validation of the Oncotype DX Recurrence Score assay.

    PubMed

    Baehner, Frederick L

    2016-01-01

    In vitro diagnostic multivariate index assays are highly complex molecular assays that can provide clinically actionable information regarding the underlying tumour biology and facilitate personalised treatment. These assays are only useful in clinical practice if all of the following are established: analytical validation (i.e., how accurately/reliably the assay measures the molecular characteristics), clinical validation (i.e., how consistently/accurately the test detects/predicts the outcomes of interest), and clinical utility (i.e., how likely the test is to significantly improve patient outcomes). In considering the use of these assays, clinicians often focus primarily on the clinical validity/utility; however, the analytical validity of an assay (e.g., its accuracy, reproducibility, and standardisation) should also be evaluated and carefully considered. This review focuses on the rigorous analytical validation and performance of the Oncotype DX® Breast Cancer Assay, which is performed at the Central Clinical Reference Laboratory of Genomic Health, Inc. The assay process includes tumour tissue enrichment (if needed), RNA extraction, gene expression quantitation (using a gene panel consisting of 16 cancer genes plus 5 reference genes and quantitative real-time RT-PCR), and an automated computer algorithm to produce a Recurrence Score® result (scale: 0–100). This review presents evidence showing that the Recurrence Score result reported for each patient falls within a tight clinically relevant confidence interval. Specifically, the review discusses how the development of the assay was designed to optimise assay performance, presents data supporting its analytical validity, and describes the quality control and assurance programmes that ensure optimal test performance over time.

  3. MAIN software for density averaging, model building, structure refinement and validation

    PubMed Central

    Turk, Dušan

    2013-01-01

    MAIN is software that has been designed to interactively perform the complex tasks of macromolecular crystal structure determination and validation. Using MAIN, it is possible to perform density modification, manual and semi-automated or automated model building and rebuilding, real- and reciprocal-space structure optimization and refinement, map calculations and various types of molecular structure validation. The prompt availability of various analytical tools and the immediate visualization of molecular and map objects allow a user to efficiently progress towards the completed refined structure. The extraordinary depth perception of molecular objects in three dimensions that is provided by MAIN is achieved by the clarity and contrast of colours and the smooth rotation of the displayed objects. MAIN allows simultaneous work on several molecular models and various crystal forms. The strength of MAIN lies in its manipulation of averaged density maps and molecular models when noncrystallographic symmetry (NCS) is present. Using MAIN, it is possible to optimize NCS parameters and envelopes and to refine the structure in single or multiple crystal forms. PMID:23897458

  4. On-orbit evaluation of the control system/structural mode interactions on OSO-8

    NASA Technical Reports Server (NTRS)

    Slafer, L. I.

    1980-01-01

    The Orbiting Solar Observatory-8 experienced severe structural mode/control loop interaction problems during the spacecraft development. Extensive analytical studies, using the hybrid coordinate modeling approach, and comprehensive ground testing were carried out in order to achieve the system's precision pointing performance requirements. A recent series of flight tests were conducted with the spacecraft in which a wide bandwidth, high resolution telemetry system was utilized to evaluate the on-orbit flexible dynamics characteristics of the vehicle along with the control system performance. This paper describes the results of these tests, reviewing the basic design problem, analytical approach taken, ground test philosophy, and on-orbit testing. Data from the tests was used to determine the primary mode frequency, damping, and servo coupling dynamics for the on-orbit condition. Additionally, the test results have verified analytically predicted differences between the on-orbit and ground test environments. The test results have led to a validation of both the analytical modeling and servo design techniques used during the development of the control system, and also verified the approach taken to vehicle and servo ground testing.

  5. Ionization Efficiency of Doubly Charged Ions Formed from Polyprotic Acids in Electrospray Negative Mode

    NASA Astrophysics Data System (ADS)

    Liigand, Piia; Kaupmees, Karl; Kruve, Anneli

    2016-07-01

    The ability of polyprotic acids to give doubly charged ions in negative mode electrospray was studied and related to physicochemical properties of the acids via linear discriminant analysis (LDA). It was discovered that the compound has to be strongly acidic (low pKa1 and pKa2) and to have high hydrophobicity (log Pow) to become multiply charged. Ability to give multiply charged ions in ESI/MS cannot be directly predicted from the solution-phase acidities. Therefore, for the first time, a quantitative model to predict the charge state of the analyte in ESI/MS is proposed and validated for small anions. Also, a model to predict ionization efficiencies of these analytes was developed. Results indicate that acidity of the analyte, its octanol-water partition coefficient, and charge delocalization are important factors that influence ionization efficiencies as well as charge states of the analytes. The pH of the solvent was also found to be an important factor influencing the ionization efficiency of doubly charged ions.
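
    The LDA step described here can be sketched as a two-class Fisher discriminant. The feature values below (pKa1, log Pow pairs) and the test compound are purely illustrative placeholders, not data from the study:

```python
import numpy as np

# Hypothetical training set: [pKa1, logPow] for acids observed as singly (0)
# or doubly (1) charged in negative-mode ESI/MS. Values are illustrative only.
X = np.array([[4.2, 0.5], [3.9, 1.0], [4.5, 0.2],   # singly charged
              [1.8, 3.0], [2.1, 2.5], [1.5, 3.5]])  # doubly charged
y = np.array([0, 0, 0, 1, 1, 1])

def fisher_lda(X, y):
    """Two-class Fisher discriminant: w = Sw^-1 (m1 - m0)."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    # Pooled within-class scatter matrix
    Sw = sum(np.cov(X[y == c].T, bias=True) * (y == c).sum() for c in (0, 1))
    w = np.linalg.solve(Sw, m1 - m0)
    thresh = w @ (m0 + m1) / 2          # midpoint decision threshold
    return w, thresh

w, thresh = fisher_lda(X, y)
# A strongly acidic, hydrophobic test compound projects above the threshold
pred = int(w @ np.array([2.0, 3.2]) > thresh)
```

    The study used more descriptors (e.g. charge delocalization) and a proper validation set; this sketch only shows the mechanics of the discriminant.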

  6. Future Directions for Space Transportation and Propulsion at NASA

    NASA Technical Reports Server (NTRS)

    Sackheim, Robert L.

    2005-01-01

    Contents include the following: Oxygen Compatible Materials. Manufacturing Technology Demonstrations. Turbopump Inducer Waterflow Test. Turbine Damping "Whirligig" Test. Single Element Preburner and Main Injector Test. 40K Multi-Element Preburner and MI. Full-Scale Battleship Preburner. Prototype Preburner Test Article. Full-Scale Prototype TCA. Turbopump Hot-Fire Test Article. Prototype Engine. Validated Analytical Models.

  7. Clarifying Relationships among Work and Family Social Support, Stressors, and Work-Family Conflict

    ERIC Educational Resources Information Center

    Michel, Jesse S.; Mitchelson, Jacqueline K.; Pichler, Shaun; Cullen, Kristin L.

    2010-01-01

    Although work and family social support predict role stressors and work-family conflict, there has been much ambiguity regarding the conceptual relationships among these constructs. Using path analysis on meta-analytically derived validity coefficients (528 effect sizes from 156 samples), we compare three models to address these concerns and…

  8. Probability of identification (POI): a statistical model for the validation of qualitative botanical identification methods

    USDA-ARS?s Scientific Manuscript database

    A qualitative botanical identification method (BIM) is an analytical procedure which returns a binary result (1 = Identified, 0 = Not Identified). A BIM may be used by a buyer, manufacturer, or regulator to determine whether a botanical material being tested is the same as the target (desired) mate...

  9. Student Satisfaction in Higher Education: A Meta-Analytic Study

    ERIC Educational Resources Information Center

    Santini, Fernando de Oliveira; Ladeira, Wagner Junior; Sampaio, Claudio Hoffmann; da Silva Costa, Gustavo

    2017-01-01

    This paper discusses the results of a meta-analysis performed to identify key antecedent and consequent constructs of satisfaction in higher education. We offer an integrated model to achieve a better understanding of satisfaction in the context of higher education. To accomplish this objective, we identified 83 studies that were valid and…

  10. The Law Enforcement Officer Stress Survey (LEOSS): Evaluation of Psychometric Properties

    ERIC Educational Resources Information Center

    Van Hasselt, Vincent B.; Sheehan, Donald C.; Malcolm, Abigail S.; Sellers, Alfred H.; Baker, Monty T.; Couwels, Judy

    2008-01-01

    This study establishes the reliability and validity of the Law Enforcement Officer Stress Survey (LEOSS), a short early-warning stress-screening measure for law enforcement officers. The initial phase of LEOSS development employed the behavioral-analytic model to construct a 25-item instrument specifically geared toward evaluation of stress in…

  11. Validation and Verification of Composite Pressure Vessel Design

    NASA Technical Reports Server (NTRS)

    Kreger, Stephen T.; Ortyl, Nicholas; Grant, Joseph; Taylor, F. Tad

    2006-01-01

    Ten composite pressure vessels were instrumented with fiber Bragg grating sensors and pressure tested through burst. This paper and presentation discuss the testing methodology and the test results, compare the test results to the analytical model, and compare the fiber Bragg grating sensor data with that obtained from foil strain gages.

  12. Modeling brook trout presence and absence from landscape variables using four different analytical methods

    USGS Publications Warehouse

    Steen, Paul J.; Passino-Reader, Dora R.; Wiley, Michael J.

    2006-01-01

    As a part of the Great Lakes Regional Aquatic Gap Analysis Project, we evaluated methodologies for modeling associations between fish species and habitat characteristics at a landscape scale. To do this, we created brook trout Salvelinus fontinalis presence and absence models based on four different techniques: multiple linear regression, logistic regression, neural networks, and classification trees. The models were tested in two ways: by application to an independent validation database and cross-validation using the training data, and by visual comparison of statewide distribution maps with historically recorded occurrences from the Michigan Fish Atlas. Although differences in the accuracy of our models were slight, the logistic regression model predicted with the least error, followed by multiple regression, then classification trees, then the neural networks. These models will provide natural resource managers a way to identify habitats requiring protection for the conservation of fish species.
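
    As a sketch of the technique that predicted with the least error here, a presence/absence model can be fit with plain gradient-descent logistic regression. The two synthetic predictors below are stand-ins for real landscape variables, not the project's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic landscape predictors (stand-ins for real habitat variables)
n = 200
X = rng.normal(size=(n, 2))
# Illustrative rule: presence is more likely when the first predictor is
# low relative to the second, plus observation noise.
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=n) < 0).astype(float)

def fit_logistic(X, y, lr=0.1, steps=500):
    """Gradient-descent logistic regression with an intercept term."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

w = fit_logistic(X, y)
p = 1.0 / (1.0 + np.exp(-np.column_stack([np.ones(n), X]) @ w))
accuracy = np.mean((p > 0.5) == y)   # training accuracy, for illustration
```

    A real evaluation would, as in the study, score the fitted model against an independent validation database rather than the training data.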

  13. Modeling of mechanical properties of stack actuators based on electroactive polymers

    NASA Astrophysics Data System (ADS)

    Tepel, Dominik; Graf, Christian; Maas, Jürgen

    2013-04-01

    Dielectric elastomers are thin polymer films belonging to the class of electroactive polymers, which are coated with compliant and conductive electrodes on each side. Under the influence of an electrical field, dielectric elastomers perform a large amount of deformation. Depending on the mechanical setup, stack and roll actuators can be realized. In this contribution the mechanical properties of stack actuators are modeled by a holistic electromechanical approach of a single actuator film, by which the model of a stack actuator without constraints can be derived. Due to the mechanical connection between the stack actuator and the application, bulges occur at the free surfaces of the EAP material, which are calculated, experimentally validated and considered in the model of the stack actuator. Finally, the analytic actuator film model as well as the stack actuator model are validated by comparison to numerical FEM-models in ANSYS.

  14. Importance of aggregation and small ice crystals in cirrus clouds, based on observations and an ice particle growth model

    NASA Technical Reports Server (NTRS)

    Mitchell, David L.; Chai, Steven K.; Dong, Yayi; Arnott, W. Patrick; Hallett, John

    1993-01-01

    The 1 November 1986 FIRE I case study was used to test an ice particle growth model which predicts bimodal size spectra in cirrus clouds. The model was developed from an analytically based model which predicts the height evolution of monomodal ice particle size spectra from the measured ice water content (IWC). Size spectra from the monomodal model are represented by a gamma distribution, N(D) = N₀ D^ν exp(−λD), where D is the ice particle maximum dimension. The slope parameter λ and the parameter N₀ are predicted from the IWC through the growth processes of vapor diffusion and aggregation. The model formulation is analytical, computationally efficient, and well suited for incorporation into larger models. The monomodal model has been validated against two other cirrus cloud case studies. From the monomodal size spectra, the size distributions which determine concentrations of ice particles less than about 150 μm are predicted.
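
    The gamma form N(D) = N0 D^nu exp(-lambda D) has closed-form moments, which is part of what makes the formulation analytical and computationally efficient. A minimal sketch (parameter values are illustrative, not the case study's):

```python
import math

def gamma_spectrum(D, N0, nu, lam):
    """Ice particle size spectrum N(D) = N0 * D**nu * exp(-lam * D)."""
    return N0 * D**nu * math.exp(-lam * D)

def moment(p, N0, nu, lam):
    """Closed-form moments of the gamma form: the integral of
    D**p * N(D) over D in (0, inf) equals
    N0 * Gamma(nu + p + 1) / lam**(nu + p + 1)."""
    return N0 * math.gamma(nu + p + 1) / lam ** (nu + p + 1)

# Illustrative parameters (arbitrary units, not from the paper)
N0, nu, lam = 1.0e3, 1.0, 50.0
n_total = moment(0, N0, nu, lam)   # zeroth moment = total number concentration
```

    Higher moments give, e.g., mass-weighted quantities the same way, which is why a larger model can carry just (N0, nu, lambda) instead of a full binned spectrum.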

  15. Novel approach for dam break flow modeling using computational intelligence

    NASA Astrophysics Data System (ADS)

    Seyedashraf, Omid; Mehrabi, Mohammad; Akhtari, Ali Akbar

    2018-04-01

    A new methodology based on the computational intelligence (CI) system is proposed and tested for modeling the classic 1D dam-break flow problem. The reason to seek a new solution lies in the shortcomings of the existing analytical and numerical models. These include the difficulty of using the exact solutions and the unwanted fluctuations that arise in the numerical results. In this research, the application of the radial-basis-function (RBF) and multi-layer-perceptron (MLP) systems is detailed for the solution of twenty-nine dam-break scenarios. The models are developed using seven variables, i.e. the length of the channel, the depths of the up- and downstream sections, time, and distance as the inputs. Moreover, the depths and velocities of each computational node in the flow domain are considered as the model outputs. The models are validated against the analytical, and Lax-Wendroff and MacCormack FDM schemes. The findings indicate that the employed CI models are able to replicate the overall shape of the shock and rarefaction waves. Furthermore, the MLP system outperforms RBF and the tested numerical schemes. A new monolithic equation is proposed based on the best fitting model, which can be used as an efficient alternative to the existing piecewise analytic equations.
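
    One classic exact reference that dam-break surrogates of this kind can be checked against is Ritter's solution for an instantaneous dry-bed dam break. A sketch follows; the paper's exact scenarios, channel lengths, and parameters are not reproduced here:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def ritter_depth(x, t, h0):
    """Ritter's analytic depth for an instantaneous dry-bed dam break.

    The dam sits at x = 0 with initial upstream depth h0; t > 0.
    """
    c0 = math.sqrt(G * h0)              # initial wave celerity
    if x <= -c0 * t:
        return h0                       # undisturbed reservoir
    if x >= 2.0 * c0 * t:
        return 0.0                      # dry bed ahead of the wave front
    return (2.0 * c0 - x / t) ** 2 / (9.0 * G)   # rarefaction fan

# Depth at the dam site settles to 4/9 of the initial depth
h_dam = ritter_depth(0.0, 1.0, 1.0)
```

    A trained surrogate such as an MLP would be compared node by node against profiles like this one.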

  16. Do placebo based validation standards mimic real batch products behaviour? Case studies.

    PubMed

    Bouabidi, A; Talbi, M; Bouklouze, A; El Karbane, M; Bourichi, H; El Guezzar, M; Ziemons, E; Hubert, Ph; Rozet, E

    2011-06-01

    Analytical methods validation is a mandatory step to evaluate the ability of developed methods to provide accurate results for their routine application. Validation usually involves validation standards or quality control samples that are prepared in placebo or reconstituted matrix made of a mixture of all the ingredients composing the drug product except the active substance or the analyte under investigation. However, one of the main concerns with this approach is that it may lack an important source of variability that comes from the manufacturing process. The question that remains at the end of the validation step is about the transferability of the quantitative performance from validation standards to real authentic drug product samples. In this work, this topic is investigated through three case studies. Three analytical methods were validated using the commonly spiked placebo validation standards at several concentration levels as well as using samples coming from authentic batch samples (tablets and syrups). The results showed that, depending on the type of response function used as calibration curve, there were varying degrees of difference in the accuracy of results obtained with the two types of samples. Nonetheless, the use of spiked placebo validation standards was shown to mimic relatively well the quantitative behaviour of the analytical methods with authentic batch samples. Adding these authentic batch samples into the validation design may help the analyst to select and confirm the most fit-for-purpose calibration curve and thus increase the accuracy and reliability of the results generated by the method in routine application. Copyright © 2011 Elsevier B.V. All rights reserved.
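
    The quantitative performance being compared between placebo standards and authentic batches comes down to accuracy measures such as relative bias and repeatability. A sketch with hypothetical replicate results (all numbers are illustrative only):

```python
import statistics

def bias_and_precision(measured, nominal):
    """Relative bias (%) and repeatability RSD (%) at one concentration level."""
    mean = statistics.mean(measured)
    bias_pct = 100.0 * (mean - nominal) / nominal       # trueness
    rsd_pct = 100.0 * statistics.stdev(measured) / mean  # precision
    return bias_pct, rsd_pct

# Hypothetical replicates for a standard spiked at 100 (arbitrary units)
bias, rsd = bias_and_precision([98.0, 101.0, 99.5, 100.5, 97.0], 100.0)
```

    Computing the same two figures for placebo-based standards and for authentic batch samples, level by level, is one way to quantify the transferability question raised here.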

  17. Image charge models for accurate construction of the electrostatic self-energy of 3D layered nanostructure devices.

    PubMed

    Barker, John R; Martinez, Antonio

    2018-04-04

    Efficient analytical image charge models are derived for the full spatial variation of the electrostatic self-energy of electrons in semiconductor nanostructures that arises from dielectric mismatch using semi-classical analysis. The methodology provides a fast, compact and physically transparent computation for advanced device modeling. The underlying semi-classical model for the self-energy has been established and validated during recent years and depends on a slight modification of the macroscopic static dielectric constants for individual homogeneous dielectric regions. The model has been validated for point charges as close as one interatomic spacing to a sharp interface. A brief introduction to image charge methodology is followed by a discussion and demonstration of the traditional failure of the methodology to derive the electrostatic potential at arbitrary distances from a source charge. However, the self-energy involves the local limit of the difference between the electrostatic Green functions for the full dielectric heterostructure and the homogeneous equivalent. It is shown that high convergence may be achieved for the image charge method for this local limit. A simple re-normalisation technique is introduced to reduce the number of image terms to a minimum. A number of progressively complex 3D models are evaluated analytically and compared with high precision numerical computations. Accuracies of 1% are demonstrated. Introducing a simple technique for modeling the transition of the self-energy between disparate dielectric structures we generate an analytical model that describes the self-energy as a function of position within the source, drain and gated channel of a silicon wrap round gate field effect transistor on a scale of a few nanometers cross-section. At such scales the self-energies become large (typically up to ~100 meV) close to the interfaces as well as along the channel. The screening of a gated structure is shown to reduce the self-energy relative to un-gated nanowires.

  18. Image charge models for accurate construction of the electrostatic self-energy of 3D layered nanostructure devices

    NASA Astrophysics Data System (ADS)

    Barker, John R.; Martinez, Antonio

    2018-04-01

    Efficient analytical image charge models are derived for the full spatial variation of the electrostatic self-energy of electrons in semiconductor nanostructures that arises from dielectric mismatch using semi-classical analysis. The methodology provides a fast, compact and physically transparent computation for advanced device modeling. The underlying semi-classical model for the self-energy has been established and validated during recent years and depends on a slight modification of the macroscopic static dielectric constants for individual homogeneous dielectric regions. The model has been validated for point charges as close as one interatomic spacing to a sharp interface. A brief introduction to image charge methodology is followed by a discussion and demonstration of the traditional failure of the methodology to derive the electrostatic potential at arbitrary distances from a source charge. However, the self-energy involves the local limit of the difference between the electrostatic Green functions for the full dielectric heterostructure and the homogeneous equivalent. It is shown that high convergence may be achieved for the image charge method for this local limit. A simple re-normalisation technique is introduced to reduce the number of image terms to a minimum. A number of progressively complex 3D models are evaluated analytically and compared with high precision numerical computations. Accuracies of 1% are demonstrated. Introducing a simple technique for modeling the transition of the self-energy between disparate dielectric structures we generate an analytical model that describes the self-energy as a function of position within the source, drain and gated channel of a silicon wrap round gate field effect transistor on a scale of a few nanometers cross-section. At such scales the self-energies become large (typically up to ~100 meV) close to the interfaces as well as along the channel. The screening of a gated structure is shown to reduce the self-energy relative to un-gated nanowires.

  19. "Negative capacitance" in resistor-ferroelectric and ferroelectric-dielectric networks: Apparent or intrinsic?

    NASA Astrophysics Data System (ADS)

    Saha, Atanu K.; Datta, Suman; Gupta, Sumeet K.

    2018-03-01

    In this paper, we describe and analytically substantiate an alternate explanation for the negative capacitance (NC) effect in ferroelectrics (FE). We claim that the NC effect previously demonstrated in resistance-ferroelectric (R-FE) networks does not necessarily validate the existence of "S" shaped relation between polarization and voltage (according to Landau theory). In fact, the NC effect can be explained without invoking the "S"-shaped behavior of FE. We employ an analytical model for FE (Miller model) in which the steady state polarization strictly increases with the voltage across the FE and show that despite the inherent positive FE capacitance, reduction in FE voltage with the increase in its charge is possible in a R-FE network as well as in a ferroelectric-dielectric (FE-DE) stack. This can be attributed to a large increase in FE capacitance near the coercive voltage coupled with the polarization lag with respect to the electric field. Under certain conditions, these two factors yield transient NC effect. We analytically derive conditions for NC effect in R-FE and FE-DE networks. We couple our analysis with extensive simulations to explain the evolution of NC effect. We also compare the trends predicted by the aforementioned Miller model with Landau-Khalatnikov (L-K) model (static negative capacitance due to "S"-shape behaviour) and highlight the differences between the two approaches. First, with an increase in external resistance in the R-FE network, NC effect shows a non-monotonic behavior according to Miller model but increases according to L-K model. Second, with the increase in ramp-rate of applied voltage in the FE-DE stack, NC effect increases according to Miller model but decreases according to L-K model. These results unveil a possible way to experimentally validate the actual reason of NC effect in FE.

  20. Novel three-stage kinetic model for aqueous benzene adsorption on activated carbon.

    PubMed

    Choi, Jae-Woo; Choi, Nag-Choul; Lee, Soon-Jae; Kim, Dong-Ju

    2007-10-15

    We propose a novel kinetic model for adsorption of aqueous benzene onto both granular activated carbon (GAC) and powdered activated carbon (PAC). The model is based on mass conservation of benzene coupled with three-stage adsorption: (1) the first portion for an instantaneous stage or external surface adsorption, (2) the second portion for a gradual stage with rate-limiting intraparticle diffusion, and (3) the third portion for a constant stage in which the aqueous phase no longer interacts with activated carbon. An analytical solution of the kinetic model was validated with the kinetic data obtained from aqueous benzene adsorption onto GAC and PAC in batch experiments with two different solution concentrations (C₀ = 300 mg L⁻¹ and 600 mg L⁻¹). Experimental results revealed that benzene adsorption for the two concentrations followed three distinct stages for PAC but two stages for GAC. The analytical solution could successfully describe the kinetic adsorption of aqueous benzene in the batch reaction system, showing a fast instantaneous adsorption followed by a slow rate-limiting adsorption and a final long constant adsorption. Use of the two-stage model gave incorrect values of adsorption coefficients in the analytical solution due to its inability to describe the third stage.
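
    The three-stage picture can be sketched as a piecewise uptake curve. The functional form below (a square-root law for the intraparticle-diffusion stage) and all parameter values are illustrative stand-ins, not the paper's analytical solution:

```python
import math

def three_stage_uptake(t, q_inst, k_id, t_eq):
    """Piecewise uptake q(t):
    (1) instantaneous external-surface adsorption q_inst at t = 0+,
    (2) intraparticle-diffusion stage q_inst + k_id * sqrt(t) for t < t_eq,
    (3) constant stage (no further uptake) for t >= t_eq.
    """
    if t <= 0.0:
        return 0.0
    if t < t_eq:
        return q_inst + k_id * math.sqrt(t)
    return q_inst + k_id * math.sqrt(t_eq)   # plateau value

q_mid = three_stage_uptake(4.0, 10.0, 2.0, 16.0)   # stage 2: 10 + 2*sqrt(4)
q_end = three_stage_uptake(25.0, 10.0, 2.0, 16.0)  # stage 3 plateau
```

    Fitting only the first two stages to data that actually plateau is exactly how a two-stage model ends up with distorted adsorption coefficients, as the abstract notes.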

  1. Simultaneous LC-MS/MS quantitation of acetaminophen and its glucuronide and sulfate metabolites in human dried blood spot samples collected by subjects in a pilot clinical study.

    PubMed

    Li, Wenkui; Doherty, John P; Kulmatycki, Kenneth; Smith, Harold T; Tse, Francis Ls

    2012-06-01

    In support of a pilot clinical trial using acetaminophen as the model compound to assess dried blood spot (DBS) sampling as the method for clinical pharmacokinetic sample collection, a novel sensitive LC-MS/MS method was developed and validated for the simultaneous determination of acetaminophen and its major metabolites, acetaminophen glucuronide and sulfate, in human DBS samples collected by subjects via fingerprick. The validated assay dynamic range was from 50.0 to 5000 ng/ml for each compound using a 1/8″ (3 mm) disc punched from a DBS sample. Baseline separation of the three analytes was achieved to eliminate the possible impact of in-source fragmentation of the conjugated metabolites on the analysis of the parent. The overall extraction efficiency was from 61.3 to 78.8% for the three analytes by direct extraction using methanol. The validated method was successfully implemented in the pilot clinical study with the obtained pharmacokinetic parameters in agreement with the values reported in the literature.

  2. Cross-section and rate formulas for electron-impact ionization, excitation, deexcitation, and total depopulation of excited atoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vriens, L.; Smeets, A.H.M.

    1980-09-01

    For electron-induced ionization, excitation, and de-excitation, mainly from excited atomic states, a detailed analysis is presented of the dependence of the cross sections and rate coefficients on electron energy and temperature, and on atomic parameters. A wide energy range is covered, including sudden as well as adiabatic collisions. By combining the available experimental and theoretical information, a set of simple analytical formulas is constructed for the cross sections and rate coefficients of the processes mentioned, for the total depopulation, and for three-body recombination. The formulas account for large deviations from classical and semiclassical scaling, as found for excitation. They agree with experimental data and with the theories in their respective ranges of validity, but have a wider range of validity than the separate theories. The simple analytical form further facilitates the application in plasma modeling.

  3. An accurate analytic description of neutrino oscillations in matter

    NASA Astrophysics Data System (ADS)

    Akhmedov, E. Kh.; Niro, Viviana

    2008-12-01

    A simple closed-form analytic expression for the probability of two-flavour neutrino oscillations in matter with an arbitrary density profile is derived. Our formula is based on a perturbative expansion and allows an easy calculation of higher order corrections. The expansion parameter is small when the density changes relatively slowly along the neutrino path and/or neutrino energy is not very close to the Mikheyev-Smirnov-Wolfenstein (MSW) resonance energy. Our approximation is not equivalent to the adiabatic approximation and actually goes beyond it. We demonstrate the validity of our results using a few model density profiles, including the PREM density profile of the Earth. It is shown that by combining the results obtained from the expansions valid below and above the MSW resonance one can obtain a very good description of neutrino oscillations in matter in the entire energy range, including the resonance region.

  4. Stochastic modeling of macrodispersion in unsaturated heterogeneous porous media. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yeh, T.C.J.

    1995-02-01

    Spatial heterogeneity of geologic media leads to uncertainty in predicting both flow and transport in the vadose zone. In this work an efficient and flexible, combined analytical-numerical Monte Carlo approach is developed for the analysis of steady-state flow and transient transport processes in highly heterogeneous, variably saturated porous media. The approach is also used for the investigation of the validity of linear, first order analytical stochastic models. With the Monte Carlo analysis accurate estimates of the ensemble conductivity, head, velocity, and concentration mean and covariance are obtained; the statistical moments describing displacement of solute plumes, solute breakthrough at a compliance surface, and time of first exceedance of a given solute flux level are analyzed; and the cumulative probability density functions for solute flux across a compliance surface are investigated. The results of the Monte Carlo analysis show that for very heterogeneous flow fields, and particularly in anisotropic soils, the linearized, analytical predictions of soil water tension and soil moisture flux become erroneous. Analytical, linearized Lagrangian transport models also overestimate both the longitudinal and the transverse spreading of the mean solute plume in very heterogeneous soils and in dry soils. A combined analytical-numerical conditional simulation algorithm is also developed to estimate the impact of in-situ soil hydraulic measurements on reducing the uncertainty of concentration and solute flux predictions.
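
    The flavour of a Monte Carlo ensemble analysis can be shown with a toy example: advective travel time through layers with independent lognormal conductivities. Everything here (the layer model, all parameters) is an illustrative stand-in, not the report's flow and transport model:

```python
import math
import random
import statistics

random.seed(7)

def travel_time(n_layers=20, thickness=0.5, sigma_lnK=1.0, gradient=0.1):
    """Advective travel time through a stack of layers whose hydraulic
    conductivities K are independent lognormal (ln K ~ N(0, sigma^2))."""
    t = 0.0
    for _ in range(n_layers):
        K = math.exp(random.gauss(0.0, sigma_lnK))
        v = K * gradient            # Darcy velocity with unit porosity
        t += thickness / v
    return t

# Ensemble statistics over many realizations of the heterogeneous field
times = [travel_time() for _ in range(2000)]
mc_mean = statistics.mean(times)
mc_sd = statistics.stdev(times)
```

    The same ensemble of realizations also yields empirical breakthrough and first-exceedance distributions, which is what the linearized analytical models are checked against.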

  5. March 2017 Grenada Manufacturing, LLC Data Validation Reports and Analytical Laboratory Reports for the Main Plant Building Vapor Intrusion Sampling

    EPA Pesticide Factsheets

    Data Validation Reports and Full Analytical Lab Reports for Indoor Air, Ambient Air and Sub-slab samples taken during the facility vapor intrusion investigation in March 2017 at the Grenada Manufacturing plant

  6. Validation of finite element and boundary element methods for predicting structural vibration and radiated noise

    NASA Technical Reports Server (NTRS)

    Seybert, A. F.; Wu, X. F.; Oswald, Fred B.

    1992-01-01

    Analytical and experimental validation of methods to predict structural vibration and radiated noise are presented. A rectangular box excited by a mechanical shaker was used as a vibrating structure. Combined finite element method (FEM) and boundary element method (BEM) models of the apparatus were used to predict the noise radiated from the box. The FEM was used to predict the vibration, and the surface vibration was used as input to the BEM to predict the sound intensity and sound power. Vibration predicted by the FEM model was validated by experimental modal analysis. Noise predicted by the BEM was validated by sound intensity measurements. Three types of results are presented for the total radiated sound power: (1) sound power predicted by the BEM modeling using vibration data measured on the surface of the box; (2) sound power predicted by the FEM/BEM model; and (3) sound power measured by a sound intensity scan. The sound power predicted from the BEM model using measured vibration data yields an excellent prediction of radiated noise. The sound power predicted by the combined FEM/BEM model also gives a good prediction of radiated noise except for a shift of the natural frequencies that are due to limitations in the FEM model.

  7. Standardization, evaluation and early-phase method validation of an analytical scheme for batch-consistency N-glycosylation analysis of recombinant produced glycoproteins.

    PubMed

    Zietze, Stefan; Müller, Rainer H; Brecht, René

    2008-03-01

    In order to set up a batch-to-batch-consistency analytical scheme for N-glycosylation analysis, several sample preparation steps including enzyme digestions and fluorophore labelling and two HPLC-methods were established. The whole method scheme was standardized, evaluated and validated according to the requirements on analytical testing in early clinical drug development, using a recombinantly produced reference glycoprotein (RGP). The standardization of the methods was performed through clearly defined standard operating procedures. During evaluation of the methods, the main focus was on determining the loss of oligosaccharides within the analytical scheme. Validation of the methods was performed with respect to specificity, linearity, repeatability, LOD and LOQ. Because reference N-glycan standards were not available, a statistical approach was chosen to derive accuracy from the linearity data. After finishing the validation procedure, defined limits for method variability could be calculated, and differences observed in consistency analysis could be separated into significant and incidental ones.

  8. Actively Controlled Landing Gear for Aircraft Vibration Reduction

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Daugherty, Robert H.; Martinson, Veloria J.

    1999-01-01

    Concepts for long-range air travel are characterized by airframe designs with long, slender, relatively flexible fuselages. One aspect often overlooked is ground-induced vibration of these aircraft. This paper presents an analytical and experimental study of reducing ground-induced aircraft vibration loads using actively controlled landing gears. A facility has been developed to test various active landing gear control concepts and their performance. The facility uses a Navy A-6 Intruder landing gear fitted with an auxiliary hydraulic supply electronically controlled by servo valves. An analytical model of the gear, including modifications to actuate the gear externally, is presented, and test data are used to validate the model. The control design is described, and closed-loop test and analysis comparisons are presented.

  9. Analytical approach to an integrate-and-fire model with spike-triggered adaptation

    NASA Astrophysics Data System (ADS)

    Schwalger, Tilo; Lindner, Benjamin

    2015-12-01

    The calculation of the steady-state probability density for multidimensional stochastic systems that do not obey detailed balance is a difficult problem. Here we present the analytical derivation of the stationary joint and various marginal probability densities for a stochastic neuron model with adaptation current. Our approach assumes weak noise but is valid for arbitrary adaptation strength and time scale. The theory predicts several effects of adaptation on the statistics of the membrane potential of a tonically firing neuron: (i) a membrane potential distribution with a convex shape, (ii) a strongly increased probability of hyperpolarized membrane potentials induced by strong and fast adaptation, and (iii) a maximized variability associated with the adaptation current at a finite adaptation time scale.
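The model class studied above can be illustrated with a minimal simulation of a leaky integrate-and-fire neuron with a spike-triggered adaptation current. This is a generic sketch of such models, not the paper's exact equations or its weak-noise analytical derivation; all parameter values are hypothetical:

```python
import numpy as np

def simulate_adaptive_lif(mu=2.0, tau_a=5.0, delta_a=0.5, noise=0.1,
                          dt=1e-3, t_max=200.0, v_thresh=1.0, v_reset=0.0,
                          seed=0):
    """Euler-Maruyama simulation of a leaky integrate-and-fire neuron
    with a spike-triggered adaptation current a(t):

        dv/dt = mu - v - a + sqrt(2*D) * xi(t)
        da/dt = -a / tau_a,   a -> a + delta_a at each spike

    Returns the membrane-potential trace and the spike times."""
    rng = np.random.default_rng(seed)
    n = int(round(t_max / dt))
    xi = rng.standard_normal(n)              # pre-drawn Gaussian white noise
    v, a = v_reset, 0.0
    vs = np.empty(n)
    spikes = []
    for i in range(n):
        v += (mu - v - a) * dt + np.sqrt(2.0 * noise * dt) * xi[i]
        a += -a / tau_a * dt
        if v >= v_thresh:                    # threshold crossing: spike
            v = v_reset                      # reset the membrane potential
            a += delta_a                     # spike-triggered adaptation kick
            spikes.append(i * dt)
        vs[i] = v
    return vs, np.array(spikes)

vs, spikes = simulate_adaptive_lif()
rate = len(spikes) / 200.0                   # tonic firing rate
```

Histogramming `vs` between spikes gives the kind of membrane-potential distribution whose shape (convexity, weight at hyperpolarized values) the analytical theory predicts.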

  10. Interacting steps with finite-range interactions: Analytical approximation and numerical results

    NASA Astrophysics Data System (ADS)

    Jaramillo, Diego Felipe; Téllez, Gabriel; González, Diego Luis; Einstein, T. L.

    2013-05-01

    We calculate an analytical expression for the terrace-width distribution P(s) for an interacting step system with nearest- and next-nearest-neighbor interactions. Our model is derived by mapping the step system onto a statistically equivalent one-dimensional system of classical particles. The validity of the model is tested with several numerical simulations and experimental results. We explore the effect of the range of interactions q on the functional form of the terrace-width distribution and pair correlation functions. For physically plausible interactions, we find modest changes when next-nearest neighbor interactions are included and generally negligible changes when more distant interactions are allowed. We discuss methods for extracting from simulated experimental data the characteristic scale-setting terms in assumed potential forms.

  11. Development of a bidirectional ring thermal actuator

    NASA Astrophysics Data System (ADS)

    Stevenson, Mathew; Yang, Peng; Lai, Yongjun; Mechefske, Chris

    2007-10-01

    A new planar micro electrothermal actuator capable of bidirectional rotation is presented. The ring thermal actuator has a wheel-like geometry with eight arms connecting an outer ring to a central hub. Thermal expansion of the arms results in a rotation of the outer ring about its center. An analytical model is developed for the electrothermal and thermal-mechanical aspects of the actuator's operation. Finite element analysis is used to validate the analytic study. The actuator has been fabricated using the multi-user MEMS process and experimental displacement results are compared with model predictions. Experiments show a possible displacement of 7.4 µm in each direction. Also, by switching the current between the arms it is possible to achieve an oscillating motion.

  12. An analytical solution for predicting the transient seepage from a subsurface drainage system

    NASA Astrophysics Data System (ADS)

    Xin, Pei; Dan, Han-Cheng; Zhou, Tingzhang; Lu, Chunhui; Kong, Jun; Li, Ling

    2016-05-01

    Subsurface drainage systems have been widely used to deal with soil salinization and waterlogging problems around the world. In this paper, a mathematical model was introduced to quantify the transient behavior of the groundwater table and the seepage from a subsurface drainage system. Based on the assumption of a hydrostatic pressure distribution, the model considered the pore-water flow in both the phreatic and vadose soil zones. An approximate analytical solution for the model was derived to quantify the drainage of soils which were initially water-saturated. The analytical solution was validated against laboratory experiments and a 2-D Richards equation-based model, and found to predict well the transient water seepage from the subsurface drainage system. A saturated flow-based model was also tested and found to over-predict the time required for drainage and the total water seepage by nearly one order of magnitude, in comparison with the experimental results and the present analytical solution. During drainage, a vadose zone with a significant water storage capacity developed above the phreatic surface. A considerable amount of water still remained in the vadose zone at the steady state with the water table situated at the drain bottom. Sensitivity analyses demonstrated that effects of the vadose zone were intensified with an increased thickness of capillary fringe, capillary rise and/or burying depth of drains, in terms of the required drainage time and total water seepage. The analytical solution provides guidance for assessing the capillary effects on the effectiveness and efficiency of subsurface drainage systems for combating soil salinization and waterlogging problems.

  13. Modeling of magnetic particle orientation in magnetic powder injection molding

    NASA Astrophysics Data System (ADS)

    Doo Jung, Im; Kang, Tae Gon; Seul Shin, Da; Park, Seong Jin

    2018-03-01

    The orientation of magnetic micro powder under viscous shear flow has been analyzed and characterized in a new analytical orientation model for the powder injection molding process. The effects of the hydrodynamic force from the viscous flow, the external magnetic force and the internal dipole-dipole interaction were considered to predict the orientation under given process conditions. Comparative studies with a finite element method confirmed the validity of the calculation based on a partial differential form of the model. The angular motion, agglomeration and magnetic chain formation have been simulated, showing that the effect of dipole-dipole interaction among powders on the orientation state becomes negligible at a high Mason number and at a low λ (the ratio of the external magnetic field strength to the internal magnetic moment of the powder). The developed model can be usefully employed in the process analysis and design of magnetic powder injection molding.

  14. Symmetric tridiagonal structure preserving finite element model updating problem for the quadratic model

    NASA Astrophysics Data System (ADS)

    Rakshit, Suman; Khare, Swanand R.; Datta, Biswa Nath

    2018-07-01

    One of the most important yet difficult aspects of the Finite Element Model Updating Problem is to preserve the finite element inherited structures in the updated model. Finite element matrices are in general symmetric, positive definite (or semi-definite) and banded (tridiagonal, diagonal, penta-diagonal, etc.). Though a large number of papers have been published in recent years on various aspects of this problem, papers dealing with structure preservation are scarce. A novel optimization-based approach that preserves the symmetric tridiagonal structures of the stiffness and damping matrices is proposed in this paper. An analytical expression for the global minimum solution of the associated optimization problem is presented, along with the results of numerical experiments obtained both from the analytical expressions and from an appropriate numerical optimization algorithm. The results of the numerical experiments support the validity of the proposed method.

  15. Comparison of simulator fidelity model predictions with in-simulator evaluation data

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Mckissick, B. T.; Ashworth, B. R.

    1983-01-01

    A full factorial in-simulator experiment of a single-axis, multiloop, compensatory pitch tracking task is described. The experiment was conducted to provide data to validate extensions to an analytic, closed-loop model of a real-time digital simulation facility. The results of the experiment, encompassing various simulation fidelity factors such as visual delay, digital integration algorithms, computer iteration rates, control loading bandwidths and proprioceptive cues, and g-seat kinesthetic cues, are compared with predictions obtained from the analytic model incorporating an optimal control model of the human pilot. The in-simulator results demonstrate more sensitivity to the g-seat and to the control loader conditions than was predicted by the model. However, the model predictions are generally upheld, although the predicted magnitudes of the states and of the error terms are sometimes off considerably. Of particular concern is the large sensitivity difference for one control loader condition, as well as the model/in-simulator mismatch in the magnitude of the plant states when the other states match.

  16. Modal analysis of graphene-based structures for large deformations, contact and material nonlinearities

    NASA Astrophysics Data System (ADS)

    Ghaffari, Reza; Sauer, Roger A.

    2018-06-01

    The nonlinear frequencies of pre-stressed graphene-based structures, such as flat graphene sheets and carbon nanotubes, are calculated. These structures are modeled with a nonlinear hyperelastic shell model. The model is calibrated with quantum mechanics data and is valid for high strains. Analytical solutions of the natural frequencies of various plates are obtained for the Canham bending model by assuming infinitesimal strains. These solutions are used for the verification of the numerical results. The performance of the model is illustrated by means of several examples. Modal analysis is performed for square plates under pure dilatation or uniaxial stretch, circular plates under pure dilatation or under the effects of an adhesive substrate, and carbon nanotubes under uniaxial compression or stretch. The adhesive substrate is modeled with van der Waals interaction (based on the Lennard-Jones potential) and a coarse grained contact model. It is shown that the analytical natural frequencies underestimate the real ones, and this should be considered in the design of devices based on graphene structures.

  17. Theoretical and Numerical Investigation of the Cavity Evolution in Gypsum Rock

    NASA Astrophysics Data System (ADS)

    Li, Wei; Einstein, Herbert H.

    2017-11-01

    When water flows through a preexisting cylindrical tube in gypsum rock, the nonuniform dissolution alters the tube into an enlarged tapered tube. A 2-D analytical model is developed to study the transport-controlled dissolution in an enlarged tapered tube, with explicit consideration of the tapered geometry and induced radial flow. The analytical model shows that the Graetz solution can be extended to model dissolution in the tapered tube. An alternative form of the governing equations is proposed to take advantage of the invariant quantities in the Graetz solution to facilitate modeling cavity evolution in gypsum rock. A 2-D finite volume model was developed to validate the extended Graetz solution. The time evolution of the transport-controlled and the reaction-controlled dissolution models for a single tube with time-invariant flow rate are compared. This comparison shows that for time-invariant flow rate, the reaction-controlled dissolution model produces a positive feedback between the tube enlargement and dissolution, while the transport-controlled dissolution does not.

  18. External Standards or Standard Addition? Selecting and Validating a Method of Standardization

    NASA Astrophysics Data System (ADS)

    Harvey, David T.

    2002-05-01

    A common feature of many problem-based laboratories in analytical chemistry is a lengthy independent project involving the analysis of "real-world" samples. Students research the literature, adapting and developing a method suitable for their analyte, sample matrix, and problem scenario. Because these projects encompass the complete analytical process, students must consider issues such as obtaining a representative sample, selecting a method of analysis, developing a suitable standardization, validating results, and implementing appropriate quality assessment/quality control practices. Most textbooks and monographs suitable for an undergraduate course in analytical chemistry, however, provide only limited coverage of these important topics. The need for short laboratory experiments emphasizing important facets of method development, such as selecting a method of standardization, is evident. The experiment reported here, which is suitable for an introductory course in analytical chemistry, illustrates the importance of matrix effects when selecting a method of standardization. Students also learn how a spike recovery is used to validate an analytical method, and obtain a practical experience in the difference between performing an external standardization and a standard addition.
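The matrix-effect lesson of this experiment can be sketched numerically. In the hypothetical example below, a suppressive matrix scales the analyte response by 0.8, so an external calibration prepared in clean solvent reads 20% low, while standard addition (spiking the sample itself and extrapolating to the x-intercept) recovers the true concentration; all numbers are invented for illustration:

```python
import numpy as np

# Hypothetical scenario: the sample matrix suppresses the analyte response
# by a factor of 0.8. External standards prepared in clean solvent then give
# a biased answer, while standard addition (spiking the sample) does not.
true_conc = 5.0          # ppm, "unknown" to the analyst
sensitivity = 2.0        # signal units per ppm in clean solvent
matrix_factor = 0.8      # response suppression inside the sample matrix

# External standardization: calibration curve in clean solvent
cal_conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
cal_signal = sensitivity * cal_conc
slope, intercept = np.polyfit(cal_conc, cal_signal, 1)
sample_signal = matrix_factor * sensitivity * true_conc
c_external = (sample_signal - intercept) / slope        # biased by the matrix

# Standard addition: spike known amounts into the sample, extrapolate to y = 0
spike = np.array([0.0, 2.0, 4.0, 6.0])                  # added ppm
spiked_signal = matrix_factor * sensitivity * (true_conc + spike)
m, b = np.polyfit(spike, spiked_signal, 1)
c_addition = b / m                                      # |x-intercept|

print(round(c_external, 3), round(c_addition, 3))  # 4.0 5.0
```

A spike recovery check makes the same point in reverse: a recovery well below 100% under external standardization signals exactly this kind of matrix suppression.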

  19. Analytical method development of nifedipine and its degradants binary mixture using high performance liquid chromatography through a quality by design approach

    NASA Astrophysics Data System (ADS)

    Choiri, S.; Ainurofiq, A.; Ratri, R.; Zulmi, M. U.

    2018-03-01

    Nifedipine (NIF) is a photo-labile drug that degrades easily when exposed to sunlight. This research aimed to develop an analytical method using high-performance liquid chromatography, implementing a quality by design approach to obtain an effective, efficient, and validated analytical method for NIF and its degradants. A 2² full factorial design with a curvature as a center point was applied to optimize the analytical conditions for NIF and its degradants. Mobile phase composition (MPC) and flow rate (FR) were the factors evaluated against the system suitability parameters. The selected condition was validated by cross-validation using a leave-one-out technique. Alteration of the MPC significantly affected retention time. Furthermore, an increase of the FR reduced the tailing factor. In addition, the interaction of both factors increased the theoretical plates and the resolution of NIF and its degradants. The selected analytical condition for NIF and its degradants was validated over the range 1 – 16 µg/mL and showed good linearity, precision, and accuracy, and was efficient, with an analysis time within 10 min.
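In a two-level full factorial design of this kind, main effects and the interaction are estimated by simple contrasts. The sketch below uses hypothetical responses (the study's actual data are not reproduced) to show how the effects of MPC and FR and their interaction would be computed:

```python
import numpy as np

# Coded levels for a 2^2 full factorial design:
# A = mobile phase composition (MPC), B = flow rate (FR), each at -1 / +1.
A = np.array([-1, +1, -1, +1])
B = np.array([-1, -1, +1, +1])

# Hypothetical responses (e.g., retention time in minutes) for the four runs.
y = np.array([9.8, 7.1, 9.0, 6.5])

# An effect is the mean response at the high level minus that at the low level.
effect_A  = y[A == +1].mean() - y[A == -1].mean()          # main effect of MPC
effect_B  = y[B == +1].mean() - y[B == -1].mean()          # main effect of FR
effect_AB = y[A * B == +1].mean() - y[A * B == -1].mean()  # AB interaction

print(round(effect_A, 2), round(effect_B, 2), round(effect_AB, 2))  # -2.6 -0.7 0.1
```

A center-point run, as used in the study, is then compared against the mean of the factorial runs to test for curvature in the response.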

  20. TOPEX Microwave Radiometer - Thermal design verification test and analytical model validation

    NASA Technical Reports Server (NTRS)

    Lin, Edward I.

    1992-01-01

    The testing of the TOPEX Microwave Radiometer (TMR) is described in terms of hardware development based on the modeling and thermal vacuum testing conducted. The TMR and the vacuum-test facility are described, and the thermal verification test includes a hot steady-state segment, a cold steady-state segment, and a cold survival mode segment totalling 65 hours. A graphical description of the test history is given in terms of temperature tracking, and two multinode TMR test-chamber models are compared to the test results. Large discrepancies between the test data and the model predictions are attributed to contact conductance, effective emittance from the multilayer insulation, and heat leaks related to deviations from the flight configuration. The TMR thermal testing/modeling effort is shown to provide technical corrections for the procedure outlined, and the need for validating predictive models is underscored.

  1. Analytical halo model of galactic conformity

    NASA Astrophysics Data System (ADS)

    Pahwa, Isha; Paranjape, Aseem

    2017-09-01

    We present a fully analytical halo model of colour-dependent clustering that incorporates the effects of galactic conformity in a halo occupation distribution framework. The model, based on our previous numerical work, describes conformity through a correlation between the colour of a galaxy and the concentration of its parent halo, leading to a correlation between central and satellite galaxy colours at fixed halo mass. The strength of the correlation is set by a tunable 'group quenching efficiency', and the model can separately describe group-level correlations between galaxy colour (1-halo conformity) and large-scale correlations induced by assembly bias (2-halo conformity). We validate our analytical results using clustering measurements in mock galaxy catalogues, finding that the model is accurate at the 10-20 per cent level for a wide range of luminosities and length-scales. We apply the formalism to interpret the colour-dependent clustering of galaxies in the Sloan Digital Sky Survey (SDSS). We find good overall agreement between the data and a model that has 1-halo conformity at a level consistent with previous results based on an SDSS group catalogue, although the clustering data require satellites to be redder than suggested by the group catalogue. Within our modelling uncertainties, however, we do not find strong evidence of 2-halo conformity driven by assembly bias in SDSS clustering.

  2. High-Performance Data Analytics Beyond the Relational and Graph Data Models with GEMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castellana, Vito G.; Minutoli, Marco; Bhatt, Shreyansh

    Graphs represent an increasingly popular data model for data analytics, since they can naturally represent relationships and interactions between entities. Relational databases and their pure table-based data model are not well suited to store and process sparse data. Consequently, graph databases have gained interest in the last few years, and the Resource Description Framework (RDF) became the standard data model for graph data. Nevertheless, while RDF is well suited to analyze the relationships between entities, it is not efficient in representing their attributes and properties. In this work we propose the adoption of a new hybrid data model, based on attributed graphs, that aims at overcoming the limitations of the pure relational and graph data models. We present how we have re-designed the GEMS data-analytics framework to fully take advantage of the proposed hybrid data model. To improve analysts' productivity, in addition to a C++ API for application development, we adopt GraQL as the input query language. We validate our approach by implementing a set of queries on net-flow data, and we compare our framework's performance against Neo4j. Experimental results show significant performance improvement over Neo4j, up to several orders of magnitude when increasing the size of the input data.

  3. Analytical expression for position sensitivity of linear response beam position monitor having inter-electrode cross talk

    NASA Astrophysics Data System (ADS)

    Kumar, Mukesh; Ojha, A.; Garg, A. D.; Puntambekar, T. A.; Senecha, V. K.

    2017-02-01

    According to the quasi-electrostatic model of a linear response capacitive beam position monitor (BPM), the position sensitivity of the device depends only on the aperture of the device and is independent of processing frequency and load impedance. In practice, however, due to the inter-electrode capacitive coupling (cross talk), the actual position sensitivity of the device decreases with increasing frequency and load impedance. We have taken into account the inter-electrode capacitance to derive and propose a new analytical expression for the position sensitivity as a function of frequency and load impedance. The sensitivity of a linear response shoe-box type BPM has been obtained through simulation using CST Studio Suite to verify and confirm the validity of the new analytical equation. Good agreement between the simulation results and the new analytical expression suggests that this method can be exploited for the proper design of BPMs.
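The effect described can be illustrated with a toy model (an assumption for illustration, not the authors' derivation): if a fraction k of each electrode's signal leaks onto the opposite electrode, the conventional difference-over-sum position estimate shrinks by the factor (1 - k)/(1 + k):

```python
def position_estimate(v_a, v_b):
    """Conventional difference-over-sum position estimate (in units of the
    monitor's sensitivity coefficient)."""
    return (v_a - v_b) / (v_a + v_b)

def couple(v_a, v_b, k):
    """Toy cross-talk model: a fraction k of each electrode's signal leaks
    onto the opposite electrode."""
    return v_a + k * v_b, v_b + k * v_a

# Ideal signals for a beam displaced toward electrode A
v_a, v_b = 1.2, 0.8
x_ideal = position_estimate(v_a, v_b)               # 0.2

# With 10% coupling the apparent displacement shrinks by (1 - k) / (1 + k)
k = 0.1
x_coupled = position_estimate(*couple(v_a, v_b, k))
print(round(x_coupled / x_ideal, 4))  # 0.8182, i.e. (1 - 0.1) / (1 + 0.1)
```

In the paper's model the coupling is capacitive, so the effective k (and hence the sensitivity loss) grows with frequency and load impedance.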

  4. Residual Stress Reversal in Highly Strained Shot Peened Structural Elements. Degree awarded by Florida Univ.

    NASA Technical Reports Server (NTRS)

    Mitchell, William S.; Throckmorton, David (Technical Monitor)

    2002-01-01

    The purpose of this research was to further the understanding of a crack initiation problem in a highly strained pressure containment housing. Finite Element Analysis methods were used to model the behavior of shot peened materials undergoing plastic deformation. Analytical results are in agreement with laboratory tensile tests that simulated the actual housing load conditions. These results further validate the original investigation finding that the shot peened residual stress had reversed, changing from compressive to tensile, and demonstrate that analytical finite element methods can be used to predict this behavior.

  5. Integrable Time-Dependent Quantum Hamiltonians

    NASA Astrophysics Data System (ADS)

    Sinitsyn, Nikolai A.; Yuzbashyan, Emil A.; Chernyak, Vladimir Y.; Patra, Aniket; Sun, Chen

    2018-05-01

    We formulate a set of conditions under which the nonstationary Schrödinger equation with a time-dependent Hamiltonian is exactly solvable analytically. The main requirement is the existence of a non-Abelian gauge field with zero curvature in the space of system parameters. Known solvable multistate Landau-Zener models satisfy these conditions. Our method provides a strategy to incorporate time dependence into various quantum integrable models while maintaining their integrability. We also validate some prior conjectures, including the solution of the driven generalized Tavis-Cummings model.

  6. Psychological and Physiological Selection of Military Special Operations Forces Personnel (Selection psychologique et physiologique des militaires des forces d’operations speciales)

    DTIC Science & Technology

    2012-10-01

    in the selection literature today is the Five Factor Model ( FFM ) or “Big 5” model of personality. This model includes: 1) Openness; 2...Conscientiousness; 3) Extraversion; 4) Agreeableness; and 5) Emotional Stability. Meta-analytic studies have found the FFM of personality to be predictive...is a self-report measure of the FFM that has demonstrated reliability and validity in numerous studies [18]. Another FFM measure, the Trait Self

  7. Applications of SMART: A DRDC Atmospheric Radiative Transfer Library Optimized for Wide Band Computations

    DTIC Science & Technology

    2011-06-28

    DRDC accurate refracted path calculation – 2-stream (flux) and DISORT (N-stream) MS calculations – Lambert and sea surface (DRDC analytical model) BRDF ...radiance – MODTRAN molecular extinctions (CK) • Seamless integration of MOD4v3r1 – MODTRAN and DRDC aerosol models – Falling snow model (DRDC

  8. Modification and validation of an analytical source model for external beam radiotherapy Monte Carlo dose calculations.

    PubMed

    Davidson, Scott E; Cui, Jing; Kry, Stephen; Deasy, Joseph O; Ibbott, Geoffrey S; Vicic, Milos; White, R Allen; Followill, David S

    2016-08-01

    A previously reported dose calculation tool, which combines the accuracy of the dose planning method (DPM) Monte Carlo code with the versatility of a practical analytical multisource model, has been improved and validated for the Varian 6 and 10 MV linear accelerators (linacs). The calculation tool can be used to calculate doses in advanced clinical application studies. One shortcoming of current clinical trials that report dose from patient plans is the lack of a standardized dose calculation methodology. Because commercial treatment planning systems (TPSs) have their own dose calculation algorithms and the clinical trial participant who uses these systems is responsible for commissioning the beam model, variation exists in the reported calculated dose distributions. Today's modern linac is manufactured to tight specifications so that variability within a linac model is quite low. The expectation is that a single dose calculation tool for a specific linac model can be used to accurately recalculate dose from patient plans that have been submitted to the clinical trial community from any institution. The calculation tool would provide for a more meaningful outcome analysis. The analytical source model was described by a primary point source, a secondary extra-focal source, and a contaminant electron source. Off-axis energy softening and fluence effects were also included. Hyperbolic functions have been incorporated into the model to correct for the changes in output and in electron contamination with field size. A multileaf collimator (MLC) model is included to facilitate phantom and patient dose calculations. An offset to the MLC leaf positions was used to correct for the rudimentary assumed primary point source. Dose calculations of the depth dose and profiles for field sizes 4 × 4 to 40 × 40 cm agree with measurement within 2% of the maximum dose or 2 mm distance to agreement (DTA) for 95% of the data points tested. 
The model was capable of predicting the depth of the maximum dose within 1 mm. Anthropomorphic phantom benchmark testing of modulated and patterned MLC treatment plans showed agreement with measurement within 3% in target regions using thermoluminescent dosimeters (TLD). Using radiochromic film normalized to TLD, a gamma criterion of 3% of maximum dose and 2 mm DTA was applied, with a pass rate of at least 85% in the high dose, high gradient, and low dose regions. Finally, recalculations of patient plans using DPM showed good agreement relative to a commercial TPS when comparing dose volume histograms and 2D dose distributions. A unique analytical source model coupled to the dose planning method Monte Carlo dose calculation code has been modified and validated using basic beam data and anthropomorphic phantom measurement. While this tool can be applied in general use for a particular linac model, specifically it was developed to provide a singular methodology to independently assess treatment plan dose distributions from those clinical institutions participating in National Cancer Institute trials.
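The 3%/2 mm gamma criterion used in the film analysis can be sketched in one dimension. The following is a minimal, globally normalized gamma-index implementation for profiles on a common grid, an illustrative simplification rather than the evaluation software used in the study:

```python
import numpy as np

def gamma_1d(x, dose_ref, dose_eval, dose_frac=0.03, dta=2.0):
    """Minimal 1-D gamma index with global normalization.
    For each reference point, take the minimum over all evaluated points of
    sqrt((dx / DTA)^2 + (dD / (dose_frac * D_max))^2); gamma <= 1 passes."""
    d_max = dose_ref.max()
    gam = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dist2 = ((x - xi) / dta) ** 2                       # spatial term
        dose2 = ((dose_eval - di) / (dose_frac * d_max)) ** 2  # dose term
        gam[i] = np.sqrt((dist2 + dose2).min())
    return gam

x = np.linspace(0.0, 50.0, 101)               # positions in mm
ref = np.exp(-((x - 25.0) / 10.0) ** 2)       # synthetic dose profile
gam = gamma_1d(x, ref, ref)                   # identical profiles
pass_rate = float((gam <= 1.0).mean())
print(pass_rate)  # 1.0
```

Clinical gamma tools add local normalization, dose thresholds, and sub-grid interpolation; the pass rate reported in the abstract is the fraction of points with gamma at or below 1.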

  9. Development and necessary norms of reasoning

    PubMed Central

    Markovits, Henry

    2014-01-01

    The question of whether reasoning can, or should, be described by a single normative model is an important one. In the following, I combine epistemological considerations taken from Piaget’s notion of genetic epistemology, a hypothesis about the role of reasoning in communication and developmental data to argue that some basic logical principles are in fact highly normative. I argue here that explicit, analytic human reasoning, in contrast to intuitive reasoning, uniformly relies on a form of validity that allows distinguishing between valid and invalid arguments based on the existence of counterexamples to conclusions. PMID:24904501

  10. Validation of the Arabic Version of the Group Personality Projective Test among university students in Bahrain.

    PubMed

    Al-Musawi, Nu'man M

    2003-04-01

    Using confirmatory factor analytic techniques on data generated from 200 students enrolled at the University of Bahrain, we obtained some construct validity and reliability data for the Arabic Version of the 1961 Group Personality Projective Test by Cassel and Khan. In contrast to the 5-factor model proposed for the Group Personality Projective Test, a 6-factor solution appeared justified for the Arabic Version of this test, suggesting some variance between the cultural groups in the United States and in Bahrain.

  11. SMA-MAP: a plasma protein panel for spinal muscular atrophy.

    PubMed

    Kobayashi, Dione T; Shi, Jing; Stephen, Laurie; Ballard, Karri L; Dewey, Ruth; Mapes, James; Chung, Brett; McCarthy, Kathleen; Swoboda, Kathryn J; Crawford, Thomas O; Li, Rebecca; Plasterer, Thomas; Joyce, Cynthia; Chung, Wendy K; Kaufmann, Petra; Darras, Basil T; Finkel, Richard S; Sproule, Douglas M; Martens, William B; McDermott, Michael P; De Vivo, Darryl C; Walker, Michael G; Chen, Karen S

    2013-01-01

    Spinal Muscular Atrophy (SMA) presents challenges in (i) monitoring disease activity and predicting progression, (ii) designing trials that allow rapid assessment of candidate therapies, and (iii) understanding molecular causes and consequences of the disease. Validated biomarkers of SMA motor and non-motor function would offer utility in addressing these challenges. Our objectives were (i) to discover additional markers from the Biomarkers for SMA (BforSMA) study using an immunoassay platform, and (ii) to validate the putative biomarkers in an independent cohort of SMA patients collected from a multi-site natural history study (NHS). BforSMA study plasma samples (N = 129) were analyzed by immunoassay to identify new analytes correlating to SMA motor function. These immunoassays included the strongest candidate biomarkers identified previously by chromatography. We selected 35 biomarkers to validate in an independent cohort of SMA type 1, 2, and 3 samples (N = 158) from an SMA NHS. The putative biomarkers were tested for association to multiple motor scales and to pulmonary function, neurophysiology, strength, and quality of life measures. We implemented a Tobit model to predict SMA motor function scores. Twelve of the 35 putative SMA biomarkers were significantly associated (p<0.05) with motor function, with a 13th analyte being nearly significant. Several other analytes associated with non-motor SMA outcome measures. From these 35 biomarkers, 27 analytes were selected for inclusion in a commercial panel (SMA-MAP) for association with motor and other functional measures. Discovery and validation using independent cohorts yielded a set of SMA biomarkers significantly associated with motor function and other measures of SMA disease activity. A commercial SMA-MAP biomarker panel was generated for further testing in other SMA collections and interventional trials. 
Future work includes evaluating the panel in other neuromuscular diseases, for pharmacodynamic responsiveness to experimental SMA therapies, and for predicting functional changes over time in SMA patients.
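    The Tobit step described above (predicting a floor-censored motor-function score from analyte levels) can be sketched generically. This is a minimal left-censored (type I) Tobit fit by maximum likelihood on synthetic data; the single predictor, true coefficients, and censoring floor are all assumptions for illustration, not the study's actual model or data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Synthetic data: a latent score depends linearly on one biomarker, but the
# observed score is floored at `lower` (left-censoring, Tobit type I).
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)                       # hypothetical analyte level
y_star = 1.0 + 2.0 * x + rng.normal(size=n)  # latent motor score
lower = 0.0
y = np.maximum(y_star, lower)                # observed (censored) score

def neg_log_lik(params):
    b0, b1, log_s = params
    s = np.exp(log_s)                        # enforce sigma > 0
    mu = b0 + b1 * x
    censored = y <= lower
    ll = np.where(
        censored,
        norm.logcdf((lower - mu) / s),           # point mass at the floor
        norm.logpdf((y - mu) / s) - np.log(s),   # density above the floor
    )
    return -ll.sum()

fit = minimize(neg_log_lik, x0=[0.0, 0.0, 0.0], method="BFGS")
b0_hat, b1_hat, s_hat = fit.x[0], fit.x[1], float(np.exp(fit.x[2]))
```

The censored likelihood recovers the latent slope that a naive least-squares fit on the floored scores would systematically underestimate.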

  12. Heat transfer analysis of the Bridgman-Stockbarger configuration for crystal growth. Part 1: Analytical treatment of the axial temperature distribution

    NASA Technical Reports Server (NTRS)

    Jasinski, T. J.; Rohsenow, W. M.; Witt, A. F.

    1982-01-01

    All first order effects on the axial temperature distribution in a solidifying charge in a Bridgman-Stockbarger configuration for crystal growth are analyzed on the basis of a one dimensional model whose validity can be verified through comparison with published finite difference analyses of two dimensional models. The model presented includes an insulated region between axially aligned heat pipes and considers the effects of charge diameter, charge motion, thickness and thermal conductivity of a confining crucible, thermal conductivity change at the crystal-melt interface, generation of latent heat at the interface, and finite charge length. Results are primarily given in analytical form and can be used, without recourse to computer work, both to improve furnace design and to optimize growth conditions in a given thermal configuration.

  13. Thermal design and TDM test of the ETS-VI

    NASA Astrophysics Data System (ADS)

    Yoshinaka, T.; Kanamori, K.; Takenaka, N.; Kawashima, J.; Ido, Y.; Kuriyama, Y.

    The Engineering Test Satellite-VI (ETS-VI) thermal design, thermal development model (TDM) test, and evaluation results are described. The allocation of the thermal control materials on the spacecraft is illustrated. The principal design approach is to minimize the interactions between the antenna tower module and the main body, and between the main body and the liquid apogee propulsion system by means of multilayer insulation blankets and low conductance graphite epoxy support structures. The TDM test shows that the thermal control subsystem is capable of maintaining the on-board components within specified temperature limits. The heat pipe network is confirmed to operate properly, and a uniform panel temperature distribution is accomplished. The thermal analytical model is experimentally verified. The validity of the thermal control subsystem design is confirmed by the modified on-orbit analytical model.

  14. Improvement of sound insulation performance of double-glazed windows by using viscoelastic connectors

    NASA Astrophysics Data System (ADS)

    Takahashi, D.; Sawaki, S.; Mu, R.-L.

    2016-06-01

    A new method for improving the sound insulation performance of double-glazed windows is proposed. This technique uses viscoelastic materials as connectors between the two glass panels to ensure that the appropriate spacing is maintained. An analytical model that makes it possible to discuss the effects of spacing, contact area, and viscoelastic properties of the connectors on the performance in terms of sound insulation is developed. The validity of the model is verified by comparing its results with measured data. The numerical experiments using this analytical model showed the importance of the ability of the connectors to achieve the appropriate spacing and their viscoelastic properties, both of which are necessary for improving the sound insulation performance. In addition, it was shown that the most effective factor is damping: the stronger the damping, the more the insulation performance increases.

  15. Design optimization of an axial-field eddy-current magnetic coupling based on magneto-thermal analytical model

    NASA Astrophysics Data System (ADS)

    Fontchastagner, Julien; Lubin, Thierry; Mezani, Smaïl; Takorabet, Noureddine

    2018-03-01

    This paper presents a design optimization of an axial-flux eddy-current magnetic coupling. The design procedure is based on a torque formula derived from a 3D analytical model and a population algorithm method. The main objective of this paper is to determine the best design in terms of magnets volume in order to transmit a torque between two movers, while ensuring a low slip speed and a good efficiency. The torque formula is very accurate and computationally efficient, and is valid for any slip speed values. Nevertheless, in order to solve more realistic problems, and then, take into account the thermal effects on the torque value, a thermal model based on convection heat transfer coefficients is also established and used in the design optimization procedure. Results show the effectiveness of the proposed methodology.

  16. Current issues involving screening and identification of chemical contaminants in foods by mass spectrometry

    USDA-ARS?s Scientific Manuscript database

    Although quantitative analytical methods must be empirically validated prior to their actual use in a variety of applications, including regulatory monitoring of chemical adulterants in foods, validation of qualitative method performance for the analytes and matrices of interest is frequently ignore...

  17. Technology Integration (Task 20) Aeroservoelastic Modeling and Design Studies. Part A; Evaluation of Aeroservoelastic Effects on Flutter and Dynamic Gust Response

    NASA Technical Reports Server (NTRS)

    Nagaraja, K. S.; Kraft, R. H.

    1999-01-01

    The HSCT Flight Controls Group has developed longitudinal control laws, utilizing PTC aeroelastic flexible models, to minimize aeroservoelastic interaction effects for a number of flight conditions. The control law design process resulted in a higher order controller and utilized a large number of sensors distributed along the body to minimize the flexibility effects. Processes were developed to implement these higher order control laws for performing the dynamic gust loads and flutter analyses. These processes and their validation were documented in Reference 2 for a selected flight condition. The analytical results for additional flight conditions are presented in this document for further validation.

  18. The forensic validity of visual analytics

    NASA Astrophysics Data System (ADS)

    Erbacher, Robert F.

    2008-01-01

    The wider use of visualization and visual analytics in wide-ranging fields has led to the need for visual analytics capabilities to be legally admissible, especially when applied to digital forensics. This brings the need to consider legal implications when performing visual analytics, an issue not traditionally examined in visualization and visual analytics techniques and research. While digital data is generally admissible under the Federal Rules of Evidence [10][21], a comprehensive validation of the digital evidence is considered prudent. A comprehensive validation requires validation of the digital data under rules for authentication, hearsay, the best evidence rule, and privilege. Additional issues with digital data arise when exploring digital data related to admissibility and the validity of what information was examined, to what extent, and whether the analysis process was sufficiently covered by a search warrant. For instance, a search warrant generally covers very narrow requirements as to what law enforcement is allowed to examine and acquire during an investigation. When searching a hard drive for child pornography, how admissible is evidence of an unrelated crime, e.g., drug dealing? This is further complicated by the concept of "in plain view": when analyzing a hard drive, what would be considered "in plain view"? The purpose of this paper is to discuss the issues of digital forensics and the related issues as they apply to visual analytics and identify how visual analytics techniques fit into the digital forensics analysis process, how visual analytics techniques can improve the legal admissibility of digital data, and identify what research is needed to further improve this process. 
The goal of this paper is to open up consideration of legal ramifications among the visualization community; the author is not a lawyer and the discussions are not meant to be inclusive of all differences in laws between states and countries.

  19. Uncertainty in the Bayesian meta-analysis of normally distributed surrogate endpoints

    PubMed Central

    Thompson, John R; Spata, Enti; Abrams, Keith R

    2015-01-01

    We investigate the effect of the choice of parameterisation of meta-analytic models and related uncertainty on the validation of surrogate endpoints. Different meta-analytical approaches take into account different levels of uncertainty which may impact on the accuracy of the predictions of treatment effect on the target outcome from the treatment effect on a surrogate endpoint obtained from these models. A range of Bayesian as well as frequentist meta-analytical methods are implemented using illustrative examples in relapsing–remitting multiple sclerosis, where the treatment effect on disability worsening is the primary outcome of interest in healthcare evaluation, while the effect on relapse rate is considered as a potential surrogate to the effect on disability progression, and in gastric cancer, where the disease-free survival has been shown to be a good surrogate endpoint to the overall survival. Sensitivity analysis was carried out to assess the impact of distributional assumptions on the predictions. Also, sensitivity to modelling assumptions and performance of the models were investigated by simulation. Although different methods can predict mean true outcome almost equally well, inclusion of uncertainty around all relevant parameters of the model may lead to less certain and hence more conservative predictions. When investigating endpoints as candidate surrogate outcomes, a careful choice of the meta-analytical approach has to be made. Models underestimating the uncertainty of available evidence may lead to overoptimistic predictions which can then have an effect on decisions made based on such predictions. PMID:26271918

  20. Uncertainty in the Bayesian meta-analysis of normally distributed surrogate endpoints.

    PubMed

    Bujkiewicz, Sylwia; Thompson, John R; Spata, Enti; Abrams, Keith R

    2017-10-01

    We investigate the effect of the choice of parameterisation of meta-analytic models and related uncertainty on the validation of surrogate endpoints. Different meta-analytical approaches take into account different levels of uncertainty which may impact on the accuracy of the predictions of treatment effect on the target outcome from the treatment effect on a surrogate endpoint obtained from these models. A range of Bayesian as well as frequentist meta-analytical methods are implemented using illustrative examples in relapsing-remitting multiple sclerosis, where the treatment effect on disability worsening is the primary outcome of interest in healthcare evaluation, while the effect on relapse rate is considered as a potential surrogate to the effect on disability progression, and in gastric cancer, where the disease-free survival has been shown to be a good surrogate endpoint to the overall survival. Sensitivity analysis was carried out to assess the impact of distributional assumptions on the predictions. Also, sensitivity to modelling assumptions and performance of the models were investigated by simulation. Although different methods can predict mean true outcome almost equally well, inclusion of uncertainty around all relevant parameters of the model may lead to less certain and hence more conservative predictions. When investigating endpoints as candidate surrogate outcomes, a careful choice of the meta-analytical approach has to be made. Models underestimating the uncertainty of available evidence may lead to overoptimistic predictions which can then have an effect on decisions made based on such predictions.

  1. Reactant conversion in homogeneous turbulence: Mathematical modeling, computational validations and practical applications

    NASA Technical Reports Server (NTRS)

    Madnia, C. K.; Frankel, S. H.; Givi, P.

    1992-01-01

    Closed form analytical expressions are obtained for predicting the limiting rate of reactant conversion in a binary reaction of the type F + rO yields (1 + r) Product in unpremixed homogeneous turbulence. These relations are obtained by means of a single point Probability Density Function (PDF) method based on the Amplitude Mapping Closure. It is demonstrated that with this model, the maximum rate of the reactants' decay can be conveniently expressed in terms of definite integrals of the Parabolic Cylinder Functions. For the cases with complete initial segregation, it is shown that the results agree very closely with those predicted by employing a Beta density of the first kind for an appropriately defined Shvab-Zeldovich scalar variable. With this assumption, the final results can also be expressed in terms of closed form analytical expressions which are based on the Incomplete Beta Functions. With both models, the dependence of the results on the stoichiometric coefficient and the equivalence ratio can be expressed in an explicit manner. For a stoichiometric mixture, the analytical results simplify significantly. In the mapping closure, these results are expressed in terms of simple trigonometric functions. For the Beta density model, they are in the form of Gamma Functions. In all the cases considered, the results are shown to agree well with data generated by Direct Numerical Simulations (DNS). Due to the simplicity of these expressions and because of the nice mathematical features of the Parabolic Cylinder and the Incomplete Beta Functions, these models are recommended for estimating the limiting rate of reactant conversion in homogeneous reacting flows. These results also provide useful insights in assessing the extent of validity of turbulence closures in the modeling of unpremixed reacting flows. 
Some discussions are provided on the extension of the model for treating more complicated reacting systems including realistic kinetics schemes and multi-scalar mixing with finite rate chemical reactions in more complex configurations.
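    As a generic illustration (not the paper's exact closure), the Incomplete Beta Function central to such Beta-density models is directly available in SciPy. Here a Beta PDF is matched to an assumed mean and variance of a conserved Shvab-Zeldovich scalar, and its CDF at an assumed stoichiometric value is evaluated both in closed form and by quadrature; the moment and stoichiometric values are illustrative assumptions.

```python
from scipy.special import betainc
from scipy.stats import beta as beta_dist
from scipy.integrate import quad

# Match a Beta(a, b) density on [0, 1] to assumed first two moments of a
# conserved scalar Z (moment values are illustrative).
mean, var = 0.5, 0.05
k = mean * (1.0 - mean) / var - 1.0
a, b = mean * k, (1.0 - mean) * k            # here a = b = 2

Zst = 0.3                                    # assumed stoichiometric value
p_below = betainc(a, b, Zst)                 # P(Z < Zst) in closed form

# Cross-check the closed form by direct quadrature of the Beta density.
p_quad, _ = quad(beta_dist(a, b).pdf, 0.0, Zst)
```

For Beta(2, 2) the regularized incomplete beta function reduces to x²(3 − 2x), so both evaluations agree at 0.216.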

  2. A closed-form analytical model for predicting 3D boundary layer displacement thickness for the validation of viscous flow solvers

    NASA Astrophysics Data System (ADS)

    Kumar, V. R. Sanal; Sankar, Vigneshwaran; Chandrasekaran, Nichith; Saravanan, Vignesh; Natarajan, Vishnu; Padmanabhan, Sathyan; Sukumaran, Ajith; Mani, Sivabalan; Rameshkumar, Tharikaa; Nagaraju Doddi, Hema Sai; Vysaprasad, Krithika; Sharan, Sharad; Murugesh, Pavithra; Shankar, S. Ganesh; Nejaamtheen, Mohammed Niyasdeen; Baskaran, Roshan Vignesh; Rahman Mohamed Rafic, Sulthan Ariff; Harisrinivasan, Ukeshkumar; Srinivasan, Vivek

    2018-02-01

    A closed-form analytical model is developed for estimating the 3D boundary-layer-displacement thickness of an internal flow system at the Sanal flow choking condition for adiabatic flows obeying the physics of compressible viscous fluids. At this unique condition, the boundary-layer blockage induced fluid-throat choking and the adiabatic wall-friction persuaded flow choking occur at a single sonic-fluid-throat location. The beauty and novelty of this model lie in the fact that, without missing the flow physics, we could predict the exact boundary-layer blockage of both 2D and 3D cases at the sonic-fluid-throat from the known values of the inlet Mach number, the adiabatic index of the gas and the inlet port diameter of the internal flow system. We found that the 3D blockage factor is 47.33% lower than the 2D blockage factor with air as the working fluid. We concluded that the exact prediction of the boundary-layer-displacement thickness at the sonic-fluid-throat provides a means to correctly pinpoint the causes of errors of viscous flow solvers. The methodology presented herein will play a pivotal role in future physical and biological sciences for credible verification, calibration and validation of various viscous flow solvers for high-fidelity 2D/3D numerical simulations of real-world flows. Furthermore, our closed-form analytical model will be useful for solid and hybrid rocket designers for the grain-port-geometry optimization of new-generation single-stage-to-orbit dual-thrust-motors with the highest promising propellant loading density within the given envelope without manifestation of the Sanal flow choking leading to possible shock waves causing catastrophic failures.

  3. Suspension concentration distribution in turbulent flows: An analytical study using fractional advection-diffusion equation

    NASA Astrophysics Data System (ADS)

    Kundu, Snehasis

    2018-09-01

    In this study, the vertical distribution of sediment particles in steady uniform turbulent open channel flow over an erodible bed is investigated using a fractional advection-diffusion equation (fADE). Unlike previous investigations using the fADE to study suspension distribution, this study employs the modified Atangana-Baleanu-Caputo fractional derivative with a non-singular and non-local kernel. The proposed fADE is solved and an analytical model for finding the vertical suspension distribution is obtained. The model is validated against experimental as well as field measurements from the Missouri River, Mississippi River and Rio Grande conveyance channel and is compared with the Rouse equation and another fractional model found in the literature. A quantitative error analysis shows that the proposed model is able to predict the vertical distribution of particles more appropriately than previous models. The validation results show that the fractional model can be applied equally to all sizes of particles with an appropriate choice of the order of the fractional derivative α. It is also found that, besides particle diameter, the parameter α depends on the mass density of the particles and the shear velocity of the flow. To predict this parameter, a multivariate regression is carried out and a relation is proposed for easy application of the model. From the results for sand and plastic particles, it is found that the parameter α is more sensitive to mass density than to particle diameter. The rationality of the dependence of α on particle and flow characteristics has been justified physically.
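    For reference, the classical Rouse profile that such fractional models are benchmarked against can be computed in a few lines; all parameter values below (settling velocity, shear velocity, depth, reference level) are illustrative assumptions, not values from the study.

```python
import numpy as np

kappa = 0.41          # von Karman constant
w_s = 0.01            # particle settling velocity, m/s (assumed)
u_star = 0.05         # shear velocity, m/s (assumed)
h = 1.0               # flow depth, m (assumed)
z_a = 0.05 * h        # reference level near the bed
c_a = 1.0             # reference concentration (normalized)

P = w_s / (kappa * u_star)                    # Rouse number
z = np.linspace(z_a, 0.99 * h, 50)            # vertical coordinates
c = c_a * (((h - z) / z) * (z_a / (h - z_a))) ** P
```

The profile equals c_a at the reference level and decays monotonically toward the free surface; heavier particles (larger Rouse number) concentrate nearer the bed.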

  4. DC and small-signal physical models for the AlGaAs/GaAs high electron mobility transistor

    NASA Technical Reports Server (NTRS)

    Sarker, J. C.; Purviance, J. E.

    1991-01-01

    Analytical and numerical models are developed for the microwave small-signal performance, such as the transconductance, gate-to-source capacitance, current gain cut-off frequency and optimum cut-off frequency, of the AlGaAs/GaAs High Electron Mobility Transistor (HEMT) in both the normal and compressed transconductance regions. The validated I-V characteristics and the small-signal performances of four HEMTs are presented.

  5. Shuttle antenna radome technology test program. Volume 2: Development of S-band antenna interface design

    NASA Technical Reports Server (NTRS)

    Kuhlman, E. A.; Baranowski, L. C.

    1977-01-01

    The effects of the Thermal Protection Subsystem (TPS) contamination on the space shuttle orbiter S band quad antenna due to multiple mission buildup are discussed. A test fixture was designed, fabricated and exposed to ten cycles of simulated ground and flight environments. Radiation pattern and impedance tests were performed to measure the effects of the contaminates. The degradation in antenna performance was attributed to the silicone waterproofing in the TPS tiles rather than exposure to the contaminating sources used in the test program. Validation of the accuracy of an analytical thermal model is discussed. Thermal vacuum tests with a test fixture and a representative S band quad antenna were conducted to evaluate the predictions of the analytical thermal model for two orbital heating conditions and entry from each orbit. The results show that the accuracy of predicting the test fixture thermal responses is largely dependent on the ability to define the boundary and ambient conditions. When the test conditions were accurately included in the analytical model, the predictions were in excellent agreement with measurements.

  6. A new DG nanoscale TFET based on MOSFETs by using source gate electrode: 2D simulation and an analytical potential model

    NASA Astrophysics Data System (ADS)

    Ramezani, Zeinab; Orouji, Ali A.

    2017-08-01

    This paper suggests and investigates a double-gate (DG) MOSFET that emulates a tunnel field effect transistor (M-TFET). We have combined this novel concept into a double-gate MOSFET, which behaves as a tunneling field effect transistor through work function engineering. In the proposed structure, in addition to the main gate, we utilize another gate over the source region with zero applied voltage and a proper work function to convert the source region from N+ to P+. We check the impact of varying the source gate work function and source doping on the device parameters. The simulation results of the M-TFET indicate that it is a suitable candidate for switching applications. Also, we present a two-dimensional analytical potential model of the proposed structure by solving Poisson's equation in the x and y directions; the electric field is then obtained by differentiating the potential profile. To validate the model, the analytical results have been compared with the SILVACO ATLAS device simulator.
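    As a rough numerical counterpart (not the paper's closed-form derivation), the same kind of 2D boundary-value problem can be approximated by Jacobi relaxation of Laplace's equation, with the field recovered from the potential gradient; the grid size and boundary potentials below are assumptions.

```python
import numpy as np

nx, ny = 21, 11
phi = np.zeros((ny, nx))     # potential grid; side boundaries held at 0 V
phi[0, :] = 1.0              # assumed top boundary potential, V
phi[-1, :] = 0.3             # assumed bottom boundary potential, V

# Jacobi sweeps: each interior point is replaced by the average of its four
# neighbors; the slice expression on the right is evaluated on the old array
# before assignment, so this is a proper Jacobi update.
for _ in range(2000):
    phi[1:-1, 1:-1] = 0.25 * (
        phi[:-2, 1:-1] + phi[2:, 1:-1] + phi[1:-1, :-2] + phi[1:-1, 2:]
    )

# Electric field from the potential gradient, E = -grad(phi).
Ey, Ex = np.gradient(-phi)
```

By the maximum principle, interior potentials stay between the minimum and maximum boundary values.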

  7. Analytical and numerical treatment of drift-tearing modes in plasma slab

    NASA Astrophysics Data System (ADS)

    Mirnov, V. V.; Hegna, C. C.; Sovinec, C. R.; Howell, E. C.

    2016-10-01

    Two-fluid corrections to linear tearing modes include 1) diamagnetic drifts that reduce the growth rate and 2) electron and ion decoupling on short scales that can lead to fast reconnection. We have recently developed an analytical model that includes effects 1) and 2) and the important contribution of finite electron parallel thermal conduction. Both tendencies 1) and 2) are confirmed by an approximate analytic dispersion relation that is derived using a perturbative approach in small ion-sound gyroradius ρs. This approach is only valid at the beginning of the transition from the collisional to semi-collisional regimes. Further analytical and numerical work is performed to cover the full interval of ρs connecting these two limiting cases. Growth rates are computed from analytic theory with a shooting method. They match the resistive MHD regime with the dispersion relations known at asymptotically large ion-sound gyroradius. A comparison between this analytical treatment and linear numerical simulations using the NIMROD code with cold ions and hot electrons in a plasma slab is reported. The material is based on work supported by the U.S. DOE and NSF.

  8. Validating Semi-analytic Models of High-redshift Galaxy Formation Using Radiation Hydrodynamical Simulations

    NASA Astrophysics Data System (ADS)

    Côté, Benoit; Silvia, Devin W.; O’Shea, Brian W.; Smith, Britton; Wise, John H.

    2018-05-01

    We use a cosmological hydrodynamic simulation calculated with Enzo and the semi-analytic galaxy formation model (SAM) GAMMA to address the chemical evolution of dwarf galaxies in the early universe. The long-term goal of the project is to better understand the origin of metal-poor stars and the formation of dwarf galaxies and the Milky Way halo by cross-validating these theoretical approaches. We combine GAMMA with the merger tree of the most massive galaxy found in the hydrodynamic simulation and compare the star formation rate, the metallicity distribution function (MDF), and the age–metallicity relationship predicted by the two approaches. We found that the SAM can reproduce the global trends of the hydrodynamic simulation. However, there are degeneracies between the model parameters, and more constraints (e.g., star formation efficiency, gas flows) need to be extracted from the simulation to isolate the correct semi-analytic solution. Stochastic processes such as bursty star formation histories and star formation triggered by supernova explosions cannot be reproduced by the current version of GAMMA. Non-uniform mixing in the galaxy’s interstellar medium, coming primarily from self-enrichment by local supernovae, causes a broadening in the MDF that can be emulated in the SAM by convolving its predicted MDF with a Gaussian function having a standard deviation of ∼0.2 dex. We found that the most massive galaxy in the simulation retains nearly 100% of its baryonic mass within its virial radius, which is in agreement with what is needed in GAMMA to reproduce the global trends of the simulation.
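    The smoothing step described above can be sketched directly; the toy intrinsic MDF below is an assumption for illustration, and only the ∼0.2 dex broadening comes from the abstract.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

dex_per_bin = 0.05
feh = np.arange(-4.0, 0.0, dex_per_bin)              # [Fe/H] grid
mdf = np.exp(-0.5 * ((feh + 1.5) / 0.3) ** 2)        # assumed intrinsic MDF
mdf /= mdf.sum()                                     # normalize to unit mass

sigma_dex = 0.2                                      # broadening from the text
# gaussian_filter1d takes sigma in samples, hence the unit conversion.
mdf_smoothed = gaussian_filter1d(mdf, sigma=sigma_dex / dex_per_bin)
```

The convolution lowers and widens the peak while preserving the total mass (up to negligible edge effects for a distribution well inside the grid).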

  9. DEVELOPMENT AND VALIDATION OF ANALYTICAL METHODS FOR ENUMERATION OF FECAL INDICATORS AND EMERGING CHEMICAL CONTAMINANTS IN BIOSOLIDS

    EPA Science Inventory

    In 2002 the National Research Council (NRC) issued a report which identified a number of issues regarding biosolids land application practices and pointed out the need for improved and validated analytical techniques for regulated indicator organisms and pathogens. They also call...

  10. LATUX: An Iterative Workflow for Designing, Validating, and Deploying Learning Analytics Visualizations

    ERIC Educational Resources Information Center

    Martinez-Maldonado, Roberto; Pardo, Abelardo; Mirriahi, Negin; Yacef, Kalina; Kay, Judy; Clayphan, Andrew

    2015-01-01

    Designing, validating, and deploying learning analytics tools for instructors or students is a challenge that requires techniques and methods from different disciplines, such as software engineering, human-computer interaction, computer graphics, educational design, and psychology. Whilst each has established its own design methodologies, we now…

  11. Correlation of finite element free vibration predictions using random vibration test data. M.S. Thesis - Cleveland State Univ.

    NASA Technical Reports Server (NTRS)

    Chambers, Jeffrey A.

    1994-01-01

    Finite element analysis is regularly used during the engineering cycle of mechanical systems to predict the response to static, thermal, and dynamic loads. The finite element model (FEM) used to represent the system is often correlated with physical test results to determine the validity of the analytical results provided. Results from dynamic testing provide one means for performing this correlation. One of the most common methods of measuring accuracy is classical modal testing, whereby vibratory mode shapes are compared to mode shapes provided by finite element analysis. The degree of correlation between the test and analytical mode shapes can be shown mathematically using the cross orthogonality check. A great deal of time and effort can be expended in generating the set of test-acquired mode shapes needed for the cross orthogonality check. In most situations, response data from vibration tests are digitally processed to generate the mode shapes from a combination of modal parameters, forcing functions, and recorded response data. An alternate method is proposed in which the same correlation of analytical and test-acquired mode shapes can be achieved without conducting the modal survey. Instead, a procedure is detailed in which a minimum of test information, specifically the acceleration response data from a random vibration test, is used to generate a set of equivalent local accelerations to be applied to the reduced analytical model at discrete points corresponding to the test measurement locations. The static solution of the analytical model then produces a set of deformations that, once normalized, can be used to represent the test-acquired mode shapes in the cross orthogonality relation. The method proposed has been shown to provide accurate results for both a simple analytical model and a complex space flight structure.
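    The cross orthogonality check itself is a single matrix product once both mode sets are mass-normalized; the small diagonal mass matrix and measurement-noise level below are synthetic assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_dof, n_modes = 6, 3
M = np.diag(rng.uniform(1.0, 2.0, n_dof))    # assumed diagonal mass matrix
L = np.sqrt(M)                               # elementwise sqrt = matrix sqrt for diagonal M

# Build FEM mode shapes that are mass-normalized: Phi^T M Phi = I.
phi_fem = rng.normal(size=(n_dof, n_modes))
q, _ = np.linalg.qr(L @ phi_fem)
phi_fem = np.linalg.inv(L) @ q

# "Test-acquired" shapes: the FEM shapes plus small measurement noise.
phi_test = phi_fem + 0.01 * rng.normal(size=phi_fem.shape)

# Cross-orthogonality matrix; a common acceptance rule is |C_ii| > 0.9
# on the diagonal and |C_ij| < 0.1 off the diagonal.
C = phi_test.T @ M @ phi_fem
```

Well-correlated mode sets give a C matrix close to the identity.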

  12. VALIDATION OF ANALYTICAL METHODS AND INSTRUMENTATION FOR BERYLLIUM MEASUREMENT: REVIEW AND SUMMARY OF AVAILABLE GUIDES, PROCEDURES, AND PROTOCOLS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekechukwu, A.

    This document proposes to provide a listing of available sources which can be used to validate analytical methods and/or instrumentation for beryllium determination. A literature review was conducted of available standard methods and publications used for method validation and/or quality control. A comprehensive listing of the articles, papers, and books reviewed is given in Appendix 1. Available validation documents and guides are listed in the appendix; each has a brief description of application and use. In the referenced sources, there are varying approaches to validation and varying descriptions of validation at different stages in method development. This discussion focuses on validation and verification of fully developed methods and instrumentation that have been offered up for use or approval by other laboratories or official consensus bodies such as ASTM International, the International Standards Organization (ISO) and the Association of Official Analytical Chemists (AOAC). This review was conducted as part of a collaborative effort to investigate and improve the state of validation for measuring beryllium in the workplace and the environment. Documents and publications from the United States and Europe are included. Unless otherwise specified, all documents were published in English.

  13. Evolution of an Implementation-Ready Interprofessional Pain Assessment Reference Model

    PubMed Central

    Collins, Sarah A; Bavuso, Karen; Swenson, Mary; Suchecki, Christine; Mar, Perry; Rocha, Roberto A.

    2017-01-01

    Standards to increase the consistency of comprehensive pain assessments are important for safety, quality, and analytics activities, including meeting Joint Commission requirements and learning the best management strategies and interventions for the current prescription opioid epidemic. In this study, we describe the development and validation of a Pain Assessment Reference Model ready for implementation on EHR forms and flowsheets. Our process resulted in 5 successive revisions of the reference model, which more than doubled the number of data elements to 47. The organization of the model evolved during validation sessions with panels totaling 48 subject matter experts (SMEs) to include 9 sets of data elements, with one set recommended as a minimal data set. The reference model also evolved when implemented into EHR forms and flowsheets, indicating specifications such as cascading logic that are important to inform secondary use of data. PMID:29854125

  14. Investigation of Zircaloy-2 oxidation model for SFP accident analysis

    NASA Astrophysics Data System (ADS)

    Nemoto, Yoshiyuki; Kaji, Yoshiyuki; Ogawa, Chihiro; Kondo, Keietsu; Nakashima, Kazuo; Kanazawa, Toru; Tojo, Masayuki

    2017-05-01

    The authors previously conducted thermogravimetric analyses of Zircaloy-2 in air. Using the thermogravimetric data, an oxidation model was constructed in this study so that it can be applied to the modeling of cladding degradation under spent fuel pool (SFP) severe accident conditions. For its validation, oxidation tests of long cladding tubes were conducted, and computational fluid dynamics analyses using the constructed oxidation model were performed to simulate the experiments. In the oxidation tests, a high temperature thermal gradient along the cladding axis was applied and air flow rates in the testing chamber were controlled to simulate hypothetical SFP accidents. The analytical outputs successfully reproduced the growth of the oxide film and porous oxide layer on the claddings in the oxidation tests, demonstrating the validity of the oxidation model. The influence of air flow rate on the oxidation behavior was found to be negligible in the conditions investigated in this study.

  15. Quantitative analysis of Sudan dye adulteration in paprika powder using FTIR spectroscopy.

    PubMed

    Lohumi, Santosh; Joshi, Ritu; Kandpal, Lalit Mohan; Lee, Hoonsoo; Kim, Moon S; Cho, Hyunjeong; Mo, Changyeun; Seo, Young-Wook; Rahman, Anisur; Cho, Byoung-Kwan

    2017-05-01

    As adulteration of foodstuffs with Sudan dye, especially paprika- and chilli-containing products, has been reported with some frequency, this issue has become a focal point for addressing food safety. FTIR spectroscopy has been used extensively as an analytical method for quality control and safety determination of food products. Thus, the use of FTIR spectroscopy for rapid determination of Sudan dye in paprika powder was investigated in this study. A net analyte signal (NAS)-based methodology, named HLA/GO (hybrid linear analysis in the literature), was applied to FTIR spectral data to predict Sudan dye concentration. The calibration and validation sets were designed to evaluate the performance of the multivariate method. The results showed a high coefficient of determination (R²) of 0.98 and a low root mean square error (RMSE) of 0.026% for the calibration set, and an R² of 0.97 and an RMSE of 0.05% for the validation set. The model was further validated using a second validation set and through figures of merit such as sensitivity, selectivity, and the limits of detection and quantification. The proposed technique of FTIR combined with HLA/GO is rapid, simple, and low cost, making this approach advantageous compared with the main alternative methods based on liquid chromatography (LC) techniques.
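
    The calibration statistics quoted above (R² and RMSE) can be computed directly from predicted versus reference concentrations; a minimal sketch with hypothetical Sudan dye concentrations (the actual HLA/GO spectral model is not reproduced here):

```python
def rmse(y_true, y_pred):
    """Root mean square error between reference and predicted values."""
    n = len(y_true)
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n) ** 0.5

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical Sudan dye concentrations (%) for a validation set
ref  = [0.0, 0.1, 0.2, 0.4, 0.8, 1.0]
pred = [0.02, 0.08, 0.22, 0.41, 0.77, 1.01]
print(round(r_squared(ref, pred), 3), round(rmse(ref, pred), 3))
```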

  16. Novel permanent magnet linear motor with isolated movers: analytical, numerical and experimental study.

    PubMed

    Yan, Liang; Peng, Juanjuan; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming

    2014-10-01

    This paper proposes a novel permanent magnet linear motor possessing two movers and one stator. The two movers are isolated and can interact with the stator poles to generate independent forces and motions. Compared with conventional multiple motor driving system, it helps to increase the system compactness, and thus improve the power density and working efficiency. The magnetic field distribution is obtained by using equivalent magnetic circuit method. Following that, the formulation of force output considering armature reaction is carried out. Then inductances are analyzed with finite element method to investigate the relationships of the two movers. It is found that the mutual-inductances are nearly equal to zero, and thus the interaction between the two movers is negligible. A research prototype of the linear motor and a measurement apparatus on thrust force have been developed. Both numerical computation and experiment measurement are conducted to validate the analytical model of thrust force. Comparison shows that the analytical model matches the numerical and experimental results well.

  17. Design and Analysis of a Low Latency Deterministic Network MAC for Wireless Sensor Networks

    PubMed Central

    Sahoo, Prasan Kumar; Pattanaik, Sudhir Ranjan; Wu, Shih-Lin

    2017-01-01

    The IEEE 802.15.4e standard has four different superframe structures for different applications. Use of a low latency deterministic network (LLDN) superframe for the wireless sensor network is one of them, which can operate in a star topology. In this paper, a new channel access mechanism for IEEE 802.15.4e-based LLDN shared slots is proposed, and analytical models are designed based on this channel access mechanism. A prediction model is designed to estimate the possible number of retransmission slots based on the number of failed transmissions. Performance analysis in terms of data transmission reliability, delay, throughput and energy consumption is provided based on our proposed designs. Our designs are validated against both simulation and analytical results, and it is observed that the simulation results match the analytical ones well. Besides, our designs are compared with the IEEE 802.15.4 MAC mechanism, and it is shown that ours outperforms it in terms of throughput, energy consumption, delay and reliability. PMID:28937632
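
    The paper's prediction model for retransmission slots is not reproduced in the abstract; purely as an illustration of the quantity being estimated, the sketch below computes the expected number of retransmission slots when each transmission fails independently with a fixed probability (a simplifying assumption the actual model need not make):

```python
def expected_failures(n_slots, p_fail):
    """Expected number of failed transmissions in one LLDN round,
    assuming (hypothetically) independent failures with probability
    p_fail in each of the n_slots dedicated slots."""
    return n_slots * p_fail

def expected_retransmission_slots(n_slots, p_fail, rounds=3):
    """Expected slots consumed by retransmissions over successive
    rounds: each round retries only the previous round's failures."""
    total, pending = 0.0, float(n_slots)
    for _ in range(rounds):
        pending *= p_fail  # failures become next round's retries
        total += pending
    return total

# 20 nodes, 10% failure probability, 3 retransmission rounds
print(expected_failures(20, 0.1), expected_retransmission_slots(20, 0.1))
```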

  19. Analytical Model of Large Data Transactions in CoAP Networks

    PubMed Central

    Ludovici, Alessandro; Di Marco, Piergiuseppe; Calveras, Anna; Johansson, Karl H.

    2014-01-01

    We propose a novel analytical model to study fragmentation methods in wireless sensor networks adopting the Constrained Application Protocol (CoAP) and the IEEE 802.15.4 standard for medium access control (MAC). The blockwise transfer technique proposed in CoAP and the 6LoWPAN fragmentation are included in the analysis. The two techniques are compared in terms of reliability and delay, depending on the traffic, the number of nodes and the parameters of the IEEE 802.15.4 MAC. The results are validated through Monte Carlo simulations. To the best of our knowledge this is the first study that analytically evaluates and compares the performance of CoAP blockwise transfer and 6LoWPAN fragmentation. A major contribution is the possibility to understand the behavior of both techniques under different network conditions. Our results show that 6LoWPAN fragmentation is preferable for delay-constrained applications. For highly congested networks, blockwise transfer slightly outperforms 6LoWPAN fragmentation in terms of reliability. PMID:25153143
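
    As a generic illustration of validating an analytical expression through Monte Carlo simulation (not the paper's 802.15.4 MAC model), the sketch below checks the delivery reliability of an n-fragment datagram, which succeeds only if every fragment is delivered:

```python
import random

def analytical_reliability(p_frag, n_frags):
    # A datagram is delivered only if every fragment succeeds.
    return p_frag ** n_frags

def monte_carlo_reliability(p_frag, n_frags, trials=200_000, seed=1):
    """Empirical delivery rate over many simulated datagrams,
    assuming independent per-fragment success probability p_frag."""
    rng = random.Random(seed)
    ok = sum(
        all(rng.random() < p_frag for _ in range(n_frags))
        for _ in range(trials)
    )
    return ok / trials

p, n = 0.98, 5
print(analytical_reliability(p, n), monte_carlo_reliability(p, n))
```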

  20. Analytical study of robustness of a negative feedback oscillator by multiparameter sensitivity

    PubMed Central

    2014-01-01

    Background One of the distinctive features of biological oscillators such as circadian clocks and cell cycles is robustness, which is the ability to maintain reliable operation in the face of different types of perturbations. In a previous study, we proposed multiparameter sensitivity (MPS) as an intelligible measure of robustness to fluctuations in kinetic parameters. Analytical solutions directly connect the mechanisms and kinetic parameters to dynamic properties such as period, amplitude and their associated MPSs. Although negative feedback loops are known to be common structures in biological oscillators, analytical solutions have not been presented for a general model of negative feedback oscillators. Results We present analytical expressions for the period, amplitude and their associated MPSs for a general model of negative feedback oscillators. The analytical solutions are validated by comparing them with numerical solutions. The analytical solutions explicitly show how the dynamic properties depend on the kinetic parameters. The ratio of a threshold to the amplitude has a strong impact on the period MPS. As the ratio approaches one, the MPS increases, indicating that the period becomes more sensitive to changes in kinetic parameters. We present the first mathematical proof that the distributed time-delay mechanism contributes to making the oscillation period robust to parameter fluctuations. The MPS decreases with an increase in the feedback loop length (i.e., the number of molecular species constituting the feedback loop). Conclusions Since a general model of negative feedback oscillators was employed, the results shown in this paper are expected to hold for many biological oscillators. This study strongly supports the hypothesis that phosphorylations of clock proteins contribute to the robustness of circadian rhythms. 
The analytical solutions give synthetic biologists some clues to design gene oscillators with robust and desired period. PMID:25605374
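
    The paper's analytical MPS expressions are not reproduced in the abstract. Following one common definition of MPS (the sum of squared relative parameter sensitivities of a dynamic property such as the period), the sketch below estimates it numerically by central differences for a toy quantity:

```python
def multiparameter_sensitivity(f, params, h=1e-6):
    """MPS as the sum of squared relative sensitivities,
    S_i = (dQ/dk_i) * (k_i / Q), estimated by central differences.
    This follows one common definition; the paper's analytical MPS
    expressions for the oscillator model are not reproduced here."""
    q0 = f(params)
    mps = 0.0
    for i, k in enumerate(params):
        up = list(params); up[i] = k * (1 + h)
        dn = list(params); dn[i] = k * (1 - h)
        dq = (f(up) - f(dn)) / (2 * k * h)  # central difference
        mps += (dq * k / q0) ** 2
    return mps

# Toy "period" Q = k1 / (k2 * k3): relative sensitivities are
# +1, -1, -1, so MPS = 3 regardless of parameter values.
Q = lambda k: k[0] / (k[1] * k[2])
print(multiparameter_sensitivity(Q, [2.0, 0.5, 4.0]))
```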

  1. Degenerate limit thermodynamics beyond leading order for models of dense matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Constantinou, Constantinos, E-mail: c.constantinou@fz-juelich.de; Muccioli, Brian, E-mail: bm956810@ohio.edu; Prakash, Madappa, E-mail: prakash@ohio.edu

    2015-12-15

    Analytical formulas for next-to-leading order temperature corrections to the thermal state variables of interacting nucleons in bulk matter are derived in the degenerate limit. The formalism developed is applicable to a wide class of non-relativistic and relativistic models of hot and dense matter currently used in nuclear physics and astrophysics (supernovae, proto-neutron stars and neutron star mergers) as well as in condensed matter physics. We consider the general case of arbitrary dimensionality of momentum space and an arbitrary degree of relativity (for relativistic models). For non-relativistic zero-range interactions, knowledge of the Landau effective mass suffices to compute next-to-leading order effects, but for finite-range interactions, momentum derivatives of the Landau effective mass function up to second order are required. Results from our analytical formulas are compared with the exact results for zero- and finite-range potential and relativistic mean-field theoretical models. In all cases, inclusion of next-to-leading order temperature effects substantially extends the ranges of partial degeneracy for which the analytical treatment remains valid. Effects of many-body correlations that deserve further investigation are highlighted.
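
    As a self-contained illustration of the degenerate-limit expansion underlying such corrections (a free-Fermi-integral toy, not one of the interacting models treated in the paper), the sketch below compares the leading-plus-next-to-leading Sommerfeld estimate of a Fermi-Dirac integral with direct numerical quadrature:

```python
import math

def fermi_integral(mu=1.0, T=0.05, n=200_000):
    """Trapezoidal evaluation of I = integral_0^inf sqrt(e) f(e) de
    with the Fermi function f(e) = 1/(exp((e-mu)/T)+1), truncated at
    mu + 40*T where the integrand is negligible."""
    upper = mu + 40.0 * T
    h = upper / n
    s = 0.5 * math.sqrt(upper) / (math.exp((upper - mu) / T) + 1.0)
    for i in range(1, n):
        e = i * h
        s += math.sqrt(e) / (math.exp((e - mu) / T) + 1.0)
    return h * s

def sommerfeld_nlo(mu=1.0, T=0.05):
    """Sommerfeld expansion through next-to-leading order:
    I ~ integral_0^mu sqrt(e) de + (pi^2/6) T^2 H'(mu), H(e)=sqrt(e)."""
    return (2.0 / 3.0) * mu ** 1.5 + (math.pi ** 2 / 6.0) * T * T * 0.5 / math.sqrt(mu)

# The NLO term shrinks the leading-order error by orders of magnitude.
exact = fermi_integral()
print(abs((2.0 / 3.0) - exact), abs(sommerfeld_nlo() - exact))
```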

  2. Spontaneous imbibition of water and determination of effective contact angles in the Eagle Ford Shale Formation using neutron imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DiStefano, Victoria H.; Cheshire, Michael C.; McFarlane, Joanna

    Understanding fundamental processes and predicting optimal parameters during horizontal drilling and hydraulic fracturing enable economically effective improvement of oil and natural gas extraction. Although modern analytical and computational models can capture fracture growth, there is a lack of experimental data on spontaneous imbibition and wettability in oil and gas reservoirs for the validation of further model development. In this work, we used neutron imaging to measure the spontaneous imbibition of water into fractures of Eagle Ford Shale with known geometries and fracture orientations. An analytical solution for a set of nonlinear second-order differential equations was applied to the measured imbibition data to determine effective contact angles. The analytical solution fit the measured imbibition data reasonably well, and the determined effective contact angles were slightly higher than static contact angles due to the effects of in-situ changes in velocity, surface roughness, and heterogeneity of mineral surfaces on the fracture surface. Additionally, small fracture widths may have retarded imbibition and affected model fits, which suggests that average fracture widths are not satisfactory for modeling imbibition in natural systems.
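
    The nonlinear model used in the paper is not reproduced in the abstract; the classical Lucas-Washburn relation below illustrates, in simplified form, how the effective contact angle controls capillary imbibition. Parameter values are generic water properties, not Eagle Ford measurements.

```python
import math

def washburn_length(t, radius, gamma=0.072, mu=1.0e-3, theta_deg=30.0):
    """Lucas-Washburn imbibition length,
    l(t) = sqrt(gamma * r * cos(theta) * t / (2 * mu)).
    A classical capillary model shown only to illustrate how an
    effective contact angle controls imbibition rate; gamma [N/m],
    mu [Pa s] are generic water properties, not measured values."""
    return math.sqrt(
        gamma * radius * math.cos(math.radians(theta_deg)) * t / (2.0 * mu)
    )

# A higher effective contact angle (poorer wetting) slows imbibition.
for theta in (10.0, 30.0, 60.0):
    print(theta, washburn_length(10.0, 1e-6, theta_deg=theta))
```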

  3. Sustained prediction ability of net analyte preprocessing methods using reduced calibration sets. Theoretical and experimental study involving the spectrophotometric analysis of multicomponent mixtures.

    PubMed

    Goicoechea, H C; Olivieri, A C

    2001-07-01

    A newly developed multivariate method involving net analyte preprocessing (NAP) was tested using central composite calibration designs of progressively decreasing size for the simultaneous multivariate spectrophotometric determination of three active components (phenylephrine, diphenhydramine and naphazoline) and one excipient (methylparaben) in nasal solutions. Its performance was evaluated and compared with that of partial least-squares (PLS-1). Minimisation of the calibration predicted error sum of squares (PRESS) as a function of a moving spectral window helped to select appropriate working spectral ranges for both methods. The comparison of NAP and PLS results was carried out using two tests: (1) the elliptical joint confidence region for the slope and intercept of a predicted versus actual concentrations plot for a large validation set of samples and (2) the D-optimality criterion concerning the information content of the calibration data matrix. Extensive simulations and experimental validation showed that, unlike PLS, the NAP method is able to furnish highly satisfactory results when the calibration set is reduced from a full four-component central composite to a fractional central composite, as expected from the modelling requirements of net-analyte-based methods.
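
    PRESS is the cross-validated sum of squared prediction errors; the minimal univariate sketch below shows the leave-one-out form (the paper's moving-window PRESS is computed over multivariate spectral models, reduced here, purely for illustration, to a straight-line fit):

```python
def press_loo(x, y):
    """Leave-one-out PRESS for a univariate least-squares line:
    refit without each sample and accumulate its squared prediction
    error. A minimal stand-in for the moving-window PRESS used to
    select spectral ranges (the actual NAP/PLS models are multivariate)."""
    n = len(x)
    press = 0.0
    for i in range(n):
        xs = [x[j] for j in range(n) if j != i]
        ys = [y[j] for j in range(n) if j != i]
        mx = sum(xs) / len(xs)
        my = sum(ys) / len(ys)
        sxx = sum((v - mx) ** 2 for v in xs)
        sxy = sum((u - mx) * (v - my) for u, v in zip(xs, ys))
        slope = sxy / sxx
        intercept = my - slope * mx
        press += (y[i] - (slope * x[i] + intercept)) ** 2
    return press

# Perfectly linear data predict their held-out points exactly.
print(press_loo([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0]))
```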

  4. The contribution of Raman spectroscopy to the analytical quality control of cytotoxic drugs in a hospital environment: eliminating the exposure risks for staff members and their work environment.

    PubMed

    Bourget, Philippe; Amin, Alexandre; Vidal, Fabrice; Merlette, Christophe; Troude, Pénélope; Baillet-Guffroy, Arlette

    2014-08-15

    The purpose of the study was to perform a comparative analysis of the technical performance, respective costs and environmental effect of two invasive analytical methods (HPLC and UV/visible-FTIR) as compared to a new non-invasive analytical technique (Raman spectroscopy). Three pharmacotherapeutic models were used to compare the performances of the three analytical techniques. Statistical inter-method correlation analysis was performed using non-parametric rank correlation tests. The study's economic component combined calculations relative to the depreciation of the equipment and the estimated cost of an AQC unit of work. In all cases, the analytical validation parameters of the three techniques were satisfactory, and strong correlations between the two spectroscopic techniques vs. HPLC were found. In addition, Raman spectroscopy was found to be superior to the other techniques on numerous key criteria, including complete safety for operators and their occupational environment, a non-invasive procedure, no need for consumables, and a low operating cost. Finally, Raman spectroscopy appears superior for technical, economic and environmental objectives, as compared with the other, invasive analytical methods. Copyright © 2014 Elsevier B.V. All rights reserved.
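
    As an illustration of the kind of non-parametric rank correlation used for the inter-method comparison, here is a minimal Spearman coefficient without tie handling; the concentration values are hypothetical, and this is not the authors' code:

```python
def spearman_rho(a, b):
    """Spearman rank correlation (no tie correction): the Pearson
    correlation of the ranks of the two measurement series."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    den = (sum((x - ma) ** 2 for x in ra)
           * sum((y - mb) ** 2 for y in rb)) ** 0.5
    return num / den

# Monotonically related measurements correlate perfectly in rank.
hplc  = [1.0, 2.1, 3.2, 4.0, 5.5]  # hypothetical concentrations
raman = [1.1, 2.0, 3.5, 4.2, 5.4]
print(spearman_rho(hplc, raman))  # prints 1.0
```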

  5. Electroacoustic theory for concentrated colloids with overlapped DLs at arbitrary kappa alpha. I. Application to nanocolloids and nonaqueous colloids.

    PubMed

    Shilov, V N; Borkovskaja, Y B; Dukhin, A S

    2004-09-15

    Existing theories of electroacoustic phenomena in concentrated colloids neglect the possibility of double layer overlap and are valid mostly for the "thin double layer," when the double layer thickness is much less than the particle size. In this paper we present a new electroacoustic theory which removes this restriction, making the new theory applicable to characterizing a variety of aqueous nanocolloids and nonaqueous dispersions. There are two versions of the theory leading to analytical solutions. The first version corresponds to strongly overlapped diffuse layers (the so-called quasi-homogeneous model). It yields a simple analytical formula for colloid vibration current (CVI), which is valid for arbitrary ultrasound frequency, but for a restricted kappa alpha range. This version of the theory, like the Smoluchowski theory for microelectrophoresis, is independent of particle shape and polydispersity. This makes it very attractive for practical use, with the hope that it might be as useful as classical Smoluchowski theory. In order to determine the kappa alpha range of validity of the quasi-homogeneous model, we develop a second version that limits the ultrasound frequency but applies no restriction on kappa alpha. The ultrasound frequency should substantially exceed the Maxwell-Wagner relaxation frequency. This limitation makes the active conductivity-related current negligible compared to the passive dielectric displacement current. It is possible to derive an expression for the CVI in a concentrated dispersion as formulae involving definite integrals whose integrands depend on the equilibrium potential distribution. This second version allowed us to estimate the range of applicability of the first, quasi-homogeneous version. It turns out that the quasi-homogeneous model works for kappa alpha values up to almost 1. For instance, at a volume fraction of 30%, the highest kappa alpha limit of the quasi-homogeneous model is 0.65. 
Therefore, this version of the electroacoustic theory is valid for almost all nonaqueous dispersions and a wide variety of nanocolloids, especially with sizes under 100 nm.

  6. PCA as a practical indicator of OPLS-DA model reliability.

    PubMed

    Worley, Bradley; Powers, Robert

    Principal Component Analysis (PCA) and Orthogonal Projections to Latent Structures Discriminant Analysis (OPLS-DA) are powerful statistical modeling tools that provide insights into separations between experimental groups based on high-dimensional spectral measurements from NMR, MS or other analytical instrumentation. However, when used without validation, these tools may lead investigators to statistically unreliable conclusions. This danger is especially real for Partial Least Squares (PLS) and OPLS, which aggressively force separations between experimental groups. As a result, OPLS-DA is often used as an alternative method when PCA fails to expose group separation, but this practice is highly dangerous. Without rigorous validation, OPLS-DA can easily yield statistically unreliable group separation. A Monte Carlo analysis of PCA group separations and OPLS-DA cross-validation metrics was performed on NMR datasets with statistically significant separations in scores-space. A linearly increasing amount of Gaussian noise was added to each data matrix followed by the construction and validation of PCA and OPLS-DA models. With increasing added noise, the PCA scores-space distance between groups rapidly decreased and the OPLS-DA cross-validation statistics simultaneously deteriorated. A decrease in correlation between the estimated loadings (added noise) and the true (original) loadings was also observed. While the validity of the OPLS-DA model diminished with increasing added noise, the group separation in scores-space remained basically unaffected. Supported by the results of Monte Carlo analyses of PCA group separations and OPLS-DA cross-validation metrics, we provide practical guidelines and cross-validatory recommendations for reliable inference from PCA and OPLS-DA models.
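
    A stripped-down version of the Monte Carlo experiment can be sketched without any chemometrics machinery: generate two synthetic "spectral" groups, add increasing Gaussian noise, and track a simple between-group/within-group separation ratio (standing in for the scores-space distance; this is an illustrative simplification, not the authors' PCA/OPLS-DA pipeline):

```python
import random

def separation(noise_sd, n=200, dim=20, seed=0):
    """Ratio of between-group distance to within-group spread for two
    synthetic 'spectral' groups as Gaussian noise is added -- a toy
    stand-in for the paper's Monte Carlo PCA/OPLS-DA analysis."""
    rng = random.Random(seed)
    base_a = [0.0] * dim
    base_b = [1.0] * dim  # true group offset on every channel
    def sample(base):
        return [b + rng.gauss(0.0, noise_sd) for b in base]
    ga = [sample(base_a) for _ in range(n)]
    gb = [sample(base_b) for _ in range(n)]
    mean = lambda g: [sum(col) / n for col in zip(*g)]
    ma, mb = mean(ga), mean(gb)
    between = sum((x - y) ** 2 for x, y in zip(ma, mb)) ** 0.5
    within = (sum(
        sum((x - m) ** 2 for x, m in zip(s, ma)) for s in ga
    ) / (n * dim)) ** 0.5
    return between / within

# Group separation degrades monotonically as added noise grows.
for sd in (0.5, 1.0, 2.0, 4.0):
    print(sd, round(separation(sd), 2))
```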

  7. Methodology for assessing the safety of Hydrogen Systems: HyRAM 1.1 technical reference manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groth, Katrina; Hecht, Ethan; Reynolds, John Thomas

    The HyRAM software toolkit provides a basis for conducting quantitative risk assessment and consequence modeling for hydrogen infrastructure and transportation systems. HyRAM is designed to facilitate the use of state-of-the-art science and engineering models to conduct robust, repeatable assessments of hydrogen safety, hazards, and risk. HyRAM is envisioned as a unifying platform combining validated, analytical models of hydrogen behavior, a standardized, transparent QRA approach, and engineering models and generic data for hydrogen installations. HyRAM is being developed at Sandia National Laboratories for the U.S. Department of Energy to increase access to technical data about hydrogen safety and to enable the use of that data to support development and revision of national and international codes and standards. This document provides a description of the methodology and models contained in HyRAM version 1.1. HyRAM 1.1 includes generic probabilities for hydrogen equipment failures, probabilistic models for the impact of heat flux on humans and structures, and computationally and experimentally validated analytical and first order models of hydrogen release and flame physics. HyRAM 1.1 integrates deterministic and probabilistic models for quantifying accident scenarios, predicting physical effects, and characterizing hydrogen hazards (thermal effects from jet fires, overpressure effects from deflagrations), and assessing impact on people and structures. HyRAM is a prototype software in active development and thus the models and data may change. This report will be updated at appropriate developmental intervals.

  8. A semi-analytical beam model for the vibration of railway tracks

    NASA Astrophysics Data System (ADS)

    Kostovasilis, D.; Thompson, D. J.; Hussein, M. F. M.

    2017-04-01

    The high frequency dynamic behaviour of railway tracks, in both vertical and lateral directions, strongly affects the generation of rolling noise as well as other phenomena such as rail corrugation. An improved semi-analytical model of a beam on an elastic foundation is introduced that accounts for the coupling of the vertical and lateral vibration. The model includes the effects of cross-section asymmetry, shear deformation, rotational inertia and restrained warping. Consideration is given to the fact that the loads at the rail head, as well as those exerted by the railpads at the rail foot, may not act through the centroid of the section. The response is evaluated for a harmonic load and the solution is obtained in the wavenumber domain. Results are presented as dispersion curves for free and supported rails and are validated with the aid of a Finite Element (FE) and a waveguide finite element (WFE) model. Closed form expressions are derived for the forced response, and validated against the WFE model. Track mobilities and decay rates are presented to assess the potential implications for rolling noise and the influence of the various sources of vertical-lateral coupling. Comparison is also made with measured data. Overall, the model presented performs very well, especially for the lateral vibration, although it does not contain the high frequency cross-section deformation modes. The most significant effects on the response are shown to be the inclusion of torsion and foundation eccentricity, which mainly affect the lateral response.
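
    The coupled vertical-lateral model of the paper is not reproduced here; as a simplified illustration of the dispersion behaviour such track models describe, the sketch below evaluates the propagating bending wavenumber of an Euler-Bernoulli beam on a continuous elastic foundation, EI k⁴ + s = m ω². Parameter values are rail-like orders of magnitude chosen for illustration only.

```python
import math

def bending_wavenumber(omega, EI=6.4e6, m=60.0, s=1.0e8):
    """Propagating wavenumber k of a beam on an elastic foundation,
    from EI*k^4 + s = m*omega^2 (vertical bending only: no shear,
    rotational inertia or vertical-lateral coupling). Illustrative
    rail-like parameters: EI [N m^2], m [kg/m], s [N/m^2]."""
    r = m * omega ** 2 - s
    if r <= 0.0:
        return 0.0  # below cut-on: no propagating free wave
    return (r / EI) ** 0.25

# Below the cut-on frequency omega_c = sqrt(s/m) the free wave does
# not propagate; above it, the wavenumber grows with frequency.
omega_c = math.sqrt(1.0e8 / 60.0)
print(bending_wavenumber(0.5 * omega_c), bending_wavenumber(4.0 * omega_c))
```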

  9. a Proposed Benchmark Problem for Scatter Calculations in Radiographic Modelling

    NASA Astrophysics Data System (ADS)

    Jaenisch, G.-R.; Bellon, C.; Schumm, A.; Tabary, J.; Duvauchelle, Ph.

    2009-03-01

    Code validation is a permanent concern in computer modelling, and has been addressed repeatedly in eddy current and ultrasonic modelling. A good benchmark problem is sufficiently simple to be taken into account by various codes without strong requirements on geometry representation capabilities; focuses on a few or even a single aspect of the problem at hand, to facilitate interpretation and to avoid compound errors compensating one another; yields a quantitative result; and is experimentally accessible. In this paper we attempt to address code validation for one aspect of radiographic modelling: the prediction of scattered radiation. Many NDT applications cannot neglect scattered radiation, and the scatter calculation is thus important for faithfully simulating the inspection situation. Our benchmark problem covers the wall thickness range of 10 to 50 mm for single wall inspections, with energies ranging from 100 to 500 keV in the first stage, and up to 1 MeV with wall thicknesses up to 70 mm in the extended stage. A simple plate geometry is sufficient for this purpose, and the scatter data are compared at the photon level, without a film model, which allows for comparisons with reference codes like MCNP. We compare results of three Monte Carlo codes (McRay, Sindbad and Moderato) as well as an analytical first-order scattering code (VXI), and confront them with results obtained with MCNP. The comparison with an analytical scatter model provides insights into the application domain where this kind of approach can successfully replace Monte Carlo calculations.

  10. Systematic Review of Model-Based Economic Evaluations of Treatments for Alzheimer's Disease.

    PubMed

    Hernandez, Luis; Ozen, Asli; DosSantos, Rodrigo; Getsios, Denis

    2016-07-01

    Numerous economic evaluations using decision-analytic models have assessed the cost effectiveness of treatments for Alzheimer's disease (AD) in the last two decades. It is important to understand the methods used in the existing models of AD and how they could impact results, as they could inform new model-based economic evaluations of treatments for AD. The aim of this systematic review was to provide a detailed description on the relevant aspects and components of existing decision-analytic models of AD, identifying areas for improvement and future development, and to conduct a quality assessment of the included studies. We performed a systematic and comprehensive review of cost-effectiveness studies of pharmacological treatments for AD published in the last decade (January 2005 to February 2015) that used decision-analytic models, also including studies considering patients with mild cognitive impairment (MCI). The background information of the included studies and specific information on the decision-analytic models, including their approach and components, assumptions, data sources, analyses, and results, were obtained from each study. A description of how the modeling approaches and assumptions differ across studies, identifying areas for improvement and future development, is provided. At the end, we present our own view of the potential future directions of decision-analytic models of AD and the challenges they might face. The included studies present a variety of different approaches, assumptions, and scope of decision-analytic models used in the economic evaluation of pharmacological treatments of AD. 
The major areas for improvement in future models of AD are to include domains of cognition, function, and behavior, rather than cognition alone; include a detailed description of how data used to model the natural course of disease progression were derived; state and justify the economic model selected and structural assumptions and limitations; provide a detailed (rather than high-level) description of the cost components included in the model; and report on the face-, internal-, and cross-validity of the model to strengthen the credibility and confidence in model results. The quality scores of most studies were rated as fair to good (average 87.5, range 69.5-100, on a scale of 0-100). Despite the advancements in decision-analytic models of AD, there remain several areas of improvement that are necessary to more appropriately and realistically capture the broad nature of AD and the potential benefits of treatments in future models of AD.

  11. Cylindrical optical resonators: fundamental properties and bio-sensing characteristics

    NASA Astrophysics Data System (ADS)

    Khozeymeh, Foroogh; Razaghi, Mohammad

    2018-04-01

    In this paper, a detailed theoretical analysis of cylindrical resonators is demonstrated. As illustrated, these kinds of resonators can be used as optical bio-sensing devices. The proposed structure is analyzed using an analytical method based on Lam's approximation. This method is systematic and simplifies the tedious process of whispering-gallery mode (WGM) wavelength analysis in optical cylindrical biosensors. With this method, analysis of higher radial orders of high-angular-momentum WGMs becomes possible. Using closed-form analytical equations, resonance wavelengths of higher radial and angular order WGMs of TE and TM polarization waves are calculated. It is shown that high-angular-momentum WGMs are more appropriate for bio-sensing applications. Some of the calculations are done using a numerical nonlinear Newton method. A match of 99.84% between the analytical and the numerical methods has been achieved. In order to verify the validity of the calculations, Meep simulations based on the finite difference time domain (FDTD) method are performed. In this case, a match of 96.70% between the analytical and FDTD results has been obtained. The analytical predictions are also in good agreement with other experimental work (99.99% match). These results validate the proposed analytical modelling for the fast design of optical cylindrical biosensors. It is shown that by extending the proposed two-layer resonator structure analysis scheme, it is possible to study a three-layer cylindrical resonator structure as well. Moreover, this method enables fast sensitivity optimization in cylindrical resonator-based biosensors. The sensitivity of the WGM resonances is analyzed as a function of the structural parameters of the cylindrical resonators. Based on the results, fourth radial order WGMs, with a resonator radius of 50 μm, display the highest bulk refractive index sensitivity, 41.50 nm/RIU.

  12. Scattering From the Finite-Length, Dielectric Circular Cylinder. Part 2 - On the Validity of an Analytical Solution for Characterizing Backscattering from Tree Trunks at P-Band

    DTIC Science & Technology

    2015-09-01

    This report assesses the accuracy of an analytical solution for characterizing the backscattering responses of circular cylindrical tree trunks located above a dielectric ground at P-band. The analytical solution is validated against a full-wave solution for untapered, linearly tapered, and nonlinearly tapered circular cylindrical trunks.

  13. A Literature Survey and Experimental Evaluation of the State-of-the-Art in Uplift Modeling: A Stepping Stone Toward the Development of Prescriptive Analytics.

    PubMed

    Devriendt, Floris; Moldovan, Darie; Verbeke, Wouter

    2018-03-01

    Prescriptive analytics extends predictive analytics by estimating an outcome as a function of control variables, making it possible to establish the level of the control variables required to realize a desired outcome. Uplift modeling is at the heart of prescriptive analytics and aims at estimating the net difference in an outcome resulting from a specific action or treatment that is applied. In this article, a structured and detailed literature survey on uplift modeling is provided by identifying and contrasting various groups of approaches. In addition, evaluation metrics for assessing the performance of uplift models are reviewed. An experimental evaluation on four real-world data sets provides further insight into their use. Uplift random forests are found to be consistently among the best performing techniques in terms of the Qini and Gini measures, although considerable variability in performance across the various data sets of the experiments is observed. In addition, uplift models are frequently observed to be unstable and display a strong variability in terms of performance across different folds in the cross-validation experimental setup. This potentially threatens their actual use for business applications. Moreover, it is found that the available evaluation metrics do not provide an intuitively understandable indication of the actual use and performance of a model. Specifically, existing evaluation metrics do not facilitate a comparison of uplift models and predictive models and evaluate performance either at an arbitrary cutoff or over the full spectrum of potential cutoffs. In conclusion, we highlight the instability of uplift models and the need for an application-oriented approach to assess uplift models as prime topics for further research.
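
    The quantity uplift models estimate, the net treatment effect on an outcome, can be illustrated with a naive segment-level estimator: treated response rate minus control response rate, computed on hypothetical campaign data. Real uplift models such as uplift random forests learn this difference from covariates rather than fixed segments.

```python
def uplift_by_segment(rows):
    """Naive uplift estimate per customer segment: treated response
    rate minus control response rate. A minimal illustration of the
    quantity uplift models estimate, not a real uplift model."""
    stats = {}
    for segment, treated, responded in rows:
        t, c = stats.setdefault(segment, [[0, 0], [0, 0]])
        bucket = t if treated else c
        bucket[0] += 1          # group size
        bucket[1] += responded  # responders
    return {
        seg: (t[1] / t[0]) - (c[1] / c[0])
        for seg, (t, c) in stats.items()
    }

# (segment, was_treated, responded) -- hypothetical campaign data
data = [
    ("young", 1, 1), ("young", 1, 1), ("young", 1, 0), ("young", 0, 0),
    ("young", 0, 0), ("young", 0, 1),
    ("old", 1, 0), ("old", 1, 0), ("old", 1, 1), ("old", 0, 1),
    ("old", 0, 1), ("old", 0, 0),
]
# "young" shows positive uplift; "old" reacts negatively to treatment.
print(uplift_by_segment(data))
```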

  14. Computing dispersion curves of elastic/viscoelastic transversely-isotropic bone plates coupled with soft tissue and marrow using semi-analytical finite element (SAFE) method.

    PubMed

    Nguyen, Vu-Hieu; Tran, Tho N H T; Sacchi, Mauricio D; Naili, Salah; Le, Lawrence H

    2017-08-01

    We present a semi-analytical finite element (SAFE) scheme for accurately computing the velocity dispersion and attenuation in a trilayered system consisting of a transversely-isotropic (TI) cortical bone plate sandwiched between the soft tissue and marrow layers. The soft tissue and marrow are mimicked by two fluid layers of finite thickness. A Kelvin-Voigt model accounts for the absorption of all three biological domains. The simulated dispersion curves are validated by the results from the commercial software DISPERSE and published literature. Finally, the algorithm is applied to a viscoelastic trilayered TI bone model to interpret the guided modes of an ex-vivo experimental data set from a bone phantom. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Analytic algorithms for determining radiative transfer optical properties of ocean waters.

    PubMed

    Kaskas, Ayse; Güleçyüz, Mustafa C; Tezcan, Cevdet; McCormick, Norman J

    2006-10-10

    A synthetic model for the scattering phase function is used to develop simple algebraic equations, valid for any water type, for evaluating the ratio of the backscattering to absorption coefficients of spatially uniform, very deep waters with data from upward and downward planar irradiances and the remotely sensed reflectance. The phase function is a variable combination of a forward-directed Dirac delta function plus isotropic scattering, which is an elementary model for strongly forward scattering such as that encountered in oceanic optics applications. The incident illumination at the surface is taken to be diffuse plus a collimated beam. The algorithms are compared with other analytic correlations that were previously derived from extensive numerical simulations, and they are also numerically tested with forward problem results computed with a modified FN method.

  16. Rotor/Wing Interactions in Hover

    NASA Technical Reports Server (NTRS)

    Young, Larry A.; Derby, Michael R.

    2002-01-01

    Hover predictions of tiltrotor aircraft are hampered by the lack of accurate and computationally efficient models for rotor/wing interactional aerodynamics. This paper summarizes the development of an approximate, potential flow solution for the rotor-on-rotor and wing-on-rotor interactions. This analysis is based on actuator disk and vortex theory and the method of images. The analysis is applicable for out-of-ground-effect predictions. The analysis is particularly suited for aircraft preliminary design studies. Flow field predictions from this simple analytical model are validated against experimental data from previous studies. The paper concludes with an analytical assessment of the influence of rotor-on-rotor and wing-on-rotor interactions. This assessment examines the effect of rotor-to-wing offset distance, wing sweep, wing span, and flaperon incidence angle on tiltrotor inflow and performance.

  17. Hydrodynamics beyond Navier-Stokes: the slip flow model.

    PubMed

    Yudistiawan, Wahyu P; Ansumali, Santosh; Karlin, Iliya V

    2008-07-01

    Recently, analytical solutions for the nonlinear Couette flow demonstrated the relevance of the lattice Boltzmann (LB) models to hydrodynamics beyond the continuum limit [S. Ansumali, Phys. Rev. Lett. 98, 124502 (2007)]. In this paper, we present a systematic study of the simplest LB kinetic equation, the nine-bit model in two dimensions, in order to quantify it as a slip flow approximation. Details of the aforementioned analytical solution are presented, and results are extended to include a general shear- and force-driven unidirectional flow in confined geometry. Exact solutions for the velocity, as well as for pertinent higher-order moments of the distribution functions, are obtained in both Couette and Poiseuille steady-state flows for all values of the rarefaction parameter (Knudsen number). Results are compared with the slip flow solution by Cercignani, and a good quantitative agreement is found for both flow situations. Thus, the standard nine-bit LB model is characterized as a valid and self-consistent slip flow model for simulations beyond the Navier-Stokes approximation.

  18. A Squeeze-film Damping Model for the Circular Torsion Micro-resonators

    NASA Astrophysics Data System (ADS)

    Yang, Fan; Li, Pu

    2017-07-01

    In recent years, MEMS devices have been widely used in many industries. The prediction of squeeze-film damping is very important for the design of high-quality-factor resonators. Many analytical models have been developed to predict the squeeze-film damping of torsion micro-resonators; however, work on the circular torsion micro-plate is very rare. The only existing model, presented by Xia et al. [7], uses the method of eigenfunction expansions. In this paper, a Bessel series solution is used to solve the Reynolds equation under the assumption that the gas in the gap is incompressible, and the pressure distribution of the gas between the two micro-plates is obtained. An analytical expression for the damping constant of the device is then derived. The results of the present model match the finite element method (FEM) solutions and the results of Xia's model very well, validating the accuracy of the present model.
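    For context, the governing relation referred to above is the Reynolds equation for a thin gas film; a standard incompressible, isothermal form (notation assumed here: p is pressure, h is film thickness, mu is gas viscosity) is

    ```latex
    \nabla \cdot \left( h^{3}\, \nabla p \right) = 12\,\mu\,\frac{\partial h}{\partial t}
    ```

    For a torsion plate, h varies with the tilt angle; on a circular domain this equation is naturally solved in polar coordinates, which is why a Bessel series solution arises.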

  19. AN ANALYTIC RADIATIVE-CONVECTIVE MODEL FOR PLANETARY ATMOSPHERES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, Tyler D.; Catling, David C., E-mail: robinson@astro.washington.edu

    2012-09-20

    We present an analytic one-dimensional radiative-convective model of the thermal structure of planetary atmospheres. Our model assumes that thermal radiative transfer is gray and can be represented by the two-stream approximation. Model atmospheres are assumed to be in hydrostatic equilibrium, with a power-law scaling between the atmospheric pressure and the gray thermal optical depth. The convective portions of our models are taken to follow adiabats that account for condensation of volatiles through a scaling parameter to the dry adiabat. By combining these assumptions, we produce simple, analytic expressions that allow calculations of the atmospheric pressure-temperature profile, as well as expressions for the profiles of thermal radiative flux and convective flux. We explore the general behaviors of our model. These investigations encompass (1) worlds where atmospheric attenuation of sunlight is weak, which we show tend to have relatively high radiative-convective boundaries; (2) worlds with some attenuation of sunlight throughout the atmosphere, which we show can produce either shallow or deep radiative-convective boundaries, depending on the strength of sunlight attenuation; and (3) strongly irradiated giant planets (including hot Jupiters), where we explore the conditions under which these worlds acquire detached convective regions in their mid-tropospheres. Finally, we validate our model and demonstrate its utility through comparisons to the average observed thermal structure of Venus, Jupiter, and Titan, and by comparing computed flux profiles to more complex models.
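    As a point of reference (not the full model of the paper), the classic gray two-stream result in pure radiative equilibrium, which analytic models of this kind generalize, relates temperature to the gray thermal optical depth tau via an Eddington-type profile, together with the power-law pressure scaling assumed above (symbols T_eff, tau_0, p_0, and n are assumed notation):

    ```latex
    T^{4}(\tau) = \frac{3}{4}\,T_{\mathrm{eff}}^{4}\left(\tau + \tfrac{2}{3}\right),
    \qquad
    \tau = \tau_{0}\left(\frac{p}{p_{0}}\right)^{n}
    ```

    The convective portion of such a model replaces the radiative profile with a (condensation-adjusted) adiabat below the radiative-convective boundary.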

  20. International Space Station Modal Correction Analysis

    NASA Technical Reports Server (NTRS)

    Fitzpatrick, Kristin; Grygier, Michael; Laible, Michael; Sugavanam, Sujatha

    2012-01-01

    This paper summarizes the on-orbit modal test and the related modal analysis, model validation and correlation performed for the ISS Stage ULF4, DTF S4-1A, October 11, 2010, GMT 284/06:13:00.00. The objective of this analysis is to validate and correlate analytical models with the intent to verify the ISS critical interface dynamic loads and improve fatigue life prediction. For the ISS configurations under consideration, on-orbit dynamic responses were collected with Russian vehicles attached and without the Orbiter attached to the ISS. ISS instrumentation systems that were used to collect the dynamic responses during the DTF S4-1A included the Internal Wireless Instrumentation System (IWIS), External Wireless Instrumentation System (EWIS), Structural Dynamic Measurement System (SDMS), Space Acceleration Measurement System (SAMS), Inertial Measurement Unit (IMU) and ISS External Cameras. Experimental modal analyses were performed on the measured data to extract modal parameters including frequency, damping and mode shape information. Correlation and comparisons between test and analytical modal parameters were performed to assess the accuracy of the models for the ISS configuration under consideration. Based on the frequency comparisons, the accuracy of the mathematical models is assessed and model refinement recommendations are given. Section 2.0 of this report presents the math model used in the analysis. This section also describes the ISS configuration under consideration and summarizes the associated primary modes of interest along with the fundamental appendage modes. Section 3.0 discusses the details of the ISS Stage ULF4 DTF S4-1A test. Section 4.0 discusses the on-orbit instrumentation systems that were used in the collection of the data analyzed in this paper. The modal analysis approach and results used in the analysis of the collected data are summarized in Section 5.0. The model correlation and validation effort is reported in Section 6.0. Conclusions and recommendations drawn from this analysis are included in Section 7.0.

  1. Analytical dose modeling for preclinical proton irradiation of millimetric targets.

    PubMed

    Vanstalle, Marie; Constanzo, Julie; Karakaya, Yusuf; Finck, Christian; Rousseau, Marc; Brasse, David

    2018-01-01

    Due to the considerable development of proton radiotherapy, several proton platforms have emerged to irradiate small animals in order to study the biological effectiveness of proton radiation. A dedicated analytical treatment planning tool was developed in this study to accurately calculate the delivered dose given the specific constraints imposed by the small dimensions of the irradiated areas. The treatment planning system (TPS) developed in this study is based on an analytical formulation of the Bragg peak and uses experimental range values of protons. The method was validated after comparison with experimental data from the literature and then compared to Monte Carlo simulations conducted using Geant4. Three examples of treatment planning, performed with phantoms made of water targets and bone-slab insert, were generated with the analytical formulation and Geant4. Each treatment planning was evaluated using dose-volume histograms and gamma index maps. We demonstrate the value of the analytical function for mouse irradiation, which requires a targeting accuracy of 0.1 mm. Using the appropriate database, the analytical modeling limits the errors caused by misestimating the stopping power. For example, 99% of a 1-mm tumor irradiated with a 24-MeV beam receives the prescribed dose. The analytical dose deviations from the prescribed dose remain within the dose tolerances stated by report 62 of the International Commission on Radiation Units and Measurements for all tested configurations. In addition, the gamma index maps show that the highly constrained targeting accuracy of 0.1 mm for mouse irradiation leads to a significant disagreement between Geant4 and the reference. This simulated treatment planning is nevertheless compatible with a targeting accuracy exceeding 0.2 mm, corresponding to rat and rabbit irradiations. Good dose accuracy for millimetric tumors is achieved with the analytical calculation used in this work. These volume sizes are typical in mouse models for radiation studies. Our results demonstrate that the choice of analytical rather than simulated treatment planning depends on the animal model under consideration. © 2017 American Association of Physicists in Medicine.
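    The abstract notes that the TPS uses experimental proton range values. A common analytic anchor for such range-energy data is the Bragg-Kleeman rule R = alpha * E^p; the sketch below uses widely quoted fit constants for protons in water (alpha ≈ 0.0022 cm·MeV^-p, p ≈ 1.77). These constants are illustrative assumptions, not values from the paper.

    ```python
    def proton_range_cm(energy_mev, alpha=0.0022, p=1.77):
        """Bragg-Kleeman rule: approximate range of a proton in water.

        alpha and p are empirical fit constants (assumed values for water);
        in practice a TPS would fit them to measured range data.
        """
        return alpha * energy_mev ** p

    # A 24-MeV beam, as used for the 1-mm mouse tumors in the study,
    # stops after only about 6 mm of water.
    print(f"{proton_range_cm(24):.2f} cm")
    ```

    The steep power-law dependence of range on energy is what makes the stopping-power database so critical when the required targeting accuracy is 0.1 mm.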

  2. Constructing and predicting solitary pattern solutions for nonlinear time-fractional dispersive partial differential equations

    NASA Astrophysics Data System (ADS)

    Arqub, Omar Abu; El-Ajou, Ahmad; Momani, Shaher

    2015-07-01

    Building fractional mathematical models for specific phenomena and developing numerical or analytical solutions for these fractional mathematical models are crucial issues in mathematics, physics, and engineering. In this work, a new analytical technique for constructing and predicting solitary pattern solutions of time-fractional dispersive partial differential equations is proposed based on the generalized Taylor series formula and residual error function. The new approach provides solutions in the form of a rapidly convergent series with easily computable components using symbolic computation software. For method evaluation and validation, the proposed technique was applied to three different models and compared with some of the well-known methods. The resultant simulations clearly demonstrate the superiority and potentiality of the proposed technique in terms of the quality performance and accuracy of substructure preservation in the construct, as well as the prediction of solitary pattern solutions for time-fractional dispersive partial differential equations.
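    The generalized Taylor series formula referred to above can be stated as follows (in the Caputo sense, for 0 < alpha <= 1; notation assumed here, with Gamma the gamma function and D^{n alpha} denoting n successive applications of the Caputo derivative of order alpha):

    ```latex
    f(t) = \sum_{n=0}^{\infty} \frac{\left(D^{\,n\alpha} f\right)(t_{0})}{\Gamma(n\alpha + 1)}\,(t - t_{0})^{n\alpha},
    \qquad t_{0} \le t < t_{0} + R
    ```

    Truncating this series and driving the residual error function to zero term by term is what yields the rapidly convergent series solutions described in the abstract.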

  3. Description of a Generalized Analytical Model for the Micro-dosimeter Response

    NASA Technical Reports Server (NTRS)

    Badavi, Francis F.; Stewart-Sloan, Charlotte R.; Xapsos, Michael A.; Shinn, Judy L.; Wilson, John W.; Hunter, Abigail

    2007-01-01

    An analytical prediction capability for space radiation in Low Earth Orbit (LEO), correlated with the Space Transportation System (STS) Shuttle Tissue Equivalent Proportional Counter (TEPC) measurements, is presented. The model takes into consideration the energy loss straggling and chord length distribution of the TEPC detector, and is capable of predicting energy deposition fluctuations in a micro-volume by incoming ions through both direct and indirect ionic events. The charged particle transport calculations correlated with STS 56, 51, 110 and 114 flights are accomplished by utilizing the most recent version (2005) of the Langley Research Center (LaRC) deterministic ionized particle transport code High charge (Z) and Energy TRaNsport (HZETRN), which has been extensively validated with laboratory beam measurements and available space flight data. The agreement between the TEPC model prediction (response function) and the TEPC measured differential and integral spectra in the lineal energy (y) domain is promising.

  4. Comparison of sound power radiation from isolated airfoils and cascades in a turbulent flow.

    PubMed

    Blandeau, Vincent P; Joseph, Phillip F; Jenkins, Gareth; Powles, Christopher J

    2011-06-01

    An analytical model of the sound power radiated from a flat plate airfoil of infinite span in a 2D turbulent flow is presented. The effects of stagger angle on the radiated sound power are included so that the sound power radiated upstream and downstream relative to the fan axis can be predicted. Closed-form asymptotic expressions, valid at low and high frequencies, are provided for the upstream, downstream, and total sound power. A study of the effects of chord length on the total sound power at all reduced frequencies is presented. Excellent agreement for frequencies above a critical frequency is shown between the fast analytical isolated airfoil model presented in this paper and an existing, computationally demanding, cascade model, in which the unsteady loading of the cascade is computed numerically. Reasonable agreement is also observed at low frequencies for low solidity cascade configurations. © 2011 Acoustical Society of America

  5. A New Unified Analysis of Estimate Errors by Model-Matching Phase-Estimation Methods for Sensorless Drive of Permanent-Magnet Synchronous Motors and New Trajectory-Oriented Vector Control, Part II

    NASA Astrophysics Data System (ADS)

    Shinnaka, Shinji

    This paper presents a new unified analysis of estimate errors by model-matching extended-back-EMF estimation methods for sensorless drive of permanent-magnet synchronous motors. The analytical solutions for the estimate errors, whose validity is confirmed by numerical experiments, are rich in universality and applicability. As an example of this universality and applicability, a new trajectory-oriented vector control method is proposed, which can directly realize a quasi-optimal strategy that minimizes total losses, with no additional computational load, by simply orienting one of the vector-control coordinates to the associated quasi-optimal trajectory. The coordinate orientation rule, which is derived analytically, is surprisingly simple. Consequently, the trajectory-oriented vector control method can be applied to a number of conventional vector control systems using model-matching extended-back-EMF estimation methods.

  6. Acoustic impedance of micro perforated membranes: Velocity continuity condition at the perforation boundary.

    PubMed

    Li, Chenxi; Cazzolato, Ben; Zander, Anthony

    2016-01-01

    The classic analytical model for the sound absorption of micro perforated materials is well developed and is based on a boundary condition where the velocity of the material is assumed to be zero, which is accurate when the material vibration is negligible. This paper develops an analytical model for finite-sized circular micro perforated membranes (MPMs) by applying a boundary condition such that the velocity of air particles on the hole wall boundary is equal to the membrane vibration velocity (a zero-slip condition). The acoustic impedance of the perforation, which varies with its position, is investigated. A prediction method for the overall impedance of the holes and the combined impedance of the MPM is also provided. The experimental results for four different MPM configurations are used to validate the model and good agreement between the experimental and predicted results is achieved.

  7. An Ontology for Modeling Complex Inter-relational Organizations

    NASA Astrophysics Data System (ADS)

    Wautelet, Yves; Neysen, Nicolas; Kolp, Manuel

    This paper presents an ontology for organizational modeling through multiple complementary aspects. The primary goal of the ontology is to provide an adequate set of related concepts for studying complex organizations involved in many relationships at the same time. In this paper, we define complex organizations as networked organizations involved in a market eco-system that are playing several roles simultaneously. In such a context, traditional approaches focus on the macro-analytic level of transactions; this is supplemented here with a micro-analytic study of the actors' rationale. First, the paper reviews the enterprise ontology literature to position our proposal and exposes its contributions and limitations. The ontology is then brought to an advanced level of formalization: a meta-model in the form of a UML class diagram provides an overview of the ontology concepts and their relationships, which are formally defined. Finally, the paper presents the case study on which the ontology has been validated.

  8. Exploring the Short-Channel Characteristics of Asymmetric Junctionless Double-Gate Silicon-on-Nothing MOSFET

    NASA Astrophysics Data System (ADS)

    Saha, Priyanka; Banerjee, Pritha; Dash, Dinesh Kumar; Sarkar, Subir Kumar

    2018-03-01

    This paper presents an analytical model of an asymmetric junctionless double-gate (asymmetric DGJL) silicon-on-nothing metal-oxide-semiconductor field-effect transistor (MOSFET). By solving the 2-D Poisson's equation, expressions for the center potential and threshold voltage are derived. In addition, the response of the device to various short-channel effects, such as the hot carrier effect, drain-induced barrier lowering, and threshold voltage roll-off, is examined along with the subthreshold swing and drain current characteristics. Performance analysis of the present model is also demonstrated by comparing its short-channel behavior with that of a conventional DGJL MOSFET. The effect of variation of device parameters on the device characteristics is also studied. The simulated results obtained using the 2-D device simulator ATLAS are in good agreement with the analytical results, validating the derived model.

  9. Influence of Wake Models on Calculated Tiltrotor Aerodynamics

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne

    2001-01-01

    The tiltrotor aircraft configuration has the potential to revolutionize air transportation by providing an economical combination of vertical take-off and landing capability with efficient, high-speed cruise flight. To achieve this potential it is necessary to have validated analytical tools that will support future tiltrotor aircraft development. These analytical tools must calculate tiltrotor aeromechanical behavior, including performance, structural loads, vibration, and aeroelastic stability, with an accuracy established by correlation with measured tiltrotor data. The recent test of the Tilt Rotor Aeroacoustic Model (TRAM) with a single, 1/4-scale V-22 rotor in the German-Dutch Wind Tunnel (DNW) provides an extensive set of aeroacoustic, performance, and structural loads data. This paper will examine the influence of wake models on calculated tiltrotor aerodynamics, comparing calculations of performance and airloads with TRAM DNW measurements. The calculations will be performed using the comprehensive analysis CAMRAD II.

  10. A Semi-Analytical Orbit Propagator Program for Highly Elliptical Orbits

    NASA Astrophysics Data System (ADS)

    Lara, M.; San-Juan, J. F.; Hautesserres, D.

    2016-05-01

    A semi-analytical orbit propagator to study the long-term evolution of spacecraft in Highly Elliptical Orbits is presented. The perturbation model taken into account includes the gravitational effects produced by the first nine zonal harmonics of Earth's gravitational potential and the main tesseral harmonics affecting the 2:1 resonance, which has an impact on Molniya-type orbits; the mass-point approximation for third-body perturbations, which includes only the Legendre polynomial of second order for the sun and the polynomials from second to sixth order for the moon; solar radiation pressure; and atmospheric drag. Hamiltonian formalism is used to model the forces of gravitational nature, and to avoid time-dependence issues the problem is formulated in the extended phase space. The solar radiation pressure is modeled as a potential and included in the Hamiltonian, whereas the atmospheric drag is added as a generalized force. The semi-analytical theory is developed using perturbation techniques based on Lie transforms. Deprit's perturbation algorithm is applied up to the second order of the second zonal harmonic, J2, including Kozai-type terms in the mean-elements Hamiltonian to get "centered" elements. The transformation is developed in closed form of the eccentricity except for tesseral resonances, and the coupling between J2 and the moon's disturbing effects is neglected. This paper describes the semi-analytical theory, the semi-analytical orbit propagator program and some of the numerical validations.

  11. Using an innovative combination of quality-by-design and green analytical chemistry approaches for the development of a stability indicating UHPLC method in pharmaceutical products.

    PubMed

    Boussès, Christine; Ferey, Ludivine; Vedrines, Elodie; Gaudin, Karen

    2015-11-10

    An innovative combination of green chemistry and quality by design (QbD) approach is presented through the development of an UHPLC method for the analysis of the main degradation products of dextromethorphan hydrobromide. The QbD strategy was integrated into the field of green analytical chemistry to improve method understanding while assuring quality and minimizing environmental impacts and analyst exposure. This analytical method was thoroughly evaluated by applying risk assessment and multivariate analysis tools. After a scouting phase aimed at selecting a suitable stationary phase and an organic solvent in accordance with green chemistry principles, quality risk assessment tools were applied to determine the critical process parameters (CPPs). The effects of the CPPs on critical quality attributes (CQAs), i.e., resolutions, efficiencies, and solvent consumption, were further evaluated by means of a screening design. A response surface methodology was then carried out to model the CQAs as a function of the selected CPPs, and the optimal separation conditions were determined through a desirability analysis. The resulting contour plots made it possible to establish the design space (DS), i.e., the method operable design region, where all CQAs fulfilled the requirements. An experimental validation of the DS proved that quality within the DS was guaranteed; therefore, no further robustness study was required before the validation. Finally, this UHPLC method was validated using the concept of total error and was used to analyze a pharmaceutical drug product. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Analytical Studies of Boundary Layer Generated Aircraft Interior Noise

    NASA Technical Reports Server (NTRS)

    Howe, M. S.; Shah, P. L.

    1997-01-01

    An analysis is made of the "interior noise" produced by high, subsonic turbulent flow over a thin elastic plate partitioned into "panels" by straight edges transverse to the mean flow direction. This configuration models a section of an aircraft fuselage that may be regarded as locally flat. The analytical problem can be solved in closed form to represent the acoustic radiation in terms of prescribed turbulent boundary layer pressure fluctuations. Two cases are considered: (i) the production of sound at an isolated panel edge (i.e., in the approximation in which the correlation between sound and vibrations generated at neighboring edges is neglected), and (ii) the sound generated by a periodic arrangement of identical panels. The latter problem is amenable to exact analytical treatment provided the panel edge conditions are the same for all panels. Detailed predictions of the interior noise depend on a knowledge of the turbulent boundary layer wall pressure spectrum, and are given here in terms of an empirical spectrum proposed by Laganelli and Wolfe. It is expected that these analytical representations of the sound generated by simplified models of fluid-structure interactions can be used to validate more general numerical schemes.

  13. An approximate analytical solution for describing surface runoff and sediment transport over hillslope

    NASA Astrophysics Data System (ADS)

    Tao, Wanghai; Wang, Quanjiu; Lin, Henry

    2018-03-01

    Soil and water loss from farmland causes land degradation and water pollution, thus continued efforts are needed to establish mathematical models for quantitative analysis of the relevant processes and mechanisms. In this study, an approximate analytical solution has been developed for an overland flow model and a sediment transport model, offering a simple and effective means to predict overland flow and erosion under natural rainfall conditions. In the overland flow model, the flow regime was considered to be transitional, with the value of the parameter β (in the kinematic wave model) approximately two. The change rate of unit discharge with distance was assumed to be constant and equal to the runoff rate at the outlet of the plane. The excess rainfall was considered to be constant under uniform rainfall conditions. The overland flow model developed can be further applied to natural rainfall conditions by treating excess rainfall intensity as constant over a small time interval. For the sediment model, recommended values of the runoff erosion calibration constant (cr) and the splash erosion calibration constant (cf) have been given in this study so that the model is easier to use; these recommended values are 0.15 and 0.12, respectively. Comparisons with observed results were carried out to validate the proposed analytical solution. The results showed that the approximate analytical solution developed in this paper closely matches the observed data, thus providing an alternative method of predicting runoff generation and sediment yield, and offering a more convenient method of analyzing the quantitative relationships between variables. Furthermore, the model developed in this study can be used as a theoretical basis for developing runoff and erosion control methods.
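    A minimal steady-state reading of the kinematic wave assumption above: with unit discharge q = α·h^β and a constant excess rainfall rate r, mass balance along a plane gives q(x) = r·x, so flow depth follows h(x) = (r·x/α)^(1/β). The sketch below uses β = 2 for the transitional regime described in the abstract; all other numbers are illustrative assumptions.

    ```python
    def unit_discharge(x, r):
        """Steady-state unit discharge at distance x downslope:
        all excess rainfall upslope of x passes through (q = r * x)."""
        return r * x

    def flow_depth(x, r, alpha, beta=2.0):
        """Invert the kinematic-wave rating q = alpha * h**beta for depth."""
        return (unit_discharge(x, r) / alpha) ** (1.0 / beta)

    # Illustrative values: 10-m plane, excess rainfall 1e-5 m/s, alpha = 5.0.
    r, alpha, length = 1e-5, 5.0, 10.0
    q_out = unit_discharge(length, r)    # unit discharge at the outlet
    h_out = flow_depth(length, r, alpha) # flow depth at the outlet
    print(q_out, h_out)
    ```

    With β = 2 the depth grows as the square root of distance downslope, which is the qualitative behavior the analytical solution exploits.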

  14. A blood-based predictor for neocortical Aβ burden in Alzheimer's disease: results from the AIBL study.

    PubMed

    Burnham, S C; Faux, N G; Wilson, W; Laws, S M; Ames, D; Bedo, J; Bush, A I; Doecke, J D; Ellis, K A; Head, R; Jones, G; Kiiveri, H; Martins, R N; Rembach, A; Rowe, C C; Salvado, O; Macaulay, S L; Masters, C L; Villemagne, V L

    2014-04-01

    Dementia is a global epidemic with Alzheimer's disease (AD) being the leading cause. Early identification of patients at risk of developing AD is now becoming an international priority. Neocortical Aβ (extracellular β-amyloid) burden (NAB), as assessed by positron emission tomography (PET), represents one such marker for early identification. These scans are expensive and are not widely available; thus, there is a need for cheaper and more widely accessible alternatives. Addressing this need, a blood-based biomarker signature that predicts NAB and can be easily adapted for population screening is described. Blood data (176 analytes measured in plasma) and Pittsburgh Compound B (PiB)-PET measurements from 273 participants from the Australian Imaging, Biomarkers and Lifestyle (AIBL) study were utilised. Univariate analysis was conducted to assess the differences in plasma measures between high and low NAB groups, and cross-validated machine-learning models were generated for predicting NAB. These models were applied to 817 non-imaged AIBL subjects and 82 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) for validation. Five analytes showed significant differences between subjects with high and low NAB. A machine-learning model (based on nine markers) achieved sensitivity and specificity of 80 and 82%, respectively, for predicting NAB. Validation using the ADNI cohort yielded similar results (sensitivity 79% and specificity 76%). These results show that a panel of blood-based biomarkers is able to accurately predict NAB, supporting the hypothesis of a relationship between a blood-based signature and Aβ accumulation and, therefore, providing a platform for developing a population-based screen.
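    The reported sensitivity and specificity are standard confusion-matrix quantities. As a quick reference, the sketch below computes them from predicted and true high/low-NAB labels; the data shown are toy values, not the AIBL/ADNI results.

    ```python
    def sensitivity_specificity(y_true, y_pred):
        """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP).
        Label convention (toy): 1 = high NAB, 0 = low NAB."""
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        return tp / (tp + fn), tn / (tn + fp)

    # Toy labels for 10 subjects.
    truth = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
    pred  = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
    sens, spec = sensitivity_specificity(truth, pred)
    print(sens, spec)  # 0.8 0.8
    ```

    In the study these quantities were estimated under cross-validation and then re-checked on the independent ADNI cohort, which is what guards the reported figures against overfitting.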

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burge, S.W.

    Erosion has been identified as one of the significant design issues in fluid beds. A cooperative R&D venture of industry, research, and government organizations was recently formed to meet the industry need for a better understanding of erosion in fluid beds. Research focused on bed hydrodynamics, which are considered to be the primary erosion mechanism. As part of this work, ANL developed an analytical model (FLUFIX) for bed hydrodynamics. Partial validation was performed using data from experiments sponsored by the research consortium. Development of a three-dimensional fluid bed hydrodynamic model was part of Asea-Babcock's in-kind contribution to the R&D venture. This model, FORCE2, was developed by Babcock & Wilcox's Research and Development Division and was based on an existing B&W program and on the gas-solids modeling technology developed by ANL and others. FORCE2 contains many of the features needed to model plant-size beds and therefore can be used along with the erosion technology to assess metal wastage in industrial equipment. As part of the development efforts, FORCE2 was partially validated using ANL's two-dimensional model, FLUFIX, and experimental data. Time constraints as well as the lack of good hydrodynamic data, particularly at the plant scale, prohibited a complete validation of FORCE2. This report describes this initial validation of FORCE2.

  16. Statistical Learning Theory for High Dimensional Prediction: Application to Criterion-Keyed Scale Development

    PubMed Central

    Chapman, Benjamin P.; Weiss, Alexander; Duberstein, Paul

    2016-01-01

    Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in “big data” problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different from maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how three common SLT algorithms—Supervised Principal Components, Regularization, and Boosting—can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods, and as primary analytic tools in discovery-phase research. We conclude that despite their differences from the classic null-hypothesis testing approach—or perhaps because of them—SLT methods may hold value as a statistically rigorous approach to exploratory regression. PMID:27454257
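
    The EPE-minimization principle can be made concrete with a small numpy sketch: model complexity (here, a ridge-regression penalty) is chosen by K-fold cross-validation rather than by within-sample fit. The data are synthetic; this is not the article's analysis pipeline.

```python
# Choose ridge penalty lambda by 5-fold cross-validation (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
n, p = 60, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.0, 0.5]                 # only 3 informative predictors
y = X @ beta + rng.normal(size=n)

def ridge_fit(X, y, lam):
    """Closed-form ridge solution (X'X + lam*I)^-1 X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def cv_mse(X, y, lam, k=5):
    """Estimate expected prediction error by k-fold cross-validation."""
    folds = np.array_split(np.arange(len(y)), k)
    errs = []
    for f in folds:
        train = np.setdiff1d(np.arange(len(y)), f)
        b = ridge_fit(X[train], y[train], lam)
        errs.append(np.mean((y[f] - X[f] @ b) ** 2))
    return float(np.mean(errs))

lams = [0.01, 0.1, 1.0, 10.0, 100.0]
best = min(lams, key=lambda lam: cv_mse(X, y, lam))
print("lambda minimizing CV error:", best)
```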

  17. Method for Pre-Conditioning a Measured Surface Height Map for Model Validation

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

    This software allows one to up-sample or down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also while eliminating existing measurement noise and measurement errors. Because the re-sampling of a surface map is accomplished based on the analytical expressions of Zernike polynomials and a power spectral density (PSD) model, such re-sampling does not introduce the aliasing and interpolation errors that arise with conventional interpolation and the FFT-based (fast-Fourier-transform-based) spatial-filtering method. Also, this new method automatically eliminates the measurement noise and other measurement errors such as artificial discontinuity. The developmental cycle of an optical system, such as a space telescope, includes, but is not limited to, the following two steps: (1) deriving requirements or specs on the optical quality of individual optics before they are fabricated through optical modeling and simulations, and (2) validating the optical model using the measured surface height maps after all optics are fabricated. There are a number of computational issues related to model validation, one of which is the "pre-conditioning" or pre-processing of the measured surface maps before using them in a model validation software tool. This software addresses the following issues: (1) up- or down-sampling a measured surface map to match the gridded data format of a model validation tool, and (2) eliminating the surface measurement noise or measurement errors such that the resulting surface height map is continuous or smoothly varying. So far, the preferred method for re-sampling a surface map has been two-dimensional interpolation. The main problem with this method is that the same pixel can take different values as the method of interpolation is changed among options such as the "nearest," "linear," "cubic," and "spline" fitting in Matlab.
The conventional, FFT-based spatial filtering method used to eliminate the surface measurement noise or measurement errors can also suffer from aliasing effects. During re-sampling of a surface map, this software preserves the low spatial-frequency characteristic of a given surface map through the use of Zernike-polynomial fit coefficients, and maintains mid- and high-spatial-frequency characteristics of the given surface map by the use of a PSD model derived from the two-dimensional PSD data of the mid- and high-spatial-frequency components of the original surface map. Because this new method creates the new surface map in the desired sampling format from analytical expressions only, it does not encounter any aliasing effects and does not cause any discontinuity in the resultant surface map.
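
    The fit-then-evaluate idea behind this re-sampling can be sketched with a low-order 2D polynomial basis standing in for the Zernike polynomials (the PSD stage is omitted): the measured map is fitted by least squares, and the analytic fit is then evaluated on any new grid, so no pixel-to-pixel interpolation is ever performed.

```python
# Re-sampling by analytic fit: fit a coarse "measured" map on a 16x16 grid,
# then evaluate the fitted expression on a finer 64x64 grid.
import numpy as np

def basis(x, y):
    # Six low-order terms: 1, x, y, x^2, x*y, y^2 (stand-in for Zernikes)
    return np.stack([np.ones_like(x), x, y, x**2, x * y, y**2], axis=-1)

# Smooth true surface plus measurement noise on a coarse grid
xc, yc = np.meshgrid(np.linspace(-1, 1, 16), np.linspace(-1, 1, 16))
true = lambda x, y: 0.3 - 0.2 * x + 0.5 * y**2
noisy = true(xc, yc) + np.random.default_rng(1).normal(scale=0.01, size=xc.shape)

# Fit coefficients on the coarse grid ...
A = basis(xc.ravel(), yc.ravel())
coef, *_ = np.linalg.lstsq(A, noisy.ravel(), rcond=None)

# ... then evaluate the analytic fit on the up-sampled grid
xf, yf = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
resampled = basis(xf.ravel(), yf.ravel()) @ coef
err = np.max(np.abs(resampled - true(xf, yf).ravel()))
print("max deviation from the true smooth surface:", err)
```

Because the fit also averages out the added noise, the up-sampled map lands very close to the underlying smooth surface.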

  18. Investigation of the capillary flow through open surface microfluidic structures

    NASA Astrophysics Data System (ADS)

    Taher, Ahmed; Jones, Benjamin; Fiorini, Paolo; Lagae, Liesbet

    2017-02-01

    The passive nature of capillary microfluidics for pumping and actuation of fluids is attractive for many applications including point of care medical diagnostics. For such applications, there is often the need to spot dried chemical reagents in the bottom of microfluidic channels after device fabrication; it is often more practical to have open surface devices (i.e., without a cover or lid). However, the dynamics of capillary driven flow in open surface devices have not been well studied for many geometries of interest. In this paper, we investigate capillary flow in an open surface microchannel with a backward facing step. An analytical model is developed to calculate the capillary pressure as the liquid-vapor interface traverses a backward facing step in an open microchannel. The developed model is validated against results from Surface Evolver liquid-vapor surface simulations and ANSYS Fluent two-phase flow simulations using the volume of fluid approach. Three different aspect ratios (inlet channel height by channel width) were studied. The analytical model shows good agreement with the simulation results from both modeling methods for all geometries. The analytical model is used to derive an expression for the critical aspect ratio (the minimum channel aspect ratio for flow to proceed across the backward facing step) as a function of contact angle.
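
    The paper's critical-aspect-ratio expression applies to its backward-facing-step geometry; as a hedged stand-in, the sketch below implements the classic spontaneous-capillary-flow condition for a plain open rectangular channel (Berthier's criterion, cos θ > free perimeter / wetted perimeter), only to make the "critical aspect ratio as a function of contact angle" idea concrete.

```python
# Generic open-channel SCF criterion, not the paper's step-geometry result.
import math

def critical_aspect_ratio(theta_deg):
    """Minimum depth/width ratio h/w for spontaneous capillary flow in an
    open rectangular channel.

    Condition: cos(theta) > w / (w + 2h), i.e.
               h/w > (1 - cos(theta)) / (2 cos(theta)).
    Defined only for theta < 90 degrees.
    """
    c = math.cos(math.radians(theta_deg))
    if c <= 0:
        raise ValueError("no spontaneous flow for theta >= 90 degrees")
    return (1.0 - c) / (2.0 * c)

print(critical_aspect_ratio(0))    # 0.0: a perfectly wetting liquid always advances
print(critical_aspect_ratio(60))   # ~0.5: channel must be at least half as deep as wide
```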

  19. Generalized model for k -core percolation and interdependent networks

    NASA Astrophysics Data System (ADS)

    Panduranga, Nagendra K.; Gao, Jianxi; Yuan, Xin; Stanley, H. Eugene; Havlin, Shlomo

    2017-09-01

    Cascading failures in complex systems have been studied extensively using two different models: k -core percolation and interdependent networks. We combine the two models into a general model, solve it analytically, and validate our theoretical results through extensive simulations. We also study the complete phase diagram of the percolation transition as we tune the average local k -core threshold and the coupling between networks. We find that the phase diagram of the combined processes is very rich and includes novel features that do not appear in the models studying each of the processes separately. For example, the phase diagram consists of first- and second-order transition regions separated by two tricritical lines that merge and enclose a two-stage transition region. In the two-stage transition, the size of the giant component undergoes a first-order jump at a certain occupation probability followed by a continuous second-order transition at a lower occupation probability. Furthermore, at certain fixed interdependencies, the percolation transition changes from first-order → second-order → two-stage → first-order as the k -core threshold is increased. The analytic equations describing the phase boundaries of the two-stage transition region are set up, and the critical exponents for each type of transition are derived analytically.
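
    The plain single-network half of the combined model, k-core pruning, can be simulated in a few lines: repeatedly delete nodes of degree below k and report the surviving fraction. This sketch is illustrative only and does not reproduce the paper's coupled, analytically solved model.

```python
# k-core pruning on an Erdos-Renyi random graph (pure Python, seeded).
import random

def k_core_fraction(n, avg_deg, k, seed=0):
    """Fraction of nodes surviving iterative k-core pruning of G(n, p)."""
    rng = random.Random(seed)
    p = avg_deg / (n - 1)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    while True:
        low = [v for v in adj if len(adj[v]) < k]   # nodes below threshold
        if not low:
            break
        for node in low:                             # prune and cascade
            for nb in adj[node]:
                if nb in adj:
                    adj[nb].discard(node)
            del adj[node]
    return len(adj) / n

# With mean degree 10, the 3-core retains nearly all nodes
print(k_core_fraction(1000, avg_deg=10, k=3))
```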

  20. A highly sensitive method for the simultaneous UHPLC-MS/MS analysis of clonidine, morphine, midazolam and their metabolites in blood plasma using HFIP as the eluent additive.

    PubMed

    Veigure, Rūta; Aro, Rudolf; Metsvaht, Tuuli; Standing, Joseph F; Lutsar, Irja; Herodes, Koit; Kipper, Karin

    2017-05-01

    In intensive care units, the precise administration of sedatives and analgesics is crucial in order to avoid under- or over-sedation and for appropriate pain control. Both can be harmful to the patient, causing side effects or pain and suffering. This is especially important in the case of pediatric patients, and dose-response relationships require studies using pharmacokinetic-pharmacodynamic modeling. The aim of this work was to develop and validate a rapid ultra-high performance liquid chromatographic-tandem mass spectrometric method for the analysis of three common sedative and analgesic agents: morphine, clonidine and midazolam, and their metabolites (morphine-3-glucuronide, morphine-6-glucuronide and 1'-hydroxymidazolam) in blood plasma at trace level concentrations. Low concentrations and low sampling volumes may be expected in pediatric patients; we report the lowest limit of quantification for all analytes as 0.05 ng/mL using only 100 μL of blood plasma. The analytes were separated chromatographically using a C18 column with the weak ion-pairing additive 1,1,1,3,3,3-hexafluoro-2-propanol and methanol. The method was fully validated and a matrix-matched calibration range of 0.05-250 ng/mL was attained for all analytes. In addition, between-day accuracy for all analytes remained within 93-108%, and precision remained within 1.5-9.6% for all analytes at all concentration levels over the calibration range. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
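
    The two quoted validation statistics can be sketched from replicate quality-control measurements: accuracy as mean recovery in percent of the nominal concentration, precision as relative standard deviation (RSD, %). The numbers below are invented illustration data.

```python
# Between-day accuracy and precision from replicate QC measurements.
import statistics

def accuracy_pct(measured, nominal):
    """Mean measured value as a percentage of the nominal concentration."""
    return 100.0 * statistics.mean(measured) / nominal

def precision_rsd_pct(measured):
    """Relative standard deviation (coefficient of variation) in percent."""
    return 100.0 * statistics.stdev(measured) / statistics.mean(measured)

qc = [0.049, 0.052, 0.050, 0.051, 0.048]   # ng/mL, nominal 0.05 ng/mL
print(accuracy_pct(qc, 0.05), precision_rsd_pct(qc))
```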

  1. Impact of Cross-Axis Structural Dynamics on Validation of Linear Models for Space Launch System

    NASA Technical Reports Server (NTRS)

    Pei, Jing; Derry, Stephen D.; Zhou, Zhiqiang; Newsom, Jerry R.

    2014-01-01

    A feasibility study was performed to examine the advisability of incorporating a set of Programmed Test Inputs (PTIs) during the Space Launch System (SLS) vehicle flight. The intent of these inputs is to validate the preflight models for control system stability margins, aerodynamics, and structural dynamics. In October 2009, the Ares I-X program successfully carried out a series of PTI maneuvers, which provided a significant amount of valuable data for post-flight analysis. The resulting data comparisons showed excellent agreement with the preflight linear models across the frequency spectrum of interest. However, unlike Ares I-X, the structural dynamics associated with the SLS boost phase configuration are far more complex and highly coupled in all three axes. This presents a challenge when applying a similar system identification technique to SLS. Preliminary simulation results show noticeable mismatches between PTI validation and analytical linear models in the frequency range of the structural dynamics. An alternate approach was examined which demonstrates the potential for better overall characterization of the system frequency response as well as robustness of the control design.

  2. THE VALIDITY OF HUMAN AND COMPUTERIZED WRITING ASSESSMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald L. Boring

    2005-09-01

    This paper summarizes an experiment designed to assess the validity of essay grading between holistic and analytic human graders and a computerized grader based on latent semantic analysis. The validity of a grade was gauged by the extent to which the student's knowledge of the topic correlated with the grader's expert knowledge. To assess knowledge, Pathfinder networks were generated by the student essay writers, the holistic and analytic graders, and the computerized grader. It was found that computer-generated grades more closely matched the definition of valid grading than did human-generated grades.

  3. Analysis of Tile-Reinforced Composite Armor. Part 1; Advanced Modeling and Strength Analyses

    NASA Technical Reports Server (NTRS)

    Davila, C. G.; Chen, Tzi-Kang; Baker, D. J.

    1998-01-01

    The results of an analytical and experimental study of the structural response and strength of tile-reinforced components of the Composite Armored Vehicle are presented. The analyses are based on specialized finite element techniques that properly account for the effects of the interaction between the armor tiles, the surrounding elastomers, and the glass-epoxy sublaminates. To validate the analytical predictions, tests were conducted with panels subjected to three-point bending loads. The sequence of progressive failure events for the laminates is described. This paper describes the results of Part 1 of a study of the response and strength of tile-reinforced composite armor.

  4. On Conducting Construct Validity Meta-Analyses for the Rorschach: A Reply to Tibon Czopp and Zeligman (2016).

    PubMed

    Mihura, Joni L; Meyer, Gregory J; Dumitrascu, Nicolae; Bombel, George

    2016-01-01

    We respond to Tibon Czopp and Zeligman's (2016) critique of our systematic reviews and meta-analyses of 65 Rorschach Comprehensive System (CS) variables published in Psychological Bulletin (2013). The authors endorsed our supportive findings but critiqued the same methodology when used for the 13 unsupported variables. Unfortunately, their commentary was based on significant misunderstandings of our meta-analytic method and results, such as thinking we used introspectively assessed criteria in classifying levels of support and reporting only a subset of our externally assessed criteria. We systematically address their arguments that our construct label and criterion variable choices were inaccurate and, therefore, meta-analytic validity for these 13 CS variables was artificially low. For example, the authors created new construct labels for these variables that they called "the customary CS interpretation," but neither described their methodology nor provided evidence that their labels would result in better validity than ours. They cite studies they believe we should have included; we explain how these studies did not fit our inclusion criteria and that including them would have actually reduced the relevant CS variables' meta-analytic validity. Ultimately, criticisms alone cannot change meta-analytic support from negative to positive; Tibon Czopp and Zeligman would need to conduct their own construct validity meta-analyses.

  5. Teachable, high-content analytics for live-cell, phase contrast movies.

    PubMed

    Alworth, Samuel V; Watanabe, Hirotada; Lee, James S J

    2010-09-01

    CL-Quant is a new solution platform for broad, high-content, live-cell image analysis. Powered by novel machine learning technologies and teach-by-example interfaces, CL-Quant provides a platform for the rapid development and application of scalable, high-performance, and fully automated analytics for a broad range of live-cell microscopy imaging applications, including label-free phase contrast imaging. The authors used CL-Quant to teach off-the-shelf universal analytics, called standard recipes, for cell proliferation, wound healing, cell counting, and cell motility assays using phase contrast movies collected on the BioStation CT and BioStation IM platforms. Similar to application modules, standard recipes are intended to work robustly across a wide range of imaging conditions without requiring customization by the end user. The authors validated the performance of the standard recipes by comparing their performance with truth created manually, or by custom analytics optimized for each individual movie (and therefore yielding the best possible result for the image), and validated by independent review. The validation data show that the standard recipes' performance is comparable with the validated truth with low variation. The data validate that the CL-Quant standard recipes can provide robust results without customization for live-cell assays in broad cell types and laboratory settings.

  6. Quantification of free and total desmosine and isodesmosine in human urine by liquid chromatography tandem mass spectrometry: a comparison of the surrogate-analyte and the surrogate-matrix approach for quantitation.

    PubMed

    Ongay, Sara; Hendriks, Gert; Hermans, Jos; van den Berge, Maarten; ten Hacken, Nick H T; van de Merbel, Nico C; Bischoff, Rainer

    2014-01-24

    In spite of the data suggesting the potential of urinary desmosine (DES) and isodesmosine (IDS) as biomarkers for elevated lung elastic fiber turnover, further validation in large-scale studies of COPD populations, as well as the analysis of longitudinal samples, is required. Validated analytical methods that allow the accurate and precise quantification of DES and IDS in human urine are mandatory in order to properly evaluate the outcome of such clinical studies. In this work, we present the development and full validation of two methods that allow DES and IDS measurement in human urine, one for the free and one for the total (free+peptide-bound) forms. To this end, we compared the two principal approaches that are used for the absolute quantification of endogenous compounds in biological samples: analysis against calibrators containing authentic analyte in surrogate matrix or containing surrogate analyte in authentic matrix. The validated methods were employed for the analysis of a small set of samples including healthy never-smokers, healthy current-smokers and COPD patients. This is the first time that the analysis of urinary free DES, free IDS, total DES, and total IDS has been fully validated and that the surrogate analyte approach has been evaluated for their quantification in biological samples. Results indicate that the presented methods have the necessary quality and level of validation to assess the potential of urinary DES and IDS levels as biomarkers for the progression of COPD and the effect of therapeutic interventions. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Polynomial modal analysis of lamellar diffraction gratings in conical mounting.

    PubMed

    Randriamihaja, Manjakavola Honore; Granet, Gérard; Edee, Kofi; Raniriharinosy, Karyl

    2016-09-01

    An efficient numerical modal method for modeling a lamellar grating in conical mounting is presented. Within each region of the grating, the electromagnetic field is expanded onto Legendre polynomials, which allows us to enforce in an exact manner the boundary conditions that determine the eigensolutions. Our code is successfully validated by comparison with results obtained with the analytical modal method.

  8. Assessing the Rigor of HS Curriculum in Admissions Decisions: A Functional Method, Plus Practical Advising for Prospective Students and High School Counselors

    ERIC Educational Resources Information Center

    Micceri, Theodore; Brigman, Leellen; Spatig, Robert

    2009-01-01

    An extensive, internally cross-validated analytical study using nested (within academic disciplines) Multilevel Modeling (MLM) on 4,560 students identified functional criteria for defining high school curriculum rigor and further determined which measures could best be used to help guide decision making for marginal applicants. The key outcome…

  9. The Role of Students' Attitudes and Test-Taking Motivation on the Validity of College Institutional Accountability Tests: A Path Analytic Model

    ERIC Educational Resources Information Center

    Zilberberg, Anna; Finney, Sara J.; Marsh, Kimberly R.; Anderson, Robin D.

    2014-01-01

    Given worldwide prevalence of low-stakes testing for monitoring educational quality and students' progress through school (e.g., Trends in International Mathematics and Science Study, Program for International Student Assessment), interpretability of resulting test scores is of global concern. The nonconsequential nature of low-stakes tests…

  10. AHP-based spatial analysis of water quality impact assessment due to change in vehicular traffic caused by highway broadening in Sikkim Himalaya

    NASA Astrophysics Data System (ADS)

    Banerjee, Polash; Ghose, Mrinal Kanti; Pradhan, Ratika

    2018-05-01

    Spatial analysis of water quality impact assessment of highway projects in mountainous areas remains largely unexplored. A methodology is presented here for Spatial Water Quality Impact Assessment (SWQIA) due to highway-broadening-induced vehicular traffic change in the East district of Sikkim. Pollution load of the highway runoff was estimated using an Average Annual Daily Traffic-based empirical model in combination with a mass balance model to predict pollution in the rivers within the study area. Spatial interpolation and overlay analysis were used for impact mapping. An Analytic Hierarchy Process-based Water Quality Status Index was used to prepare a composite impact map. Model validation criteria, cross-validation criteria, and spatially explicit sensitivity analysis show that the SWQIA model is robust. The study shows that vehicular traffic is a significant contributor to water pollution in the study area. The model caters specifically to impact analysis of the project concerned and can aid a decision support system for the project stakeholders. The applicability of the SWQIA model needs to be explored and validated in the context of a larger set of water quality parameters and project scenarios at a greater spatial scale.
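
    The AHP step used to weight the water-quality parameters can be sketched as reducing a reciprocal pairwise-comparison matrix to a priority (weight) vector, here via the principal eigenvector. The 3x3 matrix below is a made-up example, not the study's.

```python
# AHP priority weights and consistency ratio from a pairwise matrix.
import numpy as np

A = np.array([[1.0,  3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])

vals, vecs = np.linalg.eig(A)
i = int(np.argmax(vals.real))
w = np.abs(vecs[:, i].real)
w /= w.sum()                      # normalized priority weights

# Consistency ratio CR = CI / RI; Saaty's random index RI = 0.58 for n = 3.
# CR < 0.1 is the usual acceptability threshold.
ci = (vals.real[i] - 3) / (3 - 1)
cr = ci / 0.58
print("weights:", w, "CR:", cr)
```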

  11. Assessing Measurement Invariance for Spanish Sentence Repetition and Morphology Elicitation Tasks.

    PubMed

    Kapantzoglou, Maria; Thompson, Marilyn S; Gray, Shelley; Restrepo, M Adelaida

    2016-04-01

    The purpose of this study was to evaluate evidence supporting the construct validity of two grammatical tasks (sentence repetition, morphology elicitation) included in the Spanish Screener for Language Impairment in Children (Restrepo, Gorin, & Gray, 2013). We evaluated if the tasks measured the targeted grammatical skills in the same way across predominantly Spanish-speaking children with typical language development and those with primary language impairment. A multiple-group, confirmatory factor analytic approach was applied to examine factorial invariance in a sample of 307 predominantly Spanish-speaking children (177 with typical language development; 130 with primary language impairment). The 2 newly developed grammatical tasks were modeled as measures in a unidimensional confirmatory factor analytic model along with 3 well-established grammatical measures from the Clinical Evaluation of Language Fundamentals-Fourth Edition, Spanish (Wiig, Semel, & Secord, 2006). Results suggest that both new tasks measured the construct of grammatical skills for both language-ability groups in an equivalent manner. There was no evidence of bias related to children's language status for the Spanish Screener for Language Impairment in Children Sentence Repetition or Morphology Elicitation tasks. Results provide support for the validity of the new tasks as measures of grammatical skills.

  12. Stable forming conditions and geometrical expansion of L-shape rings in ring rolling process

    NASA Astrophysics Data System (ADS)

    Quagliato, Luca; Berti, Guido A.; Kim, Dongwook; Kim, Naksoo

    2018-05-01

    Based on previous research results concerning the radial-axial ring rolling process of flat rings, this paper details an innovative approach for the determination of the stable forming conditions to successfully simulate the radial ring rolling process of L-shape profiled rings. In addition, an analytical model for the estimation of the geometrical expansion of L-shape rings from the initial flat ring preform is proposed and validated by comparing its results with those of numerical simulations. By utilizing the proposed approach, steady forming conditions could be achieved, granting a uniform expansion of the ring throughout the process for all six tested cases of rings, with final flange outer diameters ranging from 545 mm to 1440 mm. Validation of the proposed approach shows that the geometrical expansion of the ring, as estimated by the analytical model, is in good agreement with the results of the numerical simulation, with a maximum error of 2.18% in the estimation of the ring wall diameter, 1.42% for the ring flange diameter, and 1.87% for the inner diameter of the ring.

  13. An augmented classical least squares method for quantitative Raman spectral analysis against component information loss.

    PubMed

    Zhou, Yan; Cao, Hui

    2013-01-01

    We propose an augmented classical least squares (ACLS) calibration method for quantitative Raman spectral analysis against component information loss. The Raman spectral signals with low analyte concentration correlations were selected and used as substitutes for unknown quantitative component information during the CLS calibration procedure. The number of selected signals was determined by using the leave-one-out root-mean-square error of cross-validation (RMSECV) curve. An ACLS model was built based on the augmented concentration matrix and the reference spectral signal matrix. The proposed method was compared with partial least squares (PLS) and principal component regression (PCR) using a data set recorded in an experiment determining analyte concentrations by Raman spectroscopy. A 2-fold cross-validation with Venetian blinds strategy was exploited to evaluate the predictive power of the proposed method. One-way analysis of variance (ANOVA) was used to assess the difference in predictive power between the proposed method and existing methods. Results indicated that the proposed method is effective at increasing the robust predictive power of the traditional CLS model against component information loss, and its predictive power is comparable to that of PLS or PCR.
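
    The plain CLS step that the paper augments can be sketched as follows: spectra are modeled as a linear mix of pure-component spectra (S = C K), so unknown concentrations are recovered by least squares against K. The data are synthetic, and the augmentation with selected low-correlation signals is not shown.

```python
# Classical least squares quantitation from a two-component mixture spectrum.
import numpy as np

wavenumbers = np.linspace(0, 1, 50)
K = np.stack([np.exp(-((wavenumbers - c) / 0.08) ** 2)   # pure-component spectra
              for c in (0.3, 0.7)])
c_true = np.array([0.4, 1.2])
spectrum = c_true @ K + 0.001 * np.random.default_rng(2).normal(size=50)

# Solve spectrum ≈ K.T @ c for the concentration vector c
c_est, *_ = np.linalg.lstsq(K.T, spectrum, rcond=None)
print(c_est)
```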

  14. A Novel Arterial Constitutive Model in a Commercial Finite Element Package: Application to Balloon Angioplasty

    PubMed Central

    Zhao, Xuefeng; Liu, Yi; Zhang, Wei; Wang, Cong; Kassab, Ghassan S.

    2011-01-01

    Recently, a novel linearized constitutive model with a new strain measure that absorbs the material nonlinearity was validated for arteries. In this study, the linearized arterial stress-strain relationship is implemented in the commercial finite element package ANSYS via the user subroutine USERMAT. The reference configuration is chosen to be the closed cylindrical tube (no-load state) rather than the open sector (zero-stress state). The residual strain is taken into account by analytic calculation, and the incompressibility condition is enforced with the Lagrange penalty method. Axisymmetric finite element analyses are conducted to demonstrate potential applications of this approach in a complex boundary value problem where an angioplasty balloon interacts with the vessel wall. The model predictions of transmural circumferential and compressive radial stress distributions were also validated against an exponential-type Fung model, and the mean error was found to be within 6%. PMID:21689665

  15. Thermal Vacuum Test Correlation of A Zero Propellant Load Case Thermal Capacitance Propellant Gauging Analytics Model

    NASA Technical Reports Server (NTRS)

    McKim, Stephen A.

    2016-01-01

    This thesis describes the development and test data validation of the thermal model that is the foundation of a thermal capacitance spacecraft propellant load estimator. Specific details of creating the thermal model for the diaphragm propellant tank used on NASA's Magnetospheric Multiscale (MMS) spacecraft using ANSYS, and the correlation process implemented to validate the model, are presented. The thermal model was correlated to within plus or minus 3 degrees Centigrade of the thermal vacuum test data, and was found to be relatively insensitive to uncertainties in applied heat flux and mass knowledge of the tank. More work is needed, however, to refine the thermal model to further improve temperature predictions in the upper hemisphere of the propellant tank. Temperature predictions in this region were 2-2.5 degrees Centigrade lower than the test data. A road map to apply the model to predict propellant loads on the actual MMS spacecraft toward its end of life in 2017-2018 is also presented.
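
    The principle behind thermal-capacitance gauging reduces to one line of algebra: apply a known heat pulse Q to the tank, measure the temperature rise dT, and solve the heat balance Q = (m·c_prop + C_tank)·dT for the propellant mass m. All numeric values below (specific heat, tank heat capacity, heat input) are hypothetical placeholders, not MMS figures.

```python
# Propellant mass from a lumped thermal-capacitance heat balance (illustrative).
def propellant_mass(Q_joules, dT_kelvin, c_prop=1700.0, C_tank=500.0):
    """Solve Q = (m * c_prop + C_tank) * dT for propellant mass m in kg.

    c_prop: propellant specific heat, J/(kg K); C_tank: dry-tank heat
    capacity, J/K. Both values are hypothetical placeholders.
    """
    return (Q_joules / dT_kelvin - C_tank) / c_prop

print(propellant_mass(Q_joules=180_000.0, dT_kelvin=2.0))  # mass in kg
```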

  16. Combining Advanced Turbulent Mixing and Combustion Models with Advanced Multi-Phase CFD Code to Simulate Detonation and Post-Detonation Bio-Agent Mixing and Destruction

    DTIC Science & Technology

    2017-10-01

    perturbations in the energetic material to study their effects on the blast wave formation. The last case also makes use of the same PBX, however, the...configuration, Case A: Spore cloud located on the top of the charge at an angle 45 degree, Case B: Spore cloud located at an angle 45 degree from the charge...theoretical validation. The first is the Sedov case where the pressure decay and blast wave front are validated based on analytical solutions. In this test

  17. NASA advanced turboprop research and concept validation program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitlow, J.B. Jr.; Sievers, G.K.

    1988-01-01

    NASA has determined by experimental and analytical effort that use of advanced turboprop propulsion instead of the conventional turbofans in the older narrow-body airline fleet could reduce fuel consumption for this type of aircraft by up to 50 percent. In cooperation with industry, NASA has defined and implemented an Advanced Turboprop (ATP) program to develop and validate the technology required for these new high-speed, multibladed, thin, swept propeller concepts. This paper presents an overview of the analysis, model-scale test, and large-scale flight test elements of the program together with preliminary test results, as available.

  18. Soil Moisture Estimate Under Forest Using a Semi-Empirical Model at P-Band

    NASA Technical Reports Server (NTRS)

    Truong-Loi, My-Linh; Saatchi, Sassan; Jaruwatanadilok, Sermsak

    2013-01-01

    Here we present the results of a semi-empirical inversion model for soil moisture retrieval using the three backscattering coefficients: sigma(sub HH), sigma(sub VV) and sigma(sub HV). In this paper we focus on the soil moisture estimate and use the biomass as an ancillary parameter, estimated automatically by the algorithm and used as a validation parameter. We first review the model's analytical formulation, then show results obtained with real SAR data and compare them to ground estimates.

  19. A Response to "Measuring Students' Writing Ability on a Computer Analytic Developmental Scale: An Exploratory Validity Study"

    ERIC Educational Resources Information Center

    Reutzel, D. Ray; Mohr, Kathleen A. J.

    2014-01-01

    In this response to "Measuring Students' Writing Ability on a Computer Analytic Developmental Scale: An Exploratory Validity Study," the authors agree that assessments should seek parsimony in both theory and application wherever possible. Doing so allows maximal dissemination and implementation while minimizing costs. The Writing…

  20. Validation of an advanced analytical procedure applied to the measurement of environmental radioactivity.

    PubMed

    Thanh, Tran Thien; Vuong, Le Quang; Ho, Phan Long; Chuong, Huynh Dinh; Nguyen, Vo Hoang; Tao, Chau Van

    2018-04-01

    In this work, an advanced analytical procedure was applied to calculate radioactivity in spiked water samples in a close-geometry gamma spectroscopy setup. The procedure used the MCNP-CP code to calculate the coincidence summing correction factor (CSF). The CSF results were validated against a deterministic method using the ETNA code for both p-type HPGe detectors, and the two codes showed good agreement. Finally, the validity of the developed procedure was confirmed by a proficiency test calculating the activities of various radionuclides. The radioactivity measurements with both detectors using the advanced analytical procedure received "Accepted" statuses in the proficiency test. Copyright © 2018 Elsevier Ltd. All rights reserved.
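
    The place where the coincidence summing correction factor enters the activity calculation can be sketched as A = N / (t · ε · I · CSF), with N the net peak counts, t the live time, ε the full-energy-peak efficiency, and I the gamma emission probability. The numbers (and the sign convention of the CSF) below are illustrative, not from this work.

```python
# Single-line activity calculation with a coincidence-summing correction.
def activity_bq(net_counts, live_time_s, efficiency, emission_prob, csf=1.0):
    """Source activity in Bq from one gamma line, CSF applied as a divisor."""
    return net_counts / (live_time_s * efficiency * emission_prob * csf)

# e.g. a Co-60-like line: 1e5 counts in 3600 s, 2% efficiency, I = 0.999,
# summing-out losses corrected by a hypothetical CSF of 0.92
print(activity_bq(1e5, 3600.0, 0.02, 0.999, csf=0.92), "Bq")
```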
